How International Regulators Are Shaping AI Transparency and Ethics Oversight


One by one, regulatory bodies around the world are working to establish clear guidelines and oversight for AI. This is no easy task, with rapid developments making it difficult to keep up with all the potential ways the technology can be used. Nevertheless, the EU is leading the way with the AI Act, the first sweeping legislation, passed by the European Parliament, to shape AI transparency and ethics measures and manage the technology's risks.

The EU tends to adopt more restrictive policies around data use and privacy than other regulatory bodies. However, these first steps will likely influence how other governments shape their AI laws. As expressed in a recent MIT Technology Review article, “By becoming the first to formalize rules around AI, the EU retains its first-mover advantage. Much like the GDPR, the AI Act could become a global standard.”

Key Aspects of the AI Act

The European AI Office, formed in February 2024, will serve as an informational hub for responsible AI development and guidance, as well as the AI Act’s investigative and enforcement arm. Noncompliant firms face fines of between 1.5% and 7% of their global annual turnover, depending on the severity of the violation. The legislation’s Regulatory Framework defines four levels of risk for AI systems, and it sets strict requirements for risk mitigation, human oversight, and high-quality data sets for systems in the high-risk category.

Foundation models and systems built upon them are required to comply with EU copyright law and to disclose more information about their security, energy efficiency, and the data sets they were trained on, according to MIT Technology Review. The most stringent requirements apply only to the most powerful AI models, defined by the amount of computing power needed to train them. Since only the companies themselves currently have access to that information, the assessment will, for now, be left up to them.

One notable exception in the AI Act is that it won’t apply to systems created exclusively for military and defense uses. This aspect of the legislation has been contentious, but it currently leaves room for police forces to use AI biometric ID systems in public, provided they have court approval. Such systems can also only be used in the investigation of 16 specific crimes, including terrorism, human trafficking, sexual exploitation of children, and drug trafficking. In its current form, the legislation allows law enforcement to use high-risk systems that don’t meet the EU’s standards under “exceptional circumstances relating to public security.”

The AI Act’s Risk Categories

Below is a breakdown of the AI Act’s risk categories and how AI systems fall into each one, according to the EU’s official website:

Unacceptable Risk

AI systems that pose clear threats to the safety, livelihoods, and rights of people will be banned (e.g., government social scoring tools or toys that use AI voice assistance and encourage dangerous behavior). 

High Risk

AI systems used in high-risk settings will be required to meet strict obligations before they can be placed on the market, including the risk mitigation, human oversight, and data quality requirements noted above.

Limited Risk

Limited-risk scenarios are those where the main concern is a lack of transparency about AI use, such as interactions with AI chatbots. Under the AI Act, providers are required to inform people when they are interacting with AI so they can make informed decisions. AI-generated images, video, and audio published to inform the public on matters of public interest must also be labeled as artificially generated.

Minimal or No Risk

Commonly used systems, such as AI-enabled video games or spam filters, are considered minimal or no risk use cases. The majority of AI systems used in Europe fall into this category and can be used without restriction.
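For teams mapping their own systems against these tiers, the framework can be treated as a simple triage taxonomy. Below is a minimal Python sketch of such a triage step; the tier names follow the Act’s framework, but the example use cases and the classify_use_case helper are illustrative assumptions for an internal checklist, not an official compliance tool.

```python
from enum import Enum


class AIActRiskTier(Enum):
    """The four risk tiers defined by the AI Act's Regulatory Framework."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional restrictions"


# Illustrative mapping only -- real classification requires legal review.
EXAMPLE_USE_CASES = {
    "government social scoring": AIActRiskTier.UNACCEPTABLE,
    "biometric identification in public": AIActRiskTier.HIGH,
    "customer-service chatbot": AIActRiskTier.LIMITED,
    "email spam filter": AIActRiskTier.MINIMAL,
}


def classify_use_case(description: str) -> AIActRiskTier:
    """Hypothetical triage helper: look up a known use case, defaulting
    to HIGH so unreviewed systems get scrutiny rather than a pass."""
    return EXAMPLE_USE_CASES.get(description, AIActRiskTier.HIGH)


if __name__ == "__main__":
    for use_case in EXAMPLE_USE_CASES:
        tier = classify_use_case(use_case)
        print(f"{use_case}: {tier.name} ({tier.value})")
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces a review rather than silently waving a new system through.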

AI Oversight in the United States

While the Biden administration issued an executive order in October 2023 outlining a framework for AI oversight and regulation, it is not legally binding. Several AI bills had been introduced in Congress as of February 2024; they are listed with summaries on the Brennan Center for Justice’s AI Legislation Tracker.

The administration has also released its draft policy on AI, which outlines how government agencies may and may not utilize AI systems. Similar to exceptions noted in the EU’s AI Act, the draft memorandum explicitly states that it “does not cover AI when it is used as a component of a national security system.” 

Potential Risks of Exceptions to AI Regulations

Analysis from the Brennan Center for Justice is critical of this two-tiered approach, citing the rapid integration of AI systems into highly consequential government operations with little public information about how they are used. According to the article, the draft policy “allows for the development of a separate—and likely less protective—set of rules for AI systems such as facial recognition, social media monitoring, and algorithmic risk scoring of travelers, which directly affect people in the United States.”

The article alludes to precedents already set, including a $3.4 million Department of Homeland Security (DHS) contract with researchers to create algorithms that assess risk by combing through social media accounts to identify “pro-terrorist” sympathies for use by immigration authorities. The FBI has also “contracted with Clearview AI, a notorious company that scrapes photos from the internet to create faceprints without consent, claiming access to 30 billion facial photos,” despite evidence that such systems disproportionately misidentify people of color, trans people, women, and others in marginalized groups. With six documented cases of Black people being wrongfully arrested and incarcerated based on facial recognition, privacy and civil rights advocates are likely to continue pushing for limits on these uses of AI.

How are you assessing AI use within your organization to mitigate risks?