The European Union has long harbored ambitions of leading the AI industry. As part of its digital strategy, the EU aims to regulate artificial intelligence (AI) to foster optimal conditions for the development and use of this pioneering technology.
In April 2021, the European Commission proposed the first EU regulatory framework for AI. Under this framework, AI systems that can be used in different applications are assessed and classified according to the risk they pose to users; higher levels of risk attract stricter regulation.
On 13 March 2024, Members of the European Parliament voted in favor of the draft EU AI Act. The Act is expected to enter into force in the coming weeks, pending final procedural and linguistic checks. Its implementation will carry significant weight in shaping the regulation of AI within the EU and globally.
The AI Act establishes precise definitions for the various stakeholders in the AI landscape: providers, deployers, importers, distributors, and product manufacturers. It thereby mandates accountability for all entities involved in the development, deployment, importation, distribution, or manufacture of AI models. The Act also applies to providers and users of AI systems located outside the EU, such as in China, where the system's output is intended for use within the EU.
2. What are the requirements of the Act?
The EU AI Act takes a risk-based approach. AI systems, including general-purpose AI models such as the large language models (LLMs) and foundation models behind generative AI, are subject to a classification system organized into tiers according to the risk they pose:
(Figure: the Act's risk-based classification tiers. Source: European Commission, "Shaping Europe's digital future")
Low-risk systems, such as spam filters or AI-enabled video games, face minimal requirements under the Act, primarily transparency obligations: users must be informed that they are interacting with an AI system or that content is AI-generated.
High-risk AI systems, such as autonomous vehicles, medical devices, and critical infrastructure (water, gas, and electricity networks), require developers and deployers to comply with additional regulatory obligations, including risk-management measures to ensure accuracy and robustness and an accountability framework incorporating human oversight.
AI systems that have adverse impacts on safety or fundamental rights will be classified as high-risk and will be categorized into two distinct groups:
1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
2) AI systems falling into specific areas that will have to be registered in an EU database, namely:
- biometric identification and categorization of natural persons;
- management and operation of critical infrastructure;
- education and vocational training;
- employment, worker management, and access to self-employment;
- access to essential private and public services and benefits;
- law enforcement;
- migration, asylum, and border control management;
- assistance in legal interpretation and application of the law.
Before being introduced to the market and throughout their lifecycle, all high-risk AI systems will undergo assessment. Individuals will retain the right to lodge complaints about AI systems with designated national authorities.
Prohibited AI systems, with few exceptions, are those presenting unacceptable risks, such as social scoring, facial recognition databases built from untargeted scraping, emotion recognition in the workplace and in schools, and remote biometric identification systems in publicly accessible spaces.
Certain exceptions may be permitted for law enforcement purposes. “Real-time” remote biometric identification systems will be permissible in a limited number of serious cases, whereas “post” remote biometric identification systems, where identification occurs after a significant delay, will be authorized only for the prosecution of serious crimes and only following court approval.
3. What about ChatGPT?
Generative AI models such as ChatGPT will not be categorized as high-risk but will have to comply with transparency requirements and EU copyright law. This entails:
- disclosing that content was generated by AI;
- designing the model to prevent it from generating illegal content;
- publishing summaries of the copyrighted data used for training.
4. Deep fakes
Deep fakes are now defined under the EU AI Act as “AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, or other entities or events and would falsely appear to a person to be authentic or truthful”.
The finalized text of the EU AI Act outlines transparency requirements for providers and deployers of specific AI systems and general-purpose AI models (GPAI) that are more stringent than earlier drafts. These obligations include transparency mandates for deployers of deep fakes, with exceptions granted in cases where the use is authorized by law for detecting, preventing, investigating, and prosecuting criminal offenses.
In instances where the content constitutes an obviously artistic work, transparency obligations are limited to disclosing the presence of generated or manipulated content in a manner that does not impede the presentation or enjoyment of the artwork.
5. What are the penalties for non-compliance?
Similar to the approach under the European General Data Protection Regulation, fines for breaches of the Act will be calculated either as a percentage of the offending party’s global annual turnover in the preceding financial year or as a fixed sum, whichever is greater:
- up to €35 million or 7% of turnover for violations of the prohibited AI practices;
- up to €15 million or 3% of turnover for violations of the Act’s other obligations;
- up to €7.5 million or 1% of turnover for supplying incorrect, incomplete, or misleading information to authorities.
Nevertheless, there will be proportional limits on administrative fines imposed on small and medium enterprises as well as start-ups.
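For readers who want the "whichever is greater" mechanics made concrete, the short Python sketch below computes the applicable ceiling for each tier for a hypothetical company. The function name and turnover figure are illustrative assumptions, not part of the Act, and actual fines will be set case by case by the competent authorities.

```python
# Illustrative sketch of the AI Act's fine ceilings: the applicable cap is
# the GREATER of a fixed sum and a share of worldwide annual turnover.
# Function name and example figures are hypothetical, not from the Act.

def fine_ceiling(fixed_cap_eur: int, turnover_share: float,
                 global_turnover_eur: int) -> float:
    """Return the maximum administrative fine for one infringement tier."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# Tiers described above: (fixed cap in EUR, share of global turnover)
TIERS = {
    "prohibited practices": (35_000_000, 0.07),
    "other obligations": (15_000_000, 0.03),
    "incorrect information": (7_500_000, 0.01),
}

turnover = 2_000_000_000  # hypothetical EUR 2 bn global annual turnover
for tier, (cap, share) in TIERS.items():
    print(f"{tier}: up to EUR {fine_ceiling(cap, share, turnover):,.0f}")
# prohibited practices: up to EUR 140,000,000
# other obligations: up to EUR 60,000,000
# incorrect information: up to EUR 20,000,000
```

For a company of this size, the turnover-based figure exceeds the fixed cap in every tier; for smaller companies, the fixed sums become the binding ceiling.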
6. Next steps
The agreed-upon text is anticipated to be formally adopted in April 2024. It will become fully applicable 24 months after its entry into force, thus in 2026, but certain provisions will apply on a staggered timeline:
- bans on prohibited practices: 6 months after entry into force;
- codes of practice: 9 months after entry into force;
- rules on general-purpose AI, including governance: 12 months after entry into force;
- obligations for high-risk systems embedded in regulated products: 36 months after entry into force.
Aurilex Law Firm
Paris, London
Head office: 26 Avenue de la Grande Armée, 75017 Paris
+33 (0)1 85 56 02 10
info@aurilex.com