The AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) provides AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation aims to reduce administrative and financial burdens on businesses, in particular small and medium-sized enterprises (SMEs).
The AI Act is part of a broader package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI. Together, these measures ensure the safety and fundamental rights of people and businesses in relation to AI. They also strengthen uptake, investment and innovation in AI across the EU.
The AI Act is the first-ever comprehensive legal framework on AI in the world. The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety and ethical principles, and by addressing the risks posed by very powerful and impactful AI models.
Why do we need AI rules?
The AI Act ensures that Europeans can trust what AI has to offer. While most AI systems pose little to no risk and can help solve many societal challenges, some AI systems create risks that we must manage to avoid undesirable outcomes.
For example, it is often not possible to find out why an AI system has made a particular decision or prediction and taken a particular action. It can therefore become difficult to assess whether someone has been unfairly disadvantaged, for example in a hiring decision or an application for social assistance.
Although existing legislation provides some protection, it is insufficient to address the specific challenges that AI systems may pose.
The new rules:
- respond to risks specifically created by AI applications
- ban AI practices that pose unacceptable risks
- determine a list of high-risk applications
- define clear requirements for AI systems for high-risk applications
- define specific obligations for deployers and providers of high-risk AI applications
- require a conformity assessment before a given AI system is put into service or placed on the market
- put in place enforcement measures after a given AI system is placed on the market
- establish a governance structure at European and national level
A risk-based approach
The regulatory framework defines 4 levels of risk for AI systems:
Unacceptable risk
All AI systems considered a clear threat to people’s safety, livelihoods and rights are banned, from social scoring by governments to toys using voice assistance that encourage dangerous behavior.
High risk
AI systems identified as high risk include AI technology used in:
- critical infrastructure (e.g. transport), which could endanger the lives and health of citizens
- educational or vocational training, which may determine access to education and career path in a person’s life (e.g. exam grading)
- product safety components (e.g. application of AI in robot-assisted surgery)
- employment, worker management and access to self-employment (e.g. CV sorting software for recruitment procedures)
- essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan)
- law enforcement that may interfere with people’s fundamental rights (e.g. assessing the reliability of evidence)
- management of migration, asylum and border control (e.g. automated review of visa applications)
- the administration of justice and democratic processes (e.g. AI solutions to search for court decisions)
High-risk AI systems are subject to strict obligations before they can be placed on the market:
- adequate risk assessment and mitigation systems
- high quality of the datasets feeding the system to minimize risks and discriminatory results
- logging of activity to ensure traceability of results
- detailed documentation providing all the necessary information about the system and its purpose allowing authorities to assess its compliance
- clear and adequate information to the deployer
- appropriate human oversight measures to minimize risk
- high level of robustness, security and accuracy
All remote biometric identification systems are considered high risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is in principle prohibited.
Narrow exceptions are strictly defined and regulated, for example when necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offense.
These uses are subject to authorization by a judicial or other independent body and appropriate limitations in time, geographic scope and databases consulted.
Limited risk
Limited risk refers to the risks associated with a lack of transparency in the use of AI. The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust. For example, when using AI systems such as chatbots, humans must be made aware that they are interacting with a machine so that they can take an informed decision to continue or step back. Providers must also ensure that AI-generated content is identifiable. In addition, AI-generated text published with the purpose of informing the public on matters of public interest must be labeled as artificially generated. This also applies to audio and video content constituting deep fakes.
Minimal or no risk
The AI Act allows free use of AI with minimal risk. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
How does this all work in practice for high-risk AI system providers?
Once an AI system is on the market, authorities are in charge of market surveillance, deployers ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and deployers will also report serious incidents and malfunctions.
A solution for the trustworthy use of large AI models
Increasingly, general-purpose AI models are becoming components of AI systems. These models can perform, and be adapted to, countless different tasks.
Although general-purpose AI models can enable better and more powerful AI solutions, it is difficult to oversee all of their capabilities.
That is why the AI Act introduces transparency obligations for all general-purpose AI models, to enable a better understanding of these models, and additional risk-management obligations for very capable and impactful models. These additional obligations include self-assessment and mitigation of systemic risks, reporting of serious incidents, model testing and evaluation, and cybersecurity requirements.
Future-proof legislation
As AI is a fast-evolving technology, the regulation takes a future-proof approach, allowing the rules to adapt to technological change. AI applications must remain trustworthy even after they have been placed on the market. This requires ongoing quality and risk management by providers.
Application and implementation
The European AI Office, established in February 2024 within the Commission, oversees the application and implementation of the AI Act together with the Member States. It aims to create an environment in which AI technologies respect human dignity, rights and trust. It also fosters collaboration, innovation and research in AI among various stakeholders. Furthermore, it engages in international dialogue and cooperation on AI issues, acknowledging the need for global alignment on AI governance. Through these efforts, the European AI Office strives to position Europe as a leader in the ethical and sustainable development of AI technologies.
Next steps
The AI Act entered into force on August 1, 2024 and will be fully applicable two years later, with some exceptions: prohibitions will take effect after six months, the governance rules and the obligations for general-purpose AI models become applicable after 12 months, and the rules for AI systems embedded into regulated products will apply after 36 months. To facilitate the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative that seeks to support the future implementation of the act and invites AI developers from Europe and beyond to comply with the key obligations of the AI Act ahead of time.