The regulation, agreed during negotiations with Member States in December 2023, was approved by MEPs with 523 votes in favour, 46 against and 49 abstentions.
It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability against high-risk AI, while boosting innovation and making Europe a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.
Prohibited apps
The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorization systems based on sensitive characteristics and the untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and in schools, social scoring, predictive policing (when it relies solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities will also be prohibited.
Exemptions from application of the law
The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are respected; for example, its use must be limited in time and geographic scope and subject to specific prior judicial or administrative authorization. Such uses may include, for example, the targeted search for a missing person or the prevention of a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorization linked to a criminal offense.
Obligations for high-risk systems
Clear obligations are also provided for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law). Examples of high-risk uses of AI include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain law enforcement systems, migration and border management, and justice and democratic processes (e.g. influencing elections). Such systems must assess and mitigate risks, keep usage logs, be transparent and accurate, and ensure human oversight. Citizens will have the right to file complaints regarding AI systems and to receive explanations for decisions based on high-risk AI systems that affect their rights.
Transparency requirements
General-purpose AI (GPAI) systems and the GPAI models on which they are based must meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training. The most powerful GPAI models that may present systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting incidents.
In addition, artificial or manipulated images, audio or video content (“deepfakes”) must be clearly labeled as such.
Measures to support innovation and SMEs
Regulatory sandboxes and real-world testing will need to be set up at national level and made accessible to SMEs and start-ups, to develop and train innovative AI before it is placed on the market.
Quotes
During Tuesday’s plenary debate, the co-rapporteur of the Internal Market Committee, Brando Benifei (S&D, Italy), said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, fight discrimination and provide transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be created to help businesses start complying with the rules before they come into force. We have ensured that human beings and European values are at the very center of AI development.”
The co-rapporteur of the Civil Liberties Committee, Dragos Tudorache (Renew, Romania), said: “The EU has kept its promises. We have linked the concept of artificial intelligence to the fundamental values that underpin our societies. However, there is still much work to be done, which goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our educational models, our labor markets and the way we wage war. The AI Act is the starting point for a new governance model built around technology. We must now focus on putting this law into practice.”
Next steps
The regulation is still subject to final review by lawyer-linguists and should be definitively adopted before the end of the legislature (through the so-called corrigendum procedure). The law must also be formally approved by the Council.
It will enter into force twenty days after its publication in the Official Journal and will be fully applicable 24 months after its entry into force, with the following exceptions: bans on prohibited practices will apply six months after the date of entry into force; codes of practice, nine months after entry into force; general-purpose AI rules, including governance, 12 months after entry into force; and obligations for high-risk systems, 36 months.
Background
The law on artificial intelligence responds directly to citizens’ proposals from the Conference on the Future of Europe (COFE): most concretely, proposal 12(10) on strengthening the EU’s competitiveness in strategic sectors; proposal 33(5) on a safe and trustworthy society, including combating disinformation and ensuring that humans are ultimately in control; proposal 35 on promoting digital innovation, (3) while guaranteeing human oversight and (8) the trustworthy and responsible use of AI, establishing safeguards and ensuring transparency; and proposal 37(3) on using AI and digital tools to improve citizens’ access to information, including for people with disabilities.