European institutions have reached an agreement on the regulation of artificial intelligence (AI), including generative AI. Celebrated as a historic milestone of great importance for European society and the economy, it is an unprecedented step towards the development of secure and reliable artificial intelligence by all actors, public and private. “This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected against high-risk AI, while boosting innovation and making Europe a leader in this area”, declared the European Parliament.
At the same time, agreements are also being reached to advance the governance of artificial intelligence, whether through principles and codes of conduct such as the Hiroshima Process, the Biden Executive Order, the updated OECD AI Principles, the UNESCO Recommendation on the Ethics of AI, the Bletchley Declaration, or the Council of Europe AI Convention, which is expected to be unveiled in 2024. All of these initiatives aim to mitigate the risks inherent in the design and development of AI, including generative AI.
What does the regulation include?
The AI Act is based on a risk-based approach: the higher the risk of the AI system, the stricter the obligations. Some use cases are prohibited outright. For systems classified as high risk and predefined as such in the law, certain obligations, called “minimum requirements”, must be met both before the system is placed on the market and once it is marketed. For other AI systems not classified as high risk, voluntary measures apply. The agreement reached does not appear to apply to AI systems provided under free and open-source licenses, unless they are high-risk AI systems or fall under other exceptions. Likewise, the Regulation does not apply to AI models made available to the public under a free and open-source license whose parameters are made public, except for certain obligations.
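The tiered logic above can be sketched as a toy mapping. This is my own illustrative framing of the categories described in the text, not the Act’s wording or an official taxonomy:

```python
# Illustrative sketch of the risk-based approach: obligations scale with
# the risk tier. Tier names and descriptions are informal shorthand for
# the categories described above, not official terminology.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned use cases
    HIGH = "high"               # predefined high-risk systems
    OTHER = "other"             # everything else

OBLIGATIONS = {
    RiskTier.PROHIBITED: "may not be placed on the market",
    RiskTier.HIGH: "must meet minimum requirements before and after market placement",
    RiskTier.OTHER: "voluntary measures apply",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the (simplified) obligation attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```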
Different types of actors stand out in the value chain: providers of AI systems, importers and distributors, and deployers (users), each with different responsibilities. The heaviest obligations fall on AI providers.
One of the most significant developments since the adoption of the European Commission’s proposal has been the emergence of foundation models and general-purpose AI systems capable of generating new content (e.g. GPT), which caused great social concern. The AI Act finally includes regulation of general-purpose AI (GPAI), which led to heated debate over whether the technology should be regulated regardless of its risk. In the end, the compromise reached was to regulate GPAI models and systems, but in principle to impose transparency and cooperation obligations on the providers of these systems and models, so that downstream users have sufficient information to comply with regulatory requirements.
General-purpose AI models are classified as posing systemic risk if they have high-impact capabilities, or if the European AI Office decides, on its own initiative or following a qualified alert from a scientific panel, that the model has equivalent capabilities or impact. A model is presumed to have high-impact capabilities if, among other things, the cumulative amount of computation used to train it (measured in FLOPs) is greater than 10^25, or if it is used by a certain number of customers. The European Commission will have the power to adjust these thresholds and to add indicators and benchmarks in light of technological developments.
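The compute presumption is, in effect, a simple threshold test. A minimal sketch, assuming only the 10^25 FLOPs figure stated above (the helper function is hypothetical, not an official tool, and the threshold may be adjusted by the Commission):

```python
# Sketch of the AI Act's presumption of "high impact capabilities" when
# cumulative training compute exceeds 10^25 FLOPs. The threshold is the
# one named in the text; the function itself is illustrative.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float,
                           threshold: float = SYSTEMIC_RISK_FLOP_THRESHOLD) -> bool:
    """True if the model is presumed to have high-impact capabilities."""
    return training_flops >= threshold

# A model trained with ~5e25 FLOPs falls under the presumption:
print(presumed_systemic_risk(5e25))   # True
print(presumed_systemic_risk(1e24))   # False
```

Note that crossing the threshold only triggers a presumption; the AI Office can also designate models below it on the basis of equivalent capabilities or impact.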
Generative AI providers must maintain technical documentation of the model, provide information to downstream providers that integrate the model, comply with EU copyright law, publish a detailed summary of the content used for training, and cooperate with the Commission and national authorities.
Providers of models with systemic risk are additionally required to carry out evaluations according to standardized protocols, assess and mitigate systemic risk at EU level, track and report serious incidents, conduct adversarial testing, and guarantee an adequate level of cybersecurity. The development of codes of practice at EU level is encouraged and facilitated to contribute to the proper implementation of the Regulation.
Prohibited Uses and Exemptions for AI
Another major debate concerned the expansion of prohibited uses, including emotion recognition in the workplace and in education, and the prediction of individual criminal behaviour. However, exceptions were made for specific situations, such as searching for crime victims.
In addition, fines for non-compliance have been relaxed: 7% of total turnover or €35 million (whichever is greater) for placing prohibited uses on the market; 3% or €15 million for breach of other obligations; and 1.5% or €7.5 million for providing incorrect information. SMEs will receive special treatment.
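The “percentage of turnover or fixed amount, whichever is greater” structure can be sketched as follows. The tier names and the helper function are illustrative, not official terminology; only the percentages and fixed amounts come from the figures above:

```python
# Illustrative sketch of the fine ceilings described in the text:
# the applicable maximum is the greater of a percentage of total
# turnover and a fixed euro amount. Tier keys are informal labels.

FINE_TIERS = {
    "prohibited_use":   (0.07,  35_000_000),   # 7% or EUR 35M
    "other_obligation": (0.03,  15_000_000),   # 3% or EUR 15M
    "incorrect_info":   (0.015,  7_500_000),   # 1.5% or EUR 7.5M
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine: the greater of the percentage or the fixed sum."""
    pct, fixed = FINE_TIERS[tier]
    return max(pct * annual_turnover_eur, fixed)

# A company with EUR 1bn turnover placing a prohibited system on the market:
print(max_fine("prohibited_use", 1_000_000_000))  # 70000000.0 (7% > EUR 35M)
```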
On the other hand, matters relating to national security, military and defence applications, AI systems used exclusively for research and innovation, and non-professional uses of AI are excluded from the Regulation and governed by specific rules.
Impact on AI innovation
Some consider the new EU Artificial Intelligence Regulation a case of overregulation that could stifle technological innovation, particularly as regards the regulatory treatment of generative AI. Many of these voices point out that there is no “Big Tech” in Europe, only in the United States, where (self-)regulation is left to the companies themselves, and that this could put European companies at a competitive disadvantage.
Others, however, argue that it is a myth that regulating AI is anti-innovation, and there are elements that support this view. First, the inclusion of regulatory sandboxes, real-world testing and open-source exemptions can facilitate innovation. Second, the Regulation provides a clear standard and a defined governance model, ensuring legal certainty, which is extremely important for businesses.
In this context, it remains crucial to promote innovation through public policies that encourage investment in innovative projects and ecosystems, foster a culture of entrepreneurship, and attract talent and research projects with a clear path to practical application.
It is not a question of finding a balance between innovation and regulation, but of responsible innovation from the design stage. The Regulation requires players involved in high-risk AI systems to assess potential negative impacts in advance rather than after market launch. It is far easier and cheaper to prevent or mitigate potential negative impacts up front, before major investments are made, than to “tear down and fix” after launch.