The European law on AI is still not set in stone and the European elections could shake up the situation. For now, the tech industry fears the law will stifle competition.
Agreed after a 36-hour marathon of negotiations this month, the European AI law is being hailed as historic, but reactions from the continent’s tech sector, rights groups and politicians have been mixed.
The European Union last Friday agreed a world-first set of rules to regulate artificial intelligence (AI), but the details of the legislation are still being finalised before the text is set in stone.
The rules classify AI applications into four risk levels and impose the strictest rules on high-risk and prohibited AI.
One of the friction points in the negotiations was how foundation models, the technology behind OpenAI’s ChatGPT, would be regulated.
“Never a good idea”
France and Germany have warned against excessive regulation as they want to protect their champion AI start-ups.
“We can decide to regulate much faster and much stronger than our main competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea,” said French President Emmanuel Macron on Monday, December 11.
“When I look at France, it is probably the leading country in terms of artificial intelligence in continental Europe. We are neck and neck with the British. They will not have this regulation on foundation models. But above all, we are all very far behind the Chinese and the Americans,” he added, referring to Mistral, the French AI start-up.
The EU plans to regulate foundation models by requiring developers to provide documentation covering training methods and data. The models will also be regulated by giving users the right to file complaints and by prohibiting discrimination.
Companies that fail to comply with these rules face fines of up to 35 million euros or 7 per cent of global turnover. Some say this goes too far.
“Potentially disastrous consequences”
The Computer & Communications Industry Association said the text was a significant departure from the “reasonable risk-based approach” proposed by the Commission, which prioritized innovation over overly prescriptive regulation.
The organisation said the law imposes “strict obligations” on developers of cutting-edge technologies that underpin many downstream systems and is therefore likely to hamper innovation in Europe. This could lead to an exodus of AI talent, it warns.
“Unfortunately, speed seems to have prevailed over quality, with potentially disastrous consequences for the European economy. The negative impact could be felt well beyond the AI sector alone,” said Daniel Friedlaender, senior vice-president and head of CCIA Europe.
“Do not support the European champions”
France Digitale, an independent organisation that represents European start-ups and investors, said high-risk AI will need to obtain CE marking, a lengthy and costly process that could harm start-ups.
But the group welcomed the fact that start-ups operating in high-risk sectors can contest this classification by demonstrating that their AI does not present a high risk and should be reclassified.
As for generative AI and foundation models, France Digitale said the regulations are “very strict” and could also harm companies because they will have to disclose their private business models, which other companies could then copy.
“We called for regulating not the technology as such, but the uses of the technology. The solution adopted today by Europe amounts to regulating mathematics, which does not make much sense,” the group said.
France Digitale also warned that the Commission could add additional criteria through delegated acts, which could prove risky for start-ups which “need visibility and predictability to develop their economic model”.
“We cannot change the rules of the game at any time,” the group said.
Embracing copyright rules
Most AI models are trained on material found online, which has led to a series of copyright lawsuits against AI companies by artists and the companies that represent them.
The law has strict copyright rules, which include compliance with current European copyright law. Companies must also make public a summary of the content they use to train general-purpose AI models.
This transparency requirement, and the policy of respecting current EU rules, has been welcomed by the European Grouping of Societies of Authors and Composers (GESAC), which represents 32 European authors’ societies and more than a million authors.
“Strong implementation allowing rights holders to properly exercise their rights under European law is crucial to ensure that the agreed principles have a real impact in practice,” said Véronique Desbrosse, the group’s director general.
“Authors’ societies are eager to embrace this new market and generate value for creators and businesses alike, while contributing to both innovation and creation in Europe.”
Cybersecurity and facial recognition
The EU AI law places strict restrictions on facial recognition technology and other biometric systems, with limited exceptions for law enforcement.
Restrictions on the technology used for facial recognition have been welcomed, as have data protection rules.
Although the law contains no specific data protection provisions, it is designed to work alongside the GDPR, the EU’s data protection regulation.
However, cybersecurity sector head Valmiki Mukherjee told Euronews Next that the law could face similar challenges to GDPR.
“Applying the law to general-purpose AI systems without restricting their use by labeling them all as high risk could be a challenge,” he said.
“There is also a potential problem with creating a large international surveillance system to prevent surveillance-based AI. It is unclear how this will work with cybersecurity standards that are still being developed.”
“Powerful technology that stands the test of time”
While the initial draft text is still being finalised, a process which some commentators believe could continue until January 2024 or even beyond, there is another time pressure: the European Parliament elections in June, which could shake up the points that still must be agreed.
“There does not appear to be enough time before the parliamentary elections to move the AI Liability Directive through the legislative process, which will therefore need to be taken up by the new Parliament and the new Commission it will appoint,” said Benjamin Docquir, head of IT and data at international law firm Osborne Clarke.
The new European Parliament may also need to decide on legislation on AI in the workplace.
Another open question is the regulation of open-source AI software, which allows computer code to be freely copied and reused, giving anyone permission to build their own chatbot.
OpenAI and Google have warned that open source software can be dangerous because the technology can be used to spread misinformation.
Given that AI technology is developing rapidly and the EU AI law is unlikely to be enforced by EU members for another two years, the regulation may already be outdated despite efforts to make it flexible.
“As for what might change in AI law, lawmakers have worked to make it flexible, but the emergence of generative AI has demonstrated the difficulty of regulating such a powerful technology in a way that stands the test of time,” Docquir said.