With the recently passed Artificial Intelligence Act, the European Union is set to bring into force some of the strictest AI regulations in the world.
Potentially harmful AI applications have been designated “unacceptable” and will be illegal except for use by government, law enforcement, and scientific research under specific conditions.
As was the case with the EU’s General Data Protection Regulation, this new legislation will add obligations to anyone carrying out business activities in the 27 member states, not just companies based there.
Those responsible for writing it said the aim was to protect the rights and freedoms of citizens while promoting innovation and entrepreneurship. But the law’s roughly 460 published pages contain much more than that.
If you run a business that operates in Europe or sells to European consumers, there are some important things you need to know. Here are what I think are the key takeaways for anyone looking to prepare for potentially significant changes.
When does it come into force?
The Artificial Intelligence Act was adopted by the European Parliament on March 13 and is expected to become law once it is formally adopted by the European Council. It will take up to 24 months for all of its measures to be implemented, but enforcement of some aspects, such as the newly banned practices, could begin in as little as six months.
As was the case with the GDPR, this phase-in period is designed to give companies time to ensure their compliance. After that, they face significant sanctions for non-compliance. The penalties are tiered, with the most severe reserved for those who violate the ban on “unacceptable” uses: fines of up to 35 million euros or 7% of global annual turnover, whichever is higher.
However, the damage to a company’s reputation if it is found to be in breach of the new law could be even greater. Trust is everything in the world of AI, and companies that show they can’t be trusted risk being punished even more harshly by consumers.
Certain uses of AI will be prohibited
The law states that “AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.”
To do this, the EU has banned the use of AI for potentially dangerous purposes, including:
- Using AI to influence or change behavior in harmful ways.
- Biometric classification to infer political and religious beliefs or sexual preferences or orientations.
- Social rating systems that could lead to discrimination.
- Remote biometric identification of people in public places (for example, facial recognition systems).
There are certain exemptions. The law lists situations in which law enforcement can deploy otherwise “unacceptable” AI, including to prevent terrorism and to locate missing persons, and there are also exemptions for scientific research.
Given the law’s stated aim of serving people and increasing human well-being, it is good to see that limiting the harm AI can cause has been placed at the heart of the new rules.
However, some of the terms are ambiguous and open to interpretation. Could using AI to target marketing of products such as fast food and high-sugar soft drinks count as influencing behavior in harmful ways? And how do we determine whether a social rating system will lead to discrimination in a world where we are already accustomed to being verified and rated by a multitude of government and private agencies?
This is an area where we will have to wait for further guidance, and for the law to be enforced in practice, before we understand the full implications.
High-risk AI
In addition to uses deemed unacceptable, the law divides AI tools into three other categories: high, limited and minimal risk.
High-risk AI includes use cases such as self-driving cars and medical applications. Companies operating in these or similarly risky areas will face stricter rules as well as more demanding data quality and data protection obligations.
Limited and minimal risk use cases could include applications of AI solely for entertainment purposes, such as in video games, or in creative processes such as generating text, video or sound.
There will be fewer requirements here, although there will still be expectations for transparency and ethical use of intellectual property.
Transparency
The law makes it clear that AI must be as transparent as possible. Again, there is some ambiguity here, at least to a non-lawyer like me. Provisions exist, for example, for cases where it is necessary to “protect trade secrets and confidential business information”, but it’s not yet clear how these will be interpreted when cases start coming before the courts.
The law covers transparency in two ways. First, it decrees that AI-generated images must be clearly marked to limit the harm that can be caused by deception, deepfakes and misinformation.
It also covers the models themselves in a way that seems particularly aimed at big tech AI vendors like Google, Microsoft, and OpenAI. Again, the requirements depend on risk, with developers of high-risk systems required to provide detailed information about what their systems do, how they work and what data they use. There are also provisions for human oversight and accountability.
Requiring AI-generated images to be marked as such seems like a good idea in theory, but it could be difficult to enforce, as criminals and those spreading disinformation are unlikely to comply. On the other hand, it could help establish a framework of trust, which will be essential to enable effective use of AI.
When it comes to big tech, I think it will probably come down to the question of what companies are willing to disclose. If regulators accept the likely objection that documentation of algorithms, model weights, and data sources constitutes confidential business information, then these provisions could prove largely ineffective.
It’s important to note, however, that even small companies that build bespoke systems for niche industries and markets could, in theory, be affected by these provisions. Unlike the tech giants, they may not have the legal muscle to argue their case in court, putting them at a disadvantage when it comes to innovation. Care must be taken to ensure this does not become an unintended consequence of the act.
What does this mean for the future of AI regulation?
First, it shows that politicians are starting to take steps to address the enormous regulatory challenges posed by AI. While I am generally positive about the impact I expect AI to have on our lives, we cannot ignore that it also has enormous potential to cause harm, either deliberately or accidentally. Any political will to resolve this problem is therefore a good thing.
But writing and publishing laws is the relatively easy part. The real effort lies in putting the regulatory, enforcement and cultural frameworks in place to support the change.
The EU AI Act is the first of its kind, but it is widely expected to be followed by new regulations around the world, including in the United States and China.
This means it is essential that business leaders, wherever they are in the world, take steps to ensure they are prepared for the changes ahead.
Two key takeaways from the EU AI Act are that every organization will need to understand where its own tools and applications fall on the risk scale, and that it will need to take steps to ensure its AI operations are as transparent as possible.
Additionally, there is a real need to stay informed about the ever-changing AI regulatory landscape. The relatively slow pace at which legislation moves means that, if you do, you should not be taken by surprise.
But above all, I believe the key message is the importance of building a positive culture around ethical AI. Ensuring your data is clean and unbiased, your algorithms are explainable, and any potential risks of harm are clearly identified and mitigated is the best way to ensure you are prepared for any legislation that may emerge in the future.