The landmark European AI Act includes a complex web of sections, definitions, guidelines and headings, making it difficult to navigate. But understanding the AI Act is essential for organizations looking to innovate with AI while complying with legal and ethical standards.
Arnoud Engelfriet is the Chief Knowledge Officer at ICTRecht, an Amsterdam-based law firm specializing in the fields of IT, privacy, security, algorithms and data law. In his role as Director of the ICTRecht Academy, Engelfriet is responsible for disseminating and deepening knowledge related to AI legislation.
In his book AI and Algorithms: Mastering Legal and Ethical Compliance, published by Technics Publications, Engelfriet explores AI legislation – the AI Act included – as part of a broader conversation around the ethical development, management and use of AI.
The introduction of new AI guidelines often raises concerns: will the legislation stifle creativity? Do teams have the skills to ensure compliance? To answer these questions, organizations need to understand current and upcoming legislation so they can build and deploy more reliable AI systems.
Compliance and innovation
As of August 2024, the highly anticipated AI Act is in force. With staggered implementation dates ranging from six months to more than a year, organizations still have time to understand exactly what compliance with the law entails.
A common concern among businesses is that legislation could stifle creativity, especially given the rapid pace of AI development.
“Compliance and innovation have always been somewhat at odds,” Engelfriet said.
He stressed, however, that the AI Act's tiered approach and flexibility leave room for organizations to tailor compliance requirements in some cases. "We don't see the AI Act as something that's going to kill or cancel all kinds of AI innovation," he said.
For example, the AI Act's regulatory sandbox provisions give organizations a space to build and test new AI systems safely, away from the market and end users. The key requirement is that the technology being tested is not yet in production.
“It will be slower than before, but at the same time it will be a little safer for your customers, for the environment,” he said.
Ensuring trustworthy AI
The AI Act, like many AI guidelines designed for consumer safety, aims to make AI more trustworthy. But what does “trustworthy AI” actually mean?
The term gained prominence in 2019, when it was featured in the European Commission's Ethics Guidelines for Trustworthy AI. While the exact definition remains somewhat ambiguous, the guidelines outline three main characteristics of trustworthy AI, Engelfriet said: it must be legal, technically robust and ethical.
But, Engelfriet emphasized, trust ultimately lies in the humans behind an AI system, not the technology itself. "You can't really trust a machine," he said. "You can only trust the designers and the operators."
The AI Act addresses the legal aspect by consolidating laws and guidelines in one place. It covers technical robustness – defined as an AI system's ability to operate reliably within its intended use – by requiring transparency about what the system is designed to do, such as make automated decisions or function as a chatbot, and by requiring that it consistently performs as intended from a technical point of view.
Ethics, the final aspect of trustworthy AI, has received increasing attention since the rise of generative AI at the end of 2022. A 2023 study analyzed more than 200 AI ethics guidelines, highlighting the field's fragmented approach. Ethics guidelines aim to mitigate the many risks associated with AI, from data protection – often linked to GDPR compliance – to bias prevention and safety concerns. Ethical compliance means ensuring that AI systems do not perpetuate bias or cause physical harm, Engelfriet said.
The Assessment List for Trustworthy Artificial Intelligence, developed by the European Commission's High-Level Expert Group on AI, provides a practical framework for putting ethical AI guidelines into practice. While the framework is generic enough to apply across sectors, Engelfriet cautioned that it will likely need to be adapted to specific organizational needs.
The AI compliance officer
With multiple iterations of legislation, complex regulatory requirements and a vast amount of information to consider, it's easy for compliance teams to feel overwhelmed by AI initiatives. To meet the growing need for multifaceted compliance, AI compliance officers can help organizations build AI systems or integrate AI into their workflows, Engelfriet said.
"We're seeing a lot of people struggling… people who are working on earlier versions of the law, for example," Engelfriet said. Businesses may also struggle to understand the fine print or decipher where their organization and its use of AI fits into the AI Act's risk-based hierarchy.
To that end, ICTRecht has created a companion course to AI and Algorithms, designed to teach employees how to integrate AI compliance into their business. The course is accessible to everyone; no prior knowledge of AI compliance, the AI Act or AI in general is required. Engelfriet encounters people from a variety of professional roles in the classroom, with many course participants coming from data, privacy and risk functions.
The common thread? “They want to expand their business into AI compliance, which is a good thing, because… AI compliance goes beyond data protection,” Engelfriet said.
Overall, the AI Act sets the tone for future AI regulations, Engelfriet said. As with GDPR, the first laws of their kind are often the most influential. Companies would therefore do well to approach EU AI Act compliance proactively and comprehensively.
Click here for an excerpt from chapter 2 of AI and Algorithms, which covers the main parts of the EU AI Act, including important definitions, risk levels and related legislation.
Olivia Wisbey is an associate editor at TechTarget Enterprise AI. She holds a bachelor’s degree in English literature and political science from Colgate University, where she served as a writing consultant in the university’s Writing and Speaking Center.