As discussed in previous articles, AI regulation and security have been high on governments' agendas, debated in industry forums, and at the heart of the recent boardroom and governance fiasco at OpenAI.
European law on AI: The timer for adoption is ticking
After numerous legal, economic, political and technical debates, as well as lobbying from public and private actors, the European Parliament approved the European law on AI on March 13, 2024. The text is now being published in the Official Journal of the EU, expected between May and July; it comes into force 20 days later, kicking off the 12-24 month period during which different provisions become enforceable (such as those on general-purpose AI, covering services like ChatGPT).
Key provisions and compliance requirements: A risk-based approach
Billed as the first comprehensive AI law globally, it will impose strict requirements on the entire AI ecosystem of suppliers, users, manufacturers and distributors of AI systems on the EU market. The law follows other major EU digital legislation, such as the GDPR, the Digital Services Act (DSA), the Digital Markets Act, the Data Act and the Cyber Resilience Act.
In a nutshell, the law introduces a risk-based approach, categorizing systems into risk levels with specific compliance requirements for each. The prohibited category includes practices such as social scoring, exploitation of vulnerable people, behavioral manipulation, and facial recognition systems in public spaces for law enforcement (with exceptions).
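To make the categorization concrete, here is a minimal sketch in Python of how an organization might encode the risk tiers and their headline obligations. The tier names and obligation summaries are paraphrases for illustration, not legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers of the EU AI Act (paraphrased, not legal text)."""
    PROHIBITED = "prohibited"      # e.g. social scoring, exploiting vulnerable people
    HIGH_RISK = "high_risk"        # e.g. hiring, credit scoring, critical infrastructure
    LIMITED_RISK = "limited_risk"  # e.g. chatbots: transparency obligations
    MINIMAL_RISK = "minimal_risk"  # e.g. spam filters: largely unregulated

# Headline obligation per tier (illustrative summary only)
OBLIGATIONS = {
    RiskTier.PROHIBITED: "Banned from the EU market (narrow exceptions, e.g. law enforcement).",
    RiskTier.HIGH_RISK: "Conformity assessment, risk management, logging, human oversight.",
    RiskTier.LIMITED_RISK: "Transparency: users must know they are interacting with AI.",
    RiskTier.MINIMAL_RISK: "No new obligations; voluntary codes of conduct.",
}

for tier in RiskTier:
    print(f"{tier.value}: {OBLIGATIONS[tier]}")
```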
Implications for businesses: Your AI journey can become very expensive
The law specifically defines requirements for general-purpose AI (foundation models) that pose systemic risks (those trained with more than 10^25 FLOPS of compute, e.g. GPT-4) in terms of transparency on technical data and training, safeguards against generating illegal content, energy consumption and more. The law also provides exceptions for research and proposes regulatory sandboxes that let SMEs and innovative companies develop and test in real-world conditions before bringing solutions to market, enabling safe innovation.
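For a rough sense of when the 10^25 FLOPS systemic-risk threshold bites, here is a back-of-the-envelope sketch using the common "~6 FLOPs per parameter per training token" approximation for dense transformer training compute. The model sizes and token counts below are illustrative assumptions, not disclosed figures for any real model:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer:
    ~6 FLOPs per parameter per training token (common approximation)."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # EU AI Act compute threshold for systemic risk

for label, params, tokens in [
    ("100B params / 10T tokens", 100e9, 10e12),   # 6.0e24 -> below threshold
    ("200B params / 12T tokens", 200e9, 12e12),   # 1.4e25 -> above threshold
]:
    flops = training_flops(params, tokens)
    print(f"{label}: {flops:.1e} FLOPs -> systemic risk? {flops > SYSTEMIC_RISK_THRESHOLD}")
```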
Penalties for non-compliance would amount to 7% of global turnover (or €35 million, whichever is greater) for banned systems, and 3% (or €15 million, whichever is greater) for high-risk AI systems, with fines also for providing incorrect or misleading information to authorities. Compliance will be enforced by national authorities designated by each EU member state, together with a centralized European AI Office providing oversight.
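The fine structure is a simple "greater of" rule; a minimal sketch of the computation, with the amounts as summarized above:

```python
def max_fine(global_turnover_eur: float, violation: str) -> float:
    """Maximum administrative fine under the EU AI Act:
    the greater of a flat amount and a share of worldwide annual turnover."""
    tiers = {
        "prohibited_system": (0.07, 35_000_000),      # 7% or EUR 35M
        "high_risk_obligations": (0.03, 15_000_000),  # 3% or EUR 15M
    }
    pct, floor = tiers[violation]
    return max(pct * global_turnover_eur, floor)

# Example: a company with EUR 2B worldwide turnover deploying a banned system
print(f"EUR {max_fine(2e9, 'prohibited_system'):,.0f}")  # EUR 140,000,000
```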
Prepare your organization: Strategies for compliance and adaptation
The act is not without criticism: it is unclear on specific definitions and its approach to categorizing systems, creating ambiguity about what is subject to compliance; it adds compliance costs, burdens, and high liability risks; and it attempts to regulate a technology that is nascent, rapidly evolving and subject to change, sparking concerns about slowing innovation and siphoning investment away from the EU.
To prepare your organization, we recommend considering the following:
- Organize awareness sessions with leaders and teams involved in AI-based services, covering all aspects of the EU AI Act (legal, commercial, technical, operational, compliance), and formulate or update your AI strategy.
- Assess and categorize your AI solutions/services/products and vendors, and create an initial view of your posture, risk areas, likelihood, and mitigation strategies (see the inventory sketch after this list).
- Streamline your AI risk classification posture, ensure enterprise-wide AI policies and governance are in place, and consider adopting a dashboard with metrics for accountability, transparency and compliance.
- Adopt a “Know Your Model” policy to evaluate, create, or request model cards (or similar documentation) describing how models were trained, refined, and are expected to perform (a skeleton follows this list). Track the Transparency Index of popular models/services and challenge providers and partners on their EU AI Act compliance plans and actions.
- Align your AI development best practices to be at least equivalent to those in regulatory sandboxes, including trusted AI practices, reinforcement learning from human feedback (RLHF), MLOps, and red teaming, to ensure compliance, real-world testing, and consistent monitoring of services for unexpected behavior.
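For the assessment step above, a minimal sketch of an AI system inventory. All system names, vendors and owners are hypothetical; how you actually map a system to a risk tier must follow the act's annexes and your legal assessment, not this illustration:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in an AI system inventory (illustrative fields)."""
    name: str
    vendor: str
    use_case: str
    risk_tier: str          # e.g. "high_risk", per your legal assessment
    mitigation_owner: str   # who is accountable for compliance actions

inventory = [
    AISystemRecord("cv-screening", "AcmeAI", "HR candidate ranking",
                   "high_risk", "HR + Legal"),
    AISystemRecord("support-bot", "in-house", "customer chat assistant",
                   "limited_risk", "CX team"),
]

# A first posture view: how many systems sit in each risk tier
print(Counter(record.risk_tier for record in inventory))
```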
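And for the “Know Your Model” policy, a minimal model-card skeleton. The field names are assumptions loosely following common model-card practice; adapt them to the documentation your providers supply and to what the act's transparency requirements end up demanding:

```python
# Hypothetical model card for a vendor-supplied model (all values illustrative)
model_card = {
    "model": "vendor-llm-v2",
    "provider": "ExampleVendor",
    "training_data_summary": "Public web corpus plus licensed datasets (per vendor)",
    "fine_tuning": "RLHF on human preference data",
    "intended_use": "Customer support drafting",
    "known_limitations": ["hallucinations", "English-only evaluation"],
    "evaluations": {"toxicity": "internal benchmark, 2024-02"},
    "eu_ai_act_tier": "general_purpose",  # your assessed classification
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```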
We are living in an era of innovation and rapid adoption of AI, with leading companies competing aggressively and introducing services early and often, while governments worry about risks to people caused by bad actors and negligence, as well as by the immaturity of the technology. If you feel your organization is walking a razor's edge and needs a life preserver to cross quickly and safely, extend your hand.