On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act (AI Act), establishing the world’s first extensive legal framework dedicated to artificial intelligence. This imposes EU-wide regulations that emphasize data quality, transparency, human oversight and accountability. With potential fines of up to €35 million, or 7% of global annual turnover, the law has profound implications for a wide range of businesses operating within the EU.
The AI Act classifies AI systems based on the risk they pose, with strict compliance required for high-risk categories. This regulatory framework prohibits certain AI practices deemed unacceptable and carefully defines the obligations of entities involved at all stages of the AI system lifecycle, including suppliers, importers, distributors and users.
For cybersecurity teams and organizational leaders, the AI Act marks a vital transition phase requiring immediate and strategic action to align with new compliance standards. Here are several key areas of focus for organizations:
1. Conduct in-depth audits of AI systems
Periodic audits are required under EU AI law, requiring organizations to regularly verify that AI software providers and the organizations themselves maintain a robust quality management system. This involves carrying out detailed audits to map and categorize AI systems according to risk categories specified by law.
These external audits examine the technical elements of AI implementations and examine the contexts in which these technologies are used. This includes practices surrounding data management to ensure standards are met for high-risk categories. The audit process includes providing a report to the AI software provider and may involve additional testing of AI systems that have been certified under the Union Technical Documentation Assessment . The more specific scope of these audits remains to be clarified.
It is essential to recognize that generative AI, an integral part of the supply chain, shares similar security vulnerabilities with other web applications. For these AI security risks, organizations can turn to established open source resources. OWASP CycloneDX provides a comprehensive bill of materials (BOM) standard, enhancing capabilities for managing AI-related cyber risks within supply chains.
Current frameworks such as OVAL, STIX, CVE and CWE, designed to classify vulnerabilities and disseminate threat intelligence, are being enhanced to improve their relevance for emerging technologies such as large language models (LLM) and predictive models.
As these improvements progress, it is expected that organizations will also use these well-established and known systems for AI models. Specifically, CVE will be used for vulnerability identification, while STIX will play a crucial role in distributing cyber threat intelligence, contributing to the effective management of risks associated with AI/ML security audits.
2. Invest in AI mastery and ethical AI practices
Understanding the capabilities of AI and its ethical implications is crucial for all levels of an organization, including users of these software solutions.
According to Tania Duarte and Ismaël Kherroubi Garcia of the Joseph Rowntree Foundation, ethical AI practices should be encouraged to guide the development and use of AI in ways that respect societal values and legal standards and “The lack of a concerted effort to improve AI knowledge in the UK means that public debates about AI often do not start from pragmatic, fact-based assessments of these technologies and their capabilities.“.
3. Establish strong governance frameworks
Organizations must develop strong governance frameworks to proactively manage AI risks. These frameworks should include policies and procedures that ensure continued compliance and adapt to changing regulatory landscapes. Governance mechanisms should facilitate risk assessment and management, but also incorporate transparency and accountability, essential to maintaining public and regulator trust.
OWASP Software Component Verification Standard (SCVS) supports a community-led initiative to define a framework that includes identifying activities, controls, and best practices needed to mitigate risks associated with AI software supply chains. This could be a starting point for anyone looking to develop or improve their AI governance framework.
4. Adopt best practices in AI security and ethics
Cybersecurity teams must be at the forefront of adopting best practices in AI security and ethics. This involves securing AI systems against potential threats and ensuring that ethical considerations are integrated throughout the AI lifecycle. Best practices should be informed by industry standards and regulatory guidelines, tailored to an organization’s specific contexts.
THE OWASP Top 10 for LLMs (AI workload) aims to inform developers, designers, architects, managers and organizations about potential security risks when deploying and managing large language models. The project provides a list of the 10 most critical vulnerabilities often seen in LLM applications, highlighting their potential impact, ease of exploitation, and prevalence in real-world applications.
5. Engage in dialogue with regulators
To promote understanding and effective implementation of AI law, organizations should engage in ongoing dialogue with regulators. Participation in industry consortia and regulatory discussions can help organizations stay abreast of interpretive guidance and evolving expectations, while also contributing to the development of practical regulatory approaches.
If you are still unsure how the upcoming regulations will affect your organization, the official EU AI Act website has provided a compliance checker to determine whether or not your AI system will be subject to regulatory standards.
The EU AI Law is a transformative piece of legislation that sets a global benchmark for AI regulation. For cybersecurity teams and organizational leaders, this presents both challenges and opportunities to pioneer the areas of AI security and compliance. By adopting a culture of transparency, accountability and proactive risk management, organizations can not only comply with AI law, but also lead by example in the responsible use of AI technologies, fostering thus a trustworthy AI ecosystem.
Image credit: Tanaonte / Dreamstime.com
Nigel Douglas is Senior Developer Advocate, Open Source Strategy, Sysdig.