This article is by Si West, Director of Customer Engagement at Resilience. It looks at how AI is regulated by the latest EU legislation.
The European AI Act came into force on 1 August 2024 with the goal of building trust in artificial intelligence. It is the first comprehensive AI regulation from a major authority and the first attempt to define AI in legislation, so it has the potential to become a global benchmark for balancing AI security and transparency with innovation. It will have significant implications for how companies around the world manage cybersecurity, allocate resources, and balance the role of AI with that of humans, and it will push companies to focus even more on their cyber resilience.
AI, a double-edged sword for cybersecurity
AI and machine learning have the potential to transform cybersecurity, both positively and negatively. An April 2024 report from the National Cyber Security Centre found that AI lowers the barrier to entry for new cybercriminals. Threat actors can use AI to increase the effectiveness of malicious operations, including reconnaissance, phishing, and coding. AI can also aid the development of malware, helping it evade detection by current security filters.
On the other hand, AI also plays a crucial role in combating these threats. Security controls such as anomaly detection, fraud detection, and behavioral analytics all use AI to identify malicious activity and assess an organization’s risk exposure. This helps businesses monitor, analyze, and respond to cyber threats in real time. By processing vast amounts of data, businesses can manage cyber risks more proactively.
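To make this concrete, here is a minimal sketch of how an AI-driven anomaly detector might flag unusual network sessions. It uses scikit-learn's IsolationForest; the traffic features, parameter values, and thresholds are illustrative assumptions, not a production design.

```python
# Minimal anomaly-detection sketch: flag unusual network sessions with an
# Isolation Forest. Features and parameters are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic "normal" traffic: bytes sent, session duration (s), failed logins.
normal = rng.normal(loc=[5_000, 120, 0.2],
                    scale=[1_500, 40, 0.5],
                    size=(1_000, 3))

# A few suspicious sessions: huge transfers, odd durations, many failed logins.
suspicious = np.array([[90_000, 900, 12],
                       [60_000, 30, 25]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns 1 for inliers and -1 for anomalies.
for session in suspicious:
    label = model.predict(session.reshape(1, -1))[0]
    print(session, "ANOMALY" if label == -1 else "ok")
```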
Importance of human intervention in AI tools
To address transparency and accountability concerns, the Act requires that users be informed when they interact with AI systems, and that organizations be able to monitor AI decision-making processes and intervene when necessary.
A human intervention model is essential as AI plays a growing role in security measures. Security controls such as cyber risk modeling and simulations already rely on AI, but human involvement is needed to actively manage cyber threats. These controls must be continuously monitored and updated to keep up with evolving threats, and human analysts are essential to closing the feedback loop.
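As a simple sketch of what such a feedback loop might look like in code (all names and thresholds here are hypothetical), low-confidence AI verdicts are escalated to a human analyst, whose labels are collected for retraining:

```python
# Human-in-the-loop sketch: route low-confidence AI verdicts to an analyst
# and record their labels for retraining. All names are illustrative.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # below this, a human must review

@dataclass
class Alert:
    event_id: str
    ai_verdict: str      # e.g. "malicious" or "benign"
    confidence: float

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    feedback: list = field(default_factory=list)  # (event_id, human_label)

    def triage(self, alert: Alert) -> str:
        if alert.confidence >= CONFIDENCE_THRESHOLD:
            return alert.ai_verdict          # confident enough to auto-handle
        self.pending.append(alert)           # escalate to a human analyst
        return "needs_review"

    def record_human_label(self, alert: Alert, label: str) -> None:
        # Analyst labels feed the retraining pipeline, closing the loop.
        self.feedback.append((alert.event_id, label))

queue = ReviewQueue()
print(queue.triage(Alert("evt-1", "malicious", 0.97)))  # -> malicious
print(queue.triage(Alert("evt-2", "benign", 0.55)))     # -> needs_review
```

The point of this design is that the system never silently acts on uncertain verdicts: humans both make the final call and generate the labels that improve the next model version.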
The Act therefore puts more power in human hands: it encourages employee training to improve understanding and management of AI systems, and it ensures companies can intervene quickly to prevent harm.
A risk-based approach
The European AI Act classifies AI applications by level of risk, from minimal to unacceptable, with high-risk AI systems subject to stricter requirements. This helps minimize the harmful consequences of AI, particularly in sectors such as healthcare, where errors can have serious consequences. High-risk AI also includes systems used in financial services, critical infrastructure, and employment.
However, qualitative, tiered categories alone cannot capture the nuanced impact of AI risk on a business. Translating risk into quantitative, financial terms lets leaders take more concrete steps to manage it and better understand how the business is affected.
Quantifying cyber risk is critical for organizations to adopt a comprehensive incident response strategy. The Resilience solution, for example, uses integrated breach-and-attack simulations and modeling to translate cyber risk into business value, enabling financial leaders to make better investment decisions on security controls and insurance coverage. This allows organizations to manage risk more effectively and build cyber resilience.
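As a generic illustration of what risk quantification can look like (this is not Resilience's actual model), a FAIR-style Monte Carlo simulation converts assumed breach frequency and loss magnitude into financial figures; the frequency and loss parameters below are assumptions chosen for the example.

```python
# Generic cyber-risk quantification sketch: Monte Carlo estimate of
# annualized loss. Frequency and magnitude parameters are assumptions.
import numpy as np

rng = np.random.default_rng(seed=7)
N_YEARS = 100_000  # number of simulated years

# Assumption: ~0.3 breaches/year on average (Poisson), each costing a
# lognormally distributed amount with a median around $500k.
events_per_year = rng.poisson(lam=0.3, size=N_YEARS)
annual_losses = np.array([
    rng.lognormal(mean=np.log(500_000), sigma=1.0, size=n).sum()
    for n in events_per_year
])

print(f"Expected annual loss:  ${annual_losses.mean():,.0f}")
print(f"95th-percentile year:  ${np.percentile(annual_losses, 95):,.0f}")
```

Outputs such as the expected annual loss and the 95th-percentile loss year can be weighed directly against a security budget or an insurance limit, which is exactly the kind of decision the qualitative risk tiers alone cannot support.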
Challenges businesses may face in staying compliant
Companies must ensure that their AI systems comply with the regulatory standards set by the European AI Act, including transparency, safety, fairness, and accountability. Compliance will likely bring additional costs, including investments in technology and documentation capabilities, and potentially higher insurance premiums.
Small and medium-sized enterprises with limited financial resources are particularly vulnerable, as their businesses typically focus on growth rather than on building strong cyber resilience. They may perceive cyber risk management as an additional burden and struggle to allocate resources effectively.
In addition, keeping pace with evolving AI technology is costly in both time and resources. Tailored solutions such as Resilience's can offer a practical approach to quantifying cyber risk and help determine where to invest. Companies that work closely with insurers can also better understand AI risks and the best practices for mitigating them in line with the European AI Act.
The Act must strike a balance between regulation and innovation, preserving a business-friendly environment while putting essential ethical and security measures in place. This will help maintain the EU's global competitiveness in AI while strengthening its role in cyber resilience. And because the Act is likely to set a global framework for AI governance, businesses around the world will need to improve their risk management to meet new regulatory standards and embrace cyber resilience.