Since the advent of generative AI, its potential to amplify privacy and cybersecurity risks has become a major concern. As a result, government agencies and industry experts are hotly debating how to regulate the AI industry.
So where are we going, and how will the intersection of AI and cybersecurity play out? Judging by the lessons learned from previous efforts to regulate the cybersecurity market over the past few decades, regulating AI effectively is a daunting prospect. However, change is essential if we are to create a regulatory framework that protects against the negative potential of AI without blocking the positive uses it already offers.
Part of the challenge is that the existing compliance environment is already increasingly complex. For UK multinationals, for example, the work required to comply with regulations such as GDPR, PSN, DORA and NIS, to name a few, is significant. This does not include customer or government requirements to meet information standards such as ISO 27001, ISO 22301, ISO 9001 and Cyber Essentials.
Added to this are the rules put in place by individual companies, such as technology vendors and their customers conducting cybersecurity audits on each other. In both cases, organizations have specific and sometimes unique questions they want to ask, some requiring evidence and proof. As a result, the overall compliance task becomes even more nuanced and complex – a challenge that, at this point, is only likely to increase.
It goes without saying that these rules and regulations are extremely important to ensure minimum standards of performance and to protect the rights of individuals and businesses. However, the lack of international coordination and uniformity of approaches risks making the task of compliance untenable.
New rules at home and abroad
Take the EU Artificial Intelligence Act, which was adopted in March this year and aims to ensure “security and respect for fundamental rights, while stimulating innovation”. It covers a wide range of important cybersecurity issues, from limitations on the use of biometric identification systems by law enforcement and bans on social scoring and AI used to manipulate or exploit user vulnerabilities to consumers’ rights to file complaints and receive meaningful explanations.
Failure to comply can result in significant consequences: fines of up to EUR 35 million or 7% of global annual turnover for prohibited AI applications, EUR 15 million or 3% of turnover for violations of obligations under the AI Act, and EUR 7.5 million or 1.5% of turnover for providing incorrect information.
Additionally, it aims to address cybersecurity risks faced by AI system developers. Article 15 states that “high-risk AI systems must be resilient to attempts by unauthorized third parties to alter their use, outcomes, or performance by exploiting vulnerabilities in the system.”
While the Act applies to UK organisations doing business in the EU, moves are also underway to introduce additional legislation that would allow regulations to be tailored more closely to the UK. In February, the UK government published its response to a White Paper consultation process aimed at determining the direction of AI regulation in this country, including cybersecurity. How this plays out remains to be seen and will depend on the outcome of the election, but regardless of which party is in power, additional regulation is inevitable. Elsewhere, lawmakers are also busy preparing their own approaches to how AI should be governed, and from the US and Canada to China, Japan and India, new rules are arriving as part of a rapidly changing environment.
Regulatory challenges
As these various local and regional laws come into force, the level of complexity for organizations that build, use or secure AI technologies increases. The practical challenges are considerable, particularly because AI decision-making processes are opaque, making it difficult to explain or audit how decisions were reached – a capability that is already a requirement in some regulatory environments.
There are also concerns that strict AI regulations could stifle innovation, particularly for smaller companies and open source initiatives, while larger stakeholders may support regulation to limit competition. Some also speculate that under these circumstances, AI startups may relocate to countries with less stringent regulatory requirements, potentially leading to a “race to the bottom” in regulatory standards and the security risks that this could bring.
Add to that the fact that AI is very resource-intensive—a fact that raises concerns about sustainability and energy consumption and opens the door to greater regulatory oversight—and the list can seem endless. Ultimately, however, one of the most important requirements for effectively regulating AI is for governments to cooperate, where possible, to develop unified and consistent regulations. For example, existing privacy laws and considerations vary by region, but the fundamental security principles must remain the same.
If these issues are not addressed, it is more likely that we will see organizations regularly breaking the rules and, equally worrying, that gaps will emerge in AI-related cybersecurity that threat actors will be ready to exploit.
Richard Starnes is CISO at Six Degrees.