The UK faces a critical moment as the integration of AI brings both vast opportunities and significant challenges.
As the country grapples with the complexities of protecting digital infrastructure, the intersection of AI-driven innovations and cybersecurity imperatives requires a strategic approach to harness the potential while mitigating inherent risks.
Microsoft, in collaboration with Goldsmiths, University of London, has released a major report on the state of cybersecurity in the UK and the coming impact of AI attack vectors (and AI mitigation). It is called “Mission Critical: Unlocking the UK AI Opportunity Through Cybersecurity”.
With cyberattacks already costing UK organizations more than £87 billion each year, cybersecurity is a priority for many. The same goes for the dual position of AI, in terms of both opportunities and threats. Perhaps most astonishingly, the study reveals that, “in the age of AI”, around 87% of UK organizations are vulnerable to cyberattacks, while only 13% are considered resilient.
The historic Bletchley Declaration, signed by 28 countries in November 2023, warns of the risks associated with the capabilities of AI models. But here’s a key point: the Declaration also begins with this sentence: “Artificial intelligence (AI) presents enormous opportunities on a global scale: it has the potential to transform and improve human well-being, peace and prosperity.”
So while cyberattacks represent a significant risk, with AI (and GenAI) driving much of the new, emerging and ever-converging threat landscape, it is also true that AI holds great promise for building stronger, more comprehensive and proactive cybersecurity defenses. In support of this, Microsoft sets out five fundamental pillars to strengthen the UK’s defenses against AI-based attacks: supporting widespread adoption, targeting investment, nurturing talent, driving research and knowledge sharing, and taking a leadership position in AI governance.
Cybersecurity in the UK in context
Although the potential benefits of AI-driven cybersecurity are significant (increased resilience, reduced attack costs, etc.), UK organizations are arguably lagging behind in adoption. This speaks to deeper concerns about the state of cybersecurity in the UK. A significant problem is the lack of awareness of the true costs of cyberattacks, including recovery times, which highlights the need for accountability for cybersecurity to extend beyond IT and across the entire organization. The Microsoft report divides UK organizations into three categories:
· Resilient (13%): Secure by design, using AI extensively for risk detection and response.
· Vulnerable (48%): Have basic defenses but require additional investment and AI adoption.
· High risk (39%): Do not prioritize cybersecurity and lack sophisticated AI across the organization.
Clearly, only a small proportion of organizations fall into the resilient category, while a significant group remains at high risk. The UK’s national AI strategy provides a useful framework for organizations, and university partnerships support companies that may not have internal AI innovation capabilities. However, a worrying number of organizations remain unprepared for a cyber threat landscape that continues to grow in scope, scale, sophistication and diversity. Key concerns such as ethics and privacy must also be addressed alongside cybersecurity efforts.
In particular, decision makers (49%) and security experts (70%) are concerned that increased use of AI poses risks to their organization, while a majority of senior security professionals (60%) and decision makers (52%) fear that current geopolitical tensions will increase cyber risks for their organization. There are also interesting disparities between sectors, with technology companies (70%) and financial institutions (65%) far ahead of retailers (26%) and education (29%).
Using AI to Fight AI
The AI cybersecurity industry is expected to be worth $135 billion by 2030. There’s a reason for that: the report suggests that organizations using AI-based cybersecurity are twice as resilient to attacks as those that do not, while also incurring 20% lower costs when an attack does occur.
In total, the UK economy could save billions each year through the widespread adoption of AI-based cybersecurity. To get there, UK leaders need to close skills gaps, upskill existing employees and attract new tech-savvy talent. AI-driven cybersecurity also helps address challenges related to data quality, alert prioritization and overall security posture.
The drive by UK businesses to adopt AI-driven cybersecurity isn’t just about money: it’s also about building a stronger security landscape and maintaining a global leadership position. Merging advancements in cybersecurity and AI is crucial for systems to self-correct and adapt to evolving threats.
Although only a minority of UK organizations are considered truly resilient, there are clear steps business leaders can take to make their organizations more secure and help the UK grow. Implementing AI for threat protection is an essential start.
Core Capabilities and Key Behaviors
Microsoft suggests that capabilities should be combined with behaviors. When it comes to cybersecurity, a strong defense is based on well-designed systems, clear procedures and effective cybersecurity tools. Organizations need qualified cybersecurity professionals who are familiar with the AI aspects of cybersecurity and have sufficient budget to invest in both protection and mitigation.
As cyber threats evolve with AI, organizations must also use AI to become more adaptable and automated in their defenses, and invest in research, development and innovative AI-based cybersecurity solutions, often with the support of external partners.
When it comes to behaviors, organizations must foster a culture that embraces good cybersecurity governance and is receptive to innovative security approaches. This includes clear policies on the responsible use of AI and promoting knowledge sharing within the organization to develop cybersecurity expertise.
Leaders must actively champion cybersecurity best practices and empower employees at all levels to contribute to a secure environment. Building trust, both internally and with external stakeholders, is essential for effective decision-making and optimal preparation against cyber threats. The UK’s hosting of the first AI Safety Summit at Bletchley Park in late 2023 is a good example of cross-sector collaboration in action, particularly regarding AI risks at the frontier of development.
Next steps for the UK
The UK has the potential to become a global hub for AI innovation and development. To achieve this goal, the country must invest strategically to strengthen its cybersecurity posture, including through the use of AI-based defenses.
According to Clare Barclay, CEO of Microsoft UK: “I know from my conversations with business and government leaders across the UK that they want to use AI to strengthen their defense against cyberattacks. This will not only facilitate their operations but will also help attract future investments.”
The steps in Microsoft’s plan start with leadership: government must prioritize cybersecurity and support the responsible development of AI. This includes encouraging investment in cyber resilience, secure AI development and sandbox testing environments.
Efforts should be made to promote collaboration and knowledge sharing between industry and government through public-private partnerships, open source alliances and threat awareness.
Developing a skilled workforce is also crucial, with the attraction and development of cyber talent absolutely necessary to compete in the global landscape. Microsoft itself has already committed to investing £2.5 billion in the UK to develop and grow AI skills, data centers and security. Finally, collaboration is essential for success: business, government and academia must work together to strengthen defenses, share knowledge, share responsibilities and, ultimately, share the value equally.
About the Author
A highly experienced technology director, professor of advanced technologies and global strategic advisor on digital transformation, Sally Eaves specializes in the application of emerging technologies, including AI, 5G, cloud, security and IoT, to business and IT transformation as well as large-scale social impact, particularly from a sustainability and DEI perspective.
An international speaker and author, Sally was one of the first recipients of the Frontier Technology and Social Impact Award, presented at the United Nations, and has been described as the “torchbearer of ethical technology”. She founded Aspirational Futures to improve inclusion, diversity and belonging in the tech space and beyond, and is also Chair of the Global Cyber Trust at GFCYBER.