Cybersecurity has always been a complex area. Its adversarial nature means that the margins between failure and success are much finer than in other sectors. As technology evolves, these margins become even thinner, with attackers and defenders scrambling to exploit them and gain a competitive advantage. This is especially true for AI.
In February, the World Economic Forum (WEF) published an article titled “AI and cybersecurity: how to manage risks and opportunities”, highlighting the existing and potential impacts of AI on cybersecurity. The bottom line: AI benefits both the good guys and the bad guys, so it’s essential that the good guys do everything they can to adopt it.
This article will examine and expand on some of the key points of the WEF’s article.
Benefits and Opportunities for Attackers
Before exploring how AI can improve cybersecurity, it’s worth examining some of the opportunities it offers cybercriminals. After all, it’s difficult to combat threats we don’t really understand.
Of all the issues discussed, deepfakes are perhaps the most concerning. As the WEF notes, more than 4 billion people are eligible to vote this year, and deepfakes will undoubtedly play a role. In the UK alone, both the Prime Minister and the Leader of the Opposition have fallen victim to fake AI-generated content. One might be tempted to assume that modern voters can identify digitally manipulated video, yet one need only look at the WEF’s example of a deepfake that misled a Hong Kong finance employee to the tune of $25 million to realize that this is not necessarily the case.
Staying with the theme of social engineering, AI has made phishing scams easier to create and harder to detect. Before ChatGPT launched in November 2022, it felt like we were on the cusp of getting a handle on phishing; the scams hadn’t disappeared, of course, but awareness of them was growing day by day, and people were increasingly able to identify them. Spelling mistakes, poor grammar, and clunky English were all telltale signs of a scam. Today’s scammers, however, with large language models (LLMs) at their fingertips, can create and distribute phishing scams at massive scale, without any of the errors that would previously have given them away.
Benefits and Opportunities for Defenders
But it’s not all doom and gloom; AI also offers huge benefits to cybersecurity professionals. The WEF provides a general overview of how the cybersecurity industry can leverage AI, but it’s worth taking a closer look at some of these use cases.
AI frees up time for security teams. By automating mundane, repetitive tasks, AI lets security teams spend more time and energy on innovation, improving their environments and protecting against more advanced threats.
AI is also an invaluable resource for accelerating detection and response times. AI tools continuously monitor network traffic, user behavior, and system logs for anomalies, reporting any issues to security teams as soon as they arise. This means security teams can proactively prevent attacks instead of simply reacting to an incident after it has occurred.
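To make this concrete, here is a minimal sketch of what AI-assisted anomaly detection might look like under the hood, using scikit-learn’s IsolationForest on a handful of hypothetical login-event features. The feature names, sample values, and contamination setting are illustrative assumptions, not a production design or any specific vendor’s method:

```python
# Minimal sketch: unsupervised anomaly detection over login events.
# Assumes scikit-learn is installed; all features and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event:
# [hour_of_day, megabytes_transferred, failed_attempts_last_hour]
baseline = np.array([
    [9, 12.4, 0],
    [10, 8.1, 1],
    [14, 15.0, 0],
    [11, 9.7, 0],
    [16, 11.2, 1],
])

# Train on "normal" historical activity; contamination is the assumed
# fraction of outliers and would be tuned for a real environment.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# Score new events: predict() returns -1 for anomalies, 1 for inliers.
new_events = np.array([
    [10, 10.3, 0],    # ordinary working-hours activity
    [3, 480.0, 7],    # 3 a.m., huge transfer, repeated failed logins
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY - alert security team" if label == -1 else "normal"
    print(event, "->", status)
```

A real deployment would train on far richer telemetry and pair alerts with human review, but the principle is the same: learn what normal looks like, then flag what deviates.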
According to the ISC2 Cybersecurity Workforce Study, the cybersecurity industry is currently short 4 million workers. This is an alarming figure, but one that AI can help reduce. The WEF argues that AI can be used to educate people about cybersecurity and to train the next generation of professionals, both valid arguments, but it overlooks the fact that AI could also reduce the need for cybersecurity workers by automating much of the work they currently do.
AI regulation and collaboration
While AI regulation is undoubtedly important, according to the WEF, for “the development, use and implementation of AI technologies in a way that will benefit societies while limiting the harm that they can cause,” it is perhaps more important that government, industry, academia, and civil society sing from the same hymn sheet. Conflicting motivations and priorities could prove disastrous.
As such, the WEF’s AI Governance Alliance, launched in April 2023, brings these groups together around a common goal: championing the responsible global design and release of transparent and inclusive AI systems. In a world where competition reigns supreme, initiatives like this are essential to ensure that security stays front of mind as AI systems are developed.
Here are some recent examples of AI regulation:
- The EU AI Act
- The United Nations advisory body on AI governance
- The UK AI White Paper
- The US Executive Order on Safe, Secure, and Trustworthy AI
But while they are well-intentioned, many of these efforts have sparked negative reactions. The EU AI Act in particular, which the European Parliament adopted in March, has drawn significant criticism from industry for stifling innovation. This drives home the key takeaway from the WEF article: collaboration is vital if we are to develop AI safely. As the WEF has attempted to do with the AI Governance Alliance, it is important that all groups with a vested interest in AI – particularly cybersecurity professionals – are involved in the regulatory process. This is uncharted territory, and we will all be safer if we navigate it together.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire.