A recent survey of 400 information security leaders from businesses in the UK and US found that 72% believe AI solutions will lead to security breaches. Conversely, 80% say they intend to implement AI tools to defend against AI. It’s another reminder of the twin promise and threat of AI. On the one hand, AI can be used to create unprecedented security features and enable cybersecurity experts to go on the offensive against hackers. On the other, AI will enable automated attacks on an industrial scale and with incredible levels of sophistication. For tech companies caught in the middle of this war, the big questions are how worried they should be and what they can do to protect themselves.
First, let’s take a step back and look at the current state of affairs. According to data compiled by security company Cobalt, cybercrime is expected to cost the global economy $9.5 trillion in 2024. 75% of security professionals have observed an increase in cyberattacks over the past year, and the costs of these attacks are expected to rise by at least 15% every year. For businesses, the numbers are also pretty grim: IBM reported that the average data breach in 2023 cost $4.45 million, a 15% increase since 2020.
Against this backdrop, the cost of cybersecurity insurance has increased by 50%, and companies now spend $215 billion on risk management products and services. Healthcare, finance, and insurance organizations and their partners are most exposed to the risk of attack. The technology industry is particularly vulnerable given the volume of sensitive data startups often process, their limited resources compared to large multinationals, and a culture focused on moving fast, often at the expense of IT infrastructure and procedures.
VP of Engineering, Storyblok.
The challenge of differentiating AI attacks
The most telling statistic comes from CFO magazine, which reports that 85% of cybersecurity professionals attribute the increase in cyberattacks in 2024 to bad actors’ use of generative AI. Look a little closer, however, and you will see that there are no clear statistics on what these attacks were and, therefore, what impact they actually had. Indeed, one of the most pressing problems we face is that it is incredibly difficult to determine whether a cybersecurity incident involved generative AI at all. Generative AI can automate the creation of phishing emails, social engineering attacks, and other types of malicious content.
However, because it aims to mimic human content and responses, it can be very difficult to distinguish from human-made content. As a result, we do not yet know the scale of generative AI-based attacks or their effectiveness. And if we can’t yet quantify the problem, it is difficult to know how concerned we should be about it.
For startups, this means the best approach is to focus on threats more generally and mitigate them with proven defenses. All indications are that existing cybersecurity measures and solutions, underpinned by best practices in data governance, are up to the challenge of today’s AI threat.
The biggest cybersecurity risk
Ironically, the greatest existential threat to organizations does not necessarily come from the diabolically brilliant use of AI, but rather from their own employees using it carelessly or failing to follow existing security procedures. For example, when employees share sensitive business information with services such as ChatGPT, there is a risk that the data could be retrieved later, leading to confidential data leaks and subsequent hacks. Reducing this threat means putting appropriate data protection systems in place and better educating users of generative AI about the risks involved.
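To make that concrete, here is a minimal sketch, in Python, of the kind of guardrail a data protection system might apply: scanning a prompt for obviously sensitive patterns and redacting them before the text leaves the company network. The patterns and function names below are illustrative assumptions, not a production data loss prevention (DLP) implementation.

```python
import re

# Hypothetical patterns for illustration only; a real DLP tool would use
# far more robust detection than these simple regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"\b(?:sk|pk|key)[-_][A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive substrings with placeholders before the
    prompt is sent to an external AI service. Returns the sanitized
    prompt and the names of the patterns that were triggered."""
    triggered = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            triggered.append(name)
            prompt = pattern.sub(f"[REDACTED_{name.upper()}]", prompt)
    return prompt, triggered

if __name__ == "__main__":
    sanitized, flagged = redact(
        "Summarize this note from jane@example.com, API key sk-abcdef1234567890abcd"
    )
    print(sanitized)  # sensitive values replaced with placeholders
    print(flagged)    # ['email', 'api_key']
```

Even a simple filter like this won’t catch everything, which is why it has to be paired with the user education described above rather than treated as a substitute for it.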
Training extends to helping employees understand current AI capabilities, particularly in countering phishing and social engineering attacks. Recently, a finance executive at a large company paid $25 million to fraudsters after being duped by a fake conference call impersonating the company’s CFO. So far, so scary. However, read about the incident and you’ll see that it wasn’t ultra-sophisticated from an AI perspective. It was just a small step up from a scam from a few years ago that misled the finance departments of many companies (many of them startups) into sending money to fake customer accounts by impersonating their CEO’s email address. In both cases, if basic security and compliance checks, or even common sense, had been applied, the scam would have been quickly discovered. Teaching your employees how AI can be used to mimic other people’s voices and appearance, and how to spot these attacks, is as important as having a robust security infrastructure.
Simply put, AI clearly poses a long-term threat to cybersecurity, but, until we see greater sophistication, current security measures are sufficient if they are followed to the letter. Nonetheless, businesses must continue to follow strict cybersecurity best practices, review their processes, and educate their employees as the threat evolves. The cybersecurity industry is accustomed to adapting to new threats and the shifting methods of bad actors; what businesses cannot afford is to rely on outdated security technologies or procedures.
This article was produced as part of TechRadar Pro’s Expert Insights channel, where we feature the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you would like to contribute, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro