From casual users to global enterprises, people are turning to AI tools to improve their productivity. But they’re not the only ones relying on AI to make their lives easier. Cyberattackers are also deploying AI to refine phishing emails, scan for vulnerabilities in a target’s security systems, and launch attacks in real time.
And fraudsters often have the advantage. Cyberattackers are chasing a far more powerful lure than corrected email grammar or streamlined everyday processes: they are using AI to reap huge financial rewards.
For most of us, it’s easy to get distracted by the bells and whistles of AI, but too many organizations are deploying AI without understanding how it works or its implications. And as we let our guard down, hackers are launching increasingly sophisticated attacks, powered by AI.
As many organizations adopt AI at a rapid pace to gain efficiencies and cut operating costs through technology and headcount reductions, they also risk sacrificing security. With staff reductions becoming commonplace, those who remain struggle to keep up with the latest threats and to properly maintain systems. This creates a window of opportunity for malicious actors to launch cyberattacks. Now and in the future, caution, along with the right cybersecurity tools and strategies, is more important than ever.
Sounding the alarm
Thanks to AI, cyberattacks are becoming more efficient and more lucrative. Cybercrime is expected to cost the world $9.5 trillion this year, according to Cybersecurity Ventures. And those damages are expected to grow 15% per year over the next two years, reaching $10.5 trillion annually by 2025.
According to a 2023 cybersecurity report from Sapio Research and Deep Instinct, 75% of cybersecurity professionals surveyed saw an increase in attacks over the past 12 months, and 85% of them attributed the rise to generative AI.
Hackers have long used AI to refine their work: polishing their writing style, translations, and tone; producing malicious code; and training large language models (LLMs) on misinformation. They are now using AI to offer attacks as a service on the dark web, lowering the barrier to entry for criminals looking to exploit people and organizations for profit. Cybersecurity experts have been sounding the alarm for some time now.
In March 2023, a report from Europol, the European Union’s law enforcement agency, said that with ChatGPT it is now possible for attackers to “impersonate an organization or individual in a very realistic way, even with a basic knowledge of the English language.” Additionally, by using LLMs, online fraud can be “created more quickly, in a much more authentic manner, and at a significantly increased scale,” the report said.
And in February 2024, Microsoft detailed how attackers from China, Iran, and Russia used its AI tools to write code that evades detection or to discover vulnerabilities in potential victims’ technology.
In most attacks, cybercriminals still take advantage of human error. They bet on us clicking a malicious link, sharing sensitive information in a spear-phishing campaign, or failing to update software with the latest patches.
AI tools can certainly help prevent some attacks. In IBM’s 2023 global Cost of a Data Breach survey, researchers found that such tools sped up the identification and containment of breaches by an average of 100 days.
But as threats increase, many organizations are shifting their cybersecurity efforts toward other uses of AI, even replacing humans with AI tools and bots. When it comes to cybersecurity, however, the shift to AI must be accompanied by thoughtful strategies. Removing humans from the equation outright is not the solution.
Human problem, human solution
What I know from nearly 30 years in cybersecurity is that malicious actors, nation-states, and other criminals are always one step ahead. The rapid replacement of humans with AI tools and bots will only make organizations more vulnerable to cyberattacks.
To truly secure an organization’s data, intellectual property, and other sensitive information, humans need to remain part of the equation for the foreseeable future. We have something AI tools don’t: empathy, intuition, and critical thinking. And when we sense that something is wrong and an attack is underway, we can work to find a solution, especially in collaboration with other humans.
The beginnings of AI
Leaders and technologists have rightly hailed AI as a transformative technology, and it’s already changing how organizations operate. But this technology is still in its infancy. Don’t let the bells and whistles distract you.
These growing threats make it essential for businesses and organizations to embrace AI, but with caution. Organizations must ensure that the right human resources remain in place and that they use AI correctly, whether for cybersecurity or other activities.
Ultimately, AI cannot be our only defense against AI-generated threats. Humans flagging a potential breach and working with other humans to address it remains a critical piece of the puzzle. I fear that a massive breach is on the horizon if we do not move forward with intelligent, human-driven strategies. The best defense still requires people, processes, and technology.