With $109.5 billion of growth expected by 2030, the global AI cybersecurity market is booming – and it’s not hard to see why. According to a recent survey of security professionals, three-quarters (75%) have observed an increase in cyberattacks – and of those, an overwhelming 85% blamed AI.
So what is the role of AI in cybersecurity? Does it protect or compromise our online freedoms and security? That’s the question Techopedia set out to explore. But rather than consulting human experts, its investigation drew on another kind of expertise: that of AI itself. Techopedia asked five popular AI language models – ChatGPT, Claude, Llama, Perplexity, and Bard (now Gemini) – what they considered to be the top cybersecurity threats on the internet.
Plot twist: it was AI! Yes, all five AIs, in different ways, implicated themselves as one of the main culprits in the internet’s ongoing battle against malicious hackers, fraudsters, and thieves.
AI fuels the flames of cyberattacks in 2024
There are many ways in which AI contributes to online security around the world (an entire branch of research, Defensive AI, is dedicated to it). However, in Techopedia’s cybersecurity conversations with five different AI platforms, 80% directly referenced AI’s own role not in keeping the internet safe, but in endangering it.
All five discussed AI’s role in creating and spreading malware and ransomware – for which phishing remains a key delivery vector – while several discussed AI’s role in carrying out more sophisticated cyberattacks using “advanced encryption methods and exploiting zero-day vulnerabilities.” AI can also be involved in a host of other types of cyber intrusion, including Distributed Denial of Service (DDoS) attacks, brute-force attacks, and identity theft.
Phishing attacks become more sophisticated thanks to AI
Phishing attacks are becoming more and more complex. As these scams grow more ingenious, so does their impact on organizations and individuals: Business Email Compromise (BEC) attacks alone, according to the FBI’s Internet Crime Report, account for approximately $2.7 billion in losses per year.
And AI is playing an evolving role in that sophistication – at least according to AI.
“Phishing techniques have become increasingly sophisticated, often leveraging AI and machine learning to create highly convincing fake messages and websites,” ChatGPT offered, while Perplexity AI also cited “AI-assisted phishing attacks.” Llama, meanwhile, referred to “AI-based phishing attacks and malware” – three classic cases of AI naming the leading cybersecurity threat on the internet… AI itself!
AI promotes fake news and the spread of misinformation
In Techopedia’s analysis of the AIs’ responses on top internet cybersecurity threats, four of the five platforms consulted flagged deepfakes as a critical concern.
Deepfakes – AI-generated synthetic media designed to manipulate or replace existing video, image, or audio content with a fabricated version – can make for harmless comic fodder. However, deepfakes can also help fan the flames of disinformation and “fake news” campaigns, and – with crucial political elections in the UK and US in 2024, as well as major geopolitical struggles raging in Europe and the Middle East – this can have serious and tangible real-world consequences.
Llama reiterated this, saying: “Deepfakes and AI-generated scams can have serious consequences, such as influencing political decisions or causing public panic.” Claude, another AI, added: “The Internet facilitates the rapid spread of false or misleading content on social media and other platforms, which can manipulate public opinion, influence elections, promote extremist views, and more.”
Bard was the only AI to reference the growing complexity of this type of AI-powered cybercrime, writing: “Deepfakes are becoming more sophisticated, making it harder to distinguish between real and fake content. This, coupled with the spread of misinformation and disinformation, can have a chilling effect on democracy, fuel social division, and erode trust in information sources.”
State-sponsored attacks are on the rise
Given that January 2024 alone brought six major state-sponsored cyberattacks – with the governments of Australia, Canada, and Ukraine all falling prey to the long tendrils of state-sponsored cybercriminals – it’s no surprise that the AIs sounded the alarm.
“These attacks can target critical infrastructure, steal intellectual property, and influence political processes,” ChatGPT wrote. Bard added that “cyberattacks targeting critical infrastructure like hospitals and schools are becoming more frequent and disruptive.”
Perplexity AI called state-sponsored attacks and attacks against critical national infrastructure (CNI) a “significant concern, with major elections taking place in various countries,” while Llama instead pointed to “geopolitical tensions; some countries are using cyberwar as a form of espionage or sabotage.”
Data breaches are commonplace – but it’s not all AI’s fault
Another discovery from Techopedia’s conversations with the five AI tools? That AI plays a role in data breaches – an evolving and ever-growing cybersecurity threat.
Perplexity AI wrote that 2024 will see “the likelihood of data leaks [increase] and the development of new methods to bypass authentication,” while ChatGPT noted that “with increasing amounts of personal data stored online, data breaches remain a significant threat.”
However, statistics suggest that humans are not entirely free of guilt: about nine out of ten (88%) data breaches are caused by human error. There is also a strong case for AI’s role in mitigating human fallibility. According to a 2023 IBM study, the use of automation and AI saved organizations approximately $1.8 million in data breach costs and helped businesses identify and contain breaches more than 100 days faster on average.
AI becomes an increasingly dangerous ethical minefield
Four of the five AIs consulted in Techopedia’s research cited data privacy as a major issue characterizing the AI debate in 2024 and beyond.
Claude was the most vocal critic, stating that “there is a greater risk that companies and governments will collect user data without consent. Location tracking, browser history monitoring, and backdoors in devices and apps contribute to this.” (This is also entirely true – as we recently discussed in a report on data brokerage and its frightening implications for anonymity.)
But data privacy was far from the only ethical conundrum discussed: surveillance, hate speech, cyberbullying, digital inequality, free speech, internet censorship, and algorithmic bias and discrimination have all emerged as part of the ever-growing ethical quagmire of AI.
Bard put it well when it stated that “the gap between those who have and those who do not have access to technology and the internet persists, limiting access to education, health care, and economic opportunities, while further widening existing social and economic disparities.”
Top AI Tips for Staying Safe on the Internet
Techopedia’s conversations with five different AI language models didn’t just surface criticism of their AI colleagues; they also yielded some practical tips for staying safe online.
These include making your staff aware of existing and emerging hacking methods – particularly those powered by AI – and creating strong, unique passwords to protect your accounts and data. The AI language models also recommend implementing multi-factor authentication, securing your home network, and keeping your software up to date. To make the password advice concrete, see the sketch below.
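Here is a minimal Python sketch of that advice – not drawn from the AIs’ responses; the function name, 20-character default length, and character-set rules are illustrative assumptions – using the standard library’s `secrets` module to generate a strong, unique password for each account:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from a cryptographically secure source.

    The length and character-set policy here are example choices,
    not a standard -- adjust them to your own requirements.
    """
    if length < 12:
        raise ValueError("use at least 12 characters for a strong password")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Re-draw until the password mixes cases, digits, and symbols.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

if __name__ == "__main__":
    # One distinct password per account -- never reuse a password.
    for account in ("email", "banking", "work-vpn"):
        print(f"{account}: {generate_password()}")
```

The `secrets` module is used rather than `random` because it draws from a cryptographically secure source; in day-to-day practice, a reputable password manager accomplishes the same goal with far less friction.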
About the Author:
Rob Binns is a writer, editor, and content strategist based in Melbourne, Australia. He has led a wide range of content across industries and sectors, with deep expertise in cybersecurity and VPNs, as well as specializations in digital payments, enterprise software, and e-commerce.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire.