Artificial intelligence is already changing the way we interact with technology. But it can be difficult to identify where this can have the most operational impact. The use cases for AI are broad, but work best when applied to specific tasks as a force multiplier for human teams. For many organizations, one of the most important investments in AI will be cybersecurity.
Cyberattacks are among the biggest risks facing a modern organization, regardless of its size. Our research identified an 8% increase in weekly cyberattacks globally in the first half of 2023 alone. Their impact can range from ransom payments to halted operations in critical sectors of the economy, or even the disruption of essential services, as we saw with the Colonial Pipeline attack.
Threat actors are rapidly adopting new technologies to more effectively exploit their targets, including and especially artificial intelligence. In 2021, when the Colonial Pipeline attack occurred, cybersecurity incidents resulted in a successful breach 18% of the time, according to the Verizon Data Breach Investigation Report. Since then, the success rate has increased to more than 30%. As threat actors use AI to become more effective, it is essential that organizations around the world move in tandem, not only to respond to these threats, but also to prevent them.
Threat actors and AI
Cyber threat actors are leveraging AI in ways that are having a massive impact at cloud scale. This is perhaps most visible in attacks based on social engineering.
According to KnowBe4, at least 70% of malicious breaches stem from social engineering or phishing attacks. This means that attackers are not necessarily exploiting a technical vulnerability, but rather persuading users to give up their legitimate access credentials, usually by sending an email with a malicious attachment disguised as coming from a legitimate sender. This attack vector has become even more dangerous since the launch of generative AI models in 2022.
Threat actors are experts at finding malicious applications for new technologies, and ChatGPT is no exception. They discovered that despite its safeguards, they could easily use the tool to craft convincing emails for phishing campaigns. Before that, many phishing emails contained obvious red flags: poor grammar, abnormal word choices, typos, and other discrepancies that raised suspicion. That convenient last line of defense has disappeared as bad actors use generative AI to create polished and often personalized phishing lures. These engines also typically offer natural-language-to-code features, which can be used to create malicious files for deployment.
Generative AI lowers the barriers to entry throughout the attack lifecycle. The generative AI boom may already be having an impact: our research shows that email attacks increased in 2023, accounting for 86% of all file-based attacks we recorded. Other types of AI also amplify the capability of threat actors by automating attacks, finding vulnerabilities, managing botnets, and more. They use artificial intelligence as a force multiplier.
Mitigating your risks optimizes your cyber resilience
Over the past few years, we have seen attacks on entities ranging from multinational corporations to regional utilities to individual schools and hospitals. A large portion of these organizations have very limited cybersecurity expertise, and the threat actors are simply opportunistic. In the first half of 2023, healthcare organizations, for example, suffered 1,634 cyberattacks per week, a jump of 18% compared to the previous year.
The financial impact of an attack can be severe and varied: risks range from the initial ransom, to leaks of commercially sensitive information, to the cost of idle machines, and a wide range of possibilities beyond. In some cases, legal proceedings ensue and generate settlements in the hundreds of millions of dollars. As claims rise and insurance companies recognize the scale of cyber risk, the industry has revised its premiums to levels that are prohibitive for many organizations.
At the same time, even the most well-funded organizations cannot staff security teams with the personnel and expertise needed to address the modern threat environment at scale without a force multiplier. This is where defensive AI comes into play as an essential foundation for any organization. No matter what other technologies or innovations a business implements, they will always be at risk from a cyberattack that freezes operations or exposes the business to potentially catastrophic liability.
Additionally, new technologies also provide new entry points for bad actors; we see this acutely with Internet of Things (IoT) devices. As cybercriminals adapt and increasingly use AI in their attacks, organizations must use AI to combat this threat from a prevention standpoint. Current point-product suites generate significant, avoidable blind spots and offer limited interoperability. Implementing a consolidated cybersecurity platform that uses AI to continuously refine proactive detection and remediation over time, for example, or to identify anomalous behavior within strictly defined zero-trust policies, exponentially strengthens cyber resilience against attacks of all kinds.
AI is leading to breakthroughs in commerce, healthcare, education, logistics, and other areas critical to our society. We cannot take this progress for granted by neglecting to protect it. Prevention-focused cybersecurity is achievable for organizations of all sizes with AI-powered solutions. Establishing this type of consolidated security posture is the next era of protection.
Rupal Hollenbeck is president of Check Point Software Technologies. Check Point is a partner of Fortune Brainstorm AI.