Artificial Intelligence and Machine Learning
Next-generation technologies and secure development
How to responsibly harness the potential of AI to strengthen cybersecurity defenses
The ability of cybercriminals to unleash devastating AI-enhanced attacks conjures up frightening visions of cyberattacks that are bigger, broader, and harder for organizations to detect and prevent.
Fortunately, this is not yet the case, as far as we can tell.
The evolution of generative AI over the past year has undoubtedly spawned a torrent of eyebrow-raising what-if scenarios, but as Verizon’s 2024 Data Breach Investigations Report shows, there appears to be a disconnect between the perceived capabilities of generative AI and its actual use in cyberattacks. The 2024 DBIR notes a tremendous amount of hype around generative AI but very few actual mentions of it in real incidents, which remain dominated by traditional attack types and vectors such as phishing, malware, vulnerabilities, and ransomware.
Despite the hype, it’s critical that security leaders focus on AI risks now. Many organizations have already begun evaluating how AI can be used to improve their cyber defenses, particularly in detecting and triaging cyberattacks. “Most people don’t even realize that an AI-enhanced attack is happening because the impact of AI is so nuanced,” said Chris Novak, senior director of cybersecurity consulting at Verizon. “If you work in technology or generative AI, you can see how easy it is to manipulate data to achieve desired effects.”
With more than 20 years of experience in cybersecurity, Novak and his team work to support clients in both the public and private sectors. He says AI-powered algorithms are used on the “promise” side of the equation to analyze vast amounts of data in near real time, quickly identifying anomalies that help organizations intercept potential threats. The need for speed is clear, as attack incidents can occur and propagate quickly, making a strong case for AI-powered cybersecurity monitoring tools. “The faster you can respond to an incident, the better,” he said.
AI algorithms can be used to detect patterns and behaviors with a level of accuracy that exceeds traditional or manual methods. “AI-based systems can provide more accurate threat assessments by continuously learning from new data and adapting to emerging threats,” Novak said.
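The near-real-time anomaly detection Novak describes can be illustrated with a minimal rolling-baseline sketch. This is not Verizon's implementation; the function name `detect_anomalies` and the z-score approach are illustrative assumptions:

```python
from collections import deque
import statistics

def detect_anomalies(event_rates, window=20, threshold=3.0):
    """Flag indices whose event rate deviates more than `threshold`
    standard deviations from a rolling baseline of recent values."""
    history = deque(maxlen=window)  # rolling baseline of recent rates
    anomalies = []
    for i, rate in enumerate(event_rates):
        if len(history) >= 5:  # require a minimal baseline first
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9  # avoid div-by-zero
            if abs(rate - mean) / stdev > threshold:
                anomalies.append(i)
        history.append(rate)
    return anomalies

# A sudden spike against a steady baseline is flagged immediately.
spikes = detect_anomalies([100] * 10 + [5000] + [100] * 5)
```

Production systems use far richer features and learned models, but the core idea is the same: establish what "normal" looks like, then surface deviations fast enough to act on them.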
Threats to AI: Real and Perceived
As businesses strive to embrace AI advances, they must also prepare for complex challenges, including in areas such as generative AI and deepfake technologies. Gen AI models, for example, are capable of creating convincing fake personas or generating realistic phishing emails, which could allow attackers to target individuals or organizations and evade traditional security measures.
However, as the 2024 DBIR reports, deepfakes appear to be a more immediate AI-related problem than AI-generated phishing, since traditional low-tech phishing methods still work quite well to catch unsuspecting victims. Advances in deepfake technologies have already produced real fraud and disinformation incidents, according to the DBIR, underscoring their potential to create convincing fake content.
AI nonetheless represents an emerging challenge for defenders. “Defenders must learn how to adapt their cybersecurity strategies to combat evolving AI-driven threats, while learning how to leverage AI/ML to strengthen defensive capabilities,” Novak said.
Threats from within
Through its insider threat program, Verizon has been able to establish a baseline of normal behavior, which is used to help identify and mitigate risks posed by employees or other authorized users. AI algorithms can then detect deviations or anomalies that may suggest insider threats, such as unauthorized access to sensitive information or unusual data transfers. “If people know there is a strong, robust insider threat program in place and their actions are being monitored, that’s often an effective deterrent,” Novak said.
Consider a customer service representative trying to access account information without the customer’s prior consent. “That’s a red flag that it’s not part of the normal actions taken to review user accounts,” Novak said, adding that AI can sift through network data to see these types of behavioral patterns and help quickly connect the dots.
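The baseline-and-deviation approach behind an insider threat program can be sketched as follows. This is a hypothetical illustration, not Verizon's program; names like `flag_insider_anomalies` and the simple per-user statistical baseline are assumptions:

```python
import statistics

def flag_insider_anomalies(history, today, threshold=3.0):
    """history: {user: [daily account-lookup counts]} (the baseline).
    today: {user: today's count}. Flag users whose activity exceeds
    their own baseline by more than `threshold` standard deviations."""
    flagged = []
    for user, count in today.items():
        baseline = history.get(user, [])
        if len(baseline) < 5:
            continue  # too little data to establish normal behavior
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # floor for flat baselines
        if (count - mean) / stdev > threshold:
            flagged.append(user)
    return flagged

# A rep who normally reviews ~10 accounts a day but pulls 90 stands out.
usage = {"alice": [10, 12, 11, 9, 10, 11], "bob": [10, 11, 10, 9, 10, 12]}
suspects = flag_insider_anomalies(usage, {"alice": 11, "bob": 90})
```

Real programs correlate many more signals (access targets, time of day, data volume), but each one reduces to the same pattern: compare observed behavior to a learned per-user norm.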
AI can also play a big role in endpoint detection and response, or EDR, Novak said. It can be used today to monitor and analyze endpoints for signs of malicious activity or anomalous behavior. Trained on large datasets, machine learning algorithms can identify malware patterns, unusual process executions, or unauthorized system changes that could indicate a security breach. For example, if a ransomware attack encrypts files on an endpoint, AI-driven EDR can detect the encryption process and alert security teams to take immediate action to contain the threat.
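One classic heuristic behind the ransomware example is that encrypted file contents look statistically random. The sketch below is a simplified stand-in for what an EDR agent might check, not any vendor's actual detection logic; the entropy cutoff and burst size are illustrative assumptions:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; well-encrypted data approaches 8.0."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(data).values())

def looks_like_ransomware(writes, entropy_cutoff=7.5, burst=10):
    """writes: list of (path, payload) one process wrote in a short
    window. Flag the process if it produced a burst of high-entropy
    (encrypted-looking) file contents."""
    high_entropy = sum(1 for _, payload in writes
                       if shannon_entropy(payload) > entropy_cutoff)
    return high_entropy >= burst
```

A process rewriting a dozen documents with near-random bytes trips the check, while ordinary log or text writes do not. Real EDR combines this with process lineage, file-rename patterns, and known-bad signatures to keep false positives down.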
Best practices in AI governance
Using AI without strong governance is like driving a high-powered sports car without brakes or traffic laws. While the speed and power of a car can create exhilarating possibilities, the need for controls and traffic rules ensures safety and prevents accidents. AI’s powerful capabilities require governance to ensure its ethical and responsible use. Novak said good AI governance should include:
- A rigorous review process for AI applications, ensuring they meet ethical standards and legal requirements;
- Strict access protocols for generative AI tools to prevent misuse and protect data privacy, as well as robust authentication measures to help security officers monitor and track usage;
- Education and awareness programs to help employees understand the risks associated with AI and use AI tools responsibly.
Regular training sessions and updates also help keep staff informed of emerging threats and best practices.
How to Stay Ahead of Cybercriminals
Organizations must take a strategic approach to AI, considering both its benefits and risks. New AI capabilities, such as advanced natural language processing and automated security responses, are being evaluated across many industries to help security teams improve threat detection and reduce incident response times.
On the other hand, the risk of AI-based attacks, while not yet commonplace, requires continuous monitoring to anticipate emerging threats. Cybercriminals are often the first to adopt new technologies to fuel their exploits, requiring defenders to be proactive in understanding and adopting AI-enhanced cybersecurity.
By using AI strategically and responsibly, companies can strengthen their cybersecurity defenses to better manage cybersecurity risks while preparing for new exploits that leverage AI for cyberattacks. Ultimately, Novak said, the best defense against cyberthreats “will always be a balanced approach that leverages human ingenuity as well as the computational power of AI.”
CISOs and security leaders looking to learn more about how to responsibly leverage the power of AI to strengthen cybersecurity defenses should consider the latest insights from Verizon.