As artificial intelligence (AI) becomes more accessible and advanced, malicious threat actors are increasingly adding AI-based tools to their arsenal. From automated phishing campaigns to deepfakes to malware, bad actors are looking to use AI capabilities to defeat traditional defenses.
Fortunately, AI also enables defenders to adopt a more proactive security posture rather than reacting after the fact. AI-powered solutions can continuously monitor networks and endpoints, stop threats, and alert teams to suspicious behavior that could represent the early stages of an intrusion.
This is the focus of my session at RSA 2024, "AI-Enabled Threat Actors or AI-Enhanced Cyber Tools: Who Wins?" As the leader of BlackBerry's product engineering and data science teams, I'm excited to share the progress we've made to strengthen our AI-powered Cylance® solutions.
Our researchers have uncovered new tactics used by several advanced persistent threat (APT) groups targeting critical infrastructure. Through in-depth analysis of these attacks, our data scientists enhanced Cylance's AI models to more accurately identify malicious behaviors and tools.
I'd like to share a brief overview of my session here, and I hope you'll join me for a lively Q&A at RSA in May.
AI in cybersecurity: challenges and opportunities ahead
By developing AI-enhanced detection and response tools, defenders can gain the insights needed to identify emerging threats. Machine learning models can analyze large amounts of data at machine speed to detect subtle anomalies and patterns that may indicate the start of an attack. When trained to look for malicious intent, machine learning models excel at identifying previously unseen suspicious behaviors.
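To make that idea concrete, here is a minimal sketch of the kind of unsupervised anomaly detection such models perform, using scikit-learn's IsolationForest over simulated endpoint telemetry. The features, values, and contamination rate are all illustrative assumptions for this example, not a description of how any Cylance model works.

```python
# Minimal sketch: unsupervised anomaly detection over endpoint telemetry.
# The three features (outbound bytes, new-process rate, failed logins) are
# hypothetical placeholders chosen for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated baseline telemetry: one row per host observation.
baseline = rng.normal(loc=[500, 20, 1], scale=[50, 5, 1], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# A new observation with an unusually high failed-login count.
observation = np.array([[520, 22, 40]])
score = model.decision_function(observation)  # lower = more anomalous
label = "anomalous" if model.predict(observation)[0] == -1 else "normal"
print(label, score)
```

An isolation forest scores each observation by how easily it can be isolated from the baseline; the same principle scales to the far richer feature sets production systems use.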
However, AI systems are not infallible. Adversarial actors have shown they can manipulate a machine learning (ML) model's inferences via small perturbations of the input data, evading detection. Defenders should take precautions to minimize these risks and protect their AI tools.
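As a toy illustration of that evasion risk, the following sketch applies the fast gradient sign method (FGSM) to a small, randomly initialized PyTorch classifier. The architecture, feature count, label, and epsilon are all illustrative assumptions.

```python
# Minimal FGSM sketch: a small, bounded perturbation of the input features,
# crafted from the model's own gradients, can push a sample toward
# misclassification. Whether it flips depends on the model and epsilon.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 8, requires_grad=True)  # one sample's feature vector
y = torch.tensor([1])                      # its true label

# Take the gradient of the loss with respect to the *input*, not the weights.
loss = loss_fn(model(x), y)
loss.backward()

# Step in the direction that increases the loss, within an L-infinity budget.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:    ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```

Because the perturbation follows the gradient of the loss with respect to the input, even a small epsilon can push a borderline sample across the decision boundary.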
My RSA presentation will cover the following:

- An examination of the data science and modeling tools that threat actors may be using to create targeted attacks leveraging ML techniques
- Approaches defenders can take to address the rise in AI/ML-based threats
- An overview of adversarial attacks against the ML model itself, and ways to reduce this risk (see the sketch after this list)
- A look at a powerful tool for defenders: predictive and behavioral modeling, and how we solve the challenges it presents
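One widely used countermeasure in this space, for example, is adversarial training: augmenting each training batch with perturbed copies of the inputs so the model learns to classify both clean and attacked samples. The sketch below shows the idea under the same toy assumptions as above (random data, illustrative epsilon); it is not the training recipe behind any specific product.

```python
# Hedged sketch of adversarial training: train on clean batches plus
# FGSM-perturbed copies of them. Data, labels, and epsilon are synthetic
# stand-ins for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
epsilon = 0.1

def fgsm(x, y):
    """Return an FGSM-perturbed copy of x within an L-inf ball of epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).detach()

for step in range(100):                 # toy training loop
    x = torch.randn(32, 8)              # stand-in for real features
    y = torch.randint(0, 2, (32,))      # stand-in labels
    x_all = torch.cat([x, fgsm(x, y)])  # clean + adversarial batch
    y_all = torch.cat([y, y])
    opt.zero_grad()                     # also clears grads from the FGSM pass
    loss_fn(model(x_all), y_all).backward()
    opt.step()
```

Adversarial training typically trades some clean-data accuracy for robustness, which is why in practice it is combined with other safeguards such as input sanitization and ongoing model monitoring.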
Cyber defenders have tools to fight back in the AI arms race, but only if they implement strategies to minimize risks and protect their systems. A balanced, carefully managed approach, combining the strengths of threat research and AI, may be defenders’ best hope of keeping pace with bad actors in the long term.