Historically, cyberattacks were laborious, meticulously planned, and required extensive manual research. With the advent of AI, however, threat actors have leveraged its capabilities to orchestrate attacks with exceptional efficiency and power. This technological shift allows them to execute more sophisticated, harder-to-detect attacks at scale. They can also manipulate machine learning algorithms to disrupt operations or compromise sensitive data, amplifying the impact of their criminal activities.
Malicious actors are increasingly turning to AI to analyze and refine their attack strategies, significantly increasing their chances of success. These AI-based attacks are stealthy and unpredictable by nature, making them adept at bypassing traditional security measures that rely on fixed rules and historical attack data. In the 2023 Global Chief Information Security Officer (CISO) Survey conducted by research firm Heidrick & Struggles, AI emerged as the most frequently cited significant threat expected over the next five years. Organizations must therefore prioritize raising awareness of these AI-based cyber threats and strengthen their defenses accordingly.
Characteristics of AI-based cyberattacks
AI-based cyberattacks have the following characteristics:
- Automated target profiling: AI streamlines attack research, using data analysis and machine learning to effectively profile targets by scraping information from public records, social media and company websites.
- Efficient information collection: AI accelerates the reconnaissance phase, the first active step of an attack, by automating the search for target information across various online platforms.
- Customized attacks: AI analyzes data to craft highly personalized phishing messages, increasing the chances of successful deception.
- Employee targeting: AI identifies key personnel within organizations with access to sensitive information.
- Reinforcement learning: Attackers use reinforcement learning for real-time adaptation and continuous improvement, adjusting tactics based on previous interactions to stay agile, raise their success rate, and remain one step ahead of security defenses.
Types of AI-based cyberattacks
Advanced Phishing Attacks
A recent report from cybersecurity company SlashNext reveals alarming statistics: since the fourth quarter of 2022, malicious phishing emails have increased by 1,265%, with credential phishing seeing a spike of 967%. Cybercriminals use generative AI tools like ChatGPT to create highly targeted and sophisticated business email compromise (BEC) and phishing messages.
The days of poorly written “Prince of Nigeria” emails in broken English are over. Today’s phishing emails are remarkably convincing, mirroring the tone and structure of official communications from trusted sources. Because malicious actors use AI to craft them, distinguishing these messages from genuine ones has become genuinely difficult.
To protect yourself against AI-based phishing attacks:
- Implement advanced email filtering and anti-phishing software to detect and block suspicious emails (a minimal filtering sketch follows this list).
- Educate employees on recognizing phishing indicators and conduct regular phishing awareness training.
- Enforce multi-factor authentication and keep software regularly updated to mitigate known vulnerabilities.
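To illustrate the filtering idea in the first item above, here is a minimal sketch of a text-based phishing classifier built with scikit-learn. The tiny inline dataset, labels, and threshold logic are invented for demonstration; a production filter would train on large labeled corpora and use far richer signals (headers, URLs, sender reputation).

```python
# Minimal sketch of an ML-based phishing email filter.
# The tiny inline dataset is invented for illustration only; a real
# deployment would train on large labeled corpora and richer features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked. Verify your password here immediately",
    "Urgent: wire transfer needed, reply with banking details",
    "Team meeting moved to 3pm, agenda attached",
    "Quarterly report draft for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF text features plus logistic regression as a simple scoring model.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

incoming = "Please verify your password to unlock your account"
score = clf.predict_proba([incoming])[0][1]
print(f"Phishing probability: {score:.2f}")  # quarantine above a chosen threshold
```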
Advanced Social Engineering Attacks
AI-generated social engineering attacks involve the manipulation and deception of individuals through AI algorithms to craft convincing personas, messages, or scenarios. These methods leverage psychological principles to trick targets into disclosing sensitive information or taking certain actions.
Examples of AI-generated social engineering attacks include:
- AI-powered chatbots or virtual assistants capable of human-like interaction, which engage individuals in conversation to gather sensitive information or manipulate their behavior.
- AI-powered deepfake technology, which poses a significant threat by generating convincing audio and video content for spoofing and disinformation campaigns. Using AI text-to-speech tools, attackers collect and analyze audio data to imitate a target’s voice accurately, making deception easier in a wide range of scenarios.
- Manipulation of social media via AI-generated profiles or automated bots that spread propaganda, fake news or malicious links.
Strategies to Protect Against AI Social Engineering Attacks
- Advanced threat detection: Implement AI-powered threat detection systems that can identify telltale patterns of social engineering attacks.
- Email filtering and anti-phishing tools: Use AI-powered solutions to block malicious emails before they reach users’ inboxes.
- Multi-Factor Authentication (MFA): Implement MFA to add an extra layer of security against unauthorized access (see the TOTP sketch after this list).
- Employee training and security awareness programs: Educate employees to recognize and report social engineering tactics, including AI-based techniques, through awareness campaigns and ongoing training sessions.
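To make the MFA recommendation concrete, the sketch below shows time-based one-time passwords (TOTP, RFC 6238) using the pyotp library, the mechanism behind most authenticator-app second factors. The user name, issuer, and simplified secret handling are illustrative assumptions; real systems store secrets encrypted server-side and bind them to users at enrollment.

```python
# Minimal sketch of time-based one-time-password (TOTP) verification,
# the mechanism behind most authenticator-app MFA (RFC 6238).
# Secret handling is simplified here; real systems store the secret
# server-side, encrypted, and bind it to the user at enrollment.
import pyotp

# Enrollment: generate a per-user secret and share it (e.g., via QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login: the user submits the 6-digit code from their authenticator app.
submitted_code = totp.now()  # simulated user input for this demo
if totp.verify(submitted_code, valid_window=1):  # tolerate one 30s step of clock drift
    print("Second factor accepted")
else:
    print("Second factor rejected")
```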
Ransomware Attacks
The UK National Cyber Security Centre (NCSC) assessment examines the impact of AI on cyber operations and how the threat landscape will evolve over the next two years. It shows how AI lowers the barrier to entry for novice cybercriminals, hackers, and hacktivists by improving information access and collection capabilities. Threat actors, including ransomware groups, are already exploiting this increased efficiency in operations such as reconnaissance, phishing, and coding. These trends are expected to persist beyond 2025.
To defend against AI-based ransomware attacks:
- Advanced threat detection: Use AI-powered systems to spot ransomware patterns and anomalies in network activity (a minimal anomaly-detection sketch follows this list).
- Network segmentation: Divide the network to limit the spread of ransomware.
- Backup and recovery: Back up critical data regularly and verify recovery processes.
- Patch Management: Keep systems updated to patch vulnerabilities exploited by ransomware.
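As a sketch of the anomaly-detection idea in the first item above, the example below applies scikit-learn's IsolationForest to simple per-host file-activity counts. The features and synthetic numbers are assumptions for illustration, not a production detector.

```python
# Minimal sketch of anomaly detection over file-activity telemetry.
# Mass file modification/renaming is a classic ransomware signal.
# The synthetic numbers and features are invented for illustration;
# real detectors use rich endpoint and network telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per host per minute: [files_modified, files_renamed]
baseline = np.random.default_rng(0).poisson(lam=[5, 1], size=(200, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

observed = np.array([[6, 1],        # ordinary activity
                     [480, 350]])   # burst of modifications/renames: ransomware-like
for row, verdict in zip(observed, model.predict(observed)):
    status = "ANOMALY - investigate" if verdict == -1 else "normal"
    print(row, status)
```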
Adversarial AI
Evasion and poisoning attacks are two types of adversarial attacks against artificial intelligence (AI) and machine learning (ML) models.
Poisoning attacks: These involve inserting malicious data into the training dataset of an AI or ML model. The goal is to manipulate model behavior by subtly changing the training data, leading to biased predictions or compromised performance. By injecting poisoned data during training, attackers can compromise the integrity and reliability of the model.
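A toy sketch makes the effect visible: the snippet below flips a fraction of training labels in a public scikit-learn dataset and compares the poisoned model's test accuracy with a clean baseline. The dataset, model, and 30% poisoning rate are illustrative choices; real poisoning is usually far subtler than random label flips.

```python
# Toy demonstration of a label-flipping poisoning attack.
# Dataset, model, and poisoning rate are illustrative choices only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)

# Attacker flips the labels of 30% of the training samples.
rng = np.random.default_rng(0)
poisoned_y = y_tr.copy()
idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]
poisoned = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, poisoned_y)

print("clean model accuracy:   ", round(clean.score(X_te, y_te), 3))
print("poisoned model accuracy:", round(poisoned.score(X_te, y_te), 3))
```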
Evasion attacks: These attacks aim to fool a machine learning model by carefully crafting its input data. The goal is to change the model’s prediction through subtle alterations to the inputs, causing misclassification. The perturbations are meticulously designed to remain imperceptible to humans. Evasion attacks are prevalent across AI applications such as image recognition, natural language processing, and speech recognition.
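The canonical example of an evasion attack is the Fast Gradient Sign Method (FGSM), which nudges an input in the direction that increases the model's loss while keeping the change small. The sketch below shows the core computation in PyTorch; the tiny untrained model and random input are stand-ins for a real trained classifier and real data.

```python
# Minimal sketch of the Fast Gradient Sign Method (FGSM), a canonical
# evasion attack: perturb the input in the direction that increases the
# model's loss, keeping the change within a small budget (epsilon).
# The untrained model and random input are placeholders for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
true_label = torch.tensor([3])

# Forward/backward pass to obtain the gradient of the loss w.r.t. the input.
loss = loss_fn(model(x), true_label)
loss.backward()

epsilon = 0.05  # perturbation budget: small enough to be near-imperceptible
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("prediction before:", model(x).argmax(dim=1).item())
print("prediction after: ", model(x_adv).argmax(dim=1).item())
```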
How to defend against adversarial AI:
- Adversarial training: Train the model on adversarial examples, using available tools to discover them automatically, so it learns to resist such inputs.
- Model switching: Randomly rotate among multiple models when serving predictions; attackers cannot know which model is currently in use, making their job harder.
- Generalized models: Combine multiple models into an ensemble, making it difficult for threat actors to fool them all (both ideas are sketched after this list).
- Responsible AI: Use responsible AI frameworks to address unique security vulnerabilities in machine learning, as traditional security frameworks may be insufficient.
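The following sketch illustrates the model-switching and ensemble ideas from the list above using scikit-learn; the specific models and dataset are illustrative choices, not a recommendation.

```python
# Sketch of two ensemble-style defenses from the list above:
# (1) randomly switching which model answers each query, and
# (2) majority voting across diverse models. Both raise the cost of
# crafting one adversarial input that fools the deployed prediction.
import random
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
models = [
    ("lr", LogisticRegression(max_iter=5000)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("svm", SVC()),
]
for _, m in models:
    m.fit(X, y)

# (1) Model switching: the attacker cannot know which model will answer.
chosen = random.choice(models)[1]
print("randomly chosen model predicts:", chosen.predict(X[:1]))

# (2) Majority vote across all models.
ensemble = VotingClassifier(models, voting="hard").fit(X, y)
print("ensemble predicts:", ensemble.predict(X[:1]))
```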
Malicious GPTs
Malicious GPTs are generative pre-trained transformers (GPTs) manipulated for offensive purposes, exploiting their vast cyber threat intelligence. Custom GPTs trained on vast datasets could potentially bypass existing security systems, ushering in a new era of adaptive and evasive AI-generated threats. It should be noted that, at the time of writing, these remain largely theoretical and have not been observed in active use.
- WormGPT: used to generate fraudulent emails and hate speech and to distribute malware, enabling cybercriminals to execute Business Email Compromise (BEC) attacks that manipulate recipients.
- FraudGPT: can generate undetectable malware, phishing pages, and undisclosed hacking tools, identify leaks and vulnerabilities, and perform additional functions.
- PoisonGPT: designed to spread misinformation online by injecting false details into historical events, allowing malicious actors to fabricate information, distort reality, and influence public perception.
Conclusion
AI-generated attacks pose a serious threat, capable of causing widespread damage and disruption. To prepare for these threats, organizations must invest in defensive AI technologies, foster a culture of security awareness, and continually update their defense strategies. By remaining vigilant and proactive, organizations can better protect themselves against this new and evolving threat.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire.