Updated December 24, 2024: This article, originally published on December 23, now includes details of new AI threat research from Palo Alto Networks’ Unit 42 security group that could have a positive impact on Gmail users as well as others.
The most popular free email platform on the planet is under attack from hackers wielding AI-powered threats. With 2.5 billion users, according to Google’s own figures, Gmail is not the only target of such attacks, but it is certainly the most important. Here’s what you need to know and do to protect yourself. Right away.
The threat of AI to billions of Gmail users explained
Gmail is certainly not immune to advanced attacks from bad actors looking to exploit the trove of sensitive data found in the average inbox. As I recently pointed out, there is a notification attack in progress against Google Calendar that depends on Gmail to succeed, and Google itself has warned of a second wave of Gmail attacks that includes extortion and invoice-based phishing, for example. With Apple also warning iPhone users about spyware attacks, and an infamous ransomware gang resurfacing and naming February 3 as its next attack date, this is not the time for cyber-complacency. Certainly not when McAfee, a giant of the security vendor world, has issued a new warning confirming what I have been saying about the biggest threat facing Gmail users: terrifyingly convincing AI-powered phishing attacks.
“Fraudsters use artificial intelligence to create fake videos or highly realistic audio recordings that pretend to be authentic content from real people,” McAfee warned. “As deepfake technology becomes more accessible and affordable, even people without prior experience can produce compelling content.” So imagine what threat actors, scammers and hackers with prior experience can achieve: an AI-based attack convincing enough to trick even a seasoned cybersecurity professional into handing over credentials, leaving their Gmail account hacked, with all the consequences that could result.
Compelling AI-powered attacks targeting Gmail users
In October, a Microsoft security solutions consultant named Sam Mitrovic went viral after I reported how he had almost fallen victim to an AI-based attack so compelling, and so typical of the latest wave of cyberattacks targeting Gmail users, that it is worth briefly recounting again. It actually started a week before the attack itself became obvious; let me explain:
Mitrovic received a notification about a Gmail account recovery attempt, apparently from Google. He ignored it, along with a missed phone call purporting to come from Google that followed about a week later. Then it all happened again. This time, Mitrovic picked up: an American voice, claiming to be from Google support, confirmed that there was suspicious activity on his Gmail account. To cut a long story short (do go and read the original, it’s definitely worth it), the number the call came from appeared to be a Google one according to a quick search, and the caller was happy to send a confirmation email. However, as a security consultant, Mitrovic spotted something a less experienced user might not have: the “To” field contained a cleverly obfuscated address that wasn’t actually a genuine Google address. As I wrote at the time, “it is almost certain that the attacker would have continued to the point where the so-called recovery process would have been initiated,” which would have served to capture login credentials and, very probably, a session cookie enabling 2FA to be bypassed as well.
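The lesson here is that display names and subject lines are trivially spoofed, while the underlying address and the message’s authentication headers are much harder to fake. As a purely illustrative sketch (not taken from Mitrovic’s account, and using a made-up raw message), here is how the raw headers that Gmail exposes via its “Show original” view can be parsed to surface the real sender domain:

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical raw headers, the kind of thing Gmail's "Show original" view exposes.
raw_message = """\
From: Google Support <support@gooogle-accounts.example>
To: victim@gmail.com
Authentication-Results: mx.google.com; spf=fail; dkim=fail
Subject: Suspicious activity detected on your account
"""

msg = message_from_string(raw_message)

display_name, address = parseaddr(msg["From"])
domain = address.rsplit("@", 1)[-1].lower()

# The display name is entirely attacker-controlled; only the real domain and the
# Authentication-Results header carry a useful signal.
print(f"Display name : {display_name}")
print(f"Real address : {address}")
print(f"Auth results : {msg['Authentication-Results']}")

if domain != "google.com" and not domain.endswith(".google.com"):
    print("Warning: this did not come from a genuine google.com address")
```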
Unit 42 researchers develop new adversarial machine learning algorithm that could help Gmail and other users defend against AI-powered malware
Recently published research from Palo Alto Networks’ Unit 42 group details how, by developing an adversarial machine learning algorithm that uses large language models to generate malicious JavaScript code at scale, detection of these AI-powered threats in the wild can be improved by up to 10%. One of the main problems facing users, and those working to defend them against cyberthreats, is that while “LLMs struggle to create malware from scratch,” as Unit 42 researchers Lucas Hu, Shaown Sarker, Billy Melicher, Alex Starov, Wei Wang, Nabeel Mohamed and Tony Li said, “criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect.” It is relatively easy for defenders to detect commercially available obfuscation tools because their fingerprints are well known and their actions already cataloged. LLMs, however, have been a game-changer for obfuscation, tipping the scales in favor of attackers because, through AI prompts, they can “perform transformations that appear much more natural,” the report states, “which makes detection of this malware more difficult.” The ultimate goal is, through multiple layers of such transformations, to fool malware classifiers into believing that malicious code is, in fact, completely harmless.
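To make the mechanics concrete, here is a minimal sketch, not Unit 42’s published code, of the layered-rewriting idea the report describes: keep prompting an LLM to apply one small, natural-looking transformation at a time, and keep a variant only if it pushes a static classifier’s malicious score lower. The llm_rewrite() and score_malicious() functions are hypothetical placeholders for an LLM call and a static-analysis model.

```python
import random

# Illustrative transformation instructions of the kind an attacker (or a defender
# generating training data) might feed to an LLM, one layer at a time.
TRANSFORMATION_PROMPTS = [
    "Rename variables and functions to plausible, benign-looking identifiers.",
    "Split string literals and reassemble them at runtime.",
    "Insert dead code that never executes but looks legitimate.",
    "Re-express the same logic with different control-flow constructs.",
]

def llm_rewrite(js_code: str, instruction: str) -> str:
    """Placeholder for an LLM call that rewrites the JavaScript per the instruction."""
    return js_code  # a real implementation would return the transformed code

def score_malicious(js_code: str) -> float:
    """Placeholder for a static-analysis classifier returning P(malicious)."""
    return random.random()

def rewrite_until_evasive(js_code: str, max_layers: int = 10, threshold: float = 0.5) -> str:
    """Apply up to max_layers of rewrites, keeping only layers that lower the score."""
    best_code, best_score = js_code, score_malicious(js_code)
    for _ in range(max_layers):
        candidate = llm_rewrite(best_code, random.choice(TRANSFORMATION_PROMPTS))
        candidate_score = score_malicious(candidate)
        if candidate_score < best_score:   # keep this layer only if it helps evasion
            best_code, best_score = candidate, candidate_score
        if best_score < threshold:         # the classifier now calls the sample benign
            break
    return best_code
```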
Unit 42 successfully created an algorithm that uses LLMs themselves to rewrite malicious JavaScript code, continually applying a number of rewriting steps to fool static analysis models. “At each step,” the researchers said, “we also used a behavior analysis tool to ensure that the behavior of the program remained unchanged.” Why is this important? Because, given the availability of generative AI tools to attackers, as we have seen in various attacks against Gmail users, the scale of malicious code variants, and the difficulty of detecting them, will only continue to grow. Unit 42’s work shows how defenders “can use the same tactics to rewrite malicious code to generate training data that can improve the robustness of ML models.” Indeed, Unit 42 said that using this rewriting technique it was able to develop a new deep learning-based malicious JavaScript detector, which “currently works with advanced URL filtering, detecting tens of thousands of JavaScript-based attacks every week.”
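The defensive flip side, as Unit 42 describes it, is to run that same rewriting loop over known-bad samples, discard any variant whose behavior changed, and fold the survivors back into the detector’s training set. Below is a minimal sketch under the same placeholder assumptions as the previous snippet (it reuses rewrite_until_evasive() from above; behaves_identically() stands in for a behavior-analysis tool, and the detector object for whatever feature pipeline and model a real classifier would use).

```python
def behaves_identically(original_js: str, variant_js: str) -> bool:
    """Placeholder for dynamic analysis confirming the rewrite preserved behavior."""
    return True

def augment_training_data(known_malware: list[str]) -> list[tuple[str, int]]:
    """Label-preserving augmentation: new surface forms, same malicious behavior."""
    augmented = []
    for sample in known_malware:
        variant = rewrite_until_evasive(sample)   # reuse the rewriting loop above
        if behaves_identically(sample, variant):  # discard rewrites that broke the code
            augmented.append((variant, 1))        # label 1 = still malicious
    return augmented

def harden_detector(detector, training_set: list[tuple[str, int]], known_malware: list[str]):
    """Refit the malicious-JavaScript detector on the original plus augmented data."""
    detector.fit(training_set + augment_training_data(known_malware))
    return detector
```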
What Gmail and McAfee recommend you do to mitigate ongoing AI attacks
When it comes to mitigation tips, some are more relevant than others. Take, for example, recent advice from the Federal Bureau of Investigation, among others, which suggested spotting phishing emails by checking for spelling mistakes and grammatical inconsistencies. As I have pointed out, this is very outdated advice and, as such, rather useless in today’s AI-driven threat landscape.
McAfee’s advice, to “protect yourself by double-checking any unexpected requests through a reliable alternative method and relying on security tools designed to detect deepfake manipulation,” is much better.
The best advice, however, comes from Google itself when it comes to mitigating attacks against Gmail users, and it can be broken down into these main points:
- If you receive a warning, avoid clicking on links, downloading attachments, or entering personal information. “Google uses advanced security to warn you about unsafe messages, dangerous content, or misleading websites,” Google said. “Even if you don’t receive a warning, don’t click on links, don’t download files, and do not enter personal information in emails, messages, web pages or pop-ups from untrustworthy or unknown providers.”
- Do not respond to requests for your private information via email, text message or phone call and always protect your personal and financial information.
- If you think a security email that appears to be from Google might be fake, go directly to myaccount.google.com/notifications. “On this page,” Google said, “you can check the recent security activity of your Google account.”
- Be wary of urgent-sounding messages that appear to come from people you trust, like a friend, family member, or someone from work.
- If you click on a link and are asked to enter the password for your Gmail, Google, or other service account: don’t do it. “Instead, go directly to the website you want to use,” Google said, and that includes logging into your Google/Gmail account.