Abnormal Security has published examples of attacks that illustrate how cybercriminals are beginning to leverage generative artificial intelligence (AI) to launch cyberattacks.
For example, a cybercriminal posed as a Netflix customer service representative and urged a potential victim to renew their subscription immediately by clicking on a URL. The attack is difficult to detect because it uses what appears to be a genuine support domain associated with Teeela, an online toy shopping app, and an email address hosted on Zendesk, a trusted customer support platform.
Other examples include similar attacks in which cybercriminals posed as representatives of cosmetics and insurance companies.
Abnormal Security CISO Mike Britton said that as cybercriminals continue to exploit generative AI technologies, detecting these types of social engineering attacks will become increasingly difficult for the average end user. In fact, he added, the only way for organizations to consistently detect these attacks is to rely on cybersecurity platforms that use AI to identify end-user behavior that is known to be good.
Any deviation from that baseline can then be flagged for further review. Organizations will need to lean on AI to combat these increasingly sophisticated attacks precisely because generative AI makes it easier for cybercriminals to craft emails that appear legitimate, Britton said.
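The approach Britton describes amounts to behavioral baselining: learn each user's normal patterns, then flag anything outside them. The Python sketch below illustrates the idea only; the class name and the feature set (sender domain and hour of day) are illustrative assumptions, not Abnormal Security's actual model.

```python
from collections import defaultdict

class BehaviorBaseline:
    """Toy model of 'known good' behavior per user. Real platforms use far
    richer features and statistical models, not exact-match lookups."""

    def __init__(self):
        # user -> set of (sender_domain, hour) pairs observed historically
        self.known = defaultdict(set)

    def observe(self, user, sender_domain, hour):
        """Record legitimate historical behavior for this user."""
        self.known[user].add((sender_domain, hour))

    def is_anomalous(self, user, sender_domain, hour):
        """Flag any message whose profile was never seen before."""
        return (sender_domain, hour) not in self.known[user]

baseline = BehaviorBaseline()
baseline.observe("alice", "zendesk.com", 9)
print(baseline.is_anomalous("alice", "zendesk.com", 9))                # False: matches history
print(baseline.is_anomalous("alice", "unfamiliar-support.example", 3))  # True: flag for review
```

The design choice worth noting is that the model never tries to decide whether content "looks malicious"; it only asks whether behavior deviates from an established baseline, which is what makes the approach resilient to convincingly written AI-generated lures.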
These tactics and techniques will become even more difficult to detect as cybercriminals use generative AI platforms to create so-called deepfakes: audio and video that, at first glance, will appear just as legitimate, he added.
It is unclear how cybersecurity will need to evolve as generative AI, despite existing safeguards, is increasingly used to launch attacks based on the social engineering techniques that are often at the heart of business email compromise (BEC). In theory, organizations could migrate to other collaboration platforms, but many of those platforms are vulnerable to the same social engineering tactics that cybercriminals use to compromise email, Britton noted.
There is little doubt that BEC and similar types of attacks commonly used to perpetrate fraud will increase sharply in the coming year. Although organizations could invest more in training end users to recognize these attacks, the added sophistication that generative AI enables will make them difficult for any human to detect. The only viable approach will be to rely more on machines to identify signals of abnormal behavior, such as an email carrying malware that links out to some type of external command-and-control server.
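One concrete machine-side signal of this kind is a link pointing at known command-and-control infrastructure. The sketch below matches URLs in a message body against a hypothetical indicator list; a production system would pull indicators from a live threat-intelligence feed and weigh this alongside many other signals.

```python
import re

# Hypothetical command-and-control (C2) indicators, for illustration only.
KNOWN_C2_DOMAINS = {"c2.badactor.example", "payload-host.example"}

URL_RE = re.compile(r"https?://([\w.-]+)", re.IGNORECASE)

def flags_c2_link(email_body: str) -> bool:
    """Return True if any URL in the body points at a known C2 domain."""
    return any(host.lower() in KNOWN_C2_DOMAINS
               for host in URL_RE.findall(email_body))

print(flags_c2_link("Please review: https://c2.badactor.example/invoice"))  # True
print(flags_c2_link("See https://zendesk.com/ticket/123"))                   # False
```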
In the meantime, organizations should be especially careful when relying on email to manage any type of transaction. In the same way that fewer and fewer people today answer the phone without first knowing who is calling, one day no one may respond to an email without first verifying where it came from and confirming that the sender really is someone they know.
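That kind of verification already has building blocks in the standard email authentication protocols SPF, DKIM, and DMARC. The sketch below is a crude, illustrative check of the Authentication-Results header that a receiving mail server stamps on a message; a real verifier would recompute the checks itself rather than trust a header an attacker could forge.

```python
from email import message_from_string

def sender_verified(raw_email: str) -> bool:
    """Illustrative only: report whether SPF, DKIM, and DMARC all passed,
    according to the receiving server's Authentication-Results header."""
    msg = message_from_string(raw_email)
    results = (msg.get("Authentication-Results") or "").lower()
    return all(f"{check}=pass" in results for check in ("spf", "dkim", "dmarc"))

raw = (
    "Authentication-Results: mx.example.com; spf=pass; dkim=pass; dmarc=pass\n"
    "From: colleague@example.com\n"
    "Subject: Invoice\n"
    "\n"
    "Please review the attached invoice."
)
print(sender_verified(raw))  # True: all three checks passed
```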