During a March 2024 National Association of State Chief Information Officers call with government and business IT leaders, an old security issue was highlighted as one of today's top threats: Cybersecurity awareness training for end users is back at the top of the government cybersecurity agenda. We've seen this play out before. Or have we?
A new generation of AI-generated phishing attacks, delivered through emails, text messages, voice messages and even videos, is targeting government organizations in unprecedented ways. These intelligent cyberattacks pose new challenges for defenders because they arrive without the typos, formatting mistakes and other telltale errors seen in earlier phishing and spear-phishing campaigns.
AI-generated deepfakes are even scarier and can imitate a person’s voice, face and gestures. New cyberattack tools can spread disinformation and fraudulent messages at unprecedented scale and sophistication.
Simply put, AI-generated fraud is harder than ever to detect and stop. Recent examples from 2024 include fake posts impersonating President Biden, Florida Governor Ron DeSantis, and private sector CEOs. Beyond the electoral and political impacts, a deepfake video of the CFO of a multinational company recently tricked staff into making bank transfers, resulting in a loss of $26 million.
So how can businesses address these new risks?
In recent years, the industry has worked to move beyond traditional end-user security awareness training toward a more comprehensive set of measures to combat cyberattacks directed against people.
Simply put: effective security awareness training truly changes security culture. People engage and start asking questions, they understand and report risks, and they realize that security is not just a workplace issue; it is about their personal security and that of their families as well.
The term many are now adopting is “human risk management” (HRM). Research and consulting firm Forrester describes HRM as “solutions that manage and reduce cybersecurity risks posed by and to humans through: detecting and measuring human security behaviors and quantifying human risk; initiating policy and training interventions based on human risk; educating and enabling staff to protect themselves and their organization from cyberattacks; and building a positive security culture.”
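To make “quantification of human risk” concrete, here is a minimal sketch of how an HRM tool might score it. The signals (simulated-phish clicks, training completion, incident reports) and the weights are purely illustrative assumptions, not Forrester's model or any vendor's formula:

```python
from dataclasses import dataclass

@dataclass
class EmployeeSignals:
    phish_sim_clicks: int    # clicks on simulated phishing emails this quarter
    phish_sims_sent: int     # simulated phishing emails received this quarter
    training_complete: bool  # finished the current awareness training?
    incidents_reported: int  # suspicious messages the employee reported

def human_risk_score(s: EmployeeSignals) -> float:
    """Toy 0-100 risk score: higher means riskier.
    Weights are illustrative assumptions only."""
    click_rate = s.phish_sim_clicks / s.phish_sims_sent if s.phish_sims_sent else 0.0
    score = 60 * click_rate                      # susceptibility dominates
    score += 0 if s.training_complete else 25    # overdue training adds risk
    score -= min(15, 5 * s.incidents_reported)   # active reporting reduces risk
    return max(0.0, min(100.0, score))

# An HRM platform would then trigger interventions by risk band,
# e.g. extra training above 50, a manager check-in above 75.
print(human_risk_score(EmployeeSignals(2, 4, False, 0)))  # -> 55.0
```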
So what does this mean for combating AI-generated deepfakes?
First, employees must be (re)trained to detect this new generation of sophisticated phishing attacks. They must know how to authenticate both the source and the content of what they receive. This includes showing them what to look for, for example:
- Inconsistencies in audio or video quality
- Incompatible lip sync or voice sync
- Unnatural facial movements
- Unusual behavior or speech patterns
Training should also cover verifying the source of a message, building detection skills over time, and checking for watermarks or provenance data on images and videos, as illustrated in the sketch below.
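As one illustration of a provenance-style check, the sketch below (Python with the Pillow library) flags images that lack the camera EXIF metadata a genuine photo usually carries. The chosen fields are illustrative assumptions, and this is a weak heuristic rather than a deepfake detector: screenshots and re-encoded images also strip metadata.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Return a dict of human-readable EXIF tags for an image file."""
    with Image.open(path) as img:
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in img.getexif().items()}

def looks_unprovenanced(path: str) -> bool:
    """Heuristic: flag images missing the capture metadata (camera
    make/model, timestamp) a genuine photo usually carries. Absence
    is not proof of fakery, only a cue to verify the source."""
    expected = {"Make", "Model", "DateTime"}
    return not expected.intersection(exif_summary(path))

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        status = "verify source" if looks_unprovenanced(path) else "has capture metadata"
        print(f"{path}: {status}")
```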
Second, provide tools, processes and techniques to verify the authenticity of messages. Where such tools are not yet available, establish a management-endorsed vetting process so employees feel empowered to question the legitimacy of a message. Also encourage staff to report deepfake content: if they come across a deepfake involving them or someone they know, they should report it to the platform hosting it.
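For email specifically, one concrete verification step is to check the Authentication-Results header (RFC 8601) that receiving mail servers stamp on inbound messages. The sketch below, using only the Python standard library, flags messages whose SPF, DKIM or DMARC checks did not pass; it is a hypothetical helper for a vetting workflow, not a complete mail-security gateway.

```python
import email
from email import policy

def auth_failures(raw_message: bytes) -> list[str]:
    """Parse an RFC 5322 message and return the SPF/DKIM/DMARC
    results in its Authentication-Results headers that are not 'pass'."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    failures = []
    for header in msg.get_all("Authentication-Results", []):
        for clause in str(header).split(";"):
            clause = clause.strip().lower()
            for check in ("spf", "dkim", "dmarc"):
                if clause.startswith(check + "=") and not clause.startswith(check + "=pass"):
                    failures.append(clause.split()[0])  # e.g. "dkim=fail"
    return failures

if __name__ == "__main__":
    import sys
    with open(sys.argv[1], "rb") as f:
        bad = auth_failures(f.read())
    print("Escalate for manual vetting:" if bad else "Authentication checks passed.", bad or "")
```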
Third, consider new business technology tools that use AI to detect fraudulent messages. That’s right: You may need to fight fire with fire by using the next generation of cyber tools to stop these AI-generated messages, the same way email security tools detect and disable traditional phishing links and quarantine spam messages. Some new tools allow staff to check messages and images for fraud, although this cannot be done automatically for all incoming emails.
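As a toy illustration of fighting fire with fire, the sketch below trains a tiny text classifier (scikit-learn, TF-IDF plus logistic regression) to score messages for phishing likelihood. The six inline training examples and the 0.5 threshold are purely illustrative assumptions; a production tool would use a large labeled corpus and a far more capable model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative training data only -- a real deployment would use
# thousands of labeled messages, not six.
messages = [
    "Urgent: wire $26,000 to this account before noon today",
    "Your password expires, click here immediately to verify",
    "CEO request: buy gift cards and send the codes now",
    "Team lunch is moved to Thursday at the usual place",
    "Attached is the Q2 budget draft for your review",
    "Reminder: benefits enrollment closes next Friday",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

def phishing_score(text: str) -> float:
    """Probability (per this toy model) that a message is phishing."""
    return float(model.predict_proba([text])[0][1])

if __name__ == "__main__":
    sample = "Please transfer funds urgently, the CFO approved this by video call"
    score = phishing_score(sample)
    print(f"score={score:.2f}", "-> quarantine for review" if score > 0.5 else "-> deliver")
```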
This new generation of cyberattacks that use deepfakes to deceive humans is, at bottom, undermining trust in all things digital. Indeed, digital trust is increasingly difficult for governments to achieve, and current trends are not encouraging; they demand immediate action.
As Albert Einstein once said: “He who does not care for the truth in small things cannot be trusted in important matters.”
This story originally appeared in the May/June 2024 issue of Government Technology magazine. Click here to read the full digital edition online.