We are all used to receiving poorly worded emails from someone in a faraway country who is down on their luck and struggling to transfer a huge sum of money, offering to forgo a significant portion of it if only you will help. Historically, these scams have been easy to spot thanks to a suspicious email address or poor language and formatting.
However, with the advent of artificial intelligence (AI), the quality of these approaches is improving, and with it the risk of being caught out. Taking the email scam above as an example, AI can easily be used to produce a flawlessly written email and even to suggest plausible reasons why you should help.
This could facilitate a more sustained and targeted campaign that builds trust and rapport over time, making it more likely that an unsuspecting person will be defrauded or manipulated.
Beyond email scams, AI is already opening up new avenues for deception. There have been examples of AI successfully imitating and cloning people's voices, which are then used to trick unsuspecting relatives into handing over money.
Few parents would think twice before transferring money in response to a call from their child saying they were in trouble and needed help. A real and horrific example of an AI scam occurred when an American mother received a call from what sounded like her "kidnapped" daughter, whose voice had been cloned by the scammers.
The complexity and credibility of AI scams are growing at an unprecedented rate. In one bold and successful example, a finance employee was scammed into transferring a large sum of money during a deepfake video conference in which he was the only real person on the call.
Governments are considering legislation and promoting safety around AI, which will help, especially when perpetrators are caught and can be brought to justice. Some AI services take a responsible, proactive approach to filtering and refuse to answer questions that could serve illegal purposes.
However, the sad reality is that legislation and the moral compass of some AI developers are unlikely to prevent fraudsters from using AI for illicit gains as the technology becomes more commoditized, easier to develop, operate and access.
How to Protect Yourself Against AI-Enhanced Scams
So how can individuals and organizations protect themselves in the future? Traditional security controls and simple actions can be put in place that, at the very least, will make life more difficult for criminals.
Checking a sender's real email address rather than their display name often helps quickly identify illegitimate emails. Be careful, though: scammers may register domains that are easily confused with legitimate ones, for example by replacing an O with a 0 (zero) or simply appending a number to a legitimate domain name.
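For readers who filter their own mail, this kind of check is easy to automate. Below is a minimal Python sketch, not a definitive implementation: the trusted-domain list, function names, and the small set of look-alike substitutions are all illustrative assumptions, and real filters would cover far more tricks.

```python
import re

# Hypothetical allow-list: the domains this mailbox legitimately deals with.
TRUSTED_DOMAINS = {"example-bank.com"}

# Undo a few common look-alike substitutions (0 for o, 1 for l, 3 for e, 5 for s).
HOMOGLYPHS = str.maketrans("0135", "oles")

def sender_domain(from_header: str) -> str:
    """Extract the actual domain from a From: header, ignoring the display name."""
    match = re.search(r"@([A-Za-z0-9.-]+)", from_header)
    return match.group(1).lower() if match else ""

def is_lookalike(from_header: str) -> bool:
    """Return True when the sender's domain merely resembles a trusted one."""
    domain = sender_domain(from_header)
    if domain in TRUSTED_DOMAINS:
        return False                                  # exact match: fine
    normalized = domain.translate(HOMOGLYPHS)         # examp1e -> example
    stripped = re.sub(r"\d+(?=\.)", "", normalized)   # example-bank2.com -> example-bank.com
    return normalized in TRUSTED_DOMAINS or stripped in TRUSTED_DOMAINS

# The display name looks right, but the domain swaps the letter l for the digit 1.
print(is_lookalike('"Example Bank" <alerts@examp1e-bank.com>'))  # True
```

The point of the sketch is the order of operations: extract the real domain first, then compare it against names you already trust, rather than trusting whatever the display name claims.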
The old security adage of verifying identity with "something you know" can be a useful approach. For example, you could agree with your family members on a phrase or word used to confirm that it really is you, or perhaps use a nickname that is not known outside the immediate family.
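The same "something you know" check appears in software as a shared-secret comparison. Here is a minimal sketch, assuming the agreed phrase is stored only as a salted hash; the phrase, function names, and iteration count are illustrative assumptions.

```python
import hashlib
import hmac
import os

def enroll(phrase: str) -> tuple[bytes, bytes]:
    """Store a salted hash of the agreed phrase, never the phrase itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt, 100_000)
    return salt, digest

def verify(phrase: str, salt: bytes, digest: bytes) -> bool:
    """Compare in constant time so timing leaks nothing about the secret."""
    candidate = hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

# Hypothetical family code word agreed in advance.
salt, digest = enroll("grandma's blue bicycle")
print(verify("grandma's blue bicycle", salt, digest))  # True
print(verify("a scammer's guess", salt, digest))       # False
```

Over the phone the principle is the same as in the code: the caller must produce the secret, and anyone who cannot is treated as unverified, however convincing their voice sounds.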
Businesses need to have policies and processes in place. For example, a simple call-back mechanism or multiple levels of verification can easily be implemented and will significantly reduce risk. These processes and policies should be supported by employee training and tested regularly.
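To illustrate what "multiple levels of verification" can look like in practice, here is a minimal sketch of a dual-approval rule for outgoing payments. The threshold, class, and field names are hypothetical; a real system would add audit logging and role checks.

```python
from dataclasses import dataclass, field

# Hypothetical policy: payments at or above this amount need two distinct approvers.
DUAL_APPROVAL_THRESHOLD = 10_000

@dataclass
class PaymentRequest:
    amount: float
    payee: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, employee_id: str) -> None:
        self.approvals.add(employee_id)  # a set ignores duplicate approvals

    def can_execute(self) -> bool:
        """Small payments need one approver; large ones need two different people."""
        required = 2 if self.amount >= DUAL_APPROVAL_THRESHOLD else 1
        return len(self.approvals) >= required

request = PaymentRequest(amount=25_000, payee="ACME Supplies Ltd")
request.approve("alice")
print(request.can_execute())  # False: a second, independent approver is required
request.approve("bob")
print(request.can_execute())  # True
```

A control like this would have blunted the deepfake video-conference scam described earlier: no single employee, however convinced, could have released the funds alone.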
Finally, making people aware of these types of scams, including talking to colleagues and family members, will significantly reduce the risk of anyone being caught unawares. Consider using the steps described above: while not entirely foolproof, they at least raise the bar for the AI scammer.