“Business is tough these days, isn’t it? Everyone is short. Everyone is optimized and efficient,” says Handa. “Adding another layer of costs or perceived costs is not something that companies like to do, or do easily.”
“But unfortunately, when they have an attack, it kind of wakes everyone up and reminds them why the cost might have been a good idea,” he says.
AI makes fraud detection more difficult
Over the past year, Handa has seen an increase in data thefts, business email compromises, and attempts to trick employees into transferring money to fraudsters. These schemes are meeting with growing success, with a “level of sophistication (that) is probably helped by the development of AI,” says Handa.
AI tools can help threat actors create fake voices and write convincing documents, making it harder for businesses to detect fraud. Handa highlighted tools that can help bad actors access a user’s inbox, analyze the emails they’ve sent, and generate new messages in the same writing style. Threat actors can also use tools to analyze recordings of an individual’s voice and generate deepfakes that convincingly resemble the person.
The Blakes report cited a recent example of this type of impersonation fraud, in which a multinational company in Hong Kong was tricked into sending a threat actor the equivalent of US$25.6 million. An employee in the company’s finance department had received instructions to complete the transaction during several video conference calls with someone he believed to be the company’s chief financial officer.