Former US President Donald Trump posing with Black voters, President Joe Biden discouraging people from voting by telephone or the Pope in a puffy white jacket: deepfake videos, photos and audio recordings have spread across various Internet platforms, aided by technological advances in generative AI tools like Midjourney, Google’s Gemini or OpenAI’s ChatGPT.
With a few quick adjustments, anyone can create seemingly real images or make the voices of leading figures in politics, business and the arts say whatever they want. Although creating a deepfake is not a criminal offense in itself, many governments are nonetheless moving toward stricter regulation of artificial intelligence to prevent harm to the people affected.
Beyond the most common use of deepfakes, creating non-consensual pornographic content primarily targeting female celebrities, the technology can also be used to commit identity fraud by producing fake IDs or impersonating others over the telephone. As our chart based on the most recent annual report from identity verification provider Sumsub shows, cases of identity fraud linked to deepfakes skyrocketed between 2022 and 2023 in many countries around the world.
For example, the number of fraud attempts in the Philippines increased by 4,500% year-over-year, followed by countries like Vietnam, the United States and Belgium. As the capabilities of so-called artificial intelligence continue to grow, as evidenced by products such as the Sora AI video generator, deepfake fraud attempts could also expand into other areas. “We have seen deepfakes become more and more convincing in recent years and this will only continue and expand to new types of fraud, as seen with voice deepfakes,” says Pavel Goldman-Kalaydin, head of artificial intelligence and machine learning at Sumsub, in the aforementioned report. “Consumers and businesses must remain extra vigilant about synthetic fraud and look to multi-layered anti-fraud solutions, not just deepfake detection.”
These assessments are shared by many cybersecurity experts. For example, in a survey of 199 cybersecurity executives attending the World Economic Forum’s annual cybersecurity meeting in 2023, 46% of respondents said they were most concerned about “the advancement of adversarial capabilities – phishing, malware development, deepfakes” among the risks that artificial intelligence poses for cybersecurity in the future.