Artificial Intelligence and Machine Learning, Cybercrime, Election Security
FBI Sees Rise in AI-Based Fraud; Meta Reports Little AI Use in Election Interference
Artificial intelligence: what is it good for? Contrary to the old war song, the answer isn't "absolutely nothing," but so far it isn't "absolutely everything" either.
New findings indicate where generative AI and deepfakes have proven popular, namely fraud, and where they have not: election interference.
This assessment of election interference comes at the end of a year in which more than 2 billion people in more than 50 countries voted in major elections, including the United States, India, Indonesia, France, Germany, Mexico, Taiwan and the United Kingdom.
Despite the challenges, Meta, the parent company of Facebook, Instagram and WhatsApp, found that less than 1% of misinformation about elections, politics and social issues posted on its sites this year was generated by AI, said Nick Clegg, its president of global affairs, in a blog post published Tuesday.
Although "the risk of widespread deepfakes and AI-driven disinformation campaigns" remains real, "those risks have not materialized in any meaningful way" this year, at least on Meta's platforms, said Clegg, a former British deputy prime minister.
Last month, the Centre for Emerging Technology and Security said in a report that it found "no evidence that AI-enabled disinformation measurably changed the outcome" of any major election.
The researchers did find that AI-generated misleading content "shaped U.S. election discourse by amplifying other forms of disinformation and inflaming political debates." While this remains difficult to measure, owing to a lack of data on actual effects on voter behavior, the researchers said the real-world impact appears to have been minimal.
Exaggerating the threat posed by foreign interference efforts risks playing into adversaries' hands by amplifying their work. Analyzing the threat that deepfakes pose to democracy, Ciaran Martin, the first director of the UK's National Cyber Security Centre, said in June that "the reality is that so far the UK has suffered very little from successful cyber interference in elections." Although Russia attempted to disrupt the 2014 Scottish independence referendum, a parliamentary inquiry found "Russian efforts were mostly laughable."
Even so, such efforts continue. As part of its work to combat what it calls "coordinated and inauthentic behavior," Meta said that this year alone it has shut down 20 new covert influence operations. Since 2017, the company said, it has disrupted 39 covert influence operations attributed to Russia, 30 linked to Iran and 11 linked to China.
Clegg said the company has seen attempts to run such operations - particularly from Russia, which continues to dominate - shifting away from Facebook to "platforms with fewer safeguards than ours," such as X and Telegram.
Boom Times for Fraudsters Using AI
The situation is different on the fraud front. Need to create a credible cryptocurrency investment site, perhaps with the claimed endorsement of a real celebrity? Need to generate a high volume of legitimate-looking social media profiles and activity? Want to trick someone who speaks another language into falling in love with you and sending you large sums of money for your supposed medical problems? Need it all at scale and in less time?
This week, the FBI warned that criminals have done all this and more by doubling down on their use of generative AI and deepfake tools.
"Criminals use AI-generated text to appear credible to the reader in social engineering, spear phishing, and financial fraud schemes such as romance, investment, and other confidence schemes, or to overcome common indicators of fraud schemes," the bureau said.
More advanced use cases seen by law enforcement include criminals using AI-generated audio clips to trick banks into granting them access to accounts, or using "the voice of a loved one to pose as a close relative in a crisis situation, requesting immediate financial assistance or demanding a ransom," the bureau warned.
The main defenses against such attacks, according to the FBI, include creating "a secret word or phrase with your family to verify their identity," which can also work well in a professional environment - for example, as part of a stronger defense against CEO fraud (see: Best Cyber Extortion Defenses to Fight Virtual Kidnappers).
Many scammers try to exploit their victims before they have time to stop and think. So never hesitate to hang up, independently look up a phone number for the caller's supposed organization and contact it directly, the bureau said.
The FBI's warnings are far from academic. At a recent cybersecurity event I attended, security professionals discussed how phishing attacks have become increasingly difficult for employees to spot and report, apparently due to criminals' use of generative AI for messaging. "If it's poorly written, it's probably coming from HR," quipped one CISO.