OpenAI removed accounts associated with well-known propaganda operations in Russia, China and Iran; an Israeli political campaign company; and a previously unknown group from Russia that the company’s researchers dubbed “Bad Grammar.” The groups used OpenAI’s technology to write articles, translate them into different languages, and create software that helped them automatically post to social media.
None of these groups managed to gain much ground; the social media accounts associated with them reached few users and attracted just a handful of followers, said Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team. However, the OpenAI report shows that propagandists active for years on social networks are using AI technology to boost their campaigns.
“We’ve seen them generate a higher volume of text with fewer errors than these operations traditionally handle,” Nimmo, who previously worked at Meta tracking influence operations, said in a briefing with journalists. Nimmo said it’s possible other groups are still using OpenAI’s tools without the company’s knowledge.
“Now is not the time for complacency. History shows that influence operations that have spent years failing can suddenly explode if no one is looking for them,” he said.
Governments, political parties and activist groups have used social media to try to influence politics for years. Following concerns about Russian influence during the 2016 presidential election, social media platforms began to take a closer look at how their sites were being used to influence voters. Companies generally prohibit governments and political groups from hiding concerted efforts to influence users, and political ads must disclose who funded them.
As AI tools capable of generating realistic text, images, and even videos become more widely available, disinformation researchers have raised concerns that it will become even more difficult to detect and respond to fake news or covert influence operations online. Hundreds of millions of people are voting in elections around the world this year, and generative AI deepfakes have already proliferated.
OpenAI, Google and other AI companies have been working on technology to identify deepfakes created with their own tools, but that technology has yet to prove itself. Some AI experts believe deepfake detectors will never be completely effective.
Earlier this year, a group affiliated with the Chinese Communist Party released an AI-generated audio recording that appeared to show one Taiwanese election candidate endorsing another. The politician, Foxconn founder Terry Gou, had made no such endorsement.
In January, New Hampshire primary voters received a robocall purporting to be from President Biden that was quickly found to be AI-generated. Last week, a Democratic operative who said he commissioned the robocall was indicted on charges of voter suppression and candidate impersonation.
OpenAI’s report details how the five groups used the company’s technology in their attempted influence operations. Spamouflage, a group from China, used OpenAI’s technology to research social media activity and write posts in Chinese, Korean, Japanese and English, the company said. An Iranian group known as the International Virtual Media Union also used OpenAI’s technology to create articles that it published on its site.
Bad Grammar, a previously unknown group, used OpenAI technology to create a program capable of automatically posting to the messaging app Telegram. Bad Grammar then used OpenAI technology to generate posts and comments in Russian and English saying the United States should not support Ukraine, according to the report.
The report also reveals that an Israeli political campaign company called Stoic used OpenAI’s technology to generate pro-Israel messages about the war in Gaza and target them at people in Canada, the United States and Israel. On Wednesday, Facebook owner Meta also disclosed Stoic’s activity, saying it had removed 510 Facebook accounts and 32 Instagram accounts used by the group. Some of the accounts had been hacked, while others belonged to fictitious people, the company told reporters.
The accounts often commented on the pages of well-known individuals or media organizations, posing as pro-Israel American students, African Americans and others. The comments supported the Israeli military and warned Canadians that “radical Islam” threatened the country’s liberal values, Meta said.
AI was used to generate some of the comments, which struck real Facebook users as strange and out of context. The operation performed poorly, the company said, attracting only about 2,600 legitimate followers.
Meta acted after the Atlantic Council’s Digital Forensic Research Lab discovered the network on X.
Over the past year, disinformation researchers have suggested that AI chatbots could be used to have long, detailed conversations with specific people online, trying to influence them in a certain direction. AI tools could also potentially ingest large amounts of data about individuals and tailor messages directly to them.
OpenAI hasn’t found any of these more sophisticated uses for AI, Nimmo said. “This is indeed an evolution rather than a revolution,” he said. “None of this is to say that we might not see this in the future.”
Joseph Menn contributed to this report.