Major technology companies signed a pact on Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.
Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they are responding to AI-generated deepfakes that deliberately mislead voters. Twelve other companies, including Elon Musk’s X, also signed the deal.
“Everyone recognizes that no one technology company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs at Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.
The deal is largely symbolic, but it targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote.”
Companies do not commit to banning or removing deepfakes. Instead, the agreement outlines the methods they will use to attempt to detect and label misleading AI content when it is created or distributed on their platforms. It notes that companies will share best practices and provide “rapid and proportionate responses” when this content begins to spread.
The vagueness of the commitments and the absence of any binding requirements likely helped win over a broad range of companies, but disappointed advocates who had been looking for stronger assurances.
“The language is not quite as strong as one might have expected,” said Rachel Orey, senior associate director of the elections project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”
Clegg said each company “rightly has its own set of content policies.”
“This isn’t about imposing a straitjacket on everybody,” he said. “And in any event, nobody in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play whack-a-mole and finding everything that you think may mislead someone.”
Several political leaders from Europe and the United States also joined Friday’s announcement. European Commission Vice President Vera Jourova said that while such an agreement can’t be comprehensive, “it contains very impactful and positive elements.” She also urged fellow politicians to take responsibility, not to use AI tools deceptively, and warned that AI-fueled disinformation could bring about “the end of democracy, not only in the EU member states.”
The agreement, reached at the German city’s annual security conference, comes as more than 50 countries are due to hold national elections in 2024. Bangladesh, Taiwan, Pakistan, and most recently Indonesia have already done so.
Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked U.S. President Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.
Just days before Slovakia’s elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media.
Politicians have also experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.
The agreement calls on platforms to “pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression.”
It says companies will focus on transparency to users about their policies and work to educate the public about how they can avoid falling for AI fakes.
Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know whether what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out, and the companies have faced pressure to do more.
This pressure is heightened in the United States, where Congress has yet to pass laws regulating AI in politics, leaving companies to govern themselves.
The Federal Communications Commission recently confirmed that AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes when they circulate on social media or in campaign ads.
Many social media companies already have policies in place to deter deceptive posts about electoral processes — AI-generated or not. Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation,” as well as other false posts meant to interfere with someone’s civic participation.
Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the accord seems like a “positive step,” but he’d still like to see social media companies take additional actions to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.
Lisa Gilbert, executive vice president of the advocacy group Public Citizen, argued Friday that the accord is “not enough” and that AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems.”
Besides the companies that helped broker Friday’s deal, other signatories include chatbot developers Anthropic and Inflection AI; voice clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for creating the Stable Diffusion image generator.
Notably missing is another popular AI image generator, Midjourney. The San Francisco-based startup did not immediately respond to a request for comment Friday.
The inclusion of X, which was not mentioned in an earlier announcement about the pending accord, was one of Friday’s surprises. Musk sharply curtailed content moderation teams after taking over the former Twitter and has described himself as a “free speech absolutist.”
In a statement released Friday, Linda Yaccarino, CEO of X, said that “every citizen and every business has a responsibility to ensure free and fair elections.”
“X is committed to playing its role, collaborating with its peers to combat AI threats while protecting free speech and maximizing transparency,” she said.
__
The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. Learn more about AP’s democracy initiative here. The AP is solely responsible for all content.