Deepfake AI and the ability to easily replicate the voices and faces of political candidates are putting US lawmakers on alert as the 2024 presidential election approaches.
Deepfake AI is a type of technology used to create realistic but fraudulent voices, videos and photos. While policymakers have introduced bills targeting AI and deepfakes – including the bipartisan No Fakes Act, aimed at establishing federal rules on how a person's voice, name or face can be used – none has advanced to a vote in the House or Senate.
At an April 16 hearing of the Senate subcommittee assessing the risks of deepfake AI and its effect on elections, Sen. Richard Blumenthal (D-Conn.) said the threat of political deepfakes is real and that Congress must act to "stop this AI nightmare." Bad actors are already using AI to spread misinformation about candidates, particularly President Joe Biden. In January, thousands of New Hampshire voters received robocalls impersonating Biden and telling them not to vote in the state's primary election.
Beyond voice cloning, Blumenthal, chairman of the Subcommittee on Privacy, Technology and the Law, said deepfake images and videos are “incredibly easy” for anyone to create.
"A deluge of deception, misinformation and counterfeiting is about to descend on the American public," he said. "It will arrive in the form of political ads and other disinformation enabled by artificial intelligence. There is a clear and present danger to our democracy."
Lawmakers worry about foreign and domestic bad actors
National security officials have long raised concerns about the effect of deepfake AI and foreign disinformation on elections, concerns that Microsoft corroborated earlier this month, Blumenthal said.
Microsoft released a report showing that members of the Chinese Communist Party are using fake social media accounts to post divisive content and potentially influence the U.S. presidential election. The report also noted increased use of Chinese AI-generated content aimed at stoking divisive discussion on many topics, including the Maui wildfires in August 2023.
“When the American people can no longer distinguish fact from fiction, it will be impossible to have a democracy,” Blumenthal said.
The spread of disinformation and the use of deepfake AI are also concerning on a smaller scale, not just when they come from foreign bad actors or target national figures, he added.
While deepfakes that target widely recognized figures, like the Biden impersonation, attract enough attention to be detected and reported, Blumenthal said, deepfakes are harder to catch at a smaller scale, such as in local and state elections. As local media outlets shrink, he said, there are unlikely to be enough reliable sources verifying candidates' statements, photos or videos.
David Scanlan, the New Hampshire secretary of state who helped stem the effects of the AI-generated Biden robocalls in January, said at the hearing that what concerned him most about the incident was the ease with which a random member of the public created the call. A New Orleans-based magician, Paul Carpenter, made the call after being paid to do so by Democratic political consultant Steve Kramer.
"Add to that what happened with video: You could be showing candidates in compromising situations that never existed, or a state election official giving false information about elections, and worse," Scanlan said. "To me, this is incredibly problematic."
The deepfake AI problem is not limited to one political party or a single primary election; it has affected multiple candidates in multiple elections across the country. This underscores the need for Congress to intervene, said Sen. Josh Hawley (R-Mo.), ranking member of the Senate subcommittee.
“The dangers of this technology without guardrails and safety features are becoming painfully obvious,” he said.
The fight against deepfakes involves watermarking and platform responsibility
Zohaib Ahmed, CEO and co-founder of Resemble AI, said during the hearing that clear labeling of AI-generated content will be necessary to prevent future harms from deepfake AI technology. Resemble AI builds AI voice generators for businesses, as well as deepfake audio detection products.
Ahmed said Congress should enact rules requiring platforms to use AI watermarking or deepfake detection technology that lets users determine whether content is real or fake. He also recommended establishing a certification program to vet these technologies and ensure their accuracy.
“AI watermarking technology is a readily available solution that already helps verify the integrity of audio content,” he said.
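Resemble AI has not disclosed the details of its watermarking scheme here, but the general idea Ahmed describes can be sketched in a few lines: a key-derived signal is mixed into AI-generated audio at an imperceptible level when the content is created, and a detector later correlates against that same signal to decide whether the watermark is present. The following Python sketch is purely illustrative; the function names, key, strength and threshold values are assumptions, not any vendor's actual method.

```python
# A minimal sketch of spread-spectrum-style audio watermarking, NOT
# Resemble AI's actual method. All names and constants are illustrative.
import numpy as np

def watermark_signal(key: int, n_samples: int) -> np.ndarray:
    # Derive a reproducible +/-1 pseudorandom sequence from the key.
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=n_samples)

def embed(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    # Mix the keyed sequence in at low amplitude so it is inaudible.
    return audio + strength * watermark_signal(key, audio.size)

def detect(audio: np.ndarray, key: int, threshold: float = 0.002) -> bool:
    # Correlate against the keyed sequence: unwatermarked audio
    # correlates near zero, watermarked audio near `strength`.
    corr = np.mean(audio * watermark_signal(key, audio.size))
    return corr > threshold

# Toy usage: audio is tagged at generation time, and a platform
# later checks for the tag before labeling content as AI-generated.
clean = np.random.default_rng(0).normal(0, 0.1, 48_000)  # 1 s of noise at 48 kHz
tagged = embed(clean, key=42)
print(detect(tagged, key=42))  # True  -> flag as AI-generated
print(detect(clean, key=42))   # False -> no watermark found
```

Real systems must survive compression, re-recording and editing, which is why Ahmed also calls for certification to verify that detectors remain accurate; this toy correlation check would not withstand such transformations.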
Congress must pass legislation holding content platforms responsible for detecting and removing deepfakes and AI-generated media, Ben Colman, CEO and co-founder of deepfake detection company Reality Defender, said during the hearing. Colman praised the bipartisan Protect Elections From Deceptive AI Act, which would prohibit the use of AI to generate misleading content about candidates.
“AI developments are moving quickly – legislation needs to move faster,” he said.
Makenzie Holland is a senior writer covering big tech and federal regulation. Before joining TechTarget Editorial, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.