Senior cyberspace and intelligence officials told a Senate committee on Wednesday that the United States is prepared to confront threats of election interference later this year, but stressed that AI-generated content would further challenge authorities' ability to identify false content.
The remarks come just under six months before the November U.S. elections, which take place alongside dozens of other elections across the world this year.
“Since 2016, we have seen declassified intelligence assessments name a range of influence actors who have engaged in, or at least considered, election influence and interference activities – including Iran, Russia and the PRC, but also Cuba, Venezuela, Hezbollah and a series of profit-motivated foreign hacktivists and cybercriminals,” said Senate Intelligence Committee Chairman Mark Warner, D-Va., in his opening remarks.
“Have we thought through the process that we do when one of these (election interference) scenarios occurs?” said the committee's vice chairman, Sen. Marco Rubio, R-Fla.
“If tomorrow there was a… very compelling video of a candidate that… came out in the 72 hours before election day, where that candidate said a racist comment or did something horrible, but that’s not true – who is responsible for letting people know that it’s wrong?” he said.
Director of National Intelligence Avril Haines touted numerous tools available to the intelligence community to detect and dismantle false election content, including a DARPA-backed media authentication tool.
CISA Director Jen Easterly also said her agency works directly with AI companies like OpenAI to handle election threats, encouraging them to direct their users to web pages run by the National Association of Secretaries of State, which provide election resources in a nonpartisan manner.
She said Americans should have confidence in the security of the upcoming election, but stressed that the United States cannot be complacent, as the threats facing Americans voting in November are “more complex than ever.”
The hearing highlighted the challenges of managing election information and results: Whom should Americans trust on the final vote count, and if false information proliferates on social media, which U.S. officials will tell Americans that the content is a sham?
Lawmakers clashed with Haines over the process for notifying the public about where fake news comes from and whether ODNI should act as an arbiter of content rather than simply attributing content to malicious actors.
Sen. James Risch, R-Idaho, brought up a disputed 2020 letter on whether the infamous Hunter Biden laptop story was Russian disinformation, calling it “deplorable.”
“Who would speak up and say that this letter is ‘patently false’?” he asked Haines.
“I don’t think it’s appropriate for me to determine what’s true and what’s false in these circumstances,” Haines responded, arguing that it was not her role to adjudicate what current or former intelligence officials were saying.
Sen. Angus King, I-Maine, said ODNI should focus on whether election allegations are part of foreign disinformation operations, which could sometimes involve declassifying IC information to warn the public.
“I don’t want the U.S. government to be the truth police,” he said. “That’s not the job of the U.S. government.”
Consumer-facing AI tools have given everyday people a host of ways to increase productivity in their workplaces and daily lives, but researchers and officials have for months expressed fears about how the platforms can be used to sow political discord through technologies like voice cloning and image generation.
Tech and AI companies pledged in February to watermark election-related AI-generated content, but some critics question whether voluntary measures are strong enough to curb false and misleading images or text disseminated on social networks.
Officials nationwide fear that a loss of confidence in electoral systems could lead to a repeat of the allegations of widespread voter fraud that emerged during the 2020 presidential election, which culminated in the Jan. 6 attack on the U.S. Capitol.
Across the country, election workers fear facing threats of violence from voters who do not accept the results of the election.
In March, top federal agencies resumed discussions with social media companies about removing misinformation from their sites in the run-up to the November election, a stark reversal after the Biden administration froze communications with social platforms for months amid an ongoing First Amendment case before the Supreme Court, Warner said last week.
“If the bad guy started launching AI-based tools that threatened election officials in key communities, that would clearly fall into the category of foreign interference,” he said at the time, while emphasizing that such activity would not necessarily meet a formal definition of disinformation and could be considered a “completely different attack vector.”
AI companies have been found to scan chat logs to weed out malicious actors or hackers looking to advance their approaches into networks, Nextgov/FCW previously reported. Among many use cases, foreign influence actors have enhanced their disinformation campaigns by using generative AI to render their fraudulent messages in more realistic-sounding English.
“We’re going to count on you,” Warner told witnesses in his closing remarks. “This is the most important election ever.”