WASHINGTON — The Democratic National Committee observed earlier this year that campaigns nationwide were experimenting with artificial intelligence. So the organization contacted a handful of influential party campaign committees with a request: sign guidelines that would commit them to using technology “responsibly.”
The draft agreement, a copy of which was obtained by The Associated Press, was hardly full of revolutionary ideas. It asked campaigns to verify the work of AI tools, protect against bias and avoid using AI to create misleading content.
“Our goal is to use this new technology both effectively and ethically, and in a way that advances – rather than undermines – the values we champion in our campaigns,” the draft states.
The plan came to nothing.
Instead of fostering agreement, the guidelines have sparked debate about the value of such commitments, particularly those governing rapidly evolving technology. Among the concerns raised by Democratic campaign organizations: Such a commitment could cripple their ability to deploy AI and discourage donors with ties to the AI industry. Some committee officials were also upset that the DNC only gave them a few days to agree to the guidelines.
The proposal’s abandonment highlighted internal divisions over campaign tactics and the party’s uncertainty over how best to use AI, amid warnings from experts that the technology is accelerating the proliferation of misinformation.
Hannah Muldavin, senior spokeswoman for the Democratic National Committee, said the group is not giving up on consensus building.
The DNC, she said, “will continue to collaborate with our sister committees to discuss ideas and issues important to Democratic campaigns and to American voters, including AI.”
“It is not uncommon for ideas and plans to change, especially in the midst of a busy election year, and any documents on this topic reflect early and ongoing conversations,” Muldavin said, adding that “the DNC and our partners take seriously the opportunities and challenges presented by AI.”
The squabbles come as campaigns increasingly rely on artificial intelligence (computer systems, software or processes that mimic aspects of human work and cognition) to optimize workloads. This includes using large language models to write fundraising emails, text supporters, and create chatbots to answer voter questions.
This trend is expected to continue as the November general election approaches, with campaigns turning to supercharged generative AI tools to create text and images, clone human voices and produce videos at dazzling speed.
Last year, the Republican National Committee used AI-generated imagery in a TV spot predicting a dystopian future under President Joe Biden.
Much of that adoption, however, has been overshadowed by concerns about how campaigns could use artificial intelligence in ways that mislead voters. Experts have warned that AI has become so powerful that it is now easy to generate “deepfake” videos, audio clips and other media targeting opposing candidates. Some states have passed laws regulating how generative artificial intelligence can be used, but Congress has so far not passed any bills regulating artificial intelligence at the federal level.
In the absence of regulation, the DNC sought a set of guidelines that it could cite as proof that the party was taking the threats and promises of AI seriously. It sent the proposal in March to the five Democratic campaign committees that seek to elect candidates for the House, the Senate, governorships, state legislatures and state attorney general offices, according to the draft agreement.
The goal was for each committee to agree on a set of AI safeguards, and the DNC proposed issuing a joint statement proclaiming that such guidelines would ensure that campaigns had “the tools they need to prevent the spread of misinformation and disinformation, while allowing campaigns to safely and responsibly use generative AI to engage more Americans in our democracy.”
The DNC hoped the statement would be signed by its chairman, Jaime Harrison, and the leaders of the other organizations.
Democratic operatives said the proposal landed with a thud. Some senior committee leaders feared the deal could have unintended consequences, perhaps limiting how campaigns use AI, according to several Democratic operatives familiar with the outreach effort.
And that could send the wrong message to tech companies and executives working on AI, many of whom help fill campaign coffers during election years.
Some of the Democratic Party’s most prolific donors are prominent tech entrepreneurs and AI evangelists, including Sam Altman, CEO of OpenAI, and Eric Schmidt, former CEO of Google.
Altman has donated more than $200,000 to the Biden campaign and its joint Democratic fundraising committee since early last year, according to Federal Election Commission data, and Schmidt’s contributions to those groups exceeded $500,000 during the same period.
Two other AI backers, Facebook co-founder Dustin Moskovitz and LinkedIn co-founder Reid Hoffman, have donated more than $900,000 to Biden’s joint fundraising committee this cycle, according to the same data.
The DNC’s plan caught the committees off guard because it came with little explanation, other than a desire to get each committee to agree to the list of best practices within days, said several Democratic operatives who spoke on condition of anonymity because they were not authorized to discuss the matter. Aides to the Democratic Congressional Campaign Committee and the Democratic Senatorial Campaign Committee said they felt pressured by a DNC schedule that urged them to sign quickly.
Representatives for the Democratic Attorneys General Association did not respond to The Associated Press’ request for comment. Spokespeople for the Democratic Governors Association and the Democratic Legislative Campaign Committee declined to comment.
The Republican National Committee did not respond to questions about its AI guidelines. The Biden campaign also declined to comment when asked about the DNC’s efforts.
The four-page agreement – “Guidelines on the Responsible Use of Generative AI in Campaigns” – covered everything from not trusting AI systems’ output without a human verifying their work to notifying voters when they interact with AI-generated content or systems.
“As the explosive rise of generative AI transforms every aspect of public life, including political campaigns, it is more important than ever to limit the potential threat this new technology poses to voters’ rights and to harness it to build innovative, effective campaigns and a more inclusive democracy,” the proposal states.
The guidelines were divided into five sections, with headings such as “Human Alternatives, Consideration, and Fallback” and “Notice and Explanation.” The proposed rules would have required committees to ensure that “a real person is responsible for approving AI-generated content and is responsible for how, where, and to whom it is deployed.”
The guidelines emphasized that “users should always be aware when interacting with an AI bot” and that any image or video created by AI “must be disclosed” as such. And they stressed that campaigns should use AI to assist aides, not replace them.
“Campaigns are a human-led and human-driven activity,” the agreement reads. “Use efficiencies to educate more voters and focus more on quality control and sustainability.”
It also urged campaigns not to use “generative AI to create misleading content. Period.”