London (CNN) —
Donald Trump is preparing to enter the White House for the second time. His agenda will include overseeing the development of artificial intelligence, potentially the most powerful technology of our time.
The president-elect has promised to “roll back excessive regulations” and has tapped tech billionaire Elon Musk, another critic of government rules, to help lead the effort. Specifically, the Republican Party said in its electoral platform that he would repeal a sweeping executive order signed by President Joe Biden, which sets out actions to manage national security risks posed by AI and to prevent discrimination by AI systems, among other goals.
The Republican document says the executive order contains “radical left-wing ideas” that hinder innovation.
Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute at the University of Oxford, is closely following the developments. AI is rife with risks that “needed to be addressed yesterday” with robust regulation, she told CNN.
Here are some of the dangers of unrestricted AI.
For years, AI systems have demonstrated their ability to reproduce societal biases, for example around race and gender, because these systems are trained on data about the past actions of humans, many of whom held those prejudices. When AI is used to decide whom to hire or whether to approve a mortgage, the result can often be discriminatory.
“Bias is inherent in these technologies because they look at historical data to try to predict the future… they learn who was hired in the past, who was imprisoned in the past,” Wachter said. “And so, very often and almost always, these decisions are biased.”
Without strong safeguards, she added, “these problematic decisions from the past will be carried into the future.”
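To make that mechanism concrete, here is a minimal, self-contained sketch in Python. The data is entirely synthetic and hypothetical; it only illustrates how a model fitted to prejudiced historical decisions reproduces them, not how any real hiring system works.

```python
# Illustrative only: synthetic data, not any real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)     # skill distributed identically in both groups

# Historical decisions: identical skill bar, but past human reviewers
# penalized group B. These biased labels are what the model learns from.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# At equal skill (skill = 0), the model predicts lower hiring odds for
# group B, carrying the past bias into future decisions.
for g in (0, 1):
    p = model.predict_proba([[g, 0.0]])[0, 1]
    print(f"group {g}: predicted hire probability {p:.2f}")
```

At equal skill, the model scores the historically penalized group lower, purely because the training labels did.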
The use of AI in predictive policing is one example, said Andrew Strait, associate director of the Ada Lovelace Institute, a London-based non-profit that researches AI safety and ethics.
Some police departments in the United States have used AI software trained on historical crime data to predict where future crimes are likely to occur, he noted. Because that data often reflects over-policing of certain communities, Strait said, predictions based on it lead police to focus their attention on those same communities and to record more crimes there. Meanwhile, other areas with potentially the same or higher levels of crime are policed less.
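That feedback loop is easy to reproduce in a toy simulation, sketched below under the simplifying assumption that police only record crime where they patrol. Two districts start with identical true crime rates and a small initial policing imbalance; the “predict, then allocate” loop steadily amplifies the imbalance. All numbers are hypothetical.

```python
# Toy simulation of the predictive-policing feedback loop.
# Both districts have identical true crime; only the initial patrol
# allocation differs. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([50.0, 50.0])  # identical underlying crime
patrols = np.array([0.55, 0.45])    # slight initial over-policing of district 0

for step in range(25):
    # Crime is only recorded where officers are present to observe it.
    recorded = rng.poisson(true_rate * patrols)
    # "Predictive" step: shift patrol share toward the hotter district.
    hot = int(recorded.argmax())
    shift = 0.05 * patrols[1 - hot]
    patrols[hot] += shift
    patrols[1 - hot] -= shift

print(patrols.round(2))  # e.g. [0.9 0.1]: the initial imbalance feeds itself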
AI is capable of generating deceptive images, sounds and videos that can be used to make it appear as if a person did or said something that they did not. This, in turn, could be used to influence elections or create fake pornographic images to harass people, among other potential abuses.
AI-generated images circulated widely on social media ahead of the US presidential election earlier this month, including fake images of Kamala Harris reposted by Musk himself.
In May, the U.S. Department of Homeland Security said in a bulletin distributed to state and local officials, and seen by CNN, that AI would likely give foreign agents and domestic extremists “increased opportunities for interference” during elections.
And in January, more than 20,000 people in New Hampshire received a robocall — an automated phone message — that used AI to impersonate Biden’s voice and advised them not to vote in the presidential primary. The man behind the robocalls, as he later admitted, was Steve Kramer, who worked for Rep. Dean Phillips’ long-shot Democratic primary challenge to Biden. Phillips’ campaign denied any role in the robocalls.
Also over the past year, the targets of non-consensual, AI-generated pornographic images have ranged from high-profile women like Taylor Swift and Rep. Alexandria Ocasio-Cortez to girls in high school.
Dangerous misuse and existential risk
AI researchers and industry players have highlighted the even greater risks posed by this technology. They range from ChatGPT providing easy access to comprehensive information on how to commit crimes, such as exporting weapons to sanctioned countries, to AI breaking free of human control.
“You can use AI to create very sophisticated cyberattacks, you can automate hacking, you can actually create an autonomous weapons system that can harm the world,” Manoj Chaudhary, chief technology officer at Jitterbit, a US software company, told CNN.
In March, a report commissioned by the US State Department warned of the “catastrophic” national security risks posed by rapidly evolving AI and called for “emergency” regulatory safeguards alongside other measures. The most advanced AI systems could, in the worst case, “pose an extinction-level threat to the human species,” the report said.
A related document says AI systems could be used to carry out “high-impact cyberattacks capable of crippling critical infrastructure,” among a litany of risks.
In addition to the executive order, Biden’s administration also secured commitments from 15 leading technology companies last year to strengthen the safety of their AI systems, though all of the commitments are voluntary.
And Democratic-led states like Colorado and New York have passed their own AI laws. In New York, for example, any company using AI to recruit workers must hire an independent auditor to verify that the system is impartial.
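As an illustration of the kind of check such an audit might run (a hedged sketch; the law and the auditor define the actual methodology), one widely used metric is the impact ratio: each group’s selection rate divided by the highest group’s rate, with values below roughly 0.8 often treated as a red flag under the long-standing “four-fifths” rule of thumb.

```python
# Minimal sketch of a disparate-impact check an auditor might run.
# All numbers are hypothetical.
from collections import Counter

# (group, selected?) outcomes from a screening tool
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 30 + [("B", False)] * 70)

selected = Counter(g for g, s in outcomes if s)
total = Counter(g for g, _ in outcomes)
rates = {g: selected[g] / total[g] for g in total}

best = max(rates.values())
for g, rate in sorted(rates.items()):
    # Impact ratio: a group's selection rate vs. the highest group's rate.
    # Values below ~0.8 are a common red flag (the "four-fifths" rule).
    print(f"group {g}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```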
A “patchwork of (US AI regulations) is developing, but it’s very fragmented and not very comprehensive,” said Strait of the Ada Lovelace Institute.
It is “too early to be sure” whether the new Trump administration will expand those rules or roll them back, he noted.
However, he fears that a repeal of Biden’s executive order could mean the end of the US government’s AI Safety Institute. The order created this “incredibly important institution,” Strait told CNN, tasking it with examining risks emerging from cutting-edge AI models before they are made public.
It is possible that Musk will push for stricter regulation of AI, as he has done before. He is set to play a prominent role in the next administration as co-lead of a new “Department of Government Efficiency,” or DOGE.
Musk has repeatedly expressed concern that AI could pose an existential threat to humanity, even as one of his companies, xAI, develops a generative AI chatbot of its own.
Musk was “a very big supporter” of a now-scrapped bill in California, Strait noted. The bill aimed to prevent some of the most catastrophic potential consequences of AI, such as systems spinning out of control. Gavin Newsom, the Democratic governor of California, vetoed the bill in September, citing the threat it posed to innovation.
Musk is “very concerned about (the) catastrophic risk of AI. It’s possible that this could be the subject of a future executive order from Trump,” Strait said.
But Trump’s inner circle is not limited to Musk. It includes JD Vance, the incoming vice president, who said in July that he worried about “preemptive attempts at overregulation” in AI because they would “consolidate existing technology players and make it harder for new entrants to create the innovation that will power the next generation of American growth.”
Musk’s Tesla (TSLA) could be described as one of those established technology players. Last year, Musk dazzled investors by talking up Tesla’s investment in AI, and in its latest earnings report the company said it remains focused on “making critical investments in AI projects,” among other priorities.