Because AI is only loosely regulated, the responsibility falls on company insiders, the employees wrote, calling on companies to lift nondisclosure agreements and give workers protections that allow them to voice their concerns anonymously.
The move comes as OpenAI faces a staff exodus. Many critics have seen the significant departures, including those of OpenAI co-founder Ilya Sutskever and senior researcher Jan Leike, as a rebuke of company leaders, who some employees say prioritize profit over the safety of OpenAI's technologies.
Daniel Kokotajlo, a former OpenAI employee, said he left the startup because of the company’s disregard for the risks of artificial intelligence.
“I have lost hope that they would act responsibly, especially in the pursuit of artificial general intelligence,” he said in a statement, referring to a highly contested term for computers on par with the power of the human brain.
“They and others have bought into the ‘move fast and break things’ approach, and that is the opposite of what is needed for such a powerful and poorly understood technology,” Kokotajlo said.
Liz Bourgeois, a spokesperson for OpenAI, said the company agrees that “rigorous debate is crucial given the importance of this technology.” Representatives for Anthropic and Google did not immediately respond to a request for comment.
The employees said that, absent government oversight, AI workers are among the “rare people” capable of holding companies to account. They said they were hamstrung by “broad confidentiality agreements” and that ordinary whistleblower protections were “insufficient” because they focus on illegal activity, while the risks they warn about are not yet regulated.
The letter calls on AI companies to commit to four principles to enable greater transparency and whistleblower protection. These include a commitment not to enter into or enforce agreements that prohibit criticism of risks; a call to establish an anonymous process for current and former employees to raise concerns; support for a culture of open criticism; and a promise not to retaliate against current and former employees who share confidential information to raise the alarm “after other processes have failed.”
The Washington Post reported in December that OpenAI’s top executives had raised fears of retaliation from CEO Sam Altman, warnings that preceded the chief executive’s temporary ouster. In a recent podcast interview, former OpenAI board member Helen Toner said the nonprofit board’s decision to remove Altman as CEO late last year was due in part to his lack of candid communication about safety.
“He gave us inaccurate information about the small number of formal safety processes the company had in place, meaning it was basically impossible for the board to know how well those safety processes were working,” she said on “The TED AI Show” in May.
The letter was endorsed by AI luminaries including Yoshua Bengio and Geoffrey Hinton, considered the “godfathers” of AI, as well as famed computer scientist Stuart Russell.