Ilya Sutskever, one of OpenAI's founders, who took part in an unsuccessful bid to oust CEO Sam Altman, said he is starting an artificial intelligence company focused on safety.
Sutskever, a respected AI researcher who left the creator of ChatGPT last month, said in a social media post Wednesday that he has created Safe Superintelligence Inc. with two co-founders. The company's sole focus is to safely develop "superintelligence," a term for AI systems that are smarter than humans.
The company pledged not to be distracted by management overhead or product cycles, and under its business model, work on safety and security would be insulated from short-term business pressures, Sutskever and co-founders Daniel Gross and Daniel Levy said in a prepared statement.
The three said Safe Superintelligence is an American company with offices in Palo Alto, California, and Tel Aviv, where the founders said they have deep roots and the ability to recruit top technical talent.
Sutskever was part of a group that unsuccessfully tried last year to oust Altman. The boardroom shakeup, which Sutskever later said he regretted, also set off a period of internal turmoil centered on whether OpenAI executives were prioritizing commercial opportunities over AI safety.
At OpenAI, Sutskever co-led a team focused on safely developing better-than-human AI, known as artificial general intelligence, or AGI. When he left OpenAI, he said he had plans for a project that was personally very meaningful to him, but gave no details.
Sutskever said it was his choice to leave OpenAI.
Days after his departure, Jan Leike, who co-led the team with him, also resigned and criticized OpenAI for letting safety take a back seat to shiny products. OpenAI later announced the formation of a safety and security committee, but it is made up mostly of company insiders.
First published: June 20, 2024 | 5:58 p.m. IST