AI leaders including Meta, Google, Microsoft and Samsung Electronics have joined forces to guard against the dangers of artificial intelligence.
Joining 14 other leading AI and technology companies in signing the new “Frontier AI Safety Commitments,” the companies will each publish safety frameworks explaining how they will measure the risks of their frontier AI models.
These companies include Amazon, Anthropic, Cohere, G42, IBM, Inflection AI and Mistral AI, alongside Naver, OpenAI, Technology Innovation Institute, xAI and Zhipu.ai.
At the same time, the frameworks aim to determine when serious risks, if not properly mitigated, would be “deemed intolerable,” and what the companies will do to ensure those thresholds are not exceeded.
As a result, the companies have committed to “not develop or deploy a model or system at all” if mitigation measures cannot keep risks below those thresholds.
Meta President of Global Affairs Nick Clegg said: “It is more critical than ever to ensure that safety and innovation go hand in hand as the industry makes significant progress in developing AI technology.
“To that end, since Bletchley last year we have launched our latest cutting-edge open source model, Llama 3, along with new open source safety tools to ensure that developers using our models have what they need to deploy them safely. As we have long said, democratising access to this technology is essential to both advancing innovation and bringing its value to as many people as possible.”
Brad Smith, vice chair and president of Microsoft, continued: “The technology industry must continue to adapt its policies, practices and frameworks to keep pace with the science and with societal expectations.”
Meanwhile, Anna Makanju, vice president of global affairs at OpenAI, said that “the field of AI safety is evolving rapidly” and that the leading AI company is “pleased to endorse the commitments’ focus on refining approaches alongside the science.”
Finally, Tom Lue, General Counsel and Head of Governance at Google DeepMind, concluded that “these commitments will help establish important frontier AI safety best practices among leading developers.”