The largest and most influential artificial intelligence (AI) companies are joining forces to promote a security-first approach to the development and use of generative AI.
The Coalition for Secure AI, also known as CoSAI, aims to provide the tools to mitigate AI risks. The goal is to create standardized safeguards, security technologies, and tools for secure model development.
“Our initial workstreams include AI and software supply chain security and preparing defenders for an evolving cyber landscape,” CoSAI said in a statement.
The initial efforts

According to Google, one of the coalition's founding members, the coalition's goals include establishing security guardrails and checks and balances around access to and use of AI, as well as creating a framework to protect AI models from cyberattacks. Google, OpenAI, and Anthropic operate the most widely used large language models (LLMs). Other founding members include Microsoft, IBM, Intel, Nvidia, and PayPal.
“AI developers need—and end users deserve—an AI security framework that meets the moment and responsibly seizes the opportunities that lie ahead. CoSAI is the next step on that journey, and we can expect more updates in the months ahead,” wrote Heather Adkins, Google’s vice president of security engineering, and Phil Venables, Google Cloud’s chief information security officer.
AI safety as a priority
AI has raised many cybersecurity concerns since ChatGPT launched in 2022, including its use in social engineering attacks to penetrate systems and in deepfake videos that spread false information. At the same time, security companies such as Trend Micro and CrowdStrike are now turning to AI to help businesses eradicate threats.
Safety, trust, and transparency in AI are important because flawed results can lead organizations to take wrong, and sometimes harmful, actions and decisions, said Gartner analyst Avivah Litan.
“AI cannot operate on its own without safeguards to keep it in check: errors and exceptions must be highlighted and studied,” Litan explains.
AI safety concerns could multiply with technologies such as AI agents, add-ons that draw on personalized data to generate more accurate responses.
“The right tools need to be in place to automatically correct all but the most opaque exceptions,” Litan says.
US President Joe Biden has challenged the private sector to prioritize the safety and ethics of AI, citing concerns that the technology could deepen inequality and compromise national security.
In July 2023, the Biden administration secured voluntary commitments from the large companies that are now part of CoSAI to develop safety standards, share the results of safety tests, and prevent the misuse of AI for biological materials, fraud, and deception.
CoSAI will work with other organizations, including the Frontier Model Forum, Partnership on AI, OpenSSF and MLCommons, to develop common standards and best practices.
MLCommons told Dark Reading this week that it will release an AI safety benchmarking suite this fall that will evaluate LLMs' responses in areas such as hate speech, exploitation, child abuse, and sex crimes.
CoSAI will be managed by OASIS Open, a standards body that, like the Linux Foundation, oversees open source development projects. OASIS is best known for its work on XML-based standards and for the OpenDocument Format (ODF), an alternative to Microsoft Word's .doc file format.