More than a dozen tech companies have teamed up to launch an industry group dedicated to making artificial intelligence applications safer.
The Coalition for Secure AI, or CoSAI, was announced today at the Aspen Security Forum. The group will operate under the umbrella of OASIS, a nonprofit organization that oversees the development of dozens of open-source software projects. Many of those projects aim to simplify cybersecurity tasks, such as automating breach response workflows.
CoSAI’s founding members include OpenAI and Anthropic PBC, the two best-funded startups in the large language model ecosystem, as well as rivals Cohere Inc. and GenLab. From the public cloud market, the consortium is backed by Amazon Web Services Inc., Microsoft Corp. and Google LLC. They are joined by Nvidia Corp., Intel Corp., IBM Corp., Cisco Systems Inc., PayPal Holdings Inc., Wiz Inc. and Chainguard Inc.
The coalition was launched with two main goals. The first is to develop tools and technical guidance that will help organizations secure their AI applications. The other goal, according to the group’s backers, is to create an ecosystem where companies can share AI-related cybersecurity best practices and technologies.
“CoSAI was created out of the need to democratize the knowledge and advancements critical to the secure integration and deployment of AI,” said David LaBianca, CoSAI board co-chair. “With the help of OASIS Open, we look forward to continuing this work and collaboration among leading companies, experts and academia.”
CoSAI is launching with three open-source initiatives meant to achieve those goals. Each initiative tackles a different subset of the tasks involved in securing AI applications.
According to CoSAI, the first initiative aims to help software development teams analyze their machine learning workloads to detect cybersecurity risks. To this end, the consortium will develop a taxonomy of common vulnerabilities and ways to address them. CoSAI members will also create a cybersecurity dashboard designed to help developers monitor vulnerabilities in AI systems and report any issues they detect to other stakeholders.
According to CoSAI, its second inaugural project aims to make it easier to mitigate cybersecurity risks related to AI. The goal is to simplify the process of identifying “investments and mitigation techniques to address the security impacts of AI use,” Google cybersecurity leaders Heather Adkins and Phil Venables wrote in a blog post today.
The third initiative presented today by CoSAI aims to address risks related to the software supply chain. These are vulnerabilities caused by software components that a company obtains from external sources such as GitHub repositories.
Before an AI application can be analyzed for vulnerabilities in external components, developers must first identify the external components it contains. This process can be time-consuming in large software projects with many code files. One of CoSAI’s priorities will be to simplify this workflow.
At the same time, consortium members will develop ways to address cybersecurity risks associated with third-party AI models. Many AI application projects rely on neural networks from the open source ecosystem because creating a custom algorithm can be prohibitively expensive. In theory, an external neural network could introduce vulnerabilities into a software project that could allow hackers to launch cyberattacks.
CoSAI plans to launch further cybersecurity initiatives in the future. These initiatives will be overseen by a technical steering committee composed of AI experts from the private sector and academia.
Image: Google