Legit Security, an application security posture management (ASPM) platform, has launched the cybersecurity industry’s first AI discovery capabilities. The technology enables information security managers and AppSec teams to discover where and when AI code is used, providing the control and visibility needed to secure application delivery without slowing the momentum of software development.
As developers rapidly harness the potential of AI and large language models (LLMs) to build and ship new capabilities, a variety of new risks are emerging. These include AI-generated code that may harbor unknown vulnerabilities or flaws that put the entire application at risk, as well as legal issues that can arise if AI-generated code is subject to copyright restrictions.
A further risk lies in the improper implementation of AI functionality, which can lead to data exposure. Despite these threats, security teams often have little insight into where AI-generated code is used, creating security blind spots that affect both the organization and its software supply chain.
“There is a significant disconnect between what CISOs and their teams believe to be true and what is actually happening in development,” comments Dr. Gary McGraw, co-founder of the Berryville Institute of Machine Learning (BIML) and author of Software Security. “This gap in understanding is particularly acute when it comes to why, when, and how developers use AI technology.”
The recent BIML publication “An Architectural Risk Analysis of Large Language Models” identified 81 risks specific to LLMs, including a top-ten list. These risks, says Dr. McGraw, cannot be mitigated without a comprehensive understanding of where AI is used.
Legit Security’s platform gives security leaders, including CISOs, product security managers, and security architects, a comprehensive view of potential risk across the development pipeline. With this clear view of the development lifecycle, customers can be confident their code is secure, compliant, and traceable. The new AI code discovery capabilities enhance the platform by closing a visibility gap, enabling security teams to act preemptively and reduce the risk of legal exposure while maintaining compliance.
“AI offers huge potential for developers to deliver and innovate faster, but there is a need to understand the risks it can introduce,” notes Liav Caspi, co-founder and chief technology officer at Legit Security. “Our goal is to ensure nothing gets in the way of developers, while giving them the confidence that comes with visibility and control over the use of AI and LLMs. When we showed one of our customers how and where AI was being used, it was a revelation.”
Legit’s AI code discovery capabilities provide a range of benefits, including a comprehensive view of the development environment and full visibility into the application environment, such as repositories using LLMs, MLOps services, and code-generation tools. The platform can detect LLM and GenAI development and enforce organizational security policies, such as requiring all AI-generated code to be reviewed by a human.
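To make the detection idea concrete, here is a minimal, illustrative sketch of how a scanner might flag repositories that use LLM SDKs by inspecting Python imports. The package list and function names are hypothetical examples for this article, not Legit Security’s actual implementation.

```python
import ast
import pathlib

# Common LLM/GenAI SDK packages to flag (an illustrative list, not Legit's ruleset).
LLM_PACKAGES = {"openai", "anthropic", "langchain", "transformers", "cohere", "llama_index"}

def find_llm_usage(repo_root: str) -> dict[str, set[str]]:
    """Scan Python files under repo_root for imports of known LLM SDKs."""
    hits: dict[str, set[str]] = {}
    for path in pathlib.Path(repo_root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that do not parse cleanly
        found: set[str] = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found |= {alias.name.split(".")[0] for alias in node.names}
            elif isinstance(node, ast.ImportFrom) and node.module:
                found.add(node.module.split(".")[0])
        matched = found & LLM_PACKAGES
        if matched:
            hits[str(path)] = matched
    return hits

if __name__ == "__main__":
    for file, packages in find_llm_usage(".").items():
        print(f"{file}: uses {', '.join(sorted(packages))}")
```

A real ASPM platform would of course cover more languages and signals (dependency manifests, API calls, MLOps configurations), but the core idea is the same: surface AI usage the security team cannot otherwise see.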
Other features include real-time notifications of GenAI code, providing greater transparency and accountability, as well as guardrails to prevent vulnerable code from being deployed to production. Legit can also alert on LLM risks by analyzing the code of LLM applications.
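As a rough analogy for such a guardrail, the following hypothetical sketch blocks a deployment when commits marked as containing AI-generated code lack a recorded human review. The “AI-Generated:” and “Reviewed-by:” commit trailers are assumed conventions for illustration only, not a Legit Security API.

```python
import subprocess
import sys

# Assumed convention (for illustration): commits with AI-generated code carry
# an "AI-Generated: true" trailer, and human sign-off is recorded with a
# "Reviewed-by:" trailer. This mimics the kind of pre-deployment guardrail
# described above; it is not how Legit Security implements it.

def commit_bodies(rev_range: str) -> list[str]:
    """Return the full message bodies of commits in the given range."""
    out = subprocess.run(
        ["git", "log", "--format=%B%x00", rev_range],
        capture_output=True, text=True, check=True,
    )
    return [body.strip() for body in out.stdout.split("\x00") if body.strip()]

def main(rev_range: str = "origin/main..HEAD") -> int:
    unreviewed = [
        body.splitlines()[0]  # commit subject line
        for body in commit_bodies(rev_range)
        if "AI-Generated: true" in body and "Reviewed-by:" not in body
    ]
    if unreviewed:
        print("Blocking deployment: AI-generated commits lack human review:")
        for subject in unreviewed:
            print(f"  - {subject}")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Run as a CI step before deployment, a check like this turns the “human must review AI-generated code” policy from a guideline into an enforced gate.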