This could include incorrect model operation, suspicious behavior patterns, or malicious input. Attackers can also attempt to abuse inputs via frequency, by performing controls such as rate-limiting APIs. Attackers may also seek to undermine the integrity of the model’s behavior, leading to undesirable model outcomes, such as failure to detect fraud or make decisions that may have safety and security implications. Controls recommended here include things like detecting strange or conflicting inputs and choosing a model design that is robust to evasion.
Threats related to development time
In the context of AI systems, OWASP’s AI Exchange discusses development time threats as they relate to the development environment used for data and model engineering outside of the usual AI development framework. applications. This includes activities such as collecting, storing and preparing data and models, as well as protecting against attacks such as data leaks, poisonings and supply chain attacks.
Specific controls cited include protecting development data and using methods such as encrypting data at rest, implementing data access control, including least privileged access, and ensuring implementing operational controls to protect the security and integrity of stored data.
Additional controls include the security of the development of the systems involved, including the people, processes and technologies involved. This includes implementing controls such as personnel security for developers and protecting the source code and configurations of development environments and their endpoints through mechanisms such as virus scanning and security management. vulnerabilities, as in traditional application security practices. Compromises of development endpoints could result in impacts to development environments and associated training data.
The AI Exchange also mentions AI and ML BOMs to help mitigate supply chain threats. He recommends using MITER ATLAS ML Supply Chain Compromise as a resource to mitigate provenance and pedigree issues and also conduct activities such as verifying signatures and using dependency checking tools.
AppSec Execution Threats
The AI Exchange highlights that AI systems are ultimately computer systems and may have similar weaknesses and vulnerabilities that are not specific to AI but impact computer systems including AI is part. These controls are, of course, addressed by long-standing application security standards and best practices, such as those from OWASP. Application Security Verification Standard (ASVS).
That said, AI systems have unique attack vectors that are also addressed, such as execution model poisoning and theft, insecure output handling, and direct prompt injection, this the latter also being cited in OWASP. LLM Top 10, claiming first place among the listed threats/risks. This is due to the popularity of GenAI and LLM platforms over the past 12-24 months.
To address some of these AI-specific runtime AppSec threats, AI Exchange recommends controls such as execution model and I/O integrity to combat model poisoning. For execution model theft, controls such as execution model confidentiality (e.g., access control, encryption) and model obfuscation make it difficult for attackers to understand the model in a deployed environment and extract information to fuel their attacks.
To address insecure output handling, recommended controls include encoding model output to avoid traditional injection attacks.
Rapid injection attacks can be particularly harmful to LLM systems, aiming to create input to cause the LLM to unknowingly execute the attackers’ goals via direct or indirect rapid injections. These methods can be used to trick the LLM into disclosing sensitive data such as personal data and intellectual property. To manage direct prompt injection, the OWASP LLM Top 10 is cited, and key recommendations to avoid its occurrence include applying privileged control for LLM access to back-end systems, segregating content external user prompts and establishing trust boundaries between the LLM and external sources. .
Finally, AI Exchange addresses the risk of leaking sensitive input data at runtime. Consider GenAI prompts being leaked to a party they shouldn’t be, such as in a mid-attack scenario. GenAI prompts may contain sensitive data, such as company secrets or personal information that attackers might want to capture. Controls here include protecting the transport and storage of model parameters through techniques such as access control, encryption, and minimizing retention of ingested prompts.
Community collaboration on AI is essential to ensure security
As the industry continues its journey toward adoption and exploration of AI capabilities, it is critical that the security community continues to learn how to secure AI systems and their use. This includes internally developed applications and systems with AI capabilities, as well as organizational interaction with external AI platforms and providers.
The OWASP AI Exchange is an excellent open resource that practitioners can explore to better understand both potential risks and attack vectors, as well as recommended controls and mitigations to address AI-specific risks. AI. As OWASP AI Exchange pioneer and AI security leader, Rob van der Veer declared Recently, much of AI security relies on the work of data scientists and AI security standards and guidelines such as AI Exchange can help with this.
Security professionals should primarily focus on the blue and green controls listed in the OWASP AI Exchange Browser, which often include integrating long-standing AppSec and cybersecurity controls and techniques into systems using AI.