Brief
The Cyber Security Agency of Singapore (CSA) has just published Guidelines for Securing AI Systems ("Guidelines") and a Companion Guide to Securing AI Systems ("Companion Guide").
The Guidelines advocate a "secure by design" and "secure by default" approach that considers both existing cybersecurity threats and emerging risks, such as adversarial machine learning. Their goal is to provide system owners with principles for raising awareness and implementing security controls throughout the AI lifecycle.
The Companion Guide is an open, collaborative resource and, while not mandatory, offers guidance on useful measures and controls informed by industry best practices, academic research, and resources such as the MITRE ATLAS database and the OWASP Top 10 for Machine Learning and Generative AI.
The CSA is currently seeking comments on the Guidelines and Companion Guide. Interested organizations have until 11:59 p.m. on September 15, 2024 to submit their comments to Aisecurity@csa.gov.sg.
Guidelines for Securing AI Systems
The Guidelines support securing AI systems throughout their lifecycle, focusing on cybersecurity risks rather than on safety, fairness, transparency, or the misuse of AI in cyberattacks. Organizations are encouraged to:
- Raise awareness and conduct risk assessments during the planning and design phase
- Secure supply chains, choose appropriate models, track and protect AI assets, and secure development environments during the development phase
- Secure deployment infrastructure, establish incident management procedures, and release responsibly during the deployment phase
- Monitor inputs and outputs, securely manage updates, and establish vulnerability disclosure processes during operations and maintenance
- Securely dispose of data and models during the end-of-life phase
Companion Guide to Securing AI Systems
The Companion Guide is a more detailed document intended to help system owners implement the Guidelines, and it outlines practical controls that system owners can consider when adopting AI systems. For example, the Companion Guide explains how organizations should:
- Start by assessing the risks
- Identify relevant measures/controls in checklists for each phase of the AI lifecycle, covering planning and design, development, deployment, operations and maintenance, and end of life
The Companion Guide also provides detailed walkthroughs and implementation examples showing how the controls can be applied to AI systems.
The Guidelines and Companion Guide are welcome developments that underscore CSA’s commitment to a collaborative and proactive approach to fostering the security of AI systems. As Singapore continues to position itself at the forefront of technological innovation, resources such as the Guidelines and Companion Guide will continue to play an important role in building trust by ensuring that Singapore’s AI systems remain robust and resilient to vulnerabilities.
* * * * *
© 2024 Baker & McKenzie.Wong & Leow. All rights reserved. Baker & McKenzie.Wong & Leow is a limited liability partnership and is a member firm of Baker & McKenzie International, a global law firm with member law firms worldwide. Consistent with common terminology used in professional services organizations, reference to a “director” means a person who is a partner, or equivalent, in such law firm. Similarly, reference to an “office” means an office of such law firm. This may be referred to as “lawyer advertising” requiring notice in some jurisdictions. Past performance does not guarantee a similar result.