NIST provides a set of resources to help CISOs and security leaders protect their technologies. These include the NIST Cybersecurity Framework (CSF) and NIST's Artificial Intelligence Risk Management Framework (AI RMF), both of which address cybersecurity risks to AI systems. Although they share some commonalities, they also have fundamental differences.
Let's look at each framework and examine how to use the NIST frameworks for AI.
What is NIST CSF?
The NIST Cybersecurity Framework (CSF), formerly known as the Framework for Improving Critical Infrastructure Cybersecurity, is the de facto standard for cybersecurity risk management. Originating from Executive Order 13636 in 2013, NIST collaboratively created the CSF as a clear and concise approach to organizing and communicating cybersecurity risks to executive leadership.
Launched in 2014, the first iteration of the CSF was a flexible, repeatable tool to help organizations of all types and sizes manage cybersecurity using the following five functions:
- Identify.
- Protect.
- Detect.
- Respond.
- Recover.
The CSF 2.0, released in 2024, added a sixth function, govern, to the framework. The goal is to give organizations a way to put in place governance, risk management and compliance (GRC) capabilities that make risk management a repeatable and measurable process from top to bottom.
What is the AI RMF?
NIST published the AI Risk Management Framework (AI RMF) in 2023 to, in part, “cultivate public trust in the design, development, use, and evaluation of AI technologies and systems.”
The AI RMF uses the following four functions to help CISOs and security leaders organize and communicate AI risks:
- Govern.
- Map.
- Measure.
- Manage.
These functions aim to establish GRC capabilities within an organization with respect to AI systems.
While the CSF and AI RMF have similar goals, the scope of the AI RMF is slightly different. The AI RMF is aimed primarily at organizations that develop AI software. As such, it covers the design, development, deployment, testing, evaluation, verification, and validation of AI systems.
However, most organizations are not software developers; rather, they use AI as a tool to become more effective or efficient. To this end, organizations implementing the AI RMF must take a different approach than they do with the CSF. This is not necessarily bad news. Both frameworks were designed to be flexible in their implementation while providing a solid foundation for managing risk.
How to use both frameworks together
The obvious point of intersection between the CSF and the AI RMF lies in their respective governance functions. Many organizations attempt to implement each category or subcategory of both frameworks to manage risk from a principles-based perspective. For organizations with sufficient resources and dedicated staff, this is possible. But many organizations have tight budgets and want to implement these frameworks together.
A simple solution for CISOs and security leaders is to start with a small committee of current employees that discusses technology risks on a recurring basis. This committee can use simple templates to identify, assess, and manage risks. A small, diverse team brings perspective to these critical risk decisions. For example, consider the specific cybersecurity risks of AI, including deepfakes, data leaks in AI prompts, and AI hallucinations.
Once the risks have been identified and analyzed, take stock of the AI systems the organization has deployed or is using. These might include AI assistants, ChatGPT, Dall-E, or other generative AI systems. Use an employee survey or analyze performance data from the network monitoring system to determine which systems are in use. Create a list of these systems and use it to inform the next step.
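The survey step above can be automated in a few lines. The sketch below is a minimal, hypothetical example (the survey field names and tool names are illustrative, not from any real dataset) that consolidates employees' survey answers into a deduplicated inventory, ordered by how widely each system is used:

```python
# Minimal sketch: turn employee survey responses into an AI-system
# inventory. Field names ("employee", "ai_tools") are hypothetical;
# adapt them to your own survey export.
from collections import Counter

survey_responses = [
    {"employee": "A", "ai_tools": ["ChatGPT", "GitHub Copilot"]},
    {"employee": "B", "ai_tools": ["ChatGPT", "Dall-E"]},
    {"employee": "C", "ai_tools": []},
]

# Count how many respondents mention each system
usage = Counter(tool for r in survey_responses for tool in r["ai_tools"])

# Deduplicated inventory, most widely used systems first
inventory = [tool for tool, _ in usage.most_common()]
print(inventory)
```

Even a rough list like this is enough to seed the asset column of a risk register in the next step.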
Next, align AI systems with the identified AI risks. This can be as simple as a spreadsheet that allows the organization to manage risks and assets. From there, decide what actions to take to mitigate risks to assets. This step depends on the context and risk posture of the organization. A good place to start is to outline policies governing how employees use and interact with AI systems. Training and awareness can help reduce risks.
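A spreadsheet like the one described above can be generated programmatically so the committee starts from a consistent template. This is a minimal sketch under stated assumptions: the risk names come from the examples earlier in the article, while the asset names, column headers, and file name are illustrative choices, not a prescribed format.

```python
# Minimal sketch of a risk-to-asset register written out as a CSV for a
# small committee to review. Risks are the AI examples from the article;
# assets and columns are illustrative placeholders.
import csv

risks = ["deepfakes", "data leaks in AI prompts", "AI hallucinations"]
assets = ["ChatGPT", "Dall-E", "internal AI assistant"]

with open("ai_risk_register.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["asset", "risk", "mitigation", "owner", "status"])
    for asset in assets:
        for risk in risks:
            # Mitigation and owner start blank; the committee fills
            # them in during its recurring review.
            writer.writerow([asset, risk, "", "", "open"])
```

One row per asset-risk pair keeps the register easy to sort and filter, and the blank mitigation and owner columns make unaddressed risks visible at a glance.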
The NIST CSF and the AI RMF are excellent resources for organizing and communicating a technology risk portfolio. Using these NIST frameworks for AI together can seem daunting, given their size and scope. Yet, given the flexible nature of both, it is doable with a small team of dedicated professionals. Use this team to identify risks, catalog assets, and decide how to move forward in a strategy that works best for the organization’s unique risk context.
Matthew Smith is a Virtual CISO and management consultant specializing in cybersecurity and AI risk management.