To mark Cybersecurity Awareness Month, global technology trade association ITI released a new paper on Tuesday that provides detailed suggestions on how policymakers can improve the cybersecurity of AI models and systems.
While AI models and systems introduce a new threat vector for malicious actors to exploit, they also provide an opportunity to strengthen proactive cybersecurity measures, ITI said.
In its paper, AI Security Policy Principles, ITI offers five suggestions on how policymakers can help strengthen the cybersecurity of AI systems and reiterates how AI can be used to improve proactive cybersecurity measures.
For example, AI can be used to thwart dynamic, rapidly evolving threats, and AI-based analytical tools can help identify new tactics, techniques, and behaviors of sophisticated and well-resourced adversaries.
“Cyber threats to AI models and systems know no borders and continue to evolve. The technology industry is urging policymakers around the world to prioritize engagement with like-minded partners and allies to promote a common and consistent approach to AI security,” said Courtney Lang, vice president of policy at ITI. “The new ITI policy guide aims to give lawmakers the tools to develop interoperable AI security frameworks that protect consumers, mitigate potential risks, and empower the global cybersecurity ecosystem and workforce to work in the age of AI.”
The ITI AI Security Policy Principles outline five key principles that policymakers should follow:
- Leverage existing cybersecurity practices, standards, and controls where they are already sufficient;
- Coordinate with like-minded allies and partners to ensure policy approaches to AI security are global and interoperable;
- Ensure that any AI security policy reflects a holistic approach throughout the AI lifecycle and value chain;
- Use public-private partnerships to achieve cybersecurity outcomes through AI; and
- Ensure adequate support for fundamental AI R&D as well as training and development of the existing cybersecurity workforce.