Singapore plans to issue guidelines soon that it says will offer "practical measures" to strengthen the security of artificial intelligence (AI) tools and systems. The Cyber Security Agency (CSA) is expected to release its draft technical guidelines for securing AI systems for public consultation later this month, according to Janil Puthucheary, Singapore's Senior Minister of State for the Ministry of Communications and Information.
The voluntary guidelines can be adopted alongside existing security processes that organizations implement to address potential risks in AI systems, Puthucheary said during his keynote address Wednesday at the Association of Information Security Professionals (AiSP) AI Security Summit.
Through these technical guidelines, the CSA hopes to provide a useful reference for cybersecurity professionals looking to improve the security of their AI tools, the minister said. He further urged the industry and community to do their part to ensure AI tools and systems remain safe and secure against malicious threats, even as attack techniques continue to evolve.
Also: The Best VPN Services (and How to Choose the Right One for You)
"Over the last couple of years, AI has proliferated rapidly and been deployed in a wide variety of spaces," Puthucheary said. "This has had a significant impact on the threat landscape. We know that this rapid development and adoption of AI has exposed us to many new risks, (including) adversarial machine learning, which allows attackers to compromise the function of the model."
He highlighted how security vendor McAfee managed to compromise Mobileye's AI system by subtly altering the speed limit signs the system had been trained to recognize.
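The core idea behind such adversarial attacks is that a small, deliberate change to an input can flip a model's output. The sketch below is purely illustrative, not the actual Mobileye attack: it uses a toy logistic-regression classifier with made-up weights and a fast-gradient-sign-style perturbation, where each input feature is nudged by a small epsilon in the direction that most lowers the model's confidence.

```python
import math

# Hypothetical, hand-picked weights for a toy "sign classifier".
WEIGHTS = [2.0, -3.0, 1.0]
BIAS = 0.5

def predict(x):
    """Probability the model assigns to the correct class (sigmoid of a linear score)."""
    z = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return 1 / (1 + math.exp(-z))

def adversarial_perturb(x, epsilon=0.3):
    # For a linear model the gradient of the score w.r.t. the input is just
    # the weight vector, so stepping each feature by -epsilon * sign(weight)
    # is the move that most reduces the correct-class score.
    return [xi - epsilon * (1 if w > 0 else -1) for xi, w in zip(x, WEIGHTS)]

clean = [1.0, 0.2, 0.5]
adv = adversarial_perturb(clean)
print(predict(clean), predict(adv))  # confidence drops after a small perturbation
```

Each feature moves by at most 0.3, yet the model's confidence in the correct class falls noticeably; in image classifiers, the same mechanism applied per pixel is what lets a lightly modified road sign be misread.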
AI is creating new security risks, and organizations in both the public and private sectors must work to understand this evolving threat landscape, Puthucheary said. He added that the Government Technology Agency (GovTech), which is responsible for the Singapore government's information systems, is developing capabilities to simulate potential attacks on AI systems to understand how they can impact the security of these platforms. "Doing so will help us put the right safeguards in place," he said.
Puthucheary added that efforts to better protect against existing threats must continue, as AI is vulnerable to "classic" cyber threats, such as those targeting data privacy. He noted that the increasing adoption of AI will expand the attack surface through which data can be exposed, compromised, or leaked. He said AI can be leveraged to create increasingly sophisticated malware, such as WormGPT, which can be difficult for existing security systems to detect.
Also: Cybersecurity teams need new skills even as they struggle to manage existing systems
At the same time, AI can be used to improve cyber defense and equip security professionals with the ability to identify risks faster, at scale and with greater accuracy, the minister said. He added that machine learning-based security tools can help detect anomalies and initiate autonomous actions to mitigate potential threats.
According to Puthucheary, AiSP is in the process of setting up an AI special interest group, where its members can exchange views on developments and capabilities. Established in 2008, AiSP describes itself as an industry group focused on developing the technical skills and interests of Singapore’s cybersecurity community.
Also: AI is transforming cybersecurity and businesses must become aware of this threat
In April, the U.S. National Security Agency's AI Security Center released a fact sheet, Deploying AI Systems Securely, which it said offered best practices for deploying and operating AI systems.
Developed jointly with the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the guidance aims to improve the integrity and availability of AI systems and mitigate known vulnerabilities in these systems. The document also describes methodologies and controls to detect and respond to malicious activity targeting AI systems and associated data.