Artificial Intelligence and Machine Learning
,
Geographical focus: the United Kingdom
,
Geo-specific
Guidance is first step towards global standard, says AI minister
The UK government has published voluntary guidelines intended to help developers and suppliers of artificial intelligence protect models from hacking and potential sabotage.
See also: Does Office 365 deliver the email security and resiliency businesses need?
Published on Wednesday, the British government’s code of practice on AI lists recommendations such as monitoring AI system behavior and performing model testing.
“Organizations across the UK face a complex cybersecurity landscape, and we want to ensure they have the confidence to adopt AI into their infrastructure,” said the Minister for AI and Property intellectual, Jonathan Camrose.
The UK government has said businesses should strengthen AI supply chain security and reduce potential risks from vulnerable AI systems, such as data loss. The guidelines recommend measures such as purchasing secure software components, including external templates, frameworks, or APIs, only from verified third-party developers and ensuring the integrity of training data from publicly available sources.
“Particular attention should be given to the use of open source models, where the responsibility for model maintenance and security becomes complex,” the guide states.
Other measures include training AI developers in secure coding, implementing security guardrails for different AI models, and providing the ability to interpret and explain AI models.
The UK government intends to turn these guidelines into a global standard to promote safety by design in AI systems. As part of this plan, the government has opened a consultation attractive responses until July 10.
The Conservative government pledged at a summit in November to promote a shared global approach to AI security (see: UK AI Security Summit to focus on risk and governance).
The guidance comes just days after the UK’s AI Safety Institute released an AI model evaluation platform called Inspect, which allow startups, universities and AI developers to evaluate the specific capabilities of individual models and produce a score based on their results
The US and UK AI Safety Institutes said in April they would work together to develop safety assessment mechanisms and guidance for emerging risks (see: US and UK team up to align on AI security and share resources).