Government security agencies in the Five Eyes alliance countries have issued new guidelines for the secure deployment of artificial intelligence systems, particularly to ensure that existing network vulnerabilities are not further exploited through the emerging technology.
Released Monday afternoon, the best practices document is co-authored by the Cybersecurity and Infrastructure Security Agency, the FBI, the Australian Signals Directorate's Australian Cyber Security Centre, the Canadian Centre for Cyber Security, New Zealand's National Cyber Security Centre and the UK's National Cyber Security Centre.
The guidelines set three main objectives: improving the confidentiality and integrity of AI systems; ensuring that known cybersecurity vulnerabilities are remediated; and implementing robust protection measures to detect and prevent malicious activity.
Given the prevalence of new AI software in established digital networks, security authorities are examining how best to harness the potential of AI while mitigating the risk of disaster. Current security concerns surrounding AI systems typically revolve around exploitation of the data they are trained on.
“Malicious actors targeting AI systems can use attack vectors unique to AI systems, as well as standard techniques used against traditional IT,” the guide states. “Due to the wide variety of attack vectors, defenses must be diverse and comprehensive. Advanced threat actors often combine multiple vectors to execute more complex operations. Such combinations can penetrate layered defenses more effectively.”
While the best practices touch on familiar advice, such as securing the deployment environment and its governance, the guidance also notably recommends a thorough review of AI software both before and during deployment.
The agencies note that two measures are essential to mitigating these risks: leveraging cryptographic protocols and digital signatures to confirm the integrity and origin of each artifact passing through the system, and storing all forms of code so they can be validated later and any modifications tracked.
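As a rough illustration of that recommendation, the sketch below signs an artifact's SHA-256 digest at build time and verifies it before deployment. It uses Python's widely available cryptography package; the file name, key handling and overall flow are illustrative assumptions, not details drawn from the guidance.

```python
# Illustrative sketch: sign an artifact's digest at build time,
# verify it at deploy time. Names and flow are hypothetical.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_artifact(path: str, key: Ed25519PrivateKey) -> bytes:
    """Build time: hash the artifact and sign the digest."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return key.sign(digest)


def verify_artifact(path: str, sig: bytes, pub: Ed25519PublicKey) -> bool:
    """Deploy time: recompute the digest and check the signature."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        pub.verify(sig, digest)  # raises InvalidSignature if tampered
        return True
    except InvalidSignature:
        return False


# Usage with a stand-in artifact file.
open("model_artifact.bin", "wb").write(b"example model bytes")
key = Ed25519PrivateKey.generate()
sig = sign_artifact("model_artifact.bin", key)
print(verify_artifact("model_artifact.bin", sig, key.public_key()))  # True
```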
The guide recommends automating detection, analysis and response capabilities across any network with a deployed AI model, as the authors say this level of automation can help reduce the workload on IT and security teams, with the caveat of using good judgment about when to bring a human perspective into the digital supply chain.
“When considering using other AI capabilities to make automation more efficient, carefully weigh the risks and benefits, and ensure there is a human in the loop where needed,” the guide states.
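One common way to strike that balance is confidence-based triage: automation handles the clear-cut cases and escalates ambiguous ones to an analyst. The sketch below is a hypothetical illustration; the alert fields, thresholds and actions are invented for the example and do not come from the guide.

```python
# Hypothetical human-in-the-loop triage: act automatically only when the
# detector is confident, escalate ambiguous alerts to a human analyst.
from dataclasses import dataclass


@dataclass
class Alert:
    source_ip: str
    description: str
    confidence: float  # detector's confidence this is malicious, 0..1


def triage(alert: Alert) -> str:
    if alert.confidence >= 0.95:
        return f"auto-blocked {alert.source_ip}"         # high confidence: act
    if alert.confidence <= 0.10:
        return "logged only"                             # clearly benign: record
    return f"escalated to analyst: {alert.description}"  # ambiguous: human decides


for a in [Alert("203.0.113.7", "model API scraping burst", 0.98),
          Alert("198.51.100.2", "unusual prompt pattern", 0.55)]:
    print(triage(a))
```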
The document also addresses the need to protect AI model weights. Weights in a neural network determine how strongly each input influences the system's decision about that input; altering the numerical weights changes the overall output of the AI model.
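A toy example makes the stakes concrete: in the single-neuron “classifier” below, a small, targeted perturbation to one weight flips the model's decision on the same input. The numbers are invented purely for illustration.

```python
# Toy illustration (not from the guidance): tampering with one weight
# flips a single sigmoid neuron's decision on an unchanged input.
import numpy as np


def neuron(x: np.ndarray, w: np.ndarray, b: float) -> int:
    """A single sigmoid neuron thresholded at 0.5."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return int(p > 0.5)


x = np.array([1.0, 2.0])
w = np.array([0.4, -0.1])
print(neuron(x, w, b=0.0))              # original weights -> 1

w_tampered = w + np.array([0.0, -0.2])  # small, targeted perturbation
print(neuron(x, w_tampered, b=0.0))     # tampered weights -> 0
```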
The Five Eyes security agencies recommend “aggressively” isolating weight storage in a highly restricted digital zone and implementing additional hardware protections to prevent malicious actors from tampering with the weights.
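Hardware protections aside, a simple software-level complement to that isolation is to lock down the weight file's permissions and pin its digest, refusing to load weights that no longer match. The sketch below is an assumption about how a deployment might do this, not a prescription from the document.

```python
# Illustrative sketch: restrict access to a stored weight file and verify
# a pinned SHA-256 digest before loading. File names are hypothetical.
import hashlib
import os
import stat

WEIGHTS = "weights.bin"

# Provisioning time: write the file, lock it down, record its digest.
with open(WEIGHTS, "wb") as f:
    f.write(b"\x00" * 1024)      # stand-in for real weight bytes
os.chmod(WEIGHTS, stat.S_IRUSR)  # owner read-only, no group/other access
pinned = hashlib.sha256(open(WEIGHTS, "rb").read()).hexdigest()


# Load time: refuse weights whose digest no longer matches the pin.
def load_weights(path: str, expected_sha256: str) -> bytes:
    data = open(path, "rb").read()
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise RuntimeError(f"{path} failed integrity check; refusing to load")
    return data


print(len(load_weights(WEIGHTS, pinned)), "bytes loaded")
```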
“AI systems are software systems,” the paper concludes. “As such, deploying organizations should prefer systems that are secure by design, where the designer and developer of the AI system take an active interest in the positive security outcomes for the system once operational.”