NSA, GAO and industry leaders say AI can augment the workforce, but the work must be verifiable and explainable.
Cybersecurity leaders highlighted ways artificial intelligence can strengthen government defenses at GovCIO Media & Research's inaugural AI Summit in Tysons Corner, Virginia, on Thursday. AI augments the cyber workforce by doing things people can't do, according to Kevin Walsh, director of the Government Accountability Office's information technology and cybersecurity team.
“Cyber is perfect for AI because of that, because of the amount of interesting data we have,” Walsh said. “The volume is indescribable.”
Organizations must be deliberate about AI implementation and make the right decisions about where to apply it, according to Tyson Brooks, technical director of the National Security Agency's Artificial Intelligence Security Center. He highlighted the importance of human-AI collaboration, emphasizing that AI improves threat detection and response but cannot replace human judgment.
“Just because you add an AI system or component doesn’t mean it’s going to make it better,” Brooks said. “We’re going to need to get this human in the loop.”
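Brooks's human-in-the-loop point can be illustrated as a review gate: the AI component scores every event, but anything above a risk threshold is routed to a person for the final call. This is a minimal sketch under that assumption; the function names and threshold are illustrative, not drawn from any NSA system.

```python
# Minimal human-in-the-loop sketch: AI scores events at machine scale,
# but a human approves any high-impact response.
def triage(events, ai_score, human_review, auto_threshold=0.2):
    """Return (event, action) pairs; risky events go to a person."""
    actions = []
    for event in events:
        risk = ai_score(event)            # AI augments: scores every event
        if risk < auto_threshold:
            actions.append((event, "log"))         # low risk: automate
        else:
            decision = human_review(event, risk)   # human makes the call
            actions.append((event, decision))
    return actions

# Toy usage with stand-in scoring and review functions.
events = ["dns_lookup", "mass_file_delete"]
score = lambda e: 0.9 if "delete" in e else 0.05
review = lambda e, r: "block"
print(triage(events, score, review))
# [('dns_lookup', 'log'), ('mass_file_delete', 'block')]
```

The design choice is that automation never widens on its own: raising `auto_threshold` is itself a human decision.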
Agencies need to be able to audit and explain AI systems, Brooks added.
“At the AI Security Center, we address the complex levels of resilience and reliability, right down to the mathematical layer. We want to understand the math behind [any new AI system],” Brooks said. “Can you explain the mathematical equations behind some of these algorithms that you propose the government use to secure its systems?”
The rapid pace of evolution remains a challenge for the explainability of systems, Walsh warned.
“I love explainability. I like that people can actually know,” Walsh said. “There are, however, some AI systems where this is becoming increasingly difficult to achieve.”
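The explainability Walsh describes is easiest to see by contrast: a linear detector decomposes its score into per-feature contributions an analyst can audit, while a deep model generally cannot. The weights and feature names below are purely illustrative, not from any agency system.

```python
# Explainable scoring: a linear model's decision breaks down into
# per-feature contributions that a human can inspect and challenge.
weights = {"failed_logins": 0.6, "off_hours": 0.3, "new_device": 0.1}

def score_with_explanation(features):
    """Return the total risk score and each feature's share of it."""
    contributions = {k: weights[k] * features.get(k, 0.0) for k in weights}
    return sum(contributions.values()), contributions

total, why = score_with_explanation({"failed_logins": 5, "off_hours": 1})
print(round(total, 2))  # 3.3
print(why)              # every feature's contribution is visible
```

With a deep network, `why` has no such direct analogue, which is exactly the auditability gap Walsh flags.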
The effectiveness and explainability of AI depend heavily on the quality of the data used to train it. To ensure accurate and reliable AI results, agencies must secure that data from manipulation and poisoning, Brooks said.
“If your data is corrupted and your data has been manipulated, then the decision that you’re going to make, the decision that the AI system produces, is going to be wrong,” Brooks said. “Those are the kinds of things you also need to consider … when you’re making life and death decisions every day, [you need to know] that your data is secure.”
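One basic control against the tampering Brooks describes is to fingerprint training data with cryptographic hashes and verify the fingerprint before training, so altered or injected records are caught. This is a sketch of the general technique, not any specific agency tooling; the record strings are made up.

```python
import hashlib

def fingerprint(records):
    """SHA-256 each training record, then digest the whole set."""
    hashes = [hashlib.sha256(r.encode()).hexdigest() for r in records]
    combined = hashlib.sha256("".join(hashes).encode()).hexdigest()
    return hashes, combined

def verify(records, expected_combined):
    """True only if no record changed since fingerprinting."""
    _, combined = fingerprint(records)
    return combined == expected_combined

data = ["login ok user=alice", "login fail user=bob"]
_, baseline = fingerprint(data)                  # taken at ingest time
tampered = data[:-1] + ["login ok user=bob"]     # one poisoned record
print(verify(data, baseline))      # True
print(verify(tampered, baseline))  # False
```

Hashing only detects tampering after ingest; poisoned data collected from open sources passes this check, which is why provenance and collaboration matter too.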
Bad data threatens AI systems, said Hansang Bae, CTO of Zscaler US Government Solutions, and cyber adversaries are learning to use it to their advantage.
“Data poisoning is a new horizon of cyber,” Bae said. “If you think that China does not think about how to poison data, and it’s so simple to do, just look at any open source network or social network, which is full of useless information.”
The Artificial Intelligence Security Center provides guidance on how to ensure data security for AI systems, Brooks said. Agencies, industry and academia are working together to find the best answers to these cybersecurity and AI questions, he added.
“We need to have that element of collaboration and understanding the full spectrum of how data can actually be manipulated and poisoned. And then we brought together the greatest minds working on this to understand and provide the solutions and guidance to secure this type of data as well,” Brooks said.
AI, Walsh said, can give agencies and industry an advantage over cyber adversaries by making cybersecurity personnel more effective.
“In cyber, we have been caught in the traditional game of cat and mouse, in which the good guys and the bad guys are always chasing each other, and AI could give us more advantages simply through the resources we can put to good use,” Walsh said. “It’s going to be difficult for Russia and Iran, maybe not China, but those hackers, to put together the kind of resources that can attack at 3 a.m.”
Agencies rely on AI to protect their systems, but resilience is crucial. Resilient systems can withstand attacks and recover quickly, minimizing damage, Bae added.
“We’re at a point where we can pivot and talk about cyber resilience,” Bae said. “I want the resilience part to come into play, because despite what you do, [the AI system will say] ‘I’m going to protect you.’ And then I’ll warn you that you’re heading for an accident. Your lead is almost gone, so I’ll warn you about that.”
The future of cybersecurity and AI requires human oversight to enable decision-making and capture nuances that are often lost, Brooks said. In national security environments, that oversight is paramount, he added.
“That human element is going to have to be there, because that human logic, that common sense, is going to have to play a role before the final decision is actually made, particularly when we’re talking about loss of life scenarios,” Brooks said.
The workforce works in tandem with AI, Bae said, to ensure small issues don’t become major problems.
“Cyber AI is there to help the operator focus on the things that matter, even if they seem small, because we know they will sprout into an oak tree,” Bae said.