As state and local governments grapple with a digital landscape that artificial intelligence is rapidly reshaping, counties have an “obligation” to learn how to integrate AI into their cybersecurity efforts, officials said.
“This is very important for our local governments because they have to figure out how to not only best use artificial intelligence, but also how to protect our constituents and protect information. How do we find that balance?” asked Sen. Mary Beth Carozza (R-Eastern Shore).
She was moderating a panel Wednesday at the Maryland Association of Counties’ summer conference titled “Sentinels of the Digital World: AI’s Role in Strengthening Cybersecurity.” It was one of several sessions and discussions this week on the many ways government agencies can use AI, including in economic development, welfare enrollment and communications.
Among other uses, AI is becoming an increasingly important player in cybersecurity systems.
Stephen Pereira, Calvert County’s director of technology services, who was on the panel, said cybersecurity is now an arms race: using AI to counter cyberattacks from bad actors using AI to hack systems.
“If you don’t use AI in cybersecurity, you won’t have real-time information about ransomware attacks and… you won’t be able to act with the same speed and real-time information,” he said. “Hackers are using AI. The only way to fight AI is to use AI.”
But Pereira urged county and state officials to also consider the downsides of using AI.
He noted that AI requires a lot of energy to operate.
“There are other factors to consider, such as the environmental threat these systems pose. They consume a lot of energy and data. They are extremely expensive to operate,” Pereira said.
The use of AI also carries its own cybersecurity risks in how data is distributed and used. Pereira further noted that AI performing tasks people are typically paid to do could lead to “massive job losses” and other economic threats.
He even spoke of the “existential threat” of AI, as some people who are not adequately informed about its uses, capabilities and limitations may not trust AI programs.
“Do you think AI will make us completely obsolete and destroy the human race? I don’t know. I think it’s unlikely, but I wouldn’t rule it out,” he joked.
Timothy Gilday, senior director of emerging technologies at General Dynamics Information Technology, agreed that there are general “trust issues” when it comes to AI.
“Trust is what holds back widespread adoption, whether it’s an app, a vending machine or a new car model. Any new technology eventually runs into resistance if it’s something we’re not used to,” Gilday said.
“How long will it take us to adopt it, because we have trust issues with it?” he asked during Wednesday’s panel. “With AI, I would say the bar is much higher.”
But Gilday argued that education about AI will help address those trust issues.
“Education and awareness are, I think, the biggest barriers to AI adoption… It’s more about helping people understand,” he said. “We’re not dealing with an unwieldy monster. It’s code.”
Carozza said she believes the roundtables should help “all of us in state and local government do the best we can on AI and cybersecurity.” As a member of the Senate Education, Energy and Environment Committee, she said she’s interested in learning more about AI-related issues.
“Our president has asked all of us to step up,” she said, referring to Sen. Brian Feldman (D-Montgomery).