COLUMBIA, Mo. (KFVS) – Artificial intelligence isn’t quite ready to handle cybersecurity for a local government or business on its own yet, but it could be in the near future, according to researchers at the University of Missouri’s Center for Cybersecurity Education, Research and Infrastructure.
The recent study was co-authored by University of Missouri researcher Prasad Calyam, the Greg L. Gilliom Professor of Cybersecurity in Electrical Engineering and Computer Science.
Calyam’s team found that chatbots such as OpenAI’s ChatGPT and Google’s Bard, since renamed Gemini, performed relatively well on a hacking exam used to test the knowledge of cybersecurity professionals.
“What we found interesting is that the responses are, in general, pretty good, and we hope that as these chatbots mature, they will get even better,” Calyam said.
However, the bots’ performance suffered when asked to respond with advice, which the team identified as ineffective at best and downright harmful at worst.
“This could potentially be bad because if you are an expert and you rely on bad advice, the vulnerability could be exposed even more than it is, so the problem can become even more serious,” Calyam said.
The team’s conclusion is that while AI now has applications in cybersecurity, the chatbots are not currently capable of the consistency required to handle cybersecurity independently.
“These AI tools can be a good starting point for investigating issues before consulting an expert,” Calyam said. “They can also be good training tools for those who work with information technology or want to learn the basics of identifying and explaining emerging threats.”
The center said that with so much of our lives now taking place online, it is more vital than ever that cybersecurity is a top priority for governments, businesses and consumers.
Copyright 2024 KFVS. All rights reserved.