A study published Tuesday by English software company Immersive Labs found that in testing whether people could exploit generative AI chatbots, the technology fell short of human ingenuity.
The report analyzes a public challenge the company launched in 2023 in which participants with varying levels of technical skills successfully tricked a generative AI chatbot into divulging a password, showing that the technology has cybersecurity weaknesses that could easily be exploited by bad actors determined to breach the software’s defenses.
The report found that participants’ persistence was similar to that of cybercriminals, who repeatedly probe networks with various attack techniques until they find vulnerabilities.
As state and local governments look for ways to integrate AI into their digital services and back offices, cyberpsychologist John Blythe, one of the study’s authors, told StateScoop that it is imperative that they first address the knowledge gap with employees through cybersecurity training programs designed to address human psychology.
Immersive Labs designed 10 levels of generative AI chatbots, each harder to trick into revealing a secret word. Some participants invited the robots to encode their password in a base 64 number (instead of using the usual base 10 number system). Others simply asked the robot to write the password backwards, while others asked the robot to speak a password in Morse code.
“People were not only creative, but really able to leverage their problem-solving skills and cognitive flexibility,” John Blythe, director of cyberpsychology at Immersive Labs, told StateScoop. “A technique that might have worked at one level might not have worked at the next level, so they had to adapt the types of techniques they used to try to fool the robot.”
Most participants (88%) were able to extract at least the first level password. The report says this shows there is a relatively low barrier to circumventing basic generative AI security protocols. The researchers said the tests highlight the urgency for organizations to implement stronger security measures, including “data loss prevention controls, strict contextual input validation filtering to prevent and recognize attempts manipulation of the GenAI output”.
Cyberpsychology
Cyberpsychology studies the impact of new technologies on human behavior, such as social media, technological addictions, online romantic relationships and, of course, cybersecurity.
“When it comes to cybersecurity, we work to understand what stops people from engaging in cybersecurity best practices, and how we can design interventions and training to overcome these psychological barriers,” Blythe said.
He said psychology can be used to understand the psychological tactics involved in social engineering, when a bad actor manipulates or tricks a person into gaining access to the system, such as phishing emails.
“The attacker mentality, the psychology of hackers and attackers, helps us more effectively manage that human element, which we know contributes to a significant number of security breaches,” Blythe said.
As state and local governments prepare to adopt their budgets For the next fiscal year beginning July 1, many are planning funds for cybersecurity efforts, including employee training.
Most organizations, Blythe said, focus their cybersecurity training on closing the knowledge gap among their employees. He said they were wrong in assuming that increased training would reduce the risk of cyberattacks caused by human error, such as a staff member falling for a phishing scheme.
“What we know from psychology is that simply disclosing information to people very rarely changes their behavior,” Blythe said. “We see it in public health, we see it in drunk driving, climate change and safety. »
Blythe said there are many explanations for why people fall prey to social engineering tactics: They may have a poor perception of risk or lack confidence. Some cybersecurity policies interfere with how employees normally perform their jobs, which could cause them to ignore their training altogether.
A recent audit of Missouri Cybersecurity Practices found that 20% of state employees have not completed required monthly cybersecurity training.
“(Cybersecurity) campaigns tend to be most effective when they target people’s personal lives,” Blythe said.
He said traditional cybersecurity training programs tend to be overloaded with corporate jargon and often don’t resonate with employees who might better interact with training that addresses cyber threats against families or celebrities.
“It’s not enough to have clear behavior at the forefront of any campaign to say, ‘Be careful with Generation AI,’” he said. “It could be using a strong password, installing software updates, anything that’s really going to create a connection that will lead to that attitude change that you really want to achieve.”
Fighting human ingenuity with humans
The intelligent prompts that participants delivered to chatbots during the Immersive Labs experience were akin to traditional prompt injection techniques that hackers have used for decades. A web form that allows a user to enter a snippet of PHP code, for example, might allow that user to do something the system designer hadn’t intended, like get a list of users or keywords. pass.
Generative AI interfaces, which rely on tons of data, present an additional security challenge. Noting the relative ease with which novice users found ways to extract passwords, even if they were poorly secured, the report recommends that organizations form interdisciplinary teams of experts to develop comprehensive policies for password mining. Generative AI.
Dozens States have created working groups to identify the risks and benefits of generative AI. Many states have developed policies and laws establishing guardrails for how generative AI can be used in state government.
California this month, it announced it would test generative AI tools over a six-month trial period across four departments to address various operational challenges. In February, Oklahoma Task Force submitted its final recommendations to Governor Kevin Stitt on how the state can use AI to make government more effective.
The Immersive Labs report also encourages organizations to implement contingency plans and security mechanisms, such as automated shutdown procedures, regular backups of data and system configurations, in case a cyberattack would violate a network or malfunction of a generative AI system. “Using monitoring and human intervention mechanisms alongside systems can provide an additional layer of control and resilience,” the study says.
Blythe emphasized the need to not leave software designers out of the conversation and to promote the creation of generative AI tools with built-in cybersecurity protections against rapid injection attacks.
“We need better collaboration between governments, academia and industry to conduct research so that we can understand what these harms are, but also have a more coordinated effort to design out these potential harms,” he said. he declared, adding that there is a trap.
“An inherent flaw in generative AI is that it is impossible to completely engineer this type of attack, because human ingenuity will always win out.”