Today, generative AI (GenAI) chatbots are popping up in everything from email clients to HR tools, providing a user-friendly, seamless path to better business productivity. But there’s a problem: Too often, employees don’t think about the data security of the prompts they use to get responses from these chatbots.
In fact, more than a third (38%) of employees share sensitive work information with AI tools without their employer’s permission, according to a survey released this week by the US National Cybersecurity Alliance (NCA). And that’s a problem.
The NCA survey, which polled 7,000 people worldwide, found that Gen Z and millennial workers are more likely to share sensitive work information without obtaining permission: 46% and 43%, respectively, admitted to the practice, compared with 26% of Generation X workers and 14% of baby boomers.
Concrete consequences of sharing data with chatbots
The problem is that most of the popular chatbots capture whatever information users put into their prompts – which could include proprietary revenue data, top-secret design plans, sensitive emails, customer data, and more – and send it back to the large language models (LLMs) behind them, where it is used to train the next generation of GenAI.
And that means someone could later access that data using the right prompts, because it’s now part of the recoverable data lake. Or perhaps the data is kept for internal use by the LLM provider, but its storage is not configured correctly. The dangers here – as Samsung discovered in a high-profile incident – are relatively well understood by security professionals, but not so much by ordinary workers.
ChatGPT creator OpenAI warns in its user guide: “We are unable to remove specific prompts from your history. Please do not share any sensitive information in your conversations.” But it’s difficult for the average worker to constantly think about data exposure. Lisa Plaggemier, executive director of the NCA, points to a case that illustrates how easily that risk can translate into real-world attacks.
“A financial services company integrated a GenAI chatbot to respond to customer inquiries,” Plaggemier tells Dark Reading. “Employees inadvertently entered customer financial information to provide context, which the chatbot then stored insecurely. This not only led to a significant data breach, but also allowed attackers to access sensitive customer information, demonstrating how easily confidential data can be compromised through the misuse of these tools.”
Galit Lubetzky Sharon, CEO of Wing, offers another concrete example (without naming names).
“An employee of a multinational company, for whom English was a second language, accepted an assignment in the United States,” she explains. “In order to improve his written communications with his US-based colleagues, he innocently started using Grammarly. Not knowing that the application was permitted to train on his data, the employee sometimes used Grammarly to polish communications involving confidential information and proprietary data. There was no malicious intent, but this scenario highlights the hidden risks of AI.”
A lack of training and the rise of “Shadow AI”
One of the reasons for the high percentage of people willing to roll the dice is certainly a lack of training. While the Samsungs of the world can take action to lock down AI use, the NCA survey found that 52% of employed participants have not yet received any training on the safe use of AI, compared with only 45% of respondents who actively use AI.
“This statistic suggests that many organizations underestimate the importance of training, perhaps due to budgetary constraints or a lack of understanding of the potential risks,” says Plaggemier. In the meantime, she adds, “this data highlights the gap between recognizing potential dangers and having the knowledge to mitigate them. Employees may understand that risks exist, but a lack of proper education leaves them vulnerable to the severity of these threats, especially in environments where productivity often takes precedence over security.”
Worse yet, this lack of knowledge contributes to the rise of “shadow AI,” in which unapproved tools are used outside of the organization’s security framework.
“As employees prioritize efficiency, they may adopt these tools without fully understanding the long-term implications for data security and compliance, leaving organizations vulnerable to significant risks,” warns Plaggemier.
It’s time for businesses to implement GenAI best practices
It’s clear that prioritizing immediate business needs over long-term security strategies can leave businesses vulnerable. But when it comes to deploying AI before security is ready, the golden lure of all those productivity improvements – sanctioned or not – can often prove too strong to resist.
“As AI systems become more commonplace, it is critical for organizations to view training not only as a compliance requirement, but also as a vital investment in protecting their data and brand integrity,” Plaggemier explains. “To effectively reduce risk exposure, companies should implement clear guidelines regarding the use of GenAI tools, including the types of information that can and cannot be shared.”
Morgan Wright, chief security advisor at SentinelOne, advocates starting the guideline-development process with first principles. “The biggest risk is not defining the problem you’re solving with chatbots,” he notes. “Understanding what needs to be addressed helps create the appropriate policies and operational safeguards to protect privacy and intellectual property. It’s emblematic of the old saying: ‘When you only have a hammer, the whole world is a nail.’”
There are also technological steps that organizations can take to mitigate AI risks.
“Implementing strict access controls and monitoring the use of these tools can also help mitigate risks,” Plaggemier adds. “Implementing data-masking techniques can prevent sensitive information from being entered into GenAI platforms. Regular audits and the use of AI monitoring tools can also ensure compliance and detect unauthorized attempts to access sensitive data.”
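That data-masking idea can be illustrated with a minimal sketch. The Python snippet below is not from the NCA or any vendor mentioned here; it shows a hypothetical redact_prompt() pre-processing step that strips a few obvious sensitive patterns before a prompt ever leaves the organization. The regexes and placeholder labels are illustrative assumptions, not a substitute for a proper data loss prevention (DLP) or masking product.

```python
import re

# Illustrative patterns for a few obvious sensitive-data shapes. A real
# deployment would rely on a dedicated DLP or data-masking tool rather
# than hand-rolled regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # rough payment-card shape
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US Social Security number format
}

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Customer Jane Doe (jane.doe@example.com, card 4111 1111 1111 1111) disputes a charge."
    print(redact_prompt(raw))
    # Customer Jane Doe ([EMAIL REDACTED], card [CARD REDACTED]) disputes a charge.
```

In practice, a filter like this could sit in an approved gateway or browser plug-in in front of whichever GenAI service employees are sanctioned to use, alongside the access controls and auditing Plaggemier describes.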
There are other ideas as well. “Some companies have limited the amount of data that can be entered into a query (e.g., 1,024 characters),” says Wright. “It could also involve segmenting the parts of the organization that deal with sensitive data. But for now, there is no clear solution or approach that can resolve this thorny issue to everyone’s satisfaction.”
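To make Wright’s character-limit example concrete, here is an equally small, hypothetical sketch of how an internal gateway might enforce such a cap before forwarding a request to a chatbot. The enforce_prompt_limit() helper and the gateway it implies are assumptions for illustration; only the 1,024-character figure comes from his example.

```python
# Minimal sketch, assuming a hypothetical internal gateway that proxies
# employee prompts to an external GenAI service and enforces a simple
# character budget before forwarding them.
MAX_PROMPT_CHARS = 1024

def enforce_prompt_limit(prompt: str) -> str:
    """Reject prompts that exceed the configured character budget."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError(
            f"Prompt is {len(prompt)} characters; the limit is {MAX_PROMPT_CHARS}. "
            "Trim the request (and strip any sensitive context) before resubmitting."
        )
    return prompt
```

A hard cap like this doesn’t stop sensitive data on its own, but it does limit how much context, and therefore how much proprietary information, can be pasted into any single query.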
The danger to businesses can also be exacerbated by adding GenAI capabilities to third-party software-as-a-service (SaaS) applications, warns Wing’s Sharon – an area that is too often overlooked.
“As new features are added, even for very well-known SaaS applications, the terms and conditions of those applications are often updated, and 99% of users don’t pay attention to them,” she explains. “It’s not uncommon for applications’ default settings to allow them to use data to train their AI models.”
She notes that an emerging category of SaaS security tools called SaaS Security Posture Management (SSPM) is developing ways to monitor which applications are using AI and even monitor changes to things like terms and conditions.
“Tools like this are useful for IT teams to assess risks and make changes to policies or even access on an ongoing basis,” she explains.