October is Cybersecurity Awareness Month, when we are all reminded to update the antivirus software on our devices, use strong passwords and multi-factor authentication, and be extra careful about email phishing scams.
However, one area where awareness appears to be lacking is the general understanding of the security and privacy risks associated with the use of AI at work.
Survey Shows Lack of AI Training Despite High Fear of AI
New research from the National Cybersecurity Alliance (NCA) finds a surprising – and troubling – lack of awareness among workers surveyed regarding the pitfalls of AI.
- 55% of participants who use AI at work said they had not received any training regarding the risks of AI.
- 65% of respondents expressed concern about some type of AI-related cybercrime.
- Yet despite this concern, 38% of employees – nearly four in ten – admitted to sharing confidential work information with an AI tool without their employer’s knowledge.
- The highest incidences of unauthorized sharing occurred among younger workers: Generation Z (46%) and Millennials (43%).
“Whenever I talk to people about AI, they don’t understand that the (AI) models are still learning, and they don’t understand that they are contributing to it, whether they know it or not,” explained Lisa Plaggemier, Executive Director of the NCA, during a Zoom call.
October is the busiest month of the year for Plaggemier, who gives dozens of talks to organizations to raise awareness about cybersecurity and the use of AI.
“I think the average person still thinks of these AI tools as if they were Google’s search function. We think about what we get, but we don’t think about what we give up, what we put in. I don’t think a lot of people understand that when they put information into an AI, it actually goes into the learning – the data lake of the training model.”
Training is not enough; effective training is key
Plaggemier said that while many financial and high-tech organizations have policies and procedures in place, the overwhelming majority of companies do not.
“I have seen financial services firms that are completely locked down. A technology company may publish a list of AI tools it has deemed safe to use in its environment. Then there are a number of companies that are somewhere in the middle, and there are still a number of organizations that haven’t figured out their AI policy at all,” she said.
She noted that the NCA offers talks and training to help spark discussions about AI and cybersecurity, but sometimes that’s not enough.
“Even the companies that provide training don’t have it fully under control, I think. I spoke to someone who works for a large Fortune 100 organization. He had just joined the company and completed its cybersecurity training – and it was really explicit about AI. Then he walked in and found a group of developers entering all of their code into an AI model – in direct violation of the policy and the training they had completed. Even the most experienced technical employees don’t always connect the dots,” Plaggemier said.
AI training in the workplace starts with leadership
She notes that individual workers must adhere to the AI policies and procedures put in place by their employer, but that companies must first establish those guidelines. “I really think it’s up to the employer to figure out what your policies are and how you’re going to take advantage of this technology while protecting yourself from risk,” Plaggemier concluded.