Shadow IT – the use of software, hardware, systems and services that have not been approved by an organization’s IT or IT security departments – has been a problem for around 20 years, and it remains difficult for IT managers to control effectively.
Similar to shadow IT, shadow AI refers to any AI-based products and platforms used within your organization that these departments are unaware of. While personal use of an AI application may seem harmless and low risk, Samsung, for example, suffered immediate repercussions when its employees’ use of ChatGPT resulted in sensitive intellectual property leaking online.
But the risk of shadow AI is threefold:
1) Entering data or content into these applications may put intellectual property at risk
2) As the number of AI-enabled applications increases, the risks of misuse also increase, with data governance and regulations such as GDPR being key considerations
3) There is a reputational risk linked to the uncontrolled proliferation of AI. With far-reaching ramifications for regulatory violations, this poses a real headache for IT teams trying to keep track of these tools.
Mitigating the Risks of Shadow AI
Four steps must be taken to mitigate the threat posed by shadow AI. All are interdependent, and the absence of any of the four will leave a gap in mitigation:
1. Rank your use of AI
Establishing a risk matrix for the use of AI within your organization and defining how it will be used will allow you to have productive conversations about the use of AI for the entire business.
Risk can be considered on a continuum, from the low risk of using GenAI as a “virtual assistant”, through “co-pilot” applications, up to higher-risk areas such as data integration and embedding AI in your own products.
Categorizing based on the company’s potential risk appetite will allow you to determine which AI-based applications can be approved for use in your organization. This will be of critical importance when developing your acceptable use policy, training and discovery processes.
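A risk matrix like the one described above can be encoded as data so that new AI use cases are assessed consistently. The sketch below is a minimal illustration; the category names and tiers are hypothetical examples, and your matrix should reflect your own risk appetite.

```python
# Hypothetical risk matrix mapping AI use categories to risk tiers.
# Categories and tiers are illustrative assumptions, not a standard.
RISK_MATRIX = {
    "virtual_assistant": "low",      # e.g. drafting text with GenAI
    "copilot": "medium",             # e.g. code or document co-pilots
    "data_integration": "high",      # feeding internal data into AI services
    "embedded_product_ai": "high",   # AI shipped inside your own products
}

def classify_ai_use(category: str) -> str:
    """Return the risk tier for an AI use category, defaulting to 'high'
    for anything not yet assessed (unknown use is the riskiest)."""
    return RISK_MATRIX.get(category, "high")

print(classify_ai_use("copilot"))       # medium
print(classify_ai_use("shadow_tool"))   # high (unassessed)
```

Defaulting unassessed categories to “high” mirrors the point of this step: anything that has not been through the risk conversation should not be treated as approved.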
2. Build an Acceptable Use Policy
Once your use of AI has been classified, an acceptable use policy for your entire organization should be defined to ensure that all employees know exactly what they can and cannot do when they interact with approved AI-based applications.
Clearly explaining what constitutes acceptable use is essential to ensuring the security of your data and will allow you to take enforcement action if necessary.
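One way to make an acceptable use policy enforceable rather than purely documentary is to express it as data that tooling can check. This is a minimal sketch under assumed app names and data classifications; the applications, levels, and limits are hypothetical placeholders.

```python
# Hypothetical policy: approved app -> highest data classification it may receive.
ACCEPTABLE_USE = {
    "ApprovedChatbot": "public",
    "InternalCopilot": "internal",
}

# Ordered from least to most sensitive (illustrative classification scheme).
DATA_LEVELS = ["public", "internal", "confidential"]

def is_use_allowed(app: str, data_class: str) -> bool:
    """True if the app is approved and the data class is within its limit.
    Unapproved apps are rejected outright: they are shadow AI by definition."""
    limit = ACCEPTABLE_USE.get(app)
    if limit is None:
        return False
    return DATA_LEVELS.index(data_class) <= DATA_LEVELS.index(limit)

print(is_use_allowed("InternalCopilot", "internal"))      # True
print(is_use_allowed("InternalCopilot", "confidential"))  # False
print(is_use_allowed("RandomAITool", "public"))           # False
```

Encoding the policy this way also gives you the audit trail needed for the enforcement action mentioned above.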
3. Create employee training based on your AI usage and acceptable use policy, and ensure all employees complete the training.
Generative AI is as fundamental as the introduction of the Internet to the workplace. Training should start from the outset to ensure employees know what they are using and how to use it effectively and safely.
Transformative technologies always require a learning curve, and individuals cannot be left to their own devices when these skills are so important. Investing now in your employees’ ability to use generative AI safely will both help your organization’s productivity and mitigate data misuse.
4. Have the right discovery tools to monitor active use of AI within your organization
IT asset management (ITAM) tools were working on AI discovery capabilities even before ChatGPT made headlines last year. Organizations can only manage what they can see, and this is even more true for AI-based applications: many are free and leave no traditional paper trail, such as expense receipts or purchase orders, by which they could be tracked.
This is particularly important for tools incorporating AI where the user is not necessarily aware that AI is being used. Many employees do not understand the intellectual property implications in such circumstances, which makes active monitoring with an ITAM solution featuring software asset discovery for AI tools essential.
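Where a full ITAM solution is not yet in place, a first approximation of discovery is to scan proxy or DNS logs for traffic to known AI services. The sketch below makes assumptions about the log format, and the domain list is a small illustrative sample, not a complete inventory.

```python
# Illustrative sample of AI service domains; a real watchlist would be
# much larger and maintained continuously.
KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def find_ai_usage(log_lines):
    """Return the set of known AI service domains seen in proxy/DNS log lines."""
    seen = set()
    for line in log_lines:
        for domain in KNOWN_AI_DOMAINS:
            if domain in line:
                seen.add(domain)
    return seen

# Assumed log format: timestamp, user, source IP, method, URL, status.
logs = [
    "2024-01-10 alice 10.0.0.5 GET https://chat.openai.com/ 200",
    "2024-01-10 bob 10.0.0.7 GET https://intranet.example.com/ 200",
]
print(sorted(find_ai_usage(logs)))  # ['chat.openai.com']
```

This catches only traffic to domains you already know about, which is exactly why the article argues for purpose-built discovery: embedded AI features inside otherwise approved software will not show up in a domain watchlist.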
A strong security posture requires the implementation of these four steps; without all four pieces, there is a hole in your Shadow AI defense system.
Conclusion
While no industry is more at risk from shadow AI than another, large organizations or well-known brands are generally most likely to suffer significant reputational damage from its implications, and they should take a more cautious approach.
Industries and businesses of all sizes need to reap the benefits of AI. However, having appropriate procedures and guidance in place as part of an integrated cybersecurity strategy is a crucial part of adopting this transformative technology.
AI has already brought permanent changes to the way organizations operate, and embracing this change will set businesses up for future success.
Generative AI is yet another technology where perimeter-level threat prevention can only be partially successful. We must detect what is being used in the shadows.