Some call it “ghost AI.”
Others call it “the new shadow IT for generative AI (GenAI).”
Many others use traditional terms such as “governance for GenAI” or “GenAI policies and compliance.”
Cyber professionals use related terms with broader implications, such as “secure GenAI,” “GenAI cybersecurity,” or “GenAI security policies.”
And a growing number of public and private sector groups prefer to call this “guardrails for GenAI.”
But whatever terminology you use, conversations on this topic have emerged around the world. Everyone is trying to “get to yes” in our rapidly evolving world of GenAI applications, many of which are currently available for free through your favorite internet browser or as an app on your smartphone.
Meanwhile, my favorite prediction for 2024 is that “Bring Your Own AI (BYOAI) will dominate businesses”.
As I shared on this Digital Decoding podcast, GenAI is dominating the conversation when it comes to top cybersecurity predictions for 2024. But CISOs are struggling to gain visibility into which of these AI tools their company’s end users are actually using right now.
This GenAI conversation was the focus of this recent Kiteworks webinar:
WHAT ARE THE SECURITY ISSUES WITH THE FREE GenAI APPS?
Last fall, Forbes published this article contributed by Dell: What Is Shadow AI and What Can IT Do About It? Here is an excerpt:
“Shadow AI is a term describing the unauthorized or ad hoc use of generative AI within an organization outside of IT governance. Research shows that about 49% of people have used generative AI, and more than a third use it daily, according to Salesforce. In the workplace, this might mean employees accessing generative AI tools like ChatGPT to perform tasks like drafting text, creating images, or even writing code. For IT, this can translate into a governance nightmare that requires deciding what use of AI to allow or restrict to support staff while keeping the business secure.
“As if that wasn’t enough for IT, the use of generative AI is accelerating. According to the same Salesforce survey, 52% of respondents said their use of generative AI is increasing compared to when they started. This means the threat of shadow AI is here for IT – and it’s growing.”
In early summer 2023, I was one of the first to write about the new challenges emerging for cybersecurity teams around the world in this viral CSO Magazine article: Has generative AI quietly ushered in a new era of shadow IT on steroids? Here is an excerpt:
“What concerns me is not the variety, productivity gains, or many other benefits of GenAI tools. It’s more a question of whether these new tools now constitute a sort of Trojan horse for businesses. Are end users taking matters into their own hands by using these apps and ignoring acceptable use policies and procedures on unapproved applications in the process? I believe the answer for many organizations is yes. …
“But what concerns me most is the astonishing growth of generative AI applications, as well as the speed with which these applications are being adopted for a myriad of reasons. Indeed, if the Internet can be described as an accelerator of good and evil – which I believe is true – generative AI accelerates this acceleration in both directions.
“Simply put, it’s hard to compete with free. Most organizations move slowly in acquiring new technology, and this budgeting and deployment process can take months or even years. End users, who are likely already violating policies by using these free generative AI tools, are generally reluctant to band together and insist that the company’s CTO (or other executives) purchase new products that could end up costing millions of dollars for business use over time. That ROI could come in the next few years, but in the meantime they’re experimenting with free versions because everyone else is doing it.
After I wrote this article, CIO Magazine proclaimed: Shadow AI will be much worse than Shadow IT.
“Shadow AI has the potential to eclipse Shadow IT. How and why, you ask? With Shadow IT, your developers were really the only points of failure in the equation; with generative AI, every user has the potential to be one. This means you have to rely on everyone, from administrators to executives, to make the right decision every time they use GenAI. This requires that you place a high degree of trust in user behavior, but it also requires your users to self-govern in a way that could cripple their own speed and agility if they constantly question their own actions. There is an easier way, but we’ll get to that later.”
(As an aside, I actually think shadow AI is a subset of shadow IT, so I’m not sure this statement makes logical sense. When I worked for the state of Michigan, end users sometimes used their own cloud technology rather than an enterprise solution. Nevertheless, I agree that shadow AI has significantly accelerated the shadow IT problem in new ways.)
WE’VE BEEN HERE BEFORE: A QUICK HISTORY LESSON
Yes, we have encountered similar issues before. As I have written many times, we must learn from history.
In an article titled “Shadow AI represents a new generation of threats to enterprise IT,” the authors identify a series of risks that need to be considered with shadow AI. These include:
- Functional risks
- Operational risks
- Legal risks
- Resource risks
They recommend starting with leadership:
“First, leaders need to know how much money is being spent on AI – sanctioned or not.
“Second, groups previously working outside the scope of institutional risk controls need to be brought into the fold. Their projects must comply with the company’s risk management requirements, and even its technical choices.”
Third, the authors of the article encourage the classification of data, the creation of a set of AI policies, and the education and training of employees.
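The data-classification step the authors recommend can be made concrete. Below is a minimal, hypothetical sketch (not any vendor’s product) of a classification gate that checks text for sensitive patterns before it is allowed to be pasted into an external GenAI tool; the labels and regex patterns are illustrative assumptions, and a real deployment would use the organization’s own classification scheme and DLP tooling.

```python
import re

# Illustrative sensitivity patterns -- assumptions for this sketch only.
# A real program would use the organization's data-classification policy.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),    # secret-key-style token
}

def classify(text: str) -> str:
    """Return 'restricted' if the text matches any sensitive pattern, else 'public'."""
    for pattern in SENSITIVE_PATTERNS.values():
        if pattern.search(text):
            return "restricted"
    return "public"

def allowed_for_genai(text: str) -> bool:
    """Policy gate: only 'public' data may be sent to an external GenAI app."""
    return classify(text) == "public"
```

For example, `allowed_for_genai("Draft a thank-you note")` passes, while text containing an SSN-shaped string is blocked. The point is not the regexes themselves but that the policy decision happens before data leaves the organization.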
In the CSO Magazine article, I outline pragmatic steps security teams should take, such as:
“Some readers may think we already covered this shadow IT problem years ago: it’s a classic Cloud Access Security Broker (CASB) problem. To a large extent, they would be right. Companies such as Netskope and Zscaler, known for offering CASB solutions in their product suites, offer toolsets to manage corporate policies for generative AI applications.
“Undoubtedly, other solutions are available to help manage generative AI applications from leading CASB vendors, and this article provides more potential CASB options. However, these CASB solutions must be deployed and configured correctly for CASB to contribute to governance.
“To be clear, CASB tools still don’t solve all the problems with your generative AI applications. Organizations still need to answer other questions related to licensing, application proliferation, security and privacy policies and procedures, and more. There are also training, product evaluation, and workflow management considerations to take into account. Simply put, who is exploring the different options and determining which generative AI approaches are most relevant to your public or private sector organization or particular industry?”
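At their core, the CASB policies described above come down to a per-destination decision on outbound traffic. Here is a minimal sketch of that idea; the domain names and the three-action model (allow, coach, block) are assumptions for illustration, and real CASB products express these rules through their own policy consoles rather than code like this.

```python
from urllib.parse import urlparse

# Hypothetical policy table for destinations already categorized as GenAI apps.
# "coach" means: allow the request but show the user an acceptable-use reminder.
GENAI_POLICY = {
    "chat.openai.com": "coach",
    "gemini.google.com": "allow",
}

def decide(url: str) -> str:
    """Return the policy action (allow / coach / block) for an outbound request."""
    host = urlparse(url).hostname or ""
    # Default-deny: any GenAI destination not explicitly sanctioned is blocked.
    return GENAI_POLICY.get(host, "block")
```

The design choice worth noting is the default-deny fallback: unsanctioned GenAI apps are blocked unless someone has made a deliberate governance decision about them, which is exactly the visibility gap CISOs say they are struggling with.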
FINAL THOUGHTS
I like this Government Technology article, which describes how AI ushered in a new era and how AI was included in the 2024 State of the State addresses (with each rated 1-5 stars depending on the amount of technology mentioned).
What is clear is that this problem continues to get worse and will not go away. It is necessary for federal, state, and local cybersecurity teams to take steps to assist in monitoring and managing ongoing safeguards for the use of GenAI.
This July 2023 guide from the UK National Cyber Security Centre can help with shadow IT (and I think shadow AI is included under that umbrella; see the section on cloud services).
Another VentureBeat article, “Why IT leaders should rely on shadow AI to modernize governance,” also offers several useful solutions.