New research from Menlo Security reveals how the explosive growth of generative AI is creating new cybersecurity challenges for businesses. As tools like ChatGPT become integrated into everyday workflows, businesses urgently need to re-evaluate their security strategies.
“Employees are integrating AI into their daily work. Controls can’t just block it, but we also can’t let it run wild,” Andrew Harding, vice president of product marketing at Menlo Security, said in an exclusive interview with VentureBeat. “There has been steady growth in generative AI site visits and power users across the enterprise, but challenges persist for security and IT teams. We need tools that apply controls to AI tools and help CISOs manage this risk while supporting the productivity gains and insights that GenAI can generate.”
An increase in the use and abuse of AI
The new report from Menlo Security paints a worrying picture. Visits to generative AI sites within enterprises have increased by more than 100% in the last six months alone. The number of frequent generative AI users also jumped 64% during the same period. But this ubiquitous integration into everyday workflows has revealed dangerous new vulnerabilities.
While many organizations are rightly adopting more security policies around the use of generative AI, most are using an ineffective domain-by-domain approach, researchers say. As Harding told VentureBeat: “Organizations are beefing up security measures, but there’s a catch. Most apply these policies only on a domain basis, which is no longer enough.”
This piecemeal tactic simply cannot keep pace with the constant emergence of new generative AI platforms. The report found that attempts to upload files to generative AI sites rose by an alarming 80% over six months, a direct result of platforms adding file-upload features. And the risks go far beyond potential data loss through uploads.
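The weakness of domain-based policy is easy to see in miniature. The sketch below (a hypothetical blocklist and helper, not Menlo Security's actual product logic) shows how a static list of known GenAI domains must be updated by hand before it catches a newly launched platform:

```python
from urllib.parse import urlparse

# Hypothetical static blocklist -- every new generative AI site
# must be added manually before the policy takes effect.
BLOCKED_GENAI_DOMAINS = {
    "chat.openai.com",
    "bard.google.com",
}

def is_blocked(url: str) -> bool:
    """Return True only if the URL's host is on the static blocklist."""
    host = urlparse(url).hostname or ""
    return host in BLOCKED_GENAI_DOMAINS

# A known site is caught, but a newly launched platform slips through:
print(is_blocked("https://chat.openai.com/c/abc"))     # True
print(is_blocked("https://brand-new-genai.example"))   # False
```

Each new platform creates a window of exposure until someone updates the list, which is exactly the gap the report describes.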
Researchers warn that generative AI could also seriously amplify phishing scams. As Harding pointed out, “AI-based phishing is just smarter phishing. Businesses need real-time phishing protection that would prevent OpenAI ‘phishing’ from becoming a problem in the first place.”
From novelty to necessity
So how did we get here? Generative AI exploded seemingly overnight with ChatGPT-mania sweeping the world. However, the technology has emerged gradually over years of research.
OpenAI launched its first generative AI system, GPT-1 (Generative Pre-trained Transformer), in June 2018. This and other early systems were limited but demonstrated the technology’s potential. In April 2022, Google Brain built on this foundation with PaLM, an AI model with 540 billion parameters.
When OpenAI unveiled DALL-E for image generation in early 2021, generative AI captured widespread public intrigue. But it was OpenAI’s ChatGPT debut in November 2022 that really started the frenzy.
Almost immediately, users began integrating ChatGPT and similar tools into their daily workflows. People casually queried the bot for everything from crafting the perfect email to debugging code. It seemed like AI could do almost anything.
But for businesses, this dazzling integration introduces major risks often overlooked in the media hype. Generative AI systems are inherently only as safe, ethical and accurate as the data used to train them. They can unintentionally reveal biases, spread misinformation and expose sensitive data.
These models extract training data from vast swaths of the public Internet. Without rigorous monitoring, control over ingested content is limited. So if proprietary information is published online, models can easily absorb that data – and leak it later.
The balancing act
So what can be done to balance security and innovation? Experts advocate a tiered approach. As Harding recommends, this includes “copy-and-paste limits, security policies, session monitoring, and group-level controls on generative AI platforms.”
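Those layered controls can be sketched as a single policy check applied to each GenAI session event. The groups, limits and return values below are illustrative assumptions, not Menlo Security's implementation:

```python
from dataclasses import dataclass

# Assumed values, for illustration only.
PASTE_CHAR_LIMIT = 1000
ALLOWED_GROUPS = {"engineering", "marketing"}

@dataclass
class SessionEvent:
    user_group: str
    action: str        # "paste" or "upload"
    payload_size: int  # characters pasted or bytes uploaded

def evaluate(event: SessionEvent) -> str:
    """Return 'allow', 'block', or 'log' for a generative AI session event."""
    if event.user_group not in ALLOWED_GROUPS:
        return "block"   # group-level control on GenAI platforms
    if event.action == "paste" and event.payload_size > PASTE_CHAR_LIMIT:
        return "block"   # copy-and-paste limit
    if event.action == "upload":
        return "log"     # session monitoring: flag uploads for review
    return "allow"

print(evaluate(SessionEvent("engineering", "paste", 200)))    # allow
print(evaluate(SessionEvent("finance", "paste", 200)))        # block
print(evaluate(SessionEvent("marketing", "upload", 50_000)))  # log
```

The point of the tiered design is that no single control has to be perfect: group policy, paste limits and session monitoring each catch a different class of risky behavior.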
Past is prologue: organizations must learn from previous technology inflection points. Widely adopted technologies like cloud, mobile and the web each introduced new risks, and companies gradually adapted their security strategies to match the changing paradigms.
The same measured, proactive approach is required for generative AI, and the window for action is closing quickly. As Harding warned, “There has been steady growth in generative AI site visits and power users in the enterprise, but challenges persist for security and IT teams.”
Security strategies must evolve – and quickly – to accommodate the unprecedented adoption of generative AI in organizations. For businesses, it is imperative to find the balance between security and innovation. Otherwise, generative AI risks spinning dangerously out of control.
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.