Do the Security Benefits of Generative AI Outweigh the Disadvantages?
Only 39% of security professionals say the rewards exceed the risks, according to a new report from CrowdStrike.
In 2024, CrowdStrike surveyed 1,022 security researchers and practitioners across the US, APAC, EMEA, and other regions. The results revealed that cyber professionals are deeply concerned about the challenges associated with AI. While 64% of those surveyed have either purchased generative AI tools for work or are researching them, the majority remains cautious: 32% are still exploring the tools, while only 6% are actively using them.
What are security researchers looking for in generative AI?
According to the report:
- The most important motivation for adopting generative AI is not addressing a skills shortage or meeting executive mandates, but improving the ability to respond to and defend against cyberattacks.
- General-purpose AI may not appeal to cybersecurity professionals. Instead, they want generative AI combined with security expertise.
- 40% of respondents said the rewards and risks of generative AI are “comparable.” Meanwhile, 39% said the rewards outweigh the risks, and 26% said they do not.
“Security teams want to deploy GenAI as part of a platform to derive more value from existing tools, improve the analyst experience, accelerate integration, and eliminate the complexity of integrating new point solutions,” the report said.
Measuring ROI is an ongoing challenge when adopting generative AI products. CrowdStrike found that quantifying ROI was the top economic concern among respondents. The next two biggest concerns were the cost of licensing AI tools and unpredictable or confusing pricing models.
CrowdStrike grouped the ways respondents assess AI ROI into four categories, ranked by importance:
- Cost optimization through platform consolidation and more efficient use of security tools (31%).
- Reduced security incidents (30%).
- Less time spent managing security tools (26%).
- Shorter training cycles and associated costs (13%).
Adding AI to an existing platform rather than purchasing a standalone AI product could “realize additional savings associated with broader platform consolidation efforts,” CrowdStrike said.
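To make the ROI math concrete, here is a minimal sketch in Python of how a team might roll savings from the four categories above up against licensing costs. Every dollar figure and variable name is an illustrative assumption, not data from the CrowdStrike report.

```python
# Hypothetical ROI roll-up for a GenAI security tool purchase.
# All figures are illustrative assumptions, not CrowdStrike data.

annual_license_cost = 120_000  # assumed GenAI licensing cost (USD/year)

# Estimated annual savings, mirroring the report's four ROI categories
savings = {
    "platform_consolidation": 60_000,   # retired point tools, cheaper licenses
    "fewer_incidents": 45_000,          # avoided incident-response costs
    "tool_management_time": 30_000,     # analyst hours reclaimed
    "shorter_training": 10_000,         # reduced onboarding/training spend
}

total_savings = sum(savings.values())
roi = (total_savings - annual_license_cost) / annual_license_cost

print(f"Total estimated savings: ${total_savings:,}")
print(f"ROI: {roi:.0%}")  # 21% with these assumed figures
```

Even a rough model like this forces the licensing and pricing concerns respondents cited into the same ledger as the projected savings.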
SEE: A ransomware group claimed responsibility for a late November cyberattack that disrupted operations at Starbucks and other organizations.
Could generative AI introduce more security problems than it solves?
Conversely, generative AI itself must be secured. CrowdStrike’s survey found that security professionals were most concerned about sensitive data being exposed to the LLMs behind AI products and about attacks launched against generative AI tools.
Other concerns include:
- A lack of guardrails or controls in generative AI tools.
- AI hallucinations.
- Insufficient public policy regulations for the use of generative AI.
Nearly all respondents (about 9 in 10) said their organization has either implemented new security policies or is developing policies around generative AI governance within the next year.
How Organizations Can Leverage AI to Protect Against Cyber Threats
Generative AI can be used for brainstorming, research, or analysis, with the understanding that its output often needs to be double-checked. Generative AI can pull data from disparate sources into a single window in different formats, reducing the time it takes to investigate an incident. Many automated security platforms offer generative AI assistants, such as Microsoft Security Copilot.
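As a rough illustration of that consolidation step, the sketch below pulls records from several hypothetical log files into a single prompt and asks an LLM for an incident summary. The file names and prompt are assumptions, and the OpenAI Python client is used only as one example backend; Security Copilot and similar assistants expose their own interfaces.

```python
# Sketch: consolidating logs from disparate sources into one LLM-assisted
# incident summary. Log paths and prompt wording are illustrative
# assumptions; swap in whichever LLM backend your platform provides.
from pathlib import Path
from openai import OpenAI  # pip install openai

SOURCES = ["firewall.log", "edr_alerts.json", "auth_events.csv"]  # hypothetical files

def gather_evidence(paths):
    """Concatenate raw records from each source, tagged by origin."""
    chunks = []
    for p in paths:
        text = Path(p).read_text(errors="replace")
        chunks.append(f"--- {p} ---\n{text[:4000]}")  # truncate to keep the prompt small
    return "\n\n".join(chunks)

def summarize_incident(evidence: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Summarize the likely incident "
                        "timeline and flag indicators of compromise. Note uncertainty."},
            {"role": "user", "content": evidence},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_incident(gather_evidence(SOURCES)))
```

As the article notes, any summary produced this way still needs to be double-checked by an analyst before it drives a response decision.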
GenAI can protect against cyber threats via:
- Threat detection and analysis.
- Automated incident response.
- Phishing detection (see the sketch after this list).
- Improved security analytics.
- Synthetic data for training.
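As a toy illustration of two of these items together, phishing detection and synthetic data for training, the following sketch trains a scikit-learn text classifier on a handful of fabricated messages. Every message here is invented for illustration; a real system would need far larger, curated datasets.

```python
# Toy phishing detector trained on synthetic (fabricated) examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic training data: hypothetical phishing and benign messages
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice is attached, click here to confirm payment",
    "Team lunch moved to 1pm on Thursday",
    "Quarterly report draft attached for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

# TF-IDF features feeding a logistic regression classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please confirm your password to avoid account suspension"]
print(model.predict_proba(suspect))  # columns: [P(benign), P(phishing)]
```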
However, organizations should consider security and privacy controls as part of any generative AI purchase. These controls help protect sensitive data, maintain regulatory compliance, and mitigate risks such as data breaches or misuse. Without appropriate safeguards, AI tools can expose vulnerabilities, generate harmful output, or violate privacy laws, resulting in financial, legal, and reputational damage.