Main highlights:
- Security professionals express cautious optimism about the potential of generative AI to strengthen cybersecurity defenses, recognizing its ability to improve operational efficiency and threat response.
- Organizations are proactively developing governance structures for generative AI, recognizing the importance of establishing strong policies and enforcement mechanisms to mitigate associated risks.
- Generative AI is expected to become a key factor in cybersecurity purchasing decisions by the end of 2024, with its applications expected to be ubiquitous in security operations, underscoring the shift toward more enterprise-integrated cybersecurity solutions.
As the digital landscape evolves, so does the field of cybersecurity, now on the cusp of an era of transformation powered by generative AI. Research recently conducted by TechTarget’s Enterprise Strategy Group, sponsored by Check Point, uncovers compelling insights and statistics that highlight the critical role of generative AI in shaping the future of cybersecurity. To collect data for this report, TechTarget’s Enterprise Strategy Group conducted a comprehensive online survey of IT and cybersecurity professionals at private- and public-sector organizations in North America between November 6, 2023 and November 21, 2023. To qualify for this survey, respondents had to be involved in supporting, securing, and using generative AI technologies.
The main objectives of this research were:
- Identify current uses of and plans for generative AI.
- Establish how generative AI influences the balance of power between cyber adversaries and cyber defenders.
- Determine how organizations are approaching generative AI governance, policy, and policy enforcement.
- Explore how organizations will apply generative AI to cybersecurity use cases.
eBook – Generative AI for Cybersecurity – Research Sponsored by ESG – 2024
Here is an exploration of the results:
Generative AI is now well established and will be omnipresent by the end of 2024
92% of respondents agree that machine learning has improved the effectiveness and efficiency of cybersecurity technologies.
– The adoption paradox: While 87% of security professionals recognize the potential of generative AI to strengthen cybersecurity defenses, a sense of caution is palpable. This stems from the fact that the same technologies can also be exploited by adversaries to orchestrate more sophisticated cyberattacks.
– Strategic governance and policy development: A staggering 75% of organizations are not just passively observing, but actively developing governance policies for the use of generative AI in cybersecurity. This proactive approach indicates a significant move towards integrating AI into the cybersecurity fabric, ensuring its deployment is both effective and responsible.
– Investment and impact: Research points to a pivotal trend: by the end of 2024, generative AI will influence cybersecurity purchasing decisions for more than 60% of organizations. This statistic demonstrates growing confidence in AI’s capabilities to revolutionize security operations, from threat detection to incident response.
– Operational efficiency and threat response: One of the striking statistics from the study is that 80% of security teams surveyed expect generative AI to significantly improve operational efficiency. Additionally, 65% expect it to improve their threat response times, highlighting the technology’s potential to not only augment but actively accelerate security workflows.
– Challenges and concerns: Despite the optimism, the research also highlights prevailing concerns. Around 70% of respondents highlighted the challenge of integrating generative AI into existing security infrastructures, while 60% highlighted risks associated with potential bias and ethical considerations.
GenAI’s balance of power tilts toward the cyber adversary’s advantage
Of course, cyber adversaries also have access to open GenAI applications and have the technical capabilities to develop their own LLMs. WormGPT and FraudGPT are the first examples of LLMs designed for use by cybercriminals and hackers. Will cyber adversaries use and benefit from LLMs? More than three-quarters of respondents (76%) not only think so, but also believe that cyber adversaries will gain the greater benefit (compared to cyber defenders) from generative AI innovation. Alarmingly, most security professionals believe that cyber adversaries are already using GenAI and will continue to gain an advantage from new technologies. Respondents also believe GenAI could lead to an increase in threat volume because it would make it easier for unskilled cyber adversaries to develop more sophisticated attacks. Security and IT professionals are also concerned about deepfakes and automated attacks.
Conclusion: navigating the new frontier
ESG’s research illuminates the complex but promising horizon of generative AI in cybersecurity. It presents a narrative of cautious optimism, where the potential for innovation is balanced with an acute awareness of the challenges ahead. As organizations navigate this new frontier, the learnings from this study serve as a beacon, guiding the development of strategies that are not only technologically advanced, but also ethically grounded and strategically sound.
Essentially, the future of cybersecurity, as this research describes, is not just about adopting generative AI, but doing so in a thoughtful, responsible, and ultimately transformative way.