Since the introduction of ChatGPT in November 2022, generative AI (GenAI) has been described as everything from a novelty and economic boon to a threat to humanity. As this debate continued, GenAI took center stage at the RSA 2023 conference with the introduction of, and subsequent hype around, Microsoft Security Copilot. Since then, many other providers have introduced similar features. Few would object to the idea that GenAI (and AI in general) will have a profound impact on society and the global economy, but in the short term it introduces new risks as employees connect to GenAI applications, share data, and create large language models (LLMs) specific to their organizations. These actions will inevitably expand the attack surface, open new threat vectors, introduce software vulnerabilities, and lead to data leaks.
Despite these risks, generative AI holds great potential in cybersecurity. GenAI could help improve the productivity of security teams, accelerate threat detection, automate remediation actions, and guide incident response. These potential benefits are so compelling that many CISOs are already experimenting with GenAI or creating their own security-focused LLMs. At the same time, security professionals remain concerned about how cybercriminals could use GenAI as part of attack campaigns and how they can defend against these advances.
Have organizations adopted GenAI for cybersecurity today, and what will they do in the future? To better understand these trends, TechTarget’s Enterprise Strategy Group surveyed 370 IT and cybersecurity professionals at organizations across North America (U.S. and Canada) responsible for managing cyber-risk, threat intelligence analysis, and security operations, with visibility into current GenAI usage and strategic plans.