The uncontrolled spread of generative AI has already had notable effects, both good and bad, on the daily lives of cybersecurity professionals, according to a study published this week by the non-profit group ISC2. The study – which surveyed more than 1,120 cybersecurity professionals, most of them CISSP-certified and in leadership positions – revealed a considerable degree of optimism about the role of generative AI in security. More than four in five (82%) said they would at least “somewhat agree” that AI is likely to improve the efficiency with which they can do their jobs.
Respondents also saw many potential applications for generative AI in cybersecurity work, according to the study. Everything from actively detecting and blocking threats, to identifying potential security weaknesses, to analyzing user behavior was cited as a potential use case for generative AI. Automating repetitive tasks was also seen as a potentially valuable use of the technology.
Will generative AI help hackers more than security professionals?
There is, however, less consensus on whether the overall impact of generative AI will be positive from a cybersecurity perspective. Serious concerns about social engineering, deepfakes and misinformation – along with a slight majority saying AI could make parts of their jobs obsolete – mean more respondents think AI could benefit bad actors than think it could benefit security professionals.
“The fact that cybersecurity professionals name these types of information attacks and deception as their greatest concern is understandably worrying to organizations, governments, and citizens in this highly political year,” the study’s authors write.
In fact, some of the most important issues cited by respondents are less concrete cybersecurity concerns and more general regulatory and ethical ones. Fifty-nine percent said the current lack of regulation around generative AI is a real problem, with 55% citing privacy concerns and 52% saying data poisoning (accidental or otherwise) was a concern.
Due to these concerns, significant minorities said they were blocking employee access to generative AI tools: 12% said their ban was complete and 32% said it was partial. Just 29% said they allowed access to generative AI tools, while 27% said they hadn’t discussed the issue or were unsure of their organization’s policy in this area.