88% of cybersecurity professionals believe AI will have a significant impact on their jobs now or in the near future, and 35% have already witnessed its effects, according to ISC2’s AI study, AI Cyber 2024.
Impact of AI on cybersecurity professionals
Although the role of AI in managing cyber attacks is viewed very positively, these findings also reflect an urgent demand from professionals for the industry to be prepared to mitigate cyber risks and protect the entire ecosystem.
Respondents are very positive about the potential of AI. Overall, 82% agree that AI will improve the effectiveness of their work as cybersecurity professionals. At the same time, 56% note that AI will make parts of their job obsolete.
Again, the obsolescence of some job functions is not necessarily negative; rather, it reflects the evolving role of people in cybersecurity in the face of autonomous and rapidly advancing software solutions, particularly those that take over repetitive and time-consuming cybersecurity tasks.
75% of respondents are moderately to extremely concerned about AI being used for cyberattacks or other malicious activities. Deepfakes, disinformation and social engineering are the three main concerns of cyber professionals.
Focus on AI governance
There is a growing disparity between AI expertise and the level of preparedness to address these concerns. While 60% are confident in their ability to lead their organization’s secure adoption of AI, 41% of participants have minimal or no expertise in securing AI and ML technologies, and 82% believe there is a need for comprehensive, specific regulations governing the safe and ethical use of AI.
Despite concerns, only 27% of participants said their organization had formal policies in place on the safe and ethical use of AI, and 39% said their organization was currently discussing a formal policy. When asked “who should regulate the safe and ethical use of AI,” the study found that cyber professionals hope for global coordination between national governments and a consortium of AI experts.
“Cybersecurity professionals are anticipating both the opportunities and challenges that AI presents, and are concerned that their organizations lack the expertise and awareness to securely introduce AI into their operations,” said ISC2 CEO Clar Rosso, CC. “This creates a tremendous opportunity for cybersecurity professionals to lead, applying their expertise in secure technology and ensuring its safe and ethical use. In fact, ISC2 has developed AI workshops to foster the expert-led collaboration that cybersecurity workers need to meet this challenge.”
Respondents were clear that governments need to take more leadership if organizational policy is to catch up, although 72% agreed that different types of AI will require tailored regulations. Regardless, 63% said regulation of AI should come from collaborative government efforts (ensuring standardization across borders) and 54% want national governments to take the lead.
The survey found that 12% of respondents said their organization had blocked all access to generative AI tools in the workplace.
AI is everywhere, and while the cybersecurity industry has quickly adopted AI and ML as part of its latest generation of defense and surveillance technologies, so have bad actors, who rely on the same technology to increase the sophistication, speed and precision of their cybercrime activities.