According to a new study from ISACA, almost half of companies exclude cybersecurity teams when developing, integrating and implementing AI solutions.
Only about a quarter (26%) of cybersecurity professionals or teams in Oceania are involved in developing policy governing the use of AI technology in their business, and almost half (45%) report no involvement in the development, integration or implementation of AI solutions, according to the 2024 State of Cybersecurity survey report recently released by global IT professionals association ISACA.
In response to new questions posed by the annual Adobe-sponsored study, security teams in Oceania indicated that they primarily use AI to:
- Automation of threat detection/response (36% vs. 28% globally);
- Endpoint security (33% vs. 27% globally);
- Automation of routine security tasks (22% vs. 24% globally); and
- Fraud detection (6% vs. 13% globally).
Jamie Norton, an Australian-based cybersecurity expert and ISACA board member, highlighted the critical role of cybersecurity professionals in shaping AI policy.
“ISACA’s findings reveal a significant gap. Only around a quarter of cybersecurity professionals in Oceania are involved in AI policymaking, a worrying statistic given the growing presence of AI technologies across all sectors,” he said. “The integration of AI into cybersecurity and broader business solutions must be guided by responsible policies. Cyber professionals are essential in this process to ensure that AI is implemented securely, ethically and in compliance with regulatory standards. Without their expertise, organizations are exposed to unnecessary vulnerabilities.”
To help cybersecurity professionals engage in creating and integrating AI policies, ISACA has developed a comprehensive document, Considerations for Implementing a Generative Artificial Intelligence Policy, as well as other resources and certifications.
“Cybersecurity teams are uniquely positioned to develop and protect AI systems, but it is important that we equip them with the tools necessary to navigate this transformative technology,” added Norton. “ISACA’s guidance document on AI provides a valuable roadmap, addressing critical issues such as how to secure AI systems, adhere to ethical principles and define conditions of acceptable use.”
Explore the latest developments in AI
In addition to the AI-related findings in the 2024 State of Cybersecurity survey report, ISACA has developed AI resources to help cybersecurity and other digital trust professionals navigate this new technology.
This includes a white paper on the EU AI Act. Businesses should be aware of the timeline and the actions required under the EU AI Act, which establishes requirements for certain AI systems used in the European Union and prohibits certain uses of AI; most of its provisions will apply from August 2, 2026. The new white paper, Understanding the EU AI Act: Requirements and Next Steps, recommends several key steps, including establishing audits and traceability, adapting existing cybersecurity and privacy policies and programs, and designating an AI lead responsible for overseeing both the AI tools in use and the company’s broader approach to AI.
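To make the audits-and-traceability step concrete, here is a minimal sketch of what an AI-usage audit record might look like. The `log_ai_event` helper, field names and file format are hypothetical illustrations, not something prescribed by the ISACA white paper or the EU AI Act itself.

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_event(tool: str, purpose: str, user: str, data_categories: list[str]) -> dict:
    """Append a traceability record for one use of an AI tool.

    Hypothetical illustration of 'audits and traceability': each record
    captures who used which AI tool, for what purpose, and what
    categories of data were involved.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "purpose": purpose,
        "user": user,
        "data_categories": data_categories,
    }
    # Append-only log, so records can be reviewed later during an audit.
    with open("ai_usage_log.jsonl", "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

# Example: record a generative AI tool being used to draft a report.
log_ai_event(
    tool="gen-ai-assistant",
    purpose="draft quarterly risk report",
    user="j.smith",
    data_categories=["internal", "non-personal"],
)
```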
The second resource covers authentication in the age of deepfakes. Cybersecurity professionals should be aware of the benefits and risks of AI-based adaptive authentication, according to a new ISACA resource, Examining Authentication in the Deepfake Era. While AI can strengthen security when used in adaptive authentication systems that adjust to each user’s behavior, making it harder for attackers to gain access, AI systems can also be manipulated through adversarial attacks, are susceptible to bias in AI algorithms, and can raise ethical and privacy concerns. Other developments, including research into integrating AI with quantum computing that could have implications for cybersecurity authentication, should also be monitored, the paper notes.
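As a rough illustration of how behavior-aware adaptive authentication works, the sketch below scores a login attempt against a user’s usual patterns and steps up to a stronger challenge as the score rises. The features, weights and thresholds are invented for illustration and are not drawn from the ISACA paper.

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    user_id: str
    device_known: bool    # device seen before for this user
    geo_usual: bool       # login from the user's usual region
    typing_match: float   # 0.0-1.0 similarity to the user's typing rhythm
    hour_usual: bool      # login during the user's typical hours

def risk_score(a: LoginAttempt) -> float:
    """Combine behavioral signals into one risk score (0 = low risk).

    Illustrative weights only; a real adaptive system would learn these
    from each user's history rather than hard-code them.
    """
    score = 0.0
    if not a.device_known:
        score += 0.35
    if not a.geo_usual:
        score += 0.25
    if not a.hour_usual:
        score += 0.10
    score += 0.30 * (1.0 - a.typing_match)  # deviation from typing profile
    return score

def decide(a: LoginAttempt) -> str:
    """Step up the authentication challenge as risk grows."""
    s = risk_score(a)
    if s < 0.3:
        return "allow"          # behavior matches the user's profile
    if s < 0.6:
        return "require_mfa"    # unusual enough to challenge
    return "deny_and_alert"     # likely attacker-driven attempt

# Example: known device, usual region, close typing match, odd hour.
attempt = LoginAttempt("j.smith", device_known=True, geo_usual=True,
                       typing_match=0.9, hour_usual=False)
print(decide(attempt))  # -> "allow" (score 0.13)
```

The same scoring idea is what makes such systems a target for adversarial manipulation: an attacker who can mimic enough behavioral signals can keep the score below the challenge threshold.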