Only 27% of cybersecurity professionals or teams in India are involved in developing policies governing the use of AI technology in their business, and half (50%) say they are not involved in the development, integration, or implementation of AI solutions, according to the 2024 State of Cybersecurity Survey Report recently released by ISACA, a global trade association that promotes trust in technology.
In response to new questions posed by the annual Adobe-sponsored study, which features feedback from more than 1,800 cybersecurity professionals worldwide on topics related to the cybersecurity workforce and threat landscape, security teams in India reported that they primarily use AI for:
- Endpoint security (31%)
- Automating threat detection and response (29%)
- Automating routine security tasks (27%)
- Fraud detection (17%)
“In light of cybersecurity staffing issues and increased stress among professionals facing a complex threat landscape, the potential of AI to automate and streamline certain tasks and alleviate workloads is certainly worth exploring,” says Jon Brandt, ISACA Director, Professional Practices and Innovation. “But cybersecurity leaders cannot focus solely on the role of AI in security operations. It is imperative that the security function is involved in the development, integration and implementation of any AI solution within their business – including existing products that will later receive AI capabilities.”
“AI holds promise for improving cybersecurity operations, but for the benefits to be fully realized, cybersecurity teams must be integrated into the AI governance process. The fact that only 27% of these teams in India are currently involved in AI policymaking is a missed opportunity to ensure that AI is implemented securely and responsibly,” says RV Raghu, Director of Versatilist Consulting India Pvt Ltd and Ambassador of ISACA India. “There is an urgent need for organizations to rethink how they integrate cybersecurity professionals into AI decision-making. Organizations should not overlook the strategic importance of collaboration between AI and cybersecurity experts.”
Explore the latest developments in AI
In addition to the findings of the 2024 State of Cybersecurity Survey Report, ISACA has developed AI resources to help cybersecurity and other digital trust professionals navigate this transformational technology:
White paper on the EU AI Act: Companies should be aware of the timing of, and actions required under, the EU AI Act, which sets requirements for certain AI systems used in the European Union and prohibits certain uses of AI, most of which will apply from August 2, 2026. The new white paper, Understanding the EU AI Act: Requirements and Next Steps, recommends key steps, including establishing audit and traceability mechanisms, adapting existing cybersecurity and privacy policies and programs, and designating an AI lead responsible for tracking the AI tools in use and the company’s broader approach to AI.
Authentication in the age of deepfakes: Cybersecurity professionals should be aware of both the benefits and risks of AI-based adaptive authentication, says ISACA’s new resource, Examining Authentication in the Deepfake Era. Although AI can improve security when used in adaptive authentication systems that adjust to each user’s behavior, making access more difficult for attackers, AI systems can also be manipulated through adversarial attacks, are susceptible to algorithmic bias, and raise ethical and privacy concerns. Other developments, including research into integrating AI with quantum computing that could have implications for cybersecurity authentication, should be monitored, the paper says.
Policy considerations on AI: Organizations adopting a generative AI policy can ask themselves a set of key questions to ensure they cover their bases, according to ISACA’s Considerations for Implementing a Generative Artificial Intelligence Policy, notably “Who is impacted by the scope of the policy?”, “What does good behavior look like, and what are the acceptable conditions of use?” and “How will your organization ensure legal and compliance requirements are met?”
Advancing AI knowledge and skills
ISACA has also expanded its training and accreditation options to help the professional community keep pace with the evolving AI and cybersecurity landscape:
Machine Learning: Neural Networks, Deep Learning, Large Language Models — ISACA’s latest on-demand AI course joins the recent Machine Learning for Business Enablement course, as well as others on topics such as AI essentials, governance, ethics, and auditing. All are accessible through the ISACA online portal at the learner’s convenience and offer continuing professional education (CPE) credits.
Certified Cybersecurity Operations Analyst – As emerging technologies such as automated systems using AI evolve, the role of the cyber analyst will become more critical in protecting digital ecosystems. ISACA’s upcoming Certified Cybersecurity Operations Analyst certification, launching in Q1 2025, focuses on the technical skills needed to assess threats, identify vulnerabilities, and recommend countermeasures to prevent cyber incidents.