Artificial intelligence (AI) in cybersecurity presents a complex picture of risks and rewards. According to Hyperproof’s 5th Annual Benchmark Report, AI technologies are at the forefront of both enabling sophisticated cyberattacks and strengthening defenses against them. This duality highlights the critical need for nuanced application and vigilant management of AI in cybersecurity risk management practices.
This blog provides a high-level overview of our findings from Chapter 2 of the report, focused on AI. Download the full report for more detailed exploration and to access additional chapters.
AI as an enabler of cybersecurity and business risks
AI represents a double-edged sword in the world of cybersecurity. Whether serving as a spear for threat actors to better attack businesses or helping defend those same organizations, the topic of AI is polarizing and nuanced in the cybersecurity space.
The dual capabilities of AI present a unique challenge – and opportunity – for Governance, Risk and Compliance (GRC) professionals. On the one hand, AI technology can streamline workflows, improve detection mechanisms, and provide predictive insights that can anticipate cyberattacks. On the other hand, the introduction of AI into business operations may also lead to new vulnerabilities.
Examples include more advanced phishing schemes and AI-based password-guessing techniques now facing the global market. According to our 2024 IT Risk and Compliance Benchmark Report, 39% of respondents are wary of the business risks posed by generative AI technologies, with 22% expressing extreme concern.
This apprehension is counterbalanced by optimism: 61% of respondents leverage AI to streamline control recommendations, and 59% use it to facilitate documentation review. Strategic implementation of AI can increase the effectiveness of security measures, but it also requires a new level of vigilance and adaptive security strategies to mitigate the risks introduced by these advanced technologies.
Global regulatory response and the need for AI frameworks
The global regulatory environment is rapidly adapting to the challenges and opportunities presented by AI in cybersecurity. As AI technologies evolve, it becomes necessary to put in place robust regulatory frameworks that can address the complex cybersecurity, privacy, and ethics implications of AI. 23% of respondents to our survey use the NIST Cybersecurity Framework (CSF) to manage AI risks, making it the most commonly used AI framework.
In response to the growing challenges of AI, the NIST AI Risk Management Framework (RMF), released in early 2023, has already become the second most commonly used framework for AI.
16% of respondents use the NIST AI RMF to address generative AI risks. This framework provides structured guidance on managing the risks AI technologies pose to individuals, organizations, and society. Other commonly used frameworks for addressing the risks presented by generative AI include ISO 15288 or ISO 12207 (12%), ISO 27001 (9%), and the NIST Privacy Framework (9%).
As regulators around the world continue to refine and introduce new guidance, organizations must remain agile, ensuring their AI deployments are transparent, accountable, and aligned with existing and emerging regulations to foster trust and ensure compliance.
Industry-specific AI concerns and practices
Concerns about AI risks vary widely across industries, particularly in highly regulated sectors like aviation, banking, FinTech, and health tech. These sectors report higher levels of concern due to the direct impact AI can have on critical operational aspects and the strict regulatory environments in which they operate.
For example, in banking and FinTech, AI presents both an opportunity to improve customer experience and a risk in terms of financial security and data privacy. The 2024 IT Risk and Compliance Benchmark Report reveals that 44% of respondents in the banking and FinTech sectors are “concerned” and 31% are “very concerned” about AI risks.
These numbers reflect the complex balance organizations must maintain in leveraging AI for competitive advantage and managing potential threats to avoid costly disruptions and ensure regulatory compliance.
Strategic responses to AI challenges
In light of the evolving AI landscape, organizations are reevaluating their cybersecurity strategies to integrate AI capabilities while managing the associated risks. Many organizations now prioritize regular internal audits and revise their existing control frameworks to better align with the AI-driven threat landscape.
An impressive 80% of respondents consider their AI strategy a crucial part of their operational planning, highlighting the growing recognition of the role of AI in improving cybersecurity posture. These strategic adaptations are not simply reactive measures but are part of a broader, proactive approach to cybersecurity that integrates AI as a central element of defense strategy. This allows organizations to not only defend themselves, but also more effectively anticipate potential cyber threats.
41% of respondents plan to conduct regular audits to mitigate business risks associated with AI tools in 2024. In addition, 40% plan to modify controls within an existing framework to integrate AI into their risk management program.
Another 35% plan to use a tool to monitor and evaluate generative AI system usage, while 32% plan to adopt an additional framework. This last point is particularly interesting, since 32% of respondents also say they have put off adding additional frameworks due to time spent on other daily manual tasks.
On the other end of the spectrum, only 3% plan to block or ban the use of generative AI tools within their organization. This suggests that embracing AI is likely a business imperative for many, as it gives businesses the opportunity to further streamline their operations, enabling a more efficient workforce.
Optimizing GRC with AI amid economic changes
The current economic climate has imposed constraints that are prompting organizations to seek more effective ways to manage GRC. AI is increasingly seen as an essential tool, offering the ability to automate routine tasks, streamline data analysis, and optimize regulatory compliance processes. This shift is particularly relevant as organizations are forced to do more with less, making AI an attractive option for improving GRC operations.
According to our survey results, automation and AI-driven analytics are playing a central role in transforming GRC workflows, enabling GRC professionals to focus on the strategic risk management and compliance activities that require more nuanced human judgment.
Looking ahead, the security and compliance professionals we surveyed were asked in which areas of AI they see the most potential when applying it to risk and compliance. 65% predict AI will help optimize workflows, while 52% say AI will make manual tasks easier to perform. Lastly, 49% expect AI to be most useful in automation.
How respondents use AI doesn’t stop there. We asked them whether they were using AI to streamline workflows, and the majority said yes. 61% use AI to recommend relevant controls for a given setting, while 59% said they use it to review documentation. Another 41% use AI to write policies, and finally, 30% use it to merge multiple documents.
Only 7% say they are not using AI to streamline their workflows.
Adopting AI with caution and curiosity
The findings of our annual benchmark report tell a clear story: AI’s role in cybersecurity is complex, with both promising opportunities and formidable challenges. As AI continues to reshape the world of cybersecurity, organizations must take a cautious but curious approach: leverage the benefits of AI to improve security and operational capabilities while staying aware of (and prepared for) the risks it introduces.
This balanced approach will require ongoing education, vigilant risk management, and a strategic commitment to aligning AI applications with business objectives and regulatory requirements. Ultimately, by navigating this complex landscape with informed strategies and robust frameworks, organizations can harness the potential of AI to secure their operations against an increasingly dynamic threat environment, ensuring resilience and compliance in an AI-integrated future.
To learn more about the IT risk and compliance landscape in 2024, download the full IT Risk and Compliance Benchmark Report today.
The post The Dual Benefits of AI in Cybersecurity: Lessons from the 2024 Benchmark Survey Report appeared first on Hyperproof.
***This is a Security Bloggers Network syndicated blog from Hyperproof written by Courtney Chatterton. Read the original post at: https://hyperproof.io/resource/ai-in-cybersecurity-2024-benchmark-report/