In this article, you’ll find excerpts from reports we’ve recently covered, which offer statistics and insights on the cybersecurity challenges and issues arising from the expansion of AI.
Security professionals are cautiously optimistic about AI
Cloud Security Alliance and Google Cloud | The State of AI and Security Survey Report | April 2024
- 55% of organizations plan to adopt GenAI solutions within the year, signaling a substantial increase in GenAI integration.
- 48% of professionals expressed confidence in their organization’s ability to implement a strategy to leverage AI for security.
- 12% of security professionals believe AI will completely replace their role.
AI abuse and disinformation campaigns threaten financial institutions
FS-ISAC | Navigating Cyber 2024 | March 2024
- Threat actors can use generative AI to write malware, and more experienced cybercriminals could exfiltrate information from, or inject tainted data into, the large language models (LLMs) that power GenAI.
- Recent advances in quantum computing and AI are expected to challenge established cryptographic algorithms.
Companies increasingly block AI transactions for security reasons
Zscaler | AI Security Report 2024 | March 2024
- Today, companies block 18.5% of all AI transactions, a 577% increase from April 2023 to January 2024, amounting to more than 2.6 billion blocked transactions.
- Some of the most popular AI tools are also the most blocked. Indeed, ChatGPT has the distinction of being both the most used and the most blocked AI application.
Fraudsters are exploiting tax season anxiety with AI tools
McAfee | Study on tax fraud 2024 | March 2024
- Of the people who clicked on fraudulent links from so-called tax services, 68% lost money. Of these, 29% lost more than $2,500 and 17% lost more than $10,000.
- Only 9% of Americans are confident in their ability to spot deepfake videos or recognize AI-generated audio, such as voice impersonations of IRS agents.
Advanced AI, analytics and automation are essential to address the complexity of the technology stack
Dynatrace | The State of Observability 2024 | March 2024
- 97% of technology leaders believe traditional AIOps models are incapable of handling data overload.
- 88% of organizations say the complexity of their technology stack has increased over the past 12 months, and 51% say it will continue to increase.
- 72% of organizations have adopted AIOps to reduce the complexity of managing their multicloud environment.
Today’s biggest AI security challenges
HiddenLayer | AI Threat Landscape Report 2024 | March 2024
- 98% of companies surveyed consider some of their AI models critical to their success, and 77% of them have experienced vulnerabilities in their AI systems in the past year.
- 61% of IT leaders acknowledge shadow AI, i.e., AI solutions that are not officially known to or under the control of the IT department, as a problem within their organization.
- Researchers found that AI is used extensively in modern businesses, which deploy an average of 1,689 AI models. This has made AI security a top priority: 94% of IT leaders will dedicate funds to protecting their AI in 2024.
AI tools expose businesses to the risk of data exfiltration
Code42 | Annual Data Exposure Report 2024 | March 2024
- Since 2021, there has been an average 28% increase in monthly data exposure, loss, leak and theft events caused by insiders.
- While 99% of companies have data protection solutions in place, 78% of cybersecurity leaders admit that sensitive data has still been breached, leaked or exposed.
95% think LLMs make phishing detection more difficult
LastPass | LastPass Survey 2024 | March 2024
- More than 95% of respondents believe that dynamic content generated by large language models (LLMs) makes detecting phishing attempts more difficult.
- Phishing will remain the leading social engineering threat to businesses throughout 2024, surpassing other threats such as business email compromise, vishing, smishing, or baiting.
How AI is reshaping the cybersecurity job landscape
ISC2 | AI Cyber 2024 | February 2024
- 88% of cybersecurity professionals believe AI will have a significant impact on their work now or in the near future, and 35% have already witnessed its effects.
- 75% of respondents are moderately to extremely concerned about AI being used for cyberattacks or other malicious activities.
- The survey found that 12% of respondents said their organization had blocked all access to generative AI tools in the workplace.
Companies ban or limit use of GenAI due to privacy risks
Cisco | Cisco 2024 Data Privacy Benchmark Study | February 2024
- 63% have established limits on what data can be captured, 61% have limits on which employees can use GenAI tools, and 27% said their organization has completely banned GenAI applications at this time.
- Despite the costs and demands that privacy laws can place on organizations, 80% of respondents said privacy laws have had a positive impact on them, and only 6% said the impact was negative.
- 91% of organizations recognize that they need to do more to reassure their customers that their data is only used for the intended and legitimate purposes of AI.
Unlocking the full potential of GenAI through the reinvention of work
Accenture | Work, labor, workers: reimagined in the age of generative AI | January 2024
- While 95% of workers see value in working with GenAI, 60% are also concerned about job loss, stress and burnout.
- 47% of reinventors are already thinking bigger, recognizing that their processes will require significant changes to take full advantage of GenAI.
Adversaries exploit trends and target popular GenAI applications
Netskope | Cloud and Threat Report 2024 | January 2024
- In 2023, ChatGPT was the most popular generative AI application, accounting for 7% of enterprise usage.
- Half of all enterprise users interact with between 11 and 33 cloud applications each month, with the top 1% using more than 96 applications per month.