Artificial intelligence (AI) has a long and rich history. The ancient Greeks, for example, told stories of Talos, an enormous automaton that stood guard over the coast of Crete. In the 17th century, Gottfried Leibniz, Thomas Hobbes, and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry. In the mid-20th century, the British computer scientist Alan Turing was seriously investigating the possibility of “artificial intelligence”, and by 1956, artificial intelligence research was an established academic discipline.
Fast forward to today, and artificial intelligence is everywhere. The launch of ChatGPT in late 2022 marked a huge step forward for AI, inspiring organizations and individuals around the world to integrate the technology into their lives and work. It has also sparked discussions about ethics, forcing the world to grapple with bias, privacy, and responsible development in the context of artificial intelligence.
The dawn of the AI era – as we can safely call our post-ChatGPT world – has also raised questions for the cybersecurity industry. For example, will AI be a force for good or evil? How can AI improve cybersecurity? And how is AI transforming the current threat landscape? These are all questions that the Cloud Security Alliance (CSA) has attempted to answer.
In November 2023, the CSA distributed an online survey to nearly 2,500 cybersecurity experts to gather their thoughts on AI and better understand:
- Current security challenges
- Perceptions of AI in cybersecurity
- The industry's knowledge of AI
- Plans for using AI in the industry
- The impact of AI on staffing and training
Let’s look at some of the key findings of the CSA State of AI and Security Survey Report.
Will AI benefit attackers or defenders?
The CSA study finds that most (63%) security professionals surveyed are cautiously optimistic about AI, believing it will improve threat detection and response. However, these same respondents are divided on who will benefit most from AI: 34% think security teams will benefit the most, while 25% think AI will favor bad actors. Meanwhile, 31% believe the technology benefits defenders and attackers equally.
Are security professionals worried that AI will replace them?
Despite fears that AI will make many jobs obsolete, most cybersecurity professionals believe the technology will empower them rather than replace them. Most respondents believe AI will improve their skills (30%), support them in their role (28%), or automate many of their tasks and free up time for more advanced work. Only a small fraction (12%) of those surveyed fear that AI will replace them entirely. However, more than half of the security professionals surveyed are concerned about a possible over-reliance on AI, highlighting the importance of balancing AI-based and human-based security approaches.
How do leaders’ and staff’s perspectives on AI differ?
Quite predictably, senior executives are (or at least claim to be) much more familiar with AI than their colleagues: 51% of executives surveyed say they are very familiar with AI, compared to just 11% of staff. The same proportion of senior executives also reported having a “clear” understanding of AI, compared to just 14% of staff. Unfortunately, we can only speculate whether these results reflect a genuine knowledge gap or are simply a particularly concerning display of executive overconfidence.
What we do know, however, is that most employees (74%) are confident in their leaders' knowledge of the security implications of AI. Yet given that only 14% of employees claim a clear understanding of AI themselves, that confidence rests on shaky foundations. Similarly, 84% of respondents said their executives and boards are advocating for AI adoption. To achieve that goal, however, executives and boards will clearly need to do more to train their staff in AI.
Will organizations implement AI in 2024?
According to the CSA report, at the time of its release, more than half (55%) of organizations planned to implement generative AI within the following year. They planned to explore the following use cases:
- Rule creation – 21%
- Attack simulation – 19%
- Compliance violation monitoring – 19%
- Network detection – 16%
- Reducing false positives – 16%
- Training development and support – 15%
- Anomaly classification – 14%
- Natural language search – 13%
- Threat summaries – 13%
- Data loss prevention and IP protection – 13%
- User behavior analysis – 11%
- Automated reporting – 10%
- Endpoint detection – 10%
- Event log summaries – 9%
- Forensic analysis – 9%
- Chatbots – 9%
- Incident summaries – 8%
- Configuration drift – 8%
- Action/remediation recommendations – 8%
- Code analysis – 7%
What are the main challenges of implementing AI?
According to the survey, security professionals see the shortage of qualified personnel (33%) as the main challenge to implementing AI in cybersecurity. This challenge involves both finding new staff with the right skills and upskilling existing staff.
Other challenges include resource allocation (11%), understanding AI risks (10%), and implementation cost (8%). Somewhat surprisingly, respondents did not consider traditional barriers such as regulation, data privacy, and compliance concerns to be major challenges.
Overall, cybersecurity professionals are cautiously optimistic about AI. However, their understanding of the technology leaves something to be desired, and this gap perhaps reflects an overconfidence and lack of preparation that could prove disastrous. Senior executives must ensure their understanding of AI is as strong as they believe it to be and work to improve the knowledge of their staff. Although organizations naturally want to implement AI into their business processes, they need to address these knowledge and skills gaps before doing so.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire.