The AI landscape is evolving faster than ever, and this is especially true for cybersecurity. Many South African companies are now focusing their investments on application security, with a strong emphasis on AI-based solutions to strengthen their cybersecurity efforts. This trend reflects a growing awareness of AI's potential benefits, even as the technology itself continues to advance.
David Roth, chief revenue officer at Trend Micro, and Jeff Pollard, vice president and principal analyst at Forrester, hosted a webinar to cut through the hype around AI and machine learning in security strategies, highlighting that generative AI (Gen AI) introduces a new layer of complexity.
Here are five key points highlighted during the webinar:
1. Be aware of the impact on skills
The lure of a “new, fancier toy” isn’t the only thing driving interest in generative AI for cybersecurity. Security teams are in dire need of help: they are overworked, understaffed, and facing constantly evolving threats. So it’s no surprise that when Gen AI came on the scene, people started fantasizing about fully autonomous security operations centers (SOCs), staffed with Terminator-style malware hunters.
However, current Gen AI systems are not yet ready to operate autonomously. Instead of closing the skills gap, Gen AI could introduce additional training challenges in the medium term. Additionally, integrating these AI tools into existing workflows is time-consuming, even for experienced workers.
Despite these obstacles, there are currently some very promising uses of Gen AI in security. By improving what teams can already do, AI can help them achieve better results with less repetitive work. This is especially true in areas such as application development, detection and response.
2. Understand how to get quick wins
Gen AI can transform how security teams handle documentation, automating tasks such as action summaries, event write-ups and reporting so that professionals can focus on the incidents themselves rather than on a time-consuming, tedious process.
However, strong communication skills are still required for these positions, and AI-generated reports should not be used to replace professional development.
Gen AI can also suggest next steps and pull insights from knowledge bases faster than humans. However, it is essential that AI output meets business needs. If a procedure requires seven steps and the AI only recommends four, a person must ensure that every step is completed in order to meet the objectives and remain compliant. Skipping steps can have catastrophic consequences.
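As a minimal illustration of that kind of check, here is a short Python sketch (the procedure and step names are hypothetical, not from the webinar) that compares an AI-suggested plan against the full required procedure and flags anything the AI skipped:

    # Minimal sketch: verify an AI-suggested plan covers every required procedure step.
    # The procedure and step names below are hypothetical, for illustration only.

    REQUIRED_STEPS = [
        "isolate_host",
        "capture_memory",
        "collect_logs",
        "notify_legal",
        "reset_credentials",
        "patch_vulnerability",
        "document_incident",
    ]

    def missing_steps(ai_suggested_steps: list[str]) -> list[str]:
        """Return required steps that the AI-generated plan omitted, in order."""
        suggested = set(ai_suggested_steps)
        return [step for step in REQUIRED_STEPS if step not in suggested]

    if __name__ == "__main__":
        # The AI only recommended four of the seven required steps.
        ai_plan = ["isolate_host", "collect_logs", "reset_credentials", "document_incident"]
        gaps = missing_steps(ai_plan)
        if gaps:
            print("Plan incomplete; missing steps:", ", ".join(gaps))
        else:
            print("All required steps covered.")

In practice, a gate like this would sit between the AI suggestion and the analyst’s sign-off, so omissions are caught before any steps are executed.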
3. Pay attention to data gaps that impact AI performance
Security teams can take advantage of the big data opportunity by using Gen AI to become more proactive: identifying changes in the attack surface and running attack path simulations. It can help teams anticipate possible problems, even if it cannot predict specific threats with accuracy.
However, an organization’s knowledge of its systems and configurations determines the effectiveness of this process. Knowledge gaps lead to AI performance gaps and, unfortunately, many organizations continue to face challenges due to scattered data and documentation.
Standardized data management and good data hygiene should be top priorities for security teams.
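What that hygiene might look like at a very small scale is sketched below in Python: a hypothetical asset inventory is checked for missing fields (owner, environment, patch date, criticality) before the records are handed to any AI-driven analysis. The schema and records are made up for illustration.

    # Minimal sketch: flag inventory records with gaps before they feed AI-driven analysis.
    # The asset records and field names are hypothetical, for illustration only.

    REQUIRED_FIELDS = ("owner", "environment", "last_patched", "criticality")

    assets = [
        {"name": "web-frontend", "owner": "app-team", "environment": "prod",
         "last_patched": "2024-05-01", "criticality": "high"},
        {"name": "legacy-batch", "owner": "", "environment": "prod",
         "last_patched": None, "criticality": "medium"},
    ]

    def find_gaps(asset: dict) -> list[str]:
        """Return the required fields that are missing or empty for one asset."""
        return [field for field in REQUIRED_FIELDS if not asset.get(field)]

    for asset in assets:
        gaps = find_gaps(asset)
        if gaps:
            print(f"{asset['name']}: incomplete record, missing {', '.join(gaps)}")

Records flagged this way point to exactly the knowledge gaps that would otherwise become AI performance gaps.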
4. Introduce security measures for shadow AI
Businesses around the world are rightly concerned about AI leaking sensitive information, whether through unauthorized tools or even approved AI-enhanced software. In the past, hackers had to know how to break into systems to obtain this data, but now a simple prompt can make it accessible.
Businesses must protect against unauthorized use of AI and ensure appropriate use of approved tools, particularly when developing applications on large language models (LLMs), which means securing the data, the applications, the models and the prompts.
These concerns boil down to a few main issues: bring-your-own AI, enterprise applications, and product security. Each requires its own security measures and affects the responsibilities of the chief information security officer (CISO), even if the CISO does not directly manage these projects.
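One small, concrete piece of that puzzle is prompt hygiene. The Python sketch below illustrates the idea with simple regular-expression redaction of obviously sensitive strings (an email address, a 13-digit South African ID number, an API key) before a prompt leaves the organization. The patterns are illustrative assumptions only; a real deployment would rely on proper data loss prevention tooling.

    import re

    # Minimal sketch: redact obviously sensitive strings before a prompt is sent to an LLM.
    # The patterns are illustrative only; real controls need dedicated DLP tooling.

    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "sa_id_number": re.compile(r"\b\d{13}\b"),           # 13-digit South African ID format
        "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    }

    def redact(prompt: str) -> str:
        """Replace matches of each sensitive pattern with a labelled placeholder."""
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
        return prompt

    if __name__ == "__main__":
        raw = "Summarise the ticket from jane.doe@example.co.za, ID 9001011234087, key sk-abcdef1234567890XY."
        print(redact(raw))

A filter like this guards approved tools as well as unsanctioned ones, since even sanctioned AI-enhanced software can leak whatever is pasted into it.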
5. Don’t get caught off guard
Think about the early days of cloud computing and the hysteria surrounding shadow IT applications: there is a lot to learn from that era. When security professionals called unauthorized programs “shadow IT,” business leaders called them “product-led growth.” Banning those tools only pushed their use underground, making the situation worse.
Now is the time to develop security-focused AI plans, get familiar with the technology, and prepare for its big moment. Remember how the cloud caught security professionals off guard, despite plenty of warnings? Given the complexity and power of AI, we simply cannot afford to be unprepared this time around.
Gen AI is gaining momentum in cybersecurity, but it won’t immediately solve the skills gap. By learning from past experiences with shadow IT and cloud adoption, teams can better prepare for the transformative future of AI. Preparation and proactive management are key to harnessing the true power of AI and keeping businesses competitive.