The widespread adoption of artificial intelligence (AI), particularly generative AI (GenAI), has transformed how organizations operate and reshaped both the cyberthreat and cybersecurity landscapes.
AI as a powerful cybersecurity tool
As organizations manage increasing amounts of data on a daily basis, AI offers advanced capabilities that would be more difficult to achieve using traditional methods.
According to the recently published “best practices” report by the Spanish National Cryptology Center (NCC), when applied to cybersecurity, AI can:
- Provide advanced threat detection and response
- Use historical data to anticipate threats and vulnerabilities
- Reduce the risk of unauthorized access by accurately authenticating individuals based on advanced biometrics, user behavior analysis, and other signals
- Identify phishing attempts
- Evaluate security configurations and policies to identify possible weaknesses
In addition to helping security teams perform these tasks more accurately, AI also helps them work faster.
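As an illustration of the "use historical data" capability listed above, here is a minimal sketch of anomaly-based threat detection. It assumes scikit-learn and NumPy are available, and the features, values, and thresholds are purely hypothetical:

```python
# Minimal sketch: flagging events that deviate from historical baselines.
# Assumes scikit-learn and NumPy; feature choices and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical historical data: [bytes_sent, login_hour, failed_logins] per session.
historical = np.column_stack([
    rng.normal(5_000, 1_500, 1_000),  # typical data volumes
    rng.normal(14, 3, 1_000),         # logins cluster around business hours
    rng.poisson(0.2, 1_000),          # failed logins are rare
])

# Train on past behavior, then score new sessions.
model = IsolationForest(contamination=0.01, random_state=0).fit(historical)

new_sessions = np.array([
    [5_200, 15, 0],    # looks like normal activity
    [250_000, 3, 12],  # huge transfer at 3 a.m. with many failed logins
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    print(session, "anomalous" if label == -1 else "normal")  # -1 = anomaly
```

The specific model matters less than the idea: any detector trained on an organization's historical activity can surface events that deviate from its baseline.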
AI cybersecurity risks
But speed is also what cybercriminals are after when they harness the power of AI: it allows them to quickly adapt their attacks to new security measures.
According to the NCC, the use of AI in cybersecurity comes with challenges and limitations:
- Adversarial attacks on AI models – Crafted to deceive or confuse machine learning models and force AI-based systems into incorrect or malicious decisions
- Overreliance on automated solutions – Because of limited interpretability, automation failures, and the false sense of security they can create, AI systems should be used in tandem with traditional methods and techniques, not in place of them
- False positives and false negatives – Misclassified alerts can lead to undetected security breaches or unnecessary disruption (see the short example after this list)
- Confidentiality and ethics – There are concerns about how personal data is collected, stored and used
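To make the false positive / false negative trade-off concrete, here is a minimal sketch; the labels and detector outputs below are made up purely for illustration:

```python
# Minimal sketch: counting false positives and false negatives for an AI detector.
# Ground-truth labels and predictions are fabricated for illustration only.
ground_truth = [1, 0, 0, 1, 0, 1, 0, 0]  # 1 = real intrusion, 0 = benign activity
predictions  = [1, 0, 1, 0, 0, 1, 0, 1]  # hypothetical detector output

false_positives = sum(p == 1 and t == 0 for p, t in zip(predictions, ground_truth))
false_negatives = sum(p == 0 and t == 1 for p, t in zip(predictions, ground_truth))

print(f"False positives (alerts that waste analyst time): {false_positives}")
print(f"False negatives (breaches that go undetected):    {false_negatives}")
```

Tuning a detector to reduce one type of error typically increases the other, which is why human review and traditional controls remain part of the loop.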
Finally, GenAI, which security professionals can use to improve their system testing processes, can also be exploited by cybercriminals to generate malware variants, deepfakes, fake websites, and convincing phishing emails.
Governments step up their efforts
With AI technology continuing to improve, cybercriminals will surely find new ways to compromise systems.
Last October, President Biden issued an executive order intended to manage AI risks and ensure safe, secure, and trustworthy AI.
Shortly after, the UK’s National Cyber Security Centre (NCSC) published security guidelines for developers and suppliers of AI-based systems, aimed at ensuring secure AI development and deployment.