Artificial intelligence (AI) is a tool that MSSPs, MSPs, and various types of cybersecurity companies can leverage to defend their clients. Cyber adversaries are also leveraging AI for their own financial and disruptive activities. We know there will be victories and defeats on both sides.
Experts from various cybersecurity companies reached out to MSSP Alert to offer their AI trends and predictions for 2024. Keep reading to find out what they tell us about AI in the new year and beyond. coming – and how it all contributes to maintaining a cyber-secure world.
An increase in vulnerabilities linked to disinformation and election years
“We expect to see a greater number of AI attacks using fake news to spread disinformation during a (US) presidential election year. Media companies – print, cinema, streaming, etc. – are all highly regulated, but the Internet itself is not. This allows bad actors to take advantage of the influence of celebrities and world leaders to create AI-generated versions of these public figures to spread fake news and give them a sense of legitimacy without ever having to verify facts. Without any government legislation to crack down on these tactics, bad actors will be able to cause behavioral changes in society.
– Sam Curry, Vice President and CISO, Zscaler
Generative AI Creates Security Opportunities
“Generative AI and machine learning (ML) are increasing the frequency and complexity of cyberattacks, creating new pressures on businesses.. This technology can enable cybercriminals to launch sophisticated, stealthy attacks such as deepfakes or self-evolving malware, compromising systems at scale. To counter these advanced threats and fight fire with fire, businesses must employ AI-enabled cybersecurity. This technology has the potential to transform the industry by improving enterprise posture through automated configuration and compliance strengthening, overcoming micro-segmentation challenges, refining least privilege access, improving reporting And much more.
– Margareta Petrovic, Global Managing Partner, and Dr KPS Sandhu,
Head of Global Strategic Initiatives, Cybersecurity, Tata Consultancy Services (TCS)
Advanced AI to trigger social engineering attacks
“Commercial and open source AI capabilities, including Large Language Models (LLMs) like ChatGPT and LLaMA, and countless variations, will help attackers design thoughtful and effective social engineering campaigns. With the increasing integration of AI systems with amounts of personal information via social media sites, from LinkedIn to Reddit, we will see even low-level attackers be able to create targeted and compelling campaigns based on social engineering .
– Kevin O’Connor, Director of Threat Research, Adlumina
The rise of “ghost AI”
“In 2024, the widespread use of generative AI in the workplace will bring new cybersecurity challenges, including “Shadow AI.” Employees integrating AI tools into workflows without leadership knowledge create cybersecurity and data privacy risks. Without governance, organizations cannot see what tools employees are using or how much sensitive information is at risk. Companies will begin to adopt a managed AI policy that can reduce the risks associated with shadow AI. Educate teams on safe AI practices, set clear usage policies, implement monitoring of AI tool usage, and update security protocols as AI technology advances. AI evolves, will be essential to harnessing the benefits of AI while minimizing data security risks.
– Michael Crandell, CEO, Bitwarden
Evolution of AI Deterrence Testing and Security Posture Testing
“AI cybersecurity will advance over the next year with an increased focus on AI red teaming and bug bounties. Following industry leaders like Google now including generative AI threats in their bug bounty programs, the practice will expand to identify and address unique AI vulnerabilities, such as model manipulation or rapid injection attacks. The AI Red Team (Offensive Security Testing) will continue to employ diverse teams for comprehensive AI system assessments, focusing on empathy and detailed test scenarios. The combination of AI Red Teaming and incentivized bug bounties will be crucial to securing AI systems against sophisticated cybersecurity threats, reflecting a proactive, industry-wide approach to security. AI.
– Josh Aaron, CEO, Aiden
AI versus AI
“Attackers are increasingly using AI and ML to develop more sophisticated attacks, but AI can also be used to counter these attacks. This arms race between AI-driven defense and AI-assisted offense will drive innovation in the cybersecurity industry, leading to ever more advanced security solutions. AI-powered security solutions are already being used to identify and prioritize threats, automate incident response, and personalize security controls. In the future, these solutions will become even more sophisticated, learning from experience and adapting to new threats in real time. This will enable AI-based cyber defense systems to proactively identify and neutralize automated AI-powered attacks before they cause damage. In this evolving cybersecurity landscape, organizations must embrace AI and ML to stay ahead of the curve.
– Brian Roche, Director of Product, Veracode
Emergence of a “poly-crisis” from AI-based cyberattacks
“By 2024, criminals will have an easier time using AI to attack not only traditional IT, but also cloud containers and, increasingly, ICS and OT environments, driving the emergence of a “poly-crisis”. Such a scenario not only threatens the financial impact, but also has a simultaneous impact on human life, as part of cascading effects. Critical IT infrastructure will be increasingly at risk due to the growing geopolitical threat. Cyber defense will be automated, leveraging AI to adapt to new attack models.
– Agnidipta Sarkar, vice president of CISO Advisory, Color tokens