The impact of generative artificial intelligence (AI) models like Google’s Bard and OpenAI’s ChatGPT, among many others, is hard to ignore. Much has been written about how these models fundamentally changed our ability to create new data, including text, images, and code, throughout 2023. At the same time, the use of large language models (LLMs) and generative AI for cybersecurity is attracting growing interest, and experts believe it will present both opportunities and challenges for the industry in the coming months.
Anshuman Sharma, Associate Director CSIRT & Investigative Response, APJ, at Verizon Business, explains that threat actors can leverage generative AI to escalate security breaches, orchestrate large-scale social engineering attacks, create more realistic deepfake audiovisual content, develop sophisticated, self-evolving malware strains, and launch phishing attacks.
Sanjoy Paul, Senior Professor of Technology at Hero Vired, added that generative AI significantly increases the risks associated with password security, allowing cybercriminals to crack passwords more effectively, especially those that are weak or reused. It also enables new forms of malware that can evade traditional security systems, posing a significant challenge to cybersecurity defenses.
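Paul's point about weak or reused passwords ultimately comes down to search-space arithmetic: a short, single-charset password can be exhausted quickly by automated guessing. The minimal sketch below makes that concrete; the 10-billion-guesses-per-second rate is an assumed figure for illustration, not a claim about any particular cracking rig.

```python
import math

def search_space_bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a randomly chosen password: log2(alphabet ** length)."""
    return length * math.log2(alphabet_size)

def worst_case_crack_seconds(bits: float, guesses_per_second: float) -> float:
    """Time to exhaust the full search space at a given guessing rate."""
    return (2 ** bits) / guesses_per_second

# An 8-character lowercase-only password vs. a 12-character mixed-charset one,
# against a hypothetical rig making 10 billion guesses per second.
weak = search_space_bits(26, 8)      # ~37.6 bits
strong = search_space_bits(94, 12)   # ~78.7 bits

print(f"weak:   {weak:.1f} bits, {worst_case_crack_seconds(weak, 1e10):.0f} s to exhaust")
print(f"strong: {strong:.1f} bits, {worst_case_crack_seconds(strong, 1e10):.2e} s to exhaust")
```

The weak password's full space falls in about 20 seconds at that rate, while the stronger one would take on the order of a million years, which is why length and charset diversity matter far more than clever substitutions.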
“The interaction between AI models and data privacy is also becoming a major concern. AI models, if cleverly exploited, can be made to disclose sensitive information, presenting a potential threat to proprietary company data. The cyber risk of generative AI extends beyond code and text generation; it also includes creating sophisticated photo, video and voice content,” he says.
Despite these challenges, generative AI also has a positive impact on cybersecurity, making it a double-edged sword. It helps developers produce faster and more effective defense mechanisms against cybercrime. Historically, new technologies have been exploited by hackers, but they have also enabled the development of stronger defense methods.
Traditional security measures often struggle to keep up with the rapid development of new malware strains. However, generative AI can analyze patterns of known threats to predict and detect new anomalies, explains Sharma, adding that “its algorithms can analyze large amounts of historical data and learn to identify patterns that human analysts might miss. By training on various datasets, these algorithms can detect anomalies, unusual behaviors, and potential threats that have never been encountered before.”
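The baseline-learning idea Sharma describes can be illustrated with a toy example. The stdlib-only sketch below is not any vendor's detector; it simply flags values that deviate sharply from a learned baseline (a z-score over invented hourly failed-login counts), which is the crude statistical core that ML-driven anomaly detection builds on.

```python
from statistics import mean, stdev

def flag_anomalies(history: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of values deviating from the mean by more than
    `threshold` standard deviations, a stand-in for the learned
    baselines an AI-driven detector would build from historical data."""
    mu, sigma = mean(history), stdev(history)
    return [i for i, x in enumerate(history) if abs(x - mu) > threshold * sigma]

# Hypothetical hourly failed-login counts; hour 5 spikes far above the baseline.
failed_logins = [4, 6, 5, 7, 5, 180, 6, 4]
print(flag_anomalies(failed_logins))  # → [5]
```

Real systems train on many behavioral features at once (source IPs, access times, process trees) rather than a single counter, but the principle of flagging departures from learned normality is the same.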
Aaron Bugal, Field CTO – APJ at Sophos, said: “Generative AI, such as ChatGPT, introduces an interesting situation for cybersecurity. On the one hand, it raises concerns because it can be used for insidious purposes like malware cloaking, social engineering, and even acting as a personal assistant ‘living off the land.’ This highlights the need for vigilance and innovative defense measures. On the other hand, the potential of generative AI to augment human capabilities in tasks such as cross-domain detection, automation, and threat analysis presents a promising avenue for defenders.”
Generative AI also plays a crucial role in the fight against deepfakes, where detection, analysis, and response are essential. By leveraging advanced algorithms and machine learning, AI helps identify subtle manipulations indicative of misleading content. Bugal says its integration with authentication mechanisms, using blockchain and cryptography techniques, strengthens the digital landscape against manipulation.
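As a rough illustration of the cryptographic authentication Bugal mentions, the sketch below tags content at publication time so any later tampering is detectable. The key and media bytes are made up for the example, and it uses a shared-secret HMAC from Python's standard library for simplicity; real provenance schemes (such as C2PA-style content credentials) use asymmetric signatures instead.

```python
import hashlib
import hmac

SECRET_KEY = b"demo-only-key"  # hypothetical; real systems use asymmetric keys

def sign_media(content: bytes) -> str:
    """Tag content at publication time with an HMAC over its SHA-256 digest."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(content: bytes, tag: str) -> bool:
    """Recompute the tag; any post-publication edit changes the digest."""
    return hmac.compare_digest(sign_media(content), tag)

original = b"frame data of an authentic video"
tag = sign_media(original)
print(verify_media(original, tag))                 # True
print(verify_media(b"deepfaked frame data", tag))  # False
```

The point is that detection models look for statistical traces of manipulation, while cryptographic provenance sidesteps the arms race entirely: if the tag fails to verify, the content is not what was published, regardless of how convincing the fake looks.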
It also offers the possibility of creating more sophisticated biometric authentication systems. AI-driven deepfake detection models, for example, can differentiate between authentic biometric data and data created by impostors, providing an additional layer of security to authentication processes.
Abhinanda Sarkar, academic director of Great Learning, mentions that generative AI also has profound implications for cybersecurity training. By creating realistic cyber threat scenarios, it provides cybersecurity professionals with hands-on experience in a safe and controlled environment.
This experiential learning approach ensures security teams have the skills needed to address real-world cybersecurity challenges.
Muraleedhar Pai, executive director and chief technology officer at Maveric Systems, explains: “While cybercriminals can exploit AI for more sophisticated attacks, CISOs can leverage the same technology to develop robust defense mechanisms and create impregnable systems.
“Ultimately, cybersecurity success depends on the adaptability and innovation of security professionals as well as the integration of AI technologies into comprehensive, multi-layered defense strategies. As AI evolves, regulations and ethical considerations will play a crucial role in determining the balance between offensive and defensive uses of generative AI in cybersecurity.”
According to a Bloomberg Intelligence (BI) report released in June 2023, the generative AI market is poised to reach nearly $1.3 trillion over the next 10 years, up from just $40 billion in 2022, a CAGR of roughly 42% driven by increased data usage, digital ads, cloud storage innovation, and specialist software and services.
Meanwhile, a September report from market research firm Fortune Business Insights valued the cybersecurity market at $153.65 billion in 2022 and expects it to grow from $172.32 billion in 2023 to $424.97 billion in 2030, a CAGR of 13.8% over the forecast period. In India alone, the cybersecurity products segment has grown more than 3.5 times, from around $1 billion in 2019 to around $3.7 billion in 2023, according to the Data Security Council of India. Against this backdrop, CISOs and security teams can explore new ways to use generative AI as a valuable ally in cybersecurity defense.
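As a quick sanity check on the reported figures, the growth rate implied by the 2023 and 2030 market sizes can be recomputed directly:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

# Fortune Business Insights figures: $172.32B in 2023 to $424.97B in 2030.
print(f"{cagr(172.32, 424.97, 7):.1%}")  # → 13.8%, matching the report
```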