Syed Ahmed is senior vice president of engineering at Act-On Software, a leading marketing automation platform.
As more businesses move online, cybersecurity becomes an increasingly important concern for protecting a company’s data and financial assets. Security breaches are more prevalent than ever thanks to bots, data leaks and phishing, which makes AI an essential tool for strengthening security.
Data security
Samsung recently experienced a leak of sensitive code that was inadvertently uploaded to ChatGPT: a Samsung employee had submitted code containing sensitive information to ChatGPT for review. Under OpenAI’s policies, unless training on user data is explicitly disabled, ChatGPT may use user-submitted data to improve its models. As a result, Samsung has had to impose strict limits on how its employees use ChatGPT.
Although AI has a long history in the tech world, the recent surge in access to generative AI tools means organizations need to act quickly to prioritize data security. Encouraging users to browse the web in privacy mode is a simple step toward limiting leaks. An article from Trend Micro notes that private browsing “can help protect you from harmful websites and phishing attempts.”
Better yet, use privacy mode when working with a large language model (LLM). While this may require a subscription, it ultimately limits how your prompts are used to train the LLM itself, thereby protecting company assets. One example is ChatGPT Enterprise, an enterprise-oriented variant of ChatGPT, which states that “customer prompts or data are not used for training models.”
Some companies have internal safeguards to combat such leaks. According to a Business Insider article, when Amazon employees try to visit the ChatGPT site on their work computers, a pop-up message reminds them that use of the site “cannot be approved by Amazon Security.” Such reminders can effectively dissuade developers from diving into AI willy-nilly.
In addition to the risk of AI absorbing data, bad actors can actively exploit it to create more phishing attempts than ever before. Celia Surridge, a spokesperson for the Better Business Bureau, noted that “consumer scams, especially during the holidays or any Amazon Prime sales period, are really concerning.” Brian Schnese, senior risk consultant for Hub International, paints a vivid picture of how this works: “I can go to ChatGPT and type ‘please write a request to my supplier asking them to change my wiring instructions,’ and it spits out a perfect request.”
Protecting against AI with AI
Fortunately, some developers are fighting fire with fire. Norton’s Genie, its new scam detector, uses AI to analyze both user-submitted photos and the text of suspicious emails to determine whether they are scams. Sahil Pruthi, product manager at Norton’s parent company, noted that instead of asking loved ones for help, a user “should be able to have that answer at their fingertips and within seconds.” Other antivirus apps, such as McAfee and Sophos, also offer AI-based protection against AI phishing.
However, sometimes the AI has other plans. Snapchat is known worldwide for letting users chat with disappearing private messages. Its new AI chatbot feature, My AI, ended up broadcasting users’ private Stories for everyone to see. Although Snapchat eventually fixed the problem, the risk of an AI chatbot accidentally piercing the security veil is all too real.
Thus, faced with evolving risks and solutions, organizations have several options. When implemented correctly, AI-enhanced security can provide many benefits, such as automating tasks (for example, generating and sending a Slack message to your IT team after suspicious behavior is detected) or identifying threats using AI’s superior pattern-recognition algorithms. A recent article in Security underlines the importance of a holistic approach that combines AI with “human expertise, rigorous testing, continuous monitoring and collaboration between stakeholders to ensure robust security measures.”
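To make the automation example above concrete, here is a minimal sketch of how a security tool might notify an IT team over a Slack incoming webhook when suspicious behavior is flagged. The webhook URL, event fields and risk score are placeholders, not part of any particular product.

```python
import json
import urllib.request

# Placeholder: replace with your IT team's real Slack incoming webhook URL.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"


def build_slack_alert(event: dict) -> dict:
    """Format a suspicious-activity event as a Slack message payload."""
    text = (
        ":rotating_light: Suspicious behavior detected\n"
        f"*User:* {event['user']}\n"
        f"*Action:* {event['action']}\n"
        f"*Risk score:* {event['risk_score']:.2f}"
    )
    return {"text": text}


def send_alert(event: dict) -> None:
    """POST the formatted alert to the team's Slack channel."""
    payload = json.dumps(build_slack_alert(event)).encode("utf-8")
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # network call; raises on non-2xx responses


if __name__ == "__main__":
    # Hypothetical event produced by whatever anomaly detector you run.
    event = {"user": "jdoe", "action": "bulk file download", "risk_score": 0.91}
    print(build_slack_alert(event)["text"])
```

In practice the event and its risk score would come from the AI pattern-recognition layer the paragraph describes; the webhook call is just the last, easily automated step.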
Ideally, AI-based security solutions should be comprehensive in how they analyze and respond to security issues. In a recent article on the Linea AI platform, Cyberhaven CEO Howard Ting highlighted the importance of an approach that “embodies the collective intelligence and insight of top security analysts and experts and applies it at scale, not only to triage and investigation, but to detection.”
Additionally, the power of LLMs can enhance AI-driven security. The February 2024 issue of Microsoft’s Cyber Signals newsletter highlighted how LLMs can “uncover patterns and trends in cyber threats, add valuable context to threat intelligence.”
AI is here, and it is only getting more advanced. That makes it a necessary part of any security arsenal. By working with AI, we can chart a smoother path forward and protect ourselves against malicious uses of AI.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.