Amid all the hype, CISOs urgently need practical guidance on how to establish AI security practices to defend their organizations as they play catch-up in deployment and planning. With the right combination of cybersecurity policy and advanced tools, businesses can achieve their goals today and lay the foundation to address the evolving complexities of AI in the future.
When the most talented people working on a new technology say that mitigating its risks should be a global priority, it’s probably wise to pay attention. That’s what happened on May 30, 2023, when the Center for AI Safety released an open letter signed by more than 350 scientists and business leaders warning of the most extreme potential dangers posed by AI.
As much of the media coverage that followed has pointed out, fearing the absolute worst-case scenario can actually be a dangerous distraction from addressing the AI risks we already face today, such as built-in bias and fabricated facts. The latter risk recently made headlines when a lawyer’s AI-generated legal brief was found to contain entirely fabricated cases.
Our other AI blogs have looked at some of the immediate AI security risks that enterprise CISOs should be thinking about: the ability of AI to impersonate humans and perpetrate sophisticated phishing schemes; the lack of clarity over ownership of data captured and generated by public AI platforms; and the question of trustworthiness, which covers not only bad information created by AI but also AI being “poisoned” by bad information it absorbs from the internet and other sources.
I have had exchanges with ChatGPT about network security facts in which it gave me incorrect information and had to be pressed before divulging the correct answer it seemed to have known all along. And while ChatGPT advertises as a feature of its Enterprise version that it will not train on your data, not all employees and contractors will be using the Enterprise version. Even where a private large language model instance is used, the impact of a breach of any AI, whether public or private, is worth considering.
If these are the risks, the next obvious question is: “What can CISOs do to strengthen AI security in their organization?”
Good policy is the foundation of AI safety
Enterprise IT security leaders have learned the hard way over the past decade that banning the use of certain software and devices usually backfires and can even increase risk to the business. If an application is convenient enough, or if the company-approved alternatives do not meet all of users’ needs or wants, people find a way to stick with the tools they prefer, creating the problem of shadow IT.
Considering that ChatGPT acquired over 100 million users within just two months of launch, it and other generative AI platforms are already well integrated into users’ workflows. Banning them could create a “shadow AI” problem more perilous than the workarounds that have already been implemented. Furthermore, many companies are promoting AI adoption as a way to increase productivity and would now have a hard time blocking its use. If the policy decision is to ban unapproved AI, its use still needs to be detected and, where possible, blocked.
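As an illustration only, here is a minimal sketch of how unapproved generative AI use might be surfaced from web proxy or secure web gateway logs. The CSV column names (“user”, “host”), the log file path, and the domain list are all assumptions made for this example; a real deployment would lean on the URL categorization and reporting built into the organization’s own gateway.

```python
import csv
from collections import Counter

# Hypothetical list of generative-AI domains to watch for; in practice this
# would come from a maintained URL-category or threat-intelligence feed.
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def find_shadow_ai(proxy_log_path: str) -> Counter:
    """Count requests per user to known generative-AI domains.

    Assumes a CSV proxy log export with 'user' and 'host' columns; adjust
    the field names to match your proxy or secure web gateway.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    # Surface the heaviest users of generative-AI services for policy follow-up.
    for user, count in find_shadow_ai("proxy_log.csv").most_common(10):
        print(f"{user}: {count} generative-AI requests")
```

A report like this is a starting point for a conversation about policy and approved tooling, not an enforcement mechanism on its own.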
CISOs must therefore provide access to AI tools, supported by sound policies on how to use them. Examples of such policies are starting to circulate online for large language models like ChatGPT, as well as guidance on how to assess AI security risks. But there is no standard approach yet. Even the IEEE has not yet fully addressed the issue, and while the quality of information online is steadily improving, it is not always reliable. Any organization looking for AI security policy templates must be very selective.
Four key considerations for AI security policy
Given the nature of the risks described above, protecting the confidentiality and integrity of corporate data is an obvious goal for AI security. Therefore, any corporate policy should, at a minimum:
1. Prohibit sharing sensitive or private information with public AI platforms or third-party solutions outside the company’s control. “Until further clarification is provided, companies should instruct all employees who use ChatGPT and other public generative AI tools to treat the information they share as if they were posting it to a public website or social platform,” as Gartner put it recently. (A minimal sketch of this kind of pre-submission check appears after this list.)
2. Don’t “cross the streams.” Maintain clear rules for separating different types of data, so that personally identifiable information and anything subject to legal or regulatory protection are never combined with data that can be shared with the public. This may require implementing a corporate data classification system if one does not already exist.
3. Validate or verify any information generated by an AI platform to confirm that it is true and accurate. The risk to a company of publishing demonstrably false AI results is enormous, both reputationally and financially. Platforms that can generate citations and footnotes should be required to do so, and those references should be verified. Otherwise, any claims made in AI-generated text should be fact-checked before the content is used. “While [ChatGPT] gives the illusion of performing complex tasks, it has no knowledge of the underlying concepts,” Gartner warns. “It simply makes predictions.”
4. Adopt, and adapt, a Zero Trust posture. Zero Trust is an effective way to manage the risks associated with user, device, and application access to enterprise IT resources and data. The concept has gained popularity as organizations have struggled to address the dissolution of traditional enterprise network boundaries. While AI’s ability to mimic trusted entities will challenge Zero Trust architectures, it makes controlling untrusted connections even more important. The emerging threats presented by AI make Zero Trust vigilance essential.
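As referenced in the first policy item above, here is a minimal sketch of what a pre-submission check might look like: a gate that flags and redacts apparently sensitive content before a prompt is allowed to reach a public AI platform. The regex patterns and the classify_and_redact helper are illustrative assumptions only, not an exhaustive or production-grade data loss prevention implementation.

```python
import re

# Illustrative patterns only; a real deployment would rely on a proper DLP
# engine and the organization's own data classification labels.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_and_redact(prompt: str) -> tuple[str, list[str]]:
    """Return a redacted prompt plus the names of any patterns that matched.

    If anything matched, the caller can block the request or require the
    user to confirm against policy before it reaches a public AI platform.
    """
    findings = []
    redacted = prompt
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(name)
            redacted = pattern.sub(f"[REDACTED-{name.upper()}]", redacted)
    return redacted, findings

if __name__ == "__main__":
    text = "Please summarize the contract for jane.doe@example.com, SSN 123-45-6789."
    safe_text, flags = classify_and_redact(text)
    print(flags)      # ['email', 'ssn']
    print(safe_text)  # sensitive values replaced with [REDACTED-...] markers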
Choosing the right tools
AI security policies can be strengthened and enforced through technology. New AI tools are being developed to help spot AI-generated scams and schemes, plagiarized text, and other misuses. These will eventually be deployed to monitor network activity, acting almost like radar or red-light cameras that spot malicious AI activity.
Extended detection and response (XDR) solutions can already be used today to monitor for anomalous behavior in the enterprise IT environment. XDR uses AI and machine learning to process massive volumes of telemetry (i.e., remotely collected) data and flag activity that deviates from the established norm across the network. While not a creative, generative type of AI like ChatGPT, XDR is a trained tool that can perform specific security tasks with high accuracy and reliability.
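To make the idea concrete, here is a minimal sketch of the kind of machine learning-driven anomaly detection that XDR platforms rely on, using scikit-learn’s IsolationForest on synthetic, simplified telemetry. The feature set (outbound megabytes, logins per hour, distinct hosts contacted) is an assumption chosen for illustration; it does not reflect how any particular XDR product models behavior.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic, simplified telemetry: [outbound_MB, logins_per_hour, distinct_hosts_contacted].
# A real XDR pipeline ingests far richer endpoint, network, and identity telemetry.
rng = np.random.default_rng(42)
normal = rng.normal(loc=[50, 3, 10], scale=[10, 1, 3], size=(500, 3))

# Learn a baseline of "normal" behavior from historical observations.
model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# New observations: one typical, one with unusually large outbound volume and fan-out.
new_events = np.array([
    [55, 3, 11],
    [900, 4, 120],
])
labels = model.predict(new_events)           # 1 = normal, -1 = anomaly
scores = model.decision_function(new_events)  # lower scores are more anomalous

for event, label, score in zip(new_events, labels, scores):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{event} -> {status} (score {score:.3f})")
```

In practice the flagged event would feed an analyst workflow or automated response playbook rather than a print statement.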
Other types of monitoring tools such as security information and event management (SIEM), application firewalls, and data loss prevention solutions can also be used to manage users’ web browsing and software usage, and to monitor information leaving the enterprise IT environment, thereby minimizing risk and potential data loss.
Know your limits
Beyond defining smart enterprise policies around AI security and making full use of current and emerging tools, organizations must be clear about how much risk they are willing to tolerate in order to leverage AI capabilities. An article published by the Society for Human Resource Management recommends that organizations formally determine their risk tolerance to help guide decisions about the extent to which AI can be used and for what purposes.
The story of AI is only just beginning to be written, and no one really knows what the future holds. What is certain is that AI is here to stay, and despite the risks it poses, it has much to offer if we build and use it wisely. In the future, we will increasingly see AI itself deployed to combat the malicious uses of AI, but for now, the best defense is to start with a thoughtful and clear-eyed approach.
Further information
To learn more about Trend Micro’s thought leadership in AI security, check out these resources: