Balancing the opportunities and risks of new technologies has been a long-standing challenge, and artificial intelligence is no exception. Like cloud computing, data analytics, and robotic process automation in recent years, AI offers the potential to improve productivity and bring other benefits to businesses, while also presenting threats that require careful consideration.
Amazon has urged caution among its engineers when using ChatGPT, even as it highlights the potential of AI in products like Alexa. In our company, we strive to balance the need for a thoughtful approach to mitigating risk with encouraging innovation in how AI is developed and used across our platform and our company.
According to Gartner, spending on AI software is expected to reach $297 billion by 2027. Despite all the buzz, a recent survey conducted by AuditBoard and The Harris Poll reveals that less than half of employed Americans (42%) say their company has a formal policy regarding the use of non-company-provided AI tools for work, exposing those companies to potential ethical, legal, and privacy risks. Recent regulations such as the European Union's AI Act and signaling guidelines such as the U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence underscore the importance of getting your house in order now, before penalties are imposed for non-compliance.
Businesses must prepare for AI-augmented decision-making across their operations by acting now to put guidelines in place to effectively manage AI risks while leveraging new capabilities to stay ahead of the competition.
AI may seem daunting, but there's no need to reinvent the wheel. Looked at closely, AI risks resemble many known risks we already have processes and policies in place to mitigate, including data governance, identity and access management, and data loss prevention. Here are three ways organizations can leverage proactive policies and well-established processes to manage AI risks while harnessing AI's revolutionary potential:
1. Green light safe AI use cases with acceptable use policies.
Organizations must balance the opportunities offered by the technology against its acceptable use without stifling innovation and productivity. A blanket ban on AI will likely result in "shadow AI," or unauthorized use of AI outside of IT governance. A thoughtfully developed acceptable use policy can instead green-light specific use cases. Strong policies should provide a list of approved generative AI tools, establish guidance on permitted categories of data or datasets, and identify high-risk use cases to avoid. Restrictions should prohibit the use of specific data for model training, limit automated decision-making, and enforce ethical considerations. The policy should also outline procedures for requesting, reviewing, and approving new use cases.
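To make this concrete, portions of such a policy can be encoded as data so that proposed uses are checked programmatically rather than by memory. The Python sketch below is a minimal, hypothetical illustration; the tool names, data categories, use cases, and the `is_request_allowed` helper are our own examples, not part of any real policy engine.

```python
# Minimal sketch of an acceptable use policy encoded as data so requests
# can be checked programmatically. Tool names, data categories, and use
# cases below are hypothetical examples, not recommendations.

APPROVED_TOOLS = {"internal-copilot", "vendor-chat-enterprise"}

# Data categories each approved tool is permitted to receive.
PERMITTED_DATA = {
    "internal-copilot": {"public", "internal", "confidential"},
    "vendor-chat-enterprise": {"public", "internal"},
}

# High-risk use cases the policy prohibits outright.
PROHIBITED_USE_CASES = {"automated-hiring-decisions", "training-on-customer-data"}


def is_request_allowed(tool: str, data_category: str, use_case: str) -> bool:
    """Return True only if the proposed use falls within the policy."""
    if tool not in APPROVED_TOOLS:
        return False  # unapproved tools must go through the review process
    if use_case in PROHIBITED_USE_CASES:
        return False
    return data_category in PERMITTED_DATA.get(tool, set())


# Example: confidential data may go to the internal tool but not the external one.
assert is_request_allowed("internal-copilot", "confidential", "code-review")
assert not is_request_allowed("vendor-chat-enterprise", "confidential", "code-review")
```

Keeping the rules in one reviewable place also makes it straightforward to extend the policy when new use cases are requested and approved.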
2. Minimize risk with AI key control policies.
Key control policies can play a critical role in reducing the risk of data breaches or misuse. A well-designed AI key control policy will ensure that AI adoption complies with regulations and policies, that only properly authorized data is exposed to AI solutions, and that only authorized personnel have access to datasets, models, and the AI tools themselves. Isolation from core systems reduces the risk of data ending up in third-party systems, and audit logs and monitoring make it easier to detect unusual activity or violations. A final key step is to require human involvement: AI recommendations should always be reviewed by human operators, leaving a clear trail of what was reviewed and what decisions were made.
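As a rough illustration, three of these controls (an access check, an audit log entry, and a human-review requirement) can be wrapped around every AI call. This is a minimal sketch assuming a simple allowlist; `call_model`, the user IDs, and the logger name are hypothetical placeholders for an organization's real IAM, logging, and approved AI tooling.

```python
# Minimal sketch of key controls around a single AI call: an access check,
# an audit log entry, and a human-review flag that must be set before any
# recommendation is acted on. All names here are illustrative stand-ins.
import logging
import uuid
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

AUTHORIZED_USERS = {"analyst-01", "analyst-02"}  # illustrative IAM allowlist


@dataclass
class AIRecommendation:
    request_id: str
    output: str
    reviewed_by_human: bool = False  # policy: no action until a human signs off


def call_model(prompt: str) -> str:
    """Stand-in for the approved AI tool, isolated from core systems."""
    return f"recommendation for: {prompt}"


def run_ai_task(user: str, dataset: str, prompt: str) -> AIRecommendation:
    if user not in AUTHORIZED_USERS:
        audit_log.warning("DENIED user=%s dataset=%s", user, dataset)
        raise PermissionError(f"{user} is not authorized to use {dataset}")
    request_id = str(uuid.uuid4())
    audit_log.info("ALLOWED id=%s user=%s dataset=%s", request_id, user, dataset)
    return AIRecommendation(request_id=request_id, output=call_model(prompt))
```

The `reviewed_by_human` flag defaulting to `False` is the point of the design: downstream systems can refuse to act on any recommendation until a named reviewer flips it, preserving the human-in-the-loop trail.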
3. Integrate AI considerations into tool selection processes.
Approving new tools in the AI era is not so much about creating a new process as it is about ensuring that your existing third-party risk management processes can accommodate the nuances of AI. At our company, when reviewing proposals for new tools, we evaluate potential benefits and risks, alignment with organizational goals, and compliance with ethical standards and regulations. Key questions to ask when selecting AI-related tools include the following (a sketch of how the answers might be recorded appears after the list):
- Do we have permission to provide this data to this tool?
- Is this tool a "subcontractor" that must be disclosed to customers?
- Can we prevent AI from using sensitive data for training purposes?
- Can we opt out of allowing the tool to use our data to train models used by other parties?
- Can we detect data flow to unauthorized tools?
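One way to keep these reviews consistent is to capture the answers as a structured, auditable record within the existing third-party risk workflow. The Python sketch below is a hypothetical illustration; the `AIToolAssessment` fields map to the questions above but the naming is our own, not that of any particular TPRM system.

```python
# Hypothetical sketch: recording the answers to the vetting questions above
# as a structured assessment, so every tool review covers the same controls.
from dataclasses import dataclass


@dataclass
class AIToolAssessment:
    tool_name: str
    data_use_permitted: bool            # we may provide the relevant data to the tool
    subcontractor_disclosed: bool       # disclosed to customers where required
    sensitive_training_blocked: bool    # the tool will not train on our sensitive data
    third_party_training_opt_out: bool  # our data will not train models served to others
    egress_monitoring_in_place: bool    # flows to unauthorized tools are detectable

    def passes_review(self) -> bool:
        """A tool is approved only when every control question is satisfied."""
        return all((
            self.data_use_permitted,
            self.subcontractor_disclosed,
            self.sensitive_training_blocked,
            self.third_party_training_opt_out,
            self.egress_monitoring_in_place,
        ))


# Example: a tool with no egress monitoring does not pass review.
candidate = AIToolAssessment("vendor-chat-enterprise", True, True, True, True, False)
assert not candidate.passes_review()
```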
Once tools are added to an approved list, guidance is provided on what datasets are allowed. In addition, we provide guidance on the limitations of AI, establish restrictions on automated decision-making processes, determine prohibited uses, and consider ethics to ensure fairness and accountability.
By developing these processes, organizations can foster innovation while prioritizing ethical considerations and societal impact.
Managing AI Risks Is a Team Sport
AI risks and opportunities are not the domain of any one person or function within the organization. They affect all of us. It will take collaboration and communication to create a cohesive approach to managing AI risks while fostering innovation. We must remember that there is no one-size-fits-all approach to managing AI; it will be different for every organization.
To get the ball rolling, initiate discussions about the potential risks and challenges associated with adopting AI in your organization. Then, work with key stakeholders to develop a comprehensive AI policy that enables innovation while mitigating risk. By leveraging proven risk management principles and fostering ongoing, open communication, we can minimize the threat, share the responsibility, and reap the rewards of AI.