Generative artificial intelligence (AI) has emerged as a technology with the potential to reshape business practices and open new avenues for innovation across virtually every industrial sector. Organizations have rushed to adopt it: in a recent Gallagher survey of 1,000 business owners, a large majority (81%) indicated that they plan to maintain or increase their investments in AI in the near term.
However, alongside new efficiencies, AI tools bring new risks to the forefront for those who use them. Enterprise risk management programs will need to incorporate effective strategies for managing AI risk, which will undoubtedly be a new discipline for most entities.
Main risks associated with using generative AI
While we are still assessing all risks associated with generative AI, we have identified several high-risk categories that merit special attention from business leaders, including:
- Data bias and fairness: AI models and their underlying assumptions can potentially inject bias into decision-making and perpetuate discriminatory practices.
- Privacy issues: Several privacy laws related to the collection, storage, and sharing of personally identifiable information will likely apply to the use of AI. Careful consideration of compliance obligations related to these issues should be a priority.
- Data quality: Relying on incomplete or incorrect data can lead to erroneous analyses and results.
- Intellectual property and data ownership: Proprietary rights and trade secrets may not be adequately protected, increasing the risk of disputes over consent and ownership.
- Regulatory risk: Several states have passed bills focused on compliance requirements for those who provide AI platforms and those who use them. Global privacy regimes have already adopted laws along these lines.
The chief artificial intelligence officer: a new role
As modern, risk management-focused organizations leverage AI to stay competitive, they may need to create a new role: that of chief artificial intelligence officer (CAIO). The role requires a strong ability to balance innovation with AI risk management, and its responsibilities may include:
- Strategic leadership in AI: Create and implement the overall AI strategy, with a view to improving operational efficiency, enhancing customer experience, and identifying new revenue streams.
- Risk management and compliance: Establish a framework for the safe and responsible use of AI that aligns with both the organization’s ethical standards and those that external parties generally expect. This should also extend to compliance with regulatory requirements as they evolve.
- Governance programs: Establish formal structures to oversee AI initiatives and projects. These structures should help ensure that the organization meets ethical and regulatory standards, with a focus on fairness, transparency, data security, and prevention of unintended consequences.
- Internal cross-collaboration: Coordinate closely with leaders across multiple divisions and the C-suite. This should drive collaboration among stakeholders including, but not limited to, legal, IT, privacy, operations, marketing, human resources, sales, and risk management.
- Performance measurement and continuous improvement: Promote a culture of continuous, AI-focused innovation. Conduct periodic assessments of AI tool performance and return on investment in AI resources, while staying up to date on new technologies that align with the organization’s current and future goals.
Where to start: new guidelines for managing AI risks
Several organizations have recently released suggested frameworks for risk-based standards in AI program implementation.
While not all organizations are ready to appoint a CAIO at this point, they should think carefully about such an investment. The role will become more important as AI becomes a standard requirement for remaining competitive. Most businesses have already adopted AI in one form or another, and all indications are that its use will continue to increase rapidly. Litigation and regulatory risks have grown in step with AI adoption, requiring risk managers to stay at the forefront of AI risks as they arise.