As the race to succeed with generative AI intensifies, so does the ethical debate around the technology. And the stakes continue to rise.
According to Gartner, “organizations have a responsibility to ensure that the AI projects they develop, deploy or use do not have negative ethical consequences.” Meanwhile, 79% of leaders say AI ethics is important to their enterprise-wide approach to AI, but fewer than 25% have operationalized ethical governance principles.
AI is also high on the US government’s list of concerns. In late February, House Speaker Mike Johnson and Democratic leader Hakeem Jeffries announced the creation of a bipartisan AI task force to explore how Congress can ensure America continues to be at the forefront of global AI innovation. The task force will also examine the safeguards needed to protect the nation from current and emerging threats and ensure the development of safe and reliable technology.
Clearly, good governance is essential to address the risks associated with AI. But what does good AI governance look like? A new Gartner case study of IBM provides some answers. The study details how to establish a governance framework to manage AI ethics issues. Let’s take a look.
Why AI Governance Matters
As businesses increasingly adopt AI in their daily operations, the ethical use of the technology has become a hot topic. The problem is that organizations often rely on general corporate principles, combined with legal or independent review boards, to assess the ethical risks of each AI use case.
However, according to the Gartner case study, AI ethics principles are often too general or abstract. As a result, project managers struggle to decide whether individual AI use cases are ethical. Meanwhile, legal teams and review boards lack visibility into how AI is actually used in the business. All of this opens the door to unethical use of AI (intentional or not) and the business and compliance risks that come with it.
Given the potential impact, the problem must first be addressed at the governance level. Then, subsequent organizational implementation with appropriate checks and balances must follow.
Four main roles of the AI governance framework
According to the case study, IBM’s business and privacy leaders developed a governance framework to address ethical concerns surrounding AI projects. This framework has four main roles:
- Policy Advisory Committee: Senior leaders responsible for determining global regulatory and public policy objectives, along with risks and strategies related to privacy, data, and technology ethics.
- AI Ethics Committee: Co-chaired by IBM Research’s Global Head of AI Ethics and the Chief Privacy and Trust Officer, the committee is a cross-functional, centralized team that defines, maintains, and advises on IBM’s AI ethics policies, practices, and communications.
- AI Ethics Focal Points: Each business unit has focal points (business unit representatives) who act as the first point of contact to proactively identify and assess technology ethics issues, mitigate risks for individual use cases, and escalate projects to the AI Ethics Committee for review. Much of AI governance depends on these individuals, as we will see later.
- Advocacy Network: A grassroots network of employees who promote a culture of ethical, responsible, and trustworthy AI technology. These advocates contribute to open workflows and help scale AI ethics initiatives across the organization.
Risk-based assessment criteria
If an AI ethics issue is identified, the focal point assigned to the relevant business unit initiates an assessment. The focal point performs this triage on the front line, allowing low-risk cases to be cleared quickly. Higher-risk cases undergo a formal risk assessment and are forwarded to the AI Ethics Committee for review.
Each use case is evaluated using guidelines including:
- Related properties and intended use: Examines the nature, intended use, and risk level of a particular use case. Could the use case cause harm? Who is the end user? Are individual rights at risk?
- Regulatory conformity: Determines whether data will be treated securely and in accordance with applicable privacy laws and industry regulations.
- Previously reviewed use cases: Provides insights and next steps from use cases previously reviewed by the AI Ethics Committee, including a list of AI use cases that require committee approval.
- Alignment with AI ethics principles: Determines whether use cases meet core requirements, such as alignment with the principles of fairness, transparency, explainability, robustness, and privacy.
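The triage flow described above can be sketched as a simple decision function. This is a minimal illustration only: the names (`UseCase`, `assess`) and the yes/no reading of each criterion are assumptions for the sake of the example, not IBM’s actual tooling or process.

```python
from dataclasses import dataclass

# Hypothetical model of the focal-point triage flow; field names map
# loosely onto the four assessment criteria listed above.

@dataclass
class UseCase:
    name: str
    could_cause_harm: bool         # "Related properties and intended use"
    regulatory_conformity: bool    # Handled per privacy laws and regulations
    previously_approved: bool      # Matches a use case the committee already cleared
    meets_ethics_principles: bool  # Fairness, transparency, explainability, ...

def assess(use_case: UseCase) -> str:
    """Front-line triage: clear low-risk cases, escalate the rest."""
    if use_case.could_cause_harm or not use_case.regulatory_conformity:
        # High-risk: goes straight to the AI Ethics Committee
        return "escalate to AI Ethics Committee"
    if use_case.previously_approved and use_case.meets_ethics_principles:
        # Low-risk: the focal point can clear it without a formal review
        return "clear at focal point"
    # Everything else gets a formal risk assessment before committee review
    return "formal risk assessment, then committee review"
```

The point of the sketch is the ordering: hard disqualifiers are checked first, known-good patterns are cleared without ceremony, and only the ambiguous middle ground consumes the committee’s time.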
Benefits of an AI governance framework
According to the Gartner report, implementing an AI governance framework benefited IBM by:
- Scaling AI ethics reviews: Focal points ensure compliance and initiate reviews in their respective business units, enabling AI ethics review at scale.
- Growing strategic alignment of the AI ethics vision: Focal points connect with technical, thought, and business leaders in AI ethics across the enterprise and around the world.
- Accelerating the completion of low-risk projects and proposals: By triaging low-risk services or projects, focal points allow projects to be reviewed more quickly.
- Improving board readiness: By empowering focal points to address AI ethics early in the process, the AI Ethics Committee can review the remaining use cases more effectively.
With great power comes great responsibility
When ChatGPT debuted in late 2022, the whole world was abuzz with wild expectations. Now, current AI trends point to more realistic expectations about the technology. Standalone tools like ChatGPT may capture the popular imagination, but effective integration into established services will drive deeper change across all sectors.
There is no doubt that AI opens the door to powerful new tools and techniques to get the job done. However, the associated risks are also real. The growing capabilities of multimodal AI and lower barriers to entry are ripe for abuse: deepfakes, privacy violations, perpetuation of bias, and even circumvention of CAPTCHA safeguards may become increasingly easy for malicious groups.
While bad actors are already using AI, legitimate businesses must also take preventative measures to keep employees, customers, and communities safe.
ChatGPT itself states: “Negative consequences may include biases perpetuated by AI algorithms, violations of privacy, exacerbation of societal inequalities, or unintended harm caused to individuals or communities. Additionally, unethical AI practices could result in loss of trust, reputational damage, or legal ramifications.”
To protect against these types of risks, AI ethics governance is essential.