Laws and policies establish minimum thresholds that businesses must meet. These rules can vary widely by jurisdiction and scope. Some, like New York City’s law governing the use of automated employment decision-making tools, focus on a particular use case. Others have a broader impact. For example, the EU’s proposed AI Act covers all AI systems developed, deployed, or used within the EU, regardless of sector. Businesses must understand and navigate this new patchwork of regulations to ensure compliance.
Three additional categories complete the list of governance mechanisms: voluntary guidelines, which governments and industry often develop together; standards, which can serve as references to verify compliance with regulatory requirements; and certification programs, which can demonstrate (to customers, industry partners, and regulators) that a company’s AI processes comply with underlying standards.
By understanding the impact and interaction of these mechanisms, organizations can shape their own AI governance more precisely and effectively. To this end, we have found that certain best practices can make the process faster and easier.
Ensuring good AI governance
Good AI governance starts at the top, which means RAI is a CEO-level issue: it directly affects customer confidence in how a company uses technology, as well as the company’s ability to respond to organizational and regulatory risks. Setting the right tone at the senior management level is critical to fostering a culture that prioritizes RAI. A 2023 survey by MIT Sloan Management Review and BCG found that organizations whose CEOs participate in RAI initiatives realize 58% more business benefits than those whose CEOs are not involved.
We recommend that companies create a committee of senior leaders to oversee the development and implementation of their RAI program. The committee’s first task should be to create the principles, policies, and guardrails that will govern the use of AI throughout the organization. These principles may be high-level, but they are essential because they serve as a guide for developing and using AI appropriately. The most effective principles are linked to an organization’s mission and values, making it easier to determine which types of AI systems to adopt and which not to pursue.
The next step is to establish links with existing corporate governance structures, such as the risk committee. Too often, companies inadvertently create a phantom risk function; these linkages help prevent that outcome. They also ensure clear escalation paths and decision-making authority for resolving potential issues.
Other best practices include developing a framework for flagging inherently high-risk AI applications for further review, consulting voluntary guidelines for information on industry best practices, and monitoring high-profile litigation to get a sense of (and start preparing for) where the courts might land on legal issues that are still evolving, such as the impact of GenAI on intellectual property rights.
Building a comprehensive RAI program takes time, but savvy companies can speed up the process. It’s a path worth taking. With good governance, AI can generate value and growth without creating undue risk.