Should organizations create dedicated roles for responsible AI leaders? It's a decision that makes sense, but shouldn't that responsibility already fall to newly appointed Chief Information Officers or Chief AI Officers?
Regardless, the need for responsible AI is urgent. Misguided recommendations, hallucinations and privacy violations are rampant. The most pressing concern is the algorithmic bias that is pervasive in AI models across the world, according to Arnab Chakraborty, recently appointed head of AI at Accenture.
“As technology becomes smarter, threats will pose greater challenges,” he says. “Since AI learns from the datasets it is trained on, it is quite possible that these datasets contain unintentional, demographic-related biases, such as racial biases, gender biases or even income-related biases.”
We asked Chakraborty why a dedicated role like Chief AI Officer is needed today, and how it helps pave the way for similar roles in organizations grappling with responsible and ethical AI deployments.
There are distinct roles that a Chief AI Officer performs versus those of a responsible AI leader. “Responsible AI is a senior management agenda, and stakeholders need to understand its importance and use,” he says. “The Chief AI Officer manages AI strategy, R&D and the use of AI for themselves and their customers. The responsible AI leader ensures that AI deployment is fair, robust and explainable, and that it improves efficiency, does good for people and delivers value in a transparent and accountable manner.”
At the same time, the head of AI can help raise awareness and educate senior management colleagues on the urgency of responsible AI. “Right now, leaders across the board might take on the role of a responsible AI leader, but we aspire to a future where all leaders more broadly embody the essence of responsible AI,” says Chakraborty. A dedicated leader can “create a level of objectivity, oversight and focus to operationalize responsible AI.”
Deploying AI ethically means “going beyond metrics such as financial performance to take a holistic perspective, taking into account the causes and effects of AI on society and individuals,” he insists. “The first question leaders should ask themselves is whether they have established AI standards and principles for their organization, and whether these relate to their people and to their organization’s goals and values.”
A key part of this task is defining responsible AI. Chakraborty defines it as “taking intentional steps to design, deploy, and use AI to create value and build trust by protecting against the potential risks of AI.”
Above all, he adds, “any AI implementation must be human-centered from its design. Responsible AI must be fair, without unwanted biases or unintended negative consequences. It must be secure, enabled by compliance, data privacy and cybersecurity, but must not be all about compliance. It should be a complete C-suite strategy.”
The mandate of an AI leader is to understand “why AI does what it does and the ever-evolving capabilities of the technology,” he adds. “It is important to monitor and audit AI, especially when it comes to ethical judgments, because as it stands, ethical judgments in our society are built on a structure of values and principles. AI automates these judgments devoid of ethical and moral sensitivities. Addressing these unintended risks requires a responsible and robust AI framework, making it a key priority across all boardrooms.”
Inevitably, of course, legal departments will be involved in the AI discussion, if they aren’t already. “As global regulations come into force, in-house legal teams will become critical in leading the way in AI adoption and providing effective legal advice to improve efficiency, provide safeguards for the use of AI and protect organizations.”