As the global rollout of generative AI unfolds, companies are grappling with a host of ethical and governance concerns: Should my employees fear for their jobs? How can I ensure AI models are trained properly and transparently? What should I do about hallucinations and toxicity? While not a silver bullet, keeping humans in the AI loop is a good way to address many of these concerns.
It is remarkable how much progress has been made in generative AI since OpenAI shocked the world with the launch of ChatGPT just a year and a half ago. While other AI trends have come and gone, large language models (LLMs) have captured the attention of technologists, business leaders, and consumers alike.
Companies are collectively investing billions of dollars to get a head start on GenAI, which is expected to create trillions of dollars in new value within just a few years. And while there has been a slight pullback recently, many expect significant returns on investment (ROI): a new Google Cloud study, for example, found that 86% of GenAI adopters report annual revenue growth of 6% or more.
So what’s going on?
We’re at an interesting point in the GenAI revolution. The technology has proven its readiness, and early adopters are reporting some success. What seems to be holding back celebrations of GenAI’s grand success are the more complex issues around ethics, governance, security, privacy, and regulation.
In other words, we can implement GenAI. But the big question is whether we should. If the answer is yes, the next question is: how can we implement it while complying with ethics, governance, security, and privacy standards, not to mention new regulations such as the EU AI Act?
To get an overview of the issue, Datanami spoke with Carter Cousineau, vice president of data and model governance at Thomson Reuters. The Toronto, Ontario-based company has been in the information business for nearly a century, and last year its more than 25,000 employees helped it generate approximately $6.8 billion in revenue across four divisions, including legal, tax and accounting, government, and the Reuters news agency.
As the head of Thomson Reuters’ Responsible AI practice, Cousineau has considerable influence over how the publicly traded company implements AI. When she took over in 2021, her first goal was to implement a company-wide program to centralize and standardize how it builds responsible and ethical AI.
As Cousineau explains, she began by leading her team to establish a set of principles for AI and data. Once those principles were in place, the team developed a series of policies and procedures to guide how they would be applied in practice, to new AI and data systems as well as existing ones.
When ChatGPT landed globally in late November 2022, Thomson Reuters was ready.
“We had time to build this before generative AI took off,” she says. “But it allowed us to react more quickly because we had already done the groundwork and the program was working, so we didn’t have to start trying to build this. We just had to continually refine those checkpoints and those implementations, and we’re still doing that with generative AI.”
Building Responsible AI
Thomson Reuters is no stranger to AI. The company had been working with AI, machine learning, and natural language processing (NLP) for decades before Cousineau arrived, and it already had excellent practices in place, she says. What it lacked, however, was the centralization and standardization needed to take it to the next level.
Data impact assessments (DIAs) are a critical way for the company to stay on top of potential AI risks. Working with Thomson Reuters lawyers, Cousineau’s team conducts a comprehensive risk analysis of a proposed AI use case, from the type of data involved and the proposed algorithm, to the domain and of course the intended use.
“The overall landscape is different from jurisdiction to jurisdiction, from a legislative perspective. That’s why we work closely with the Office of General Counsel,” Cousineau says. “But to actually embed ethics into AI systems, our strength is working with teams to put the actual controls in place, before regulation requires us to do so.”
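Thomson Reuters has not published the contents of its DIA template, but as a rough illustration of the idea, a data impact assessment can be captured as a structured record covering the data, algorithm, domain, jurisdictions, and intended use, with a simple rule for when to escalate to legal review. All field names below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DataImpactAssessment:
    """Illustrative record of an AI use-case risk review (field names are hypothetical)."""
    use_case: str                  # what the model will be used for
    data_types: list[str]          # e.g. ["customer PII", "public case law"]
    algorithm: str                 # proposed model family
    domain: str                    # e.g. "legal research", "tax"
    jurisdictions: list[str]       # where the system will be deployed
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

    def requires_legal_review(self) -> bool:
        # Escalate to counsel when sensitive data or multiple jurisdictions are involved.
        return "customer PII" in self.data_types or len(self.jurisdictions) > 1
```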
Cousineau’s team also developed a handful of new internal tools to help the data and AI teams stay on track. For example, it built a centralized model repository that keeps a record of all the company’s AI models. Beyond improving the productivity of Thomson Reuters’ 4,300 data scientists and AI engineers, who gain an easier way to discover and reuse models, it also allows Cousineau’s team to layer governance on top. “It’s a double win,” she says.
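The article does not describe how the repository itself is built. The sketch below only illustrates the “double win”: a single registry entry carries both discovery metadata for data scientists and governance metadata, such as whether a DIA has been completed and what the human-oversight plan is, so that registration itself can enforce the checkpoints. All names are assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """One entry in a hypothetical centralized model repository."""
    name: str
    owner: str
    task: str                      # e.g. "contract clause classification"
    training_data_sources: list[str]
    dia_completed: bool            # governance checkpoint: data impact assessment done?
    human_oversight_plan: str      # the model-documentation field the article mentions
    last_reviewed: date

REGISTRY: dict[str, ModelRecord] = {}

def register(model: ModelRecord) -> None:
    # Refuse to register models that skip the governance checkpoints.
    if not model.dia_completed or not model.human_oversight_plan:
        raise ValueError(f"{model.name}: governance checkpoints incomplete")
    REGISTRY[model.name] = model
```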
Another important tool is the Responsible AI Hub, where the specific risks associated with an AI use case are surfaced and different teams can work together to mitigate them. Mitigations can take the form of a piece of code, a check, or even a new process, depending on the nature of the risk (privacy, copyright infringement, and so on).
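As a hedged example of what a mitigation “in the form of a piece of code” might look like, the snippet below maps risk categories to responses, pairing a naive pattern-based privacy scan with a manual process step for copyright. Thomson Reuters’ actual checks are not public; this is illustrative only.

```python
import re

# Naive patterns for illustration only; real PII detection is far more involved.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def privacy_check(text: str) -> list[str]:
    """Return the PII categories found, so a pipeline can block or redact the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

# A mitigation can also be a process step rather than code.
MITIGATIONS = {
    "privacy": privacy_check,              # automated check
    "copyright": "route to legal review",  # manual process
}
```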
But for many AI applications, one of the best ways to ensure responsible AI is simply to keep humans involved.
Humans in the Loop
Thomson Reuters has many effective processes in place to mitigate AI risks, even in niche environments, Cousineau says. But when it comes to keeping humans involved, the company advocates a multi-pronged approach that ensures human participation at the design, development and deployment stages, she says.
“One of the checkpoints we have in the model documentation is a description of the human oversight that developers and product owners would establish,” she says. “Once it’s deployed, there are a number of ways to review it.”
For example, humans are in the loop when it comes to guiding customers through the use of Thomson Reuters products, and the company has teams dedicated to user training, she says. It also places disclaimers in some AI products reminding users that the system should be used for research purposes only.
“Human involvement is a very important concept that we integrate into all stages of the production chain,” Cousineau explains. “And even once the deployment is complete, we use [human involvement] to perform measurements.”
Humans play a critical role in monitoring AI models and applications at Thomson Reuters, from tracking model drift to monitoring overall model performance, including precision, recall, and confidence scores. Subject matter experts and lawyers also review the output of its AI systems, she says.
“Using human reviewers is part of that system,” she explains. “That’s where human involvement in the loop is going to be critical for organizations, because you can get feedback from users to make sure the model is still working the way you intended. So humans are still actively involved in the loop.”
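To make that monitoring loop concrete, here is a minimal sketch, assuming scikit-learn and a batch of predictions that human reviewers have labeled, of how precision, recall, and a crude drift signal might be tracked, with low-confidence outputs routed back to subject matter experts.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

CONFIDENCE_THRESHOLD = 0.7  # illustrative cutoff for routing items to human review

def monitor_batch(y_true, y_pred, confidences, baseline_confidences):
    """Compute performance metrics on a batch that human reviewers have labeled."""
    metrics = {
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        # Crude drift signal: shift in mean confidence versus a historical baseline.
        "confidence_drift": abs(np.mean(confidences) - np.mean(baseline_confidences)),
    }
    # Indices of predictions that should go back to subject matter experts.
    needs_review = [i for i, c in enumerate(confidences) if c < CONFIDENCE_THRESHOLD]
    return metrics, needs_review
```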
The Engagement Factor
Human involvement doesn’t just improve AI systems through greater accuracy, fewer hallucinations, better recall, or fewer privacy violations. It does all of that, but there’s another important factor business owners need to keep in mind: it reminds workers that they are essential to the success of the business and that AI won’t replace them.
“That’s the interesting thing about bringing the human into the loop: the value of having that active human engagement and ultimately maintaining control and ownership of that system. That’s where a lot of the comfort lies,” Cousineau says.
Cousineau recalls recently attending a roundtable discussion on AI hosted by Snowflake, where she and executives from other companies discussed this very question. “No matter what industry, they’re all comfortable with having a human in the loop,” she says. “They don’t want a human out of the loop, and I don’t see why they would want to either.”
As companies plan for their AI future, business leaders will need to find a balance between humans and AI. It’s a balance they have had to strike with every major technological advance that has come before.
“Human intervention allows you to know what the system can and cannot do, and then you have to optimize that to your advantage,” Cousineau explains. “All technology has its limits. There are limits to doing things entirely manually, for sure. There’s only so much time in the day. So you have to find that balance and then be able to keep a human in the loop. That’s something that everyone is willing to do.”
Related articles:
Five questions to ask before the EU AI Act comes into force
What’s holding back GenAI’s ROI?
AI ethics are still in their infancy