Every anecdote about built-in errors and biases raises the concern that expanding AI will diminish the human element in how society functions. So how do we make AI more human? And how do we ensure that the benefits of AI arrive as enhancements, not replacements?
With:
Carter Cousineau – Vice President of Responsible AI, Thomson Reuters.
Artificial intelligence is quickly emerging as a powerful research assistant for legal professionals. But as these AI tools enter the marketplace, how do we navigate the ethical and cultural dilemmas that come with them? From societal fears and anxieties about AI to the risks of irresponsible use, it’s essential to approach AI with a clear understanding of what can go wrong.
Carter Cousineau, vice president of responsible AI at Thomson Reuters, sees a number of major concerns surrounding AI today. These include fears of loss of privacy and digital manipulation, as well as apprehensions about fairness and bias. When it comes to integrating AI into knowledge resources, Cousineau sees governance work as essential to creating trustworthy AI-based systems.
“When you think about integrating AI into systems where knowledge and trusted content resources are fundamental, governance work is critical to ensuring AI-driven processes are accountable, transparent, and ethical,” she says. “Responsible AI frameworks and processes help mitigate and manage AI risks, while improving product integrity and reliability for customers and employees.”
In our conversation, we explore the importance of transparency and interpretability in building trustworthy AI systems, and look at how Thomson Reuters has handled its own AI integration work.
Additionally, we examine what law firms need to do to prepare their data – and their cultures – to take advantage of AI-powered research tools, and we offer guidance on developing processes that improve value and outcomes for clients.