Cansu Canca, Director of Responsible AI Practice and associate research professor, has been named one of Mozilla’s Rise25 honorees for her work “fostering an AI environment built on equality and empowerment.”
Cansu Canca is full of questions — but that’s her job.
The Director of Responsible AI Practice at the Institute for Experiential AI and associate research professor in the Department of Philosophy and Religion at Northeastern University, Canca has made a name for herself as an ethicist tackling the use of artificial intelligence.
As the founder of the AI Ethics Lab, Canca leads a team of “philosophers and computer scientists, whose goal is to help the industry – that means companies as well as startups, or organizations like law enforcement or hospitals – develop and deploy AI systems in a responsible and ethical way,” she explains.
Canca has also worked with organizations such as the World Economic Forum and Interpol.
But what does “ethics” mean when it comes to AI? That’s precisely where the question lies, according to Canca.
“A lot of companies come to us and say, ‘Here’s a model we’re considering using. Is it fair?'”
But, she notes, there are “different definitions of justice, different definitions of distributive justice, different definitions of fairness. They conflict with each other. It’s a big theoretical question. How do we define fairness?”
“Saying ‘we optimized this for fairness’ means absolutely nothing until you have a working, proper definition” — which varies from project to project, she also notes.
Canca was named one of Mozilla’s Rise25 honorees, which recognize individuals “leading the next wave of AI – using philanthropy, collective power and open source principles to ensure the future of AI is responsible, trustworthy, inclusive and centered on human dignity,” the organization wrote in its announcement.
The awards go to five people in each of five categories; Canca was recognized in the “Agents of Change” category, among others whose “work fosters an AI environment of equality and empowerment,” the organization wrote.
“The biggest risk with AI is that it’s systematic and efficient. So the risk is,” Canca says, “if you build a bad system, it’s systematically and efficiently bad.”
A poorly designed or implemented system, she continues, could be “systematically and efficiently discriminatory, for example.”
“But the flip side is that if you create a good system that is better than today’s, it will be systematically and efficiently good, and it will make things systematically and efficiently better than they are today.”
The arrival of artificial intelligence has brought enormous instability to business and industry, Canca says, but it also brings enormous opportunities for change.
“If we haven’t already, we need to admit that the world is a hugely unethical place and that we are creating a huge amount of suffering, unnecessary suffering,” Canca says. “Any change is an opportunity to solve some of these problems, and any change carries the risk of creating more.”
“This is the perfect time, a good opportunity, a good reason to look at what we are doing and ask ourselves whether it is meaningful.”
“We ask ourselves these questions all the time when we make decisions about public health, environmental ethics or business ethics,” Canca notes. “It’s just that it’s a different application area now,” with its own unique dilemmas and opportunities.
But even though “it’s all fundamentally a philosophical question,” Canca says, the key to fair choices must be found at the beginning of the design process, as part of a “systematic practice.”
Fairness, impartiality, justice — and what exactly is meant by those terms — must be part of “the design of the AI system and product,” she says.
She calls it an “ethical by design approach, where you don’t just make a value judgment, you turn it into a design action.”
Recently, Canca has begun thinking about how to create market incentives for responsible AI.
“When you think about responsible AI implementation, incentives are very important,” she says. “We usually think about incentives from a policy perspective, but you know, the market is faster and more agile.”
Responsible investing would give companies “another reason,” she says, not just to follow policy but to ask, “How can we attract the best investment?” If a company has better practices, investors may take notice.
The artificial intelligence revolution isn’t slowing down, and neither is Canca. She recently collaborated on the Responsible AI Playbook for Investors, published by the World Economic Forum, and, in the public sector, has “created a law enforcement toolkit and trained police officers around the world” in the responsible use of AI.
“Organizations like law enforcement are in desperate need of useful systems that would help them with their extremely difficult jobs,” she says. “But they are also one of the most critical, the ones that could cause the most damage if used incorrectly.”
Originally interested in health and medical ethics, Canca focused first on patients, doctors and insurance programs. But when AI systems came to health care, “they’re making predictions, they’re allocating resources,” she says. “And those are very meaningful decisions, but no one really knows how the system works.”
At first, she assumed other ethicists were already working on the issue, but when she looked closely, she found no groups approaching it from an ethical, which is to say philosophical, perspective, she says.
“There are lawyers and artificial intelligence scientists working on this. But the fundamental question we ask ourselves, namely, ‘What is the right thing to do?’, is an ethical question.”
“We learn from all these other fields,” she continues, “but the question itself is a question of philosophy.”
In the year before the COVID-19 pandemic, Canca lectured around the world on this neglected topic. “You can’t answer AI ethics questions in practice if you don’t have people who are experts in AI and if you don’t have people who are experts in ethics.”
“I really, really pushed hard.”
All these efforts have not gone unnoticed. The Mozilla Rise25 Awards ceremony will be held on August 13 in Dublin.