People wonder how to make AI as safe as possible.
To this end, we have a set of words like “responsible AI”, “ethical AI” and “explainable AI”.
Importantly, these are not synonyms: each of these terms addresses its own facet of how to work carefully on technologies that have so much potential, for better or worse.
Some companies tend to think that by working slowly and steadily they will solve many of the biggest concerns. Here is part of an OpenAI statement that seems important to me, given the company's standing in the AI era:
“Before launching a new system, we conduct rigorous testing, engage external experts for feedback, work to improve the model’s behavior with techniques such as reinforcement learning with human feedback, and build broad safety and monitoring systems. …We believe that powerful AI systems should be subject to rigorous safety evaluations. Regulation is needed to ensure that such practices are adopted, and we are actively engaging with governments on the best form such regulation could take.”
Ok, so what else is there to add on a practical level?
I recently hosted Dr. Cansu Canca who spoke about some of the needs for safe AI in our time. Canca is the Director of Responsible AI Practice at Northeastern University and is also the founder of the AI Ethics Lab.
First, she talked about cars and how they have improved our lives, while also being a great source of danger. Millions of people die in car accidents, but at the same time it would be difficult to do without the progress and benefits that this transportation has brought us.
“Do we want the technology?” she asked, weighing the pros and cons and relating the question to AI. “Do we want technology to be part of our lives? The right question is not ‘should we have cars or not,’ but rather: ‘how can we ensure that cars are safer?’”
She cited safety features, crash testing and road infrastructure as examples of this kind of safety assurance.
“We have infrastructure just to make sure we can use the cars, but not die in large numbers,” she said.
Turning the analogy back to AI, Canca described the risks of harm it poses: research shows that women, minorities, and marginalized groups tend to receive fewer options or opportunities in systems driven by AI models.
The problem, she said, is that much of this harm and injustice is rooted in the training data, in the models we choose, and in the trade-offs we build into the AI system. Existing ethical problems are only amplified by the model.
“The strength of AI lies in its efficiency and systematized approach,” she said. “If we build unfair AI systems, we end up discriminating efficiently.”
Citing examples such as disparities in medical care and inequities in hiring outcomes, she said this type of injustice is “what can happen” but “not a necessary consequence” of the use of AI.
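To make the “discriminating efficiently” point concrete, here is a minimal, hypothetical sketch (not from Canca’s talk) of one way teams audit for this kind of harm: comparing the rate at which a screening model selects candidates from different demographic groups. The data, group labels, and the 0.8 rule of thumb below are illustrative assumptions, not figures from the article.

```python
# Illustrative sketch: a simple "disparate impact" check on model decisions.
# All data and group labels here are made up for demonstration purposes.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.
    A common rough rule of thumb flags ratios below 0.8 for review."""
    rates = selection_rates(decisions, groups)
    reference_rate = rates[reference_group]
    return {g: rate / reference_rate for g, rate in rates.items()}

# Toy example: a screening model that "efficiently" reproduces a skewed pattern.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
applicant_groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(disparate_impact_ratio(decisions, applicant_groups, reference_group="A"))
# {'A': 1.0, 'B': 0.25} -> group B is selected at 25% of group A's rate
```

A check like this is only one narrow signal, but it illustrates Canca’s broader point: if the harm sits in the data and the trade-offs, an efficient system will reproduce it at scale unless someone measures for it.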
Regarding the types of guardrails we need, Canca mentioned two main elements: infrastructure and processes.
“Every AI system should ideally go through a structured and accountable lifecycle and workflow,” she said, endorsing impact assessments, risk mitigation and “ethics by design” as tools. “(There is) a need to have a governance system in place in (any) organization.”
All of this, she suggested, will help, because, in her view, regulation alone is unlikely to be enough.
“There will be many questions that remain unanswered by law,” Canca said. “We need to think about incentives – and for that, investors are the key players.”
Investors, she noted, will seek to apply certain criteria to their investments, which she characterized thus:
“Do (companies) have a responsible AI governance structure in place? Do they have the expertise and competence, do they have an adequate workflow to integrate ethics? Can they guarantee that every AI system goes through this workflow?”
Canca is also working closely with the World Economic Forum, as a member of their Responsible AI Stewardship Working Group, to create a Responsible AI Handbook for Investors.
All of this is important, and the U.S. Artificial Intelligence Safety Institute at NIST, announced by U.S. Commerce Secretary Gina Raimondo, is bringing together more than 200 partners to, in the agency’s words, “develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world.” This type of thinking will likely be part of the conversation. The institute’s work and corresponding efforts, such as the White House executive order on AI, are big news in the tech world as we consider what will help keep our progress in AI on the rails. For now, what we are mostly hearing is: “we’re working on it.” In reality, though, everyone will need to work on it together, which is why I encourage so much dialogue around AI, in the classroom, at conferences, and elsewhere.