Researchers at Stanford University say AI ethics practitioners report a lack of institutional support at their companies.
Tech companies that promised to support the ethical development of artificial intelligence (AI) are failing to deliver on their commitments, with safety taking a back seat to performance metrics and product launches, according to a new report from researchers at Stanford University.
Despite publishing AI principles and employing social scientists and engineers to conduct research and develop technical solutions related to AI ethics, many private companies have yet to prioritize the adoption of ethical safeguards, Stanford’s Institute for Human-Centered Artificial Intelligence said in a report released Thursday.
“Companies often ‘talk’ about the ethics of AI, but rarely ‘take action’ by adequately resourcing and empowering teams working on responsible AI,” said researchers Sanna J. Ali, Angèle Christin, Andrew Smart and Riitta Katila in the report, Walking the Walk of AI Ethics in Tech Companies.
Drawing on the experiences of 25 “AI ethics practitioners,” the report says workers involved in promoting AI ethics complain of a lack of institutional support and of being isolated from other teams within large organizations, despite companies’ promises to the contrary.
Employees reported a culture of indifference or hostility, driven by product managers who view their work as detrimental to a company’s productivity, revenue or product release schedule, the report said.
“It was risky to be very vocal about further slowing down [the development of AI],” said one person interviewed for the report. “That wasn’t built into the process.”
The report does not name the companies in which the interviewed employees worked.
Governments and academics have expressed concerns about the speed of AI development, with ethical questions touching on everything from the use of private data to racial discrimination and copyright infringement.
These concerns have grown since OpenAI’s release of ChatGPT last year and the subsequent development of competing platforms such as Google’s Gemini.
Employees told Stanford researchers that ethical issues are often not considered until late in the development process, making adjustments to new applications or software difficult, and that ethics work is frequently disrupted by team reorganizations.
“Metrics regarding AI model engagement or performance are so high priority that ethics-related recommendations that could negatively affect these metrics require compelling quantitative evidence,” the report states.
“Yet quantitative measures of ethics or fairness are difficult to find and define given that companies’ existing data infrastructures are not suited to such measures.”