When Google CEO Sundar Pichai emailed employees the company's priorities for 2024 this month, developing AI responsibly was at the top of the list. Some employees now wonder whether Google will live up to that goal. The small team that served as the main internal watchdog for AI ethics has lost its leader and is being restructured, according to four people familiar with the changes. A Google spokesperson said its work will continue in a stronger form going forward, but declined to provide details.
Google's responsible innovation team, known as RESIN, was located within the Office of Compliance and Integrity, part of the company's Global Affairs division. It reviewed internal projects for compatibility with Google's AI principles, which define rules for the development and use of the technology, a crucial role as the company races to compete in generative AI. RESIN conducted more than 500 reviews last year, including for the Bard chatbot, according to an annual AI principles report Google published this month.
RESIN's role appears uncertain since its founder and leader, Jen Gennai, director of responsible innovation, suddenly left the position this month, say the sources, who spoke on the condition of anonymity to discuss personnel changes. Gennai's LinkedIn profile lists her as an AI ethics and compliance adviser at Google as of this month, a title that, sources say, suggests she will soon leave, based on how past departures from the company have played out.
Google split Gennai's team of about 30 people in two, according to the sources. Company spokesperson Brian Gabriel said 10 percent of RESIN's staff will remain in place, while 90 percent of the team has been transferred to Trust and Safety, which fights abuse of Google services and also resides in the Global Affairs division. No one appears to have been fired, the sources said. The rationale for the changes, and how responsibilities will be divided, could not be determined. Some sources say they have not been told how reviews against the AI principles will be handled in the future.
Gabriel declined to say how RESIN's work on AI projects will be handled going forward, but described the shakeup as a sign of Google's commitment to responsible AI development. The move "placed this particular responsible AI team at the center of our well-established trust and safety efforts, which are integrated into our product reviews and plans," he said. "This will help us strengthen and expand our responsible innovation work across the company."
Google is known for frequently shuffling its ranks, but RESIN had remained largely intact since the group's inception. Although other teams, and hundreds of other people, work on AI oversight at Google, RESIN was the largest such team, with a mission spanning all of Google's core services.
In addition to losing its leader, Gennai, RESIN this month also saw the departure of one of its most influential members, Sara Tangdall, a senior AI principles ethics specialist. She is now a responsible AI product director at Salesforce, according to her LinkedIn profile. Tangdall declined to comment, and Gennai did not respond to requests for comment.
AI Uprising
Google created its responsible innovation team in 2018, shortly after AI experts and others at the company publicly protested a Pentagon contract called Project Maven, which used Google algorithms to analyze drone surveillance footage. RESIN became the main steward of a set of AI principles introduced after those protests, which pledge that Google will use AI to benefit people and never to make weapons or violate human rights. Gennai helped write the principles.
Google teams could submit projects for review by RESIN, which provided feedback and sometimes blocked ideas it considered to violate the AI principles. The group stopped the release of AI image generators and speech synthesis algorithms that could be used to create deepfakes.
Seeking an AI principles review is not mandatory for most teams, unlike privacy risk assessments, which every project must undergo. But Gennai has said that early reviews of AI systems pay off by averting costly ethical lapses. "If implemented correctly, responsible AI improves products by discovering and working to reduce the harm that unfair biases can cause, improving transparency, and increasing security," she said at a Google conference in 2022.