Part of a broader suite of EY tools, techniques and enablers designed to support the responsible development and use of AI, the Global Responsible AI Framework is a flexible set of guiding principles and practical actions.
EY’s multidisciplinary teams of digital ethicists, IT risk practitioners, data scientists and subject matter resources leveraged the Global Responsible AI Framework to assess the biopharmaceutical company’s Responsible AI principles, as well as how these have been deployed and understood across the business.
We overlaid the Global Responsible AI Framework onto the model the client had already created, interviewing key stakeholders and reviewing relevant documentation.
“We invested time in understanding the client’s environment, and our experience in AI governance also allowed us to ask the right questions at the right time,” says Catriona Campbell, EY UKI client technology and innovation leader.
We assessed how successful the company was in mitigating AI risks throughout the AI lifecycle, from problem identification to modeling, deployment and ongoing monitoring.
To determine whether the client had developed and implemented AI in accordance with its Responsible AI principles, we also assessed a sample of key AI projects, including prediction, adverse event tracking and early detection of diseases.
Our review found that the biopharmaceutical company did not always manage project-specific AI risks in accordance with its Responsible AI principles. “The EY audit highlighted a number of gaps in our approach, allowing us to set minimum requirements for business teams working with AI, which we are already working on,” says the biopharmaceutical company’s head of AI governance.