Part of a broader suite of EY tools and techniques designed to support the responsible development and use of AI, the Global Responsible AI Framework is a flexible set of guiding principles and practical actions.
EY's multidisciplinary teams of digital ethicists, cyber risk practitioners, data scientists and other specialists used the Global Responsible AI Framework to assess the biopharmaceutical company's Responsible AI principles and how they were deployed and understood across the business.
We overlaid the Global Responsible AI Framework on top of the model the client had already created, interviewing key stakeholders and reviewing relevant documentation.
“We invested time in understanding the client’s environment, and our experience in AI governance also enabled us to ask the right questions at the right time,” says Catriona Campbell, EY UKI’s technology and client innovation leader.
We assessed how well the company had managed to mitigate AI risks throughout the AI lifecycle, from problem identification to modeling, deployment and continuous monitoring.
To determine whether the client had developed and implemented AI in accordance with its Responsible AI Principles, we also assessed a sample of key AI projects, including forecasting, adverse event tracking, and early disease detection.
Our review found that the biopharmaceutical company was not always managing the AI-related risks specific to its projects in line with its Responsible AI Principles. "The EY audit highlighted a number of gaps in our approach, which allowed us to define minimum requirements for business teams working with AI — something we are already working on," says the biopharmaceutical company's AI governance lead.