The ‘A Roadmap for Ethical AI in Healthcare’ conference at the Science Gallery in London on November 14 saw wide-ranging discussions exploring how we can address the ethical dilemmas surrounding the implementation of artificial intelligence (AI) in clinical care.
It featured an exceptional line-up of expert speakers from government, academia, medicine, industry and the patient community. Dr Raquel Iniesta, Reader in Statistical Learning for Precision Medicine at the Institute of Psychiatry, Psychology and Neuroscience at King’s College London, hosted the day-long event.
Thank you to all our speakers and audience for a fascinating and enjoyable event. We have demonstrated that the debate on making AI ethical in healthcare must involve many sectors, and that valuable insights can arise from bringing together policymakers, academics, patients, developers, industry and clinicians.
Dr Raquel Iniesta
What factors do we need for ethical integration of AI into healthcare?
Sara Cerdas, former MEP, delivered the first opening speech, highlighting the potential of AI to revolutionize healthcare systems and the creation of the European Health Data Space (EHDS). She stressed the need to foster patient trust, alleviate technological stress among clinicians, and address critical issues such as bias, equity, data governance, and regulatory compliance. In addition to the EHDS, she discussed legislative efforts surrounding the European Union’s Artificial Intelligence Act.
She was followed by Dr Rupa Chilvers, Deputy Director for Life Sciences and Innovation at the Welsh Government, who used the example of a cancer diagnosis to illustrate the risks of creating a two-tier healthcare system. Her example raised potential issues around decision-making pathways and highlighted the need to test and evaluate service design from the outset in order to deliver an ethical, AI-enhanced healthcare system.
The first panel discussed important factors in ensuring ethical integration, covering topics such as accountability and cybersecurity.
Ms Cerdas and Dr Chilvers were joined by Sarah Markham (Patient Representative and Visiting Researcher, King’s IoPPN), Robin Carpenter (Head of AI Governance and Policy, Newton’s Tree), Beatrix Fletcher (Programme Manager for AI, Guy’s and St Thomas’ NHS Foundation Trust) and Basab Bhattacharya (Head of Clinical Informatics and Radiologist, Barking, Havering and Redbridge University Hospitals NHS Trust). Panelists were divided on whether there is currently enough regulation for AI to be implemented safely, raising questions about the training and support available to clinicians.
Human actions to align AI with ethical values in healthcare
Professor Payam Barnaghi, Professor of Artificial Intelligence Applied to Medicine at Imperial College London, highlighted in his afternoon talk that involving clinicians from the start of the modeling process will ensure that AI tools reflect the experiences of healthcare professionals and provide the greatest clinical utility.
Professor Susan Shelmerdine, consultant pediatric AI radiologist at Great Ormond Street Hospital, called for greater awareness and development of AI tools for children’s healthcare, while warning that her research has shown that children, as digital natives, are more wary than their parents about the use of AI in their medical care.
Most of the time we anthropomorphize AI and think that it can do the same things as humans, but it analyzes the world in a different way than we do.
Professor Susan Shelmerdine
The second roundtable focused on the human aspect of implementation. Professor Barnaghi and Professor Shelmerdine were joined by Dr Nenad Tomasev (Senior Research Scientist at Google DeepMind), Dr Ellie Asgari (Consultant Nephrologist, Guy’s and St Thomas’ Hospital) and two patient representatives from South London and Maudsley NHS Foundation Trust – Jennie Wilson Bradley and Emma Shellard.
Key themes emerged around the role of the patient and managing bias in data. Patients must be meaningfully involved throughout development, and their safety, care and well-being must always be the priority. While a global, human-in-the-loop approach is desirable, it faces challenges at scale.
Concluding the panel, several speakers agreed that feedback loops and continuous retraining of models will be necessary to successfully implement AI.
We must remember that AI is not just a thing, it is a spectrum, just like risk.
Dr Ellie Asgari
Roadmap for Ethical AI in Healthcare
Dr Iniesta leads the Fair Modeling Lab in the Department of Biostatistics and Health Informatics at King’s IoPPN. Her research, supported by the NIHR Maudsley BRC, delves into this field, and she has published work in the journal AI and Ethics describing five facts that can help ensure ethical AI in healthcare. You can also read her two-part blog on the subject: Part one and Part two.
The conference was part of an international partnership grant to study the human role in ensuring ethical implementation of AI in healthcare. Dr Iniesta is the principal investigator on this grant from Responsible AI UK. The event was also supported by UKRI – UK Research and Innovation.
Thanks also to the Disruptive & Emerging Technology Alliance (DETA), the Catalan Government and the Universitat Oberta de Catalunya (UOC), Barcelona, for their collaboration.
Recordings of the event will be shared on this web page soon; you can also contact maudsley.brc@kcl.ac.uk if you would like to be notified as soon as they become available.