VIENNA — At the European Respiratory Society (ERS) Congress 2024, experts discussed the benefits and risks of artificial intelligence (AI) in medicine and explored its ethical implications and practical challenges.
With more than 600 AI-powered medical devices registered with the U.S. Food and Drug Administration since 2020, AI is rapidly making inroads into healthcare systems. But like any other medical device, AI tools must be carefully evaluated and must adhere to strict regulations.
Joshua Hatherley, PhD, a postdoctoral researcher at the School of Philosophy and History of Ideas at Aarhus University in Denmark, said traditional bioethical principles—autonomy, beneficence, nonmaleficence, and justice—remain a crucial framework for assessing the ethics of using AI tools in medicine. However, he said the emerging fifth principle of “explainability” has gained attention because of the unique characteristics of AI systems.
“Everyone is excited about AI right now, but there are still many open questions about how much we can trust it and to what extent we can use it,” Ana Catalina Hernandez Padilla, a clinical researcher at the University of Limoges in France, told Medscape Medical News.
Joseph Alderman, MBChB, a clinical researcher in AI and digital health at the Institute of Inflammation and Ageing at the University of Birmingham, UK, said it is an exciting time to be working in AI and health, but he believes clinicians should “be part of the story” and advocate for safe, effective, and equitable AI.
The pros
Alderman said AI had enormous potential to improve healthcare and the patient experience.
One exciting area of application for AI is the informed consent process. Conversational AI models, such as large language models, can give patients an open-ended platform to discuss risks, benefits, and recommendations, potentially improving patient understanding and engagement. AI systems can also predict the preferences of noncommunicative patients by analyzing their social media activity and medical data, which can improve proxy decision-making and help ensure that treatment matches patient preferences, Hatherley explained.
Another important benefit of AI is its ability to improve patient outcomes through better resource allocation. For example, AI can help optimize hospital bed allocation, leading to more efficient use of resources and better patient health outcomes.
AI systems can reduce medical errors and improve diagnoses or treatment plans through large-scale data analysis, enabling faster and more accurate decision-making. They can handle administrative tasks, reduce clinician burnout, and allow healthcare professionals to focus more on patient care.
AI also promises to advance health equity by improving access to quality care in underserved areas. In rural hospitals or developing countries, AI can help fill gaps in clinical expertise, potentially leveling the playing field in access to health care.
The cons
Despite its potential, AI in medicine presents several risks that require careful ethical consideration. One of the main concerns is the possibility of bias built into AI systems.
For example, advice from an AI agent may prioritize certain outcomes, such as survival, based on general norms rather than patient-specific values, which may not align with patients’ preferences for quality of life over longevity. “This can interfere with patients’ autonomous decisions,” Hatherley said.
AI systems also have limited generalizability. Models trained on a specific patient population may perform poorly when applied to different groups due to changes in demographic or clinical characteristics. This can result in less accurate or inappropriate recommendations in real-world situations. “These technologies work on the very narrow population on which the tool was developed, but they don’t necessarily work in the real world,” Alderman said.
Another major risk is algorithmic bias, which can exacerbate health disparities. AI models trained on biased datasets can perpetuate or exacerbate existing inequities in health care delivery, leading to suboptimal care for marginalized populations. “We have evidence that algorithms directly discriminate against people with certain characteristics,” Alderman said.
The black box of AI
AI systems, particularly those that use deep learning, often operate as “black boxes,” meaning their internal decision-making processes are opaque and difficult to interpret. Hatherley said this lack of transparency raises significant concerns about trust and accountability in clinical decision-making.
While explainable AI methods have been developed to provide insight into how these systems generate their recommendations, these explanations often fail to fully capture the reasoning process. Hatherley explained that this is like using a pharmaceutical drug without a clear understanding of the mechanisms by which it works.
This opacity in AI decision-making can breed distrust among doctors and patients, limiting its effective use in healthcare. “We don’t really know how to interpret the information it provides,” Hernandez Padilla said.
She said that while younger clinicians are more likely to experiment with AI tools, older practitioners still prefer to trust their own clinical judgment, looking at the patient as a whole and observing how the disease evolves. “They’re not just checking boxes. They’re interpreting all of these variables together to make a medical decision,” she said.
“I’m very optimistic about the future of AI,” Hatherley concluded. “There are still many challenges to overcome, but ultimately it’s not enough to talk about how AI needs to be adapted to humans. We also need to talk about how humans need to adapt to AI.”
Hatherley, Alderman, and Hernandez Padilla reported no relevant financial relationships.
Manuela Callari is a freelance science journalist specializing in human and planetary health. Her articles have appeared in The Medical Republic, Rare Disease Advisor, The Guardian, MIT Technology Review, and others.