As the role of artificial intelligence tools in healthcare continues to grow, there are few ethical guidelines telling healthcare professionals how and when to inform patients that these tools are being used, or when patient consent is required. Without a standard approach, hospital systems and clinicians risk fostering distrust among patients and the public.
In a recently published report, a Heritage College of Osteopathic Medicine researcher addresses this ethical concern by proposing a framework to help providers navigate patient notification and informed consent practices when using AI.
“The goal of this work is to provide a practical guide for identifying when informed consent or notification is necessary, or when we really don’t need to alert anyone,” said Devora Shapiro, Ph.D., associate professor of medical ethics at Heritage College, Cleveland.
“We just want to make sure that facilities are actually taking the time to explain things that are relevant and necessary for patients to understand, so that patients can make decisions in their best interest…that was the motivation behind producing this article: offering guidance.”
Informed consent is the process by which a healthcare provider informs a patient about the risks, benefits, and alternatives of a given procedure or intervention, especially those that are complex or high risk. This process allows patients to make informed choices about their care, providing them with a sense of autonomy and establishing pathways of trust and communication between the patient and their care provider, Shapiro said.
According to Shapiro, patient consent is necessary when AI is used in high-risk procedures, but also in decisions where AI-assisted recommendations have a significant impact on patient outcomes or treatment progression. Currently, no AI tool completely replaces the role of a healthcare provider; they are primarily support resources that help accomplish simple tasks, like allocating hospital beds, or more analytical work, like interpreting radiology scans.
Shapiro and lead author Susannah Rose, Ph.D., associate professor of biomedical informatics at Vanderbilt University, propose five key criteria for determining when and how patients should be informed that AI is used in their health care.
They include the degree of independence the AI has in making decisions; the degree to which the AI model deviates from established medical practice; whether the AI interacts directly with patients; the potential risk introduced into patient care; and the practical challenges of implementing the notification and consent process.
Rose and Shapiro’s framework is published in CHEST and is primarily aimed at hospital administrators, for use in developing facility-wide AI notification practices.
The proposed framework also categorizes AI technologies into three levels and assigns a scoring system to determine what degree of notification or informed consent is necessary.
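The article does not reproduce the rubric itself, but a criteria-based scoring system of this kind could be operationalized roughly as in the sketch below. This is a hypothetical illustration only: the 0-to-2 rating scale, the score thresholds, and the tier labels are assumptions made here, not values taken from Rose and Shapiro's published framework; only the five criterion names come from the article.

```python
# Hypothetical sketch of a tiered consent rubric. The rating scale,
# thresholds, and tier labels below are illustrative assumptions, not
# the values from Rose and Shapiro's published framework.
from dataclasses import dataclass

# The five criteria named in the article, each rated here on an
# assumed 0 (low concern) to 2 (high concern) scale.
CRITERIA = (
    "ai_autonomy",                 # degree of independence in decision-making
    "deviation_from_practice",     # departure from established medical practice
    "direct_patient_interaction",  # whether the AI interacts with patients
    "risk_to_patient_care",        # potential risk introduced into care
    "implementation_burden",       # practicality of notification/consent
)

@dataclass
class Assessment:
    scores: dict  # criterion name -> 0, 1, or 2

    def total(self) -> int:
        # Sum the five criterion ratings into a single score (0-10).
        return sum(self.scores[c] for c in CRITERIA)

    def tier(self) -> str:
        # Map the summed score to one of three levels; the cutoffs
        # here are invented for illustration.
        t = self.total()
        if t <= 3:
            return "Level 1: no notification required"
        if t <= 6:
            return "Level 2: patient notification recommended"
        return "Level 3: informed consent required"

# Example: a hypothetical AI tool that drafts radiology reads
# which a clinician reviews and signs off on.
example = Assessment(scores={
    "ai_autonomy": 1,
    "deviation_from_practice": 1,
    "direct_patient_interaction": 0,
    "risk_to_patient_care": 2,
    "implementation_burden": 1,
})
print(example.total(), "->", example.tier())  # 5 -> Level 2
```

The point of a structure like this is that it turns a case-by-case ethical judgment into a repeatable checklist an administrator can apply consistently across a facility, which is the audience the authors say the framework is aimed at.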
“We’re not pretending we’ve answered all the questions, but we think it’s a really solid starting point,” Shapiro said. “We also encourage people to continue to have conversations about this and continue to address more concerns. That would be a wonderful thing.”
More information:
Susannah L. Rose et al, An Ethically Supported Framework for Determining Patient Notification and Informed Consent Practices When Using Artificial Intelligence in Healthcare, CHEST (2024). DOI: 10.1016/j.chest.2024.04.014
Provided by Ohio University
Citation: Exploring AI ethics in healthcare (October 8, 2024) retrieved October 9, 2024 from https://medicalxpress.com/news/2024-10-exploring-ethics-ai-health.html
This document is subject to copyright. Except for fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for informational purposes only.