In a recent review article published in npj Digital Medicine, researchers investigated the ethical implications of deploying large language models (LLMs) in healthcare through a systematic review.
Their findings indicate that while LLMs offer significant benefits such as improved data analysis and decision support, persistent ethical concerns about fairness, bias, transparency, and confidentiality underscore the need for defined ethical guidelines and human oversight in their application.
Study: The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large-Scale Language Models (LLM)
Background
LLMs have gained widespread interest due to their advanced artificial intelligence (AI) capabilities, demonstrated vividly since OpenAI released ChatGPT in 2022.
This technology has rapidly spread across various sectors, including medicine and healthcare, showing promise for tasks such as clinical decision-making, diagnosis, and patient communication.
However, beyond their potential benefits, concerns have emerged about their ethical implications. Previous research has highlighted risks such as the dissemination of inaccurate medical information, privacy violations related to the handling of sensitive patient data, and the perpetuation of biases based on gender, culture, or race.
Despite these concerns, there is a notable lack of comprehensive studies that systematically address the ethical challenges associated with the integration of LLMs into healthcare. The existing literature focuses on specific cases rather than providing a holistic overview.
Methods
Addressing existing gaps in this area is essential as healthcare environments require rigorous ethical standards and regulations.
In this systematic review, the researchers mapped the ethical landscape surrounding the role of LLMs in healthcare, identifying potential benefits and harms to inform future discussions, policies, and guidelines aimed at governing the ethical use of LLMs.
The researchers developed an evaluation protocol covering practical applications and ethical considerations, which was registered in the International Prospective Register of Systematic Reviews (PROSPERO). No ethical approval was required.
They searched relevant publication databases and preprint servers to collect data, including preprints because of their prevalence in technology-related fields and the likelihood that relevant work had not yet been indexed in databases.
Inclusion criteria were based on intervention, setting, and outcomes, with no restriction on publication type but excluding work solely related to medical education or academic writing.
After an initial screening of titles and abstracts, data were extracted and coded using a structured form. Quality assessment was descriptive, using procedural quality criteria to distinguish peer-reviewed papers, and the results were drawn on critically for validity and completeness when writing the report.
Results
The study analyzed 53 articles to explore the ethical implications and applications of LLMs in healthcare. Four main themes emerged from the research: clinical applications, patient support applications, healthcare professional support, and public health perspectives.
In clinical applications, LLMs show potential to aid in initial diagnosis and triage of patients, using predictive analytics to identify health risks and recommend treatments.
However, there are concerns about their accuracy and about potential biases in their decision-making processes. These biases could lead to erroneous diagnoses or treatment recommendations, highlighting the need for careful monitoring by healthcare professionals.
Patient support applications focus on LLMs that help individuals access medical information, manage symptoms, and navigate healthcare systems.
Although LLMs can improve health literacy and communication across language barriers, data privacy and the reliability of medical advice generated by these models remain important ethical considerations.
To support healthcare professionals, LLMs are proposed as tools to automate administrative tasks, summarize patient interactions, and facilitate medical research.
While this automation can improve efficiency, concerns remain about its impact on professional skills, the integrity of research results, and the risk of bias in automated data analysis.
From a public health perspective, LLMs offer opportunities to monitor epidemics, broaden access to health information, and improve public health communication.
However, the study highlights risks such as the spread of misinformation and the concentration of AI power in the hands of a few companies, which could exacerbate health disparities and undermine public health efforts.
Overall, although LLMs represent promising advances in healthcare, their ethical deployment requires careful consideration of bias, privacy concerns, and the need for human oversight to mitigate potential harms and ensure equitable access and patient safety.
Conclusions
Researchers found that LLMs such as ChatGPT are widely explored in healthcare for their potential to improve efficiency and patient care by rapidly analyzing large datasets and providing personalized insights.
However, ethical concerns remain, including bias, transparency issues, and the generation of misleading information, known as hallucinations, which can have serious consequences in clinical settings.
The study is part of broader research into the ethics of AI, highlighting the complexities and risks involved in deploying AI in healthcare.
Strengths of this study include a comprehensive literature review and a structured categorization of LLM applications and ethical issues.
Limitations include the evolving nature of ethical review in this field, the reliance on preprint sources, and the predominance of North American and European perspectives.
Future research should focus on defining robust ethical guidelines, improving algorithm transparency, and ensuring equitable deployment of LLMs in global healthcare settings.