The World Health Organization (WHO) has issued new guidance on the ethics and governance of large multimodal models (LMMs) – a fast-growing type of generative artificial intelligence (AI) technology with applications across healthcare.
The guidance offers more than 40 recommendations for governments, technology companies and healthcare providers to ensure the appropriate use of LMMs to promote and protect population health.
LMMs can accept one or more types of data input, such as text, videos and images, and can generate diverse outputs that are not limited to the type of data fed in. LMMs are unique in their mimicry of human communication and their ability to carry out tasks they were not explicitly programmed to perform. LMMs have been adopted faster than any consumer application in history, with several platforms – such as ChatGPT, Bard and Bert – entering the public consciousness in 2023.
“Generative AI technologies have the potential to improve healthcare, but only if those who develop, regulate and use these technologies identify and fully account for the associated risks,” said Dr Jeremy Farrar, WHO Chief Scientist. “We need transparent information and policies to manage the design, development and use of LMMs to achieve better health outcomes and overcome persisting health inequities.”
Potential benefits and risks
The new WHO guidance outlines five broad applications of LMMs for health:
- Diagnosis and clinical care, such as answering patients’ written questions;
- Patient-guided use, such as investigating symptoms and treatment options;
- Clerical and administrative tasks, such as documenting and summarizing patient visits in electronic health records;
- Medical and nursing education, including providing trainees with simulated patient encounters; and
- Scientific research and drug development, particularly to identify new compounds.
Even as LMMs begin to be used for specific health-related purposes, there are documented risks of their producing false, inaccurate, biased or incomplete statements, which could harm people who rely on such information to make health decisions. In addition, LMMs may be trained on data that are of poor quality or biased, whether by race, ethnicity, ancestry, sex, gender identity or age.
The guidance also details broader risks to health systems, such as the accessibility and affordability of the best-performing LMMs. LMMs can also encourage “automation bias” among healthcare professionals and patients, whereby errors that would otherwise have been identified are overlooked, or difficult choices are improperly delegated to an LMM. LMMs, like other forms of AI, are also vulnerable to cybersecurity risks that could endanger patient information or the trustworthiness of these algorithms and, more broadly, the provision of healthcare.
To create safe and effective LMMs, WHO underlines the need to engage diverse stakeholders – governments, technology companies, healthcare providers, patients and civil society – at all stages of the development and deployment of these technologies, including their oversight and regulation.
“Governments of all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies such as LMMs,” said Dr Alain Labrique, WHO Director for Digital Health and Innovation in the Science Division.
Key recommendations
The new WHO guidance includes recommendations for governments, which have the primary responsibility for setting standards for the development and deployment of LMMs, as well as for their integration and use for public health and medical purposes. For example, governments should:
- Invest in or provide public or not-for-profit infrastructure, including computing power and public datasets, accessible to developers in the public, private and not-for-profit sectors, that requires users to adhere to ethical principles and values in exchange for access.
- Use laws, policies and regulations to ensure that LMMs and applications used in healthcare and medicine, irrespective of the risks or benefits associated with the AI technology, meet ethical obligations and human rights standards that affect, for example, a person’s dignity, autonomy or privacy.
- Assign an existing or new regulatory agency to assess and approve LMMs and applications intended for use in healthcare or medicine, as resources permit.
- Introduce mandatory post-release auditing and impact assessments, including for data protection and human rights, by independent third parties when an LMM is deployed at scale. The audits and impact assessments should be published and should include outcomes and impacts disaggregated by user type, including, for example, by age, race or disability.
The guidance also includes the following key recommendations for LMM developers to ensure that:
- LMMs are designed not only by scientists and engineers. Potential users and all direct and indirect stakeholders – including medical providers, scientific researchers, healthcare professionals and patients – should be engaged from the early stages of AI development in a structured, inclusive and transparent manner, and should have the opportunity to raise ethical issues, voice concerns and provide input for the AI application under consideration.
- LMMs are designed to perform well-defined tasks with the accuracy and reliability needed to improve the capacity of health systems and advance patient interests. Developers should also be able to predict and understand potential secondary outcomes.
Editor’s note
The new document, Ethics and Governance of AI for Health: Guidance on Large Multimodal Models, builds on WHO guidance published in June 2021.