In short
On January 18, 2024, the World Health Organization (WHO) released new guidance on the ethics and governance of artificial intelligence (AI) for health, focusing on large multimodal models (LMMs). The WHO guidance summarizes the broad applications of LMMs in the health sector and includes recommendations for governments, which have primary responsibility for setting standards for the development and deployment of LMMs, as well as for their integration and use for public health and medical purposes.
Background
LMMs are a rapidly growing generative AI technology with applications across the healthcare sector. Specifically, LMMs can accept one or more types of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm. LMMs are sometimes described as “general-purpose foundation models”, although it has not yet been proven that they can accomplish the wide range of tasks and goals claimed for them.
LMMs have been adopted faster than any consumer application in history. They are attracting interest because they facilitate human-machine interaction by mimicking human communication, generating responses to questions or data inputs that appear human-like and authoritative.
WHO has therefore published guidance to help Member States map the benefits and challenges associated with the use of LMMs for health. It sets out more than 40 recommendations aimed at various stakeholders, including governments, healthcare providers and technology companies, with the aim of ensuring the responsible use of LMMs to safeguard and improve public health.
Potential benefits and risks
The new WHO guidance identifies five broad applications of LMMs in the health sector, along with the risks each entails:
1. Diagnosis and clinical care
LMMs can be used to aid diagnosis in areas such as radiology, medical imaging, tuberculosis and oncology. There is also hope that clinicians could use AI to scan patient records during consultations to identify at-risk patients, to facilitate difficult treatment decisions, and to detect clinical errors.
The use of LMMs in diagnosis and clinical care carries risks, including inaccurate, incomplete, biased or false responses; poor data quality and data bias; automation bias; erosion of clinicians' skills; and challenges to informed consent.
2. Patient-centered applications
AI tools can be used to support self-care, where patients take responsibility for their own care, such as taking medications, improving their nutrition and diet, engaging in physical activity, treating wounds, or administering injections. This can be done through LMM-powered chatbots, health monitoring tools and risk-forecasting tools.
The use of LMMs in patient-centered applications carries risks, such as inaccurate, incomplete or false information; the risk of emotional manipulation by chatbots; data privacy issues; deterioration of interactions between clinicians, lay people and patients; and the delivery of health care outside the formal health system, which is ordinarily subject to greater regulatory scrutiny.
3. Office functions and administrative tasks
LMMs can be used to assist healthcare professionals with the clerical, administrative, and financial aspects of practicing medicine.
Risks arising from such use include the potential inaccuracy or inconsistency of LMMs, where a slight change to a prompt or question may generate a completely different response.
4. Medical and nursing training
LMMs are also expected to play a role in medical and nursing education, for example by creating dynamic texts that, unlike generic texts, are tailored to the specific needs and questions of each student.
The risk arising from such use is that healthcare professionals defer to the computer's output rather than exercising their own judgment or consulting a human peer.
5. Scientific and medical research and drug development
LMMs can potentially expand the ways in which AI can be used to support scientific and medical research and drug discovery. For example, they can generate text for use in a scientific article, summarize texts, edit texts, analyze scientific research, etc.
General concerns about the use of LMMs in scientific research include lack of accountability, bias toward high-income countries, and “hallucinating” by summarizing or citing academic articles that do not exist.
Key recommendations
WHO has included several recommendations for the development and deployment of LMMs, including the following:
- Governments should invest in public infrastructure; enact laws and regulations to ensure that LMMs and healthcare applications meet ethical obligations and human rights standards; task regulatory agencies with evaluating and approving LMMs; and implement mandatory post-release audits and impact assessments by independent third parties.
- LMM developers should involve diverse stakeholders from the early stages of development and design LMMs to perform well-defined tasks accurately and reliably, thereby improving health systems capabilities and promoting patient well-being.
WHO guidance on AI ethics and governance, focusing on LMMs, is available here.
* * * * *
© 2024 Baker & McKenzie. Wong & Leow. All rights reserved. Baker & McKenzie.Wong & Leow is incorporated as a limited company and is a member firm of Baker & McKenzie International, an international law firm with member law firms around the world. Consistent with common terminology used in professional services organizations, the reference to a “principal” means a person who is a partner, or equivalent, in such a law firm. Likewise, the reference to an “office” means an office of such a law firm. This may be referred to as “attorney advertising” requiring advance notice in some jurisdictions. Prior results do not guarantee a similar outcome.