The introduction of artificial intelligence (AI)-based healthcare technologies could be “dangerous” for people in low-income countries, the World Health Organization (WHO) has warned.
The organization, which today released a report outlining new guidelines on large multimodal models (LMMs), says it is essential that the uses of this developing technology are not determined solely by technology companies and those in wealthy countries. If models are not trained on data from people living in disadvantaged areas, those populations could be poorly served by the algorithms, the agency says.
“The very last thing we want to see happen as part of this technological leap forward is the spread or amplification of inequality and prejudice in the social fabric of countries around the world,” Alain Labrique, the WHO’s director for digital health and innovation, said today during a press briefing.
Overtaken by events
The WHO published its first guidelines on AI in healthcare in 2021. But the organization was prompted to update them less than three years later by the rise and availability of LMMs. Also called generative AI, these models, including the one that powers the popular chatbot ChatGPT, process and produce text, images and videos.
LMMs have been “adopted faster than any mainstream application in history,” says the WHO. Healthcare is a popular target. Models can produce clinical notes, fill out forms, and help doctors diagnose and treat patients. Several companies and healthcare providers are developing specific AI tools.
The WHO says its guidelines, issued as advice to member states, aim to ensure that the explosive growth of LMMs promotes and protects public health rather than undermining it. In the worst-case scenario, the organization warns of a global “race to the bottom,” in which companies rush to be the first to launch applications, even if those applications don’t work and are unsafe. It even raises the prospect of “model collapse,” a cycle of misinformation in which LMMs trained on inaccurate or false information pollute public sources of information, such as the Internet.
“Generative AI technologies have the potential to improve healthcare, but only if those who develop, regulate and use these technologies fully identify and consider the associated risks,” said Jeremy Farrar, chief scientist at the WHO.
The operation of these powerful tools should not be left to technology companies alone, the agency warns. “Governments of all countries must jointly lead efforts to effectively regulate the development and use of AI technologies,” Labrique said. Civil-society groups and people receiving healthcare should also contribute to all stages of LMM development and deployment, including oversight and regulation.
Crowding out academia
In its report, the WHO warns of the potential for “industrial capture” of LMM development, given the high cost of training, deploying and maintaining these programs. There is already compelling evidence that the biggest companies are crowding out universities and governments in AI research, the report says, with “unprecedented” numbers of doctoral students and faculty members leaving academia for industry.
The guidelines recommend that independent third parties conduct and publish mandatory post-release audits of widely deployed LMMs. Such audits should assess how well a tool protects both data and human rights, the WHO adds.
It also suggests that software developers and programmers who work on LMMs that could be used in healthcare or scientific research should receive the same kind of ethics training as doctors. And it says governments could require developers to register algorithms early in their development, to encourage the publication of negative results and to prevent publication bias and hype.