The FDA’s Center for Biologics Evaluation and Research (CBER), Center for Drug Evaluation and Research (CDER), Center for Devices and Radiological Health (CDRH), and Office of Combination Products (OCP) released a joint paper detailing how the centers will collaborate to protect public health and promote responsible innovation in the use of artificial intelligence (AI). The paper outlines 4 focus areas that address the use of AI across the medical product life cycle.
The 4 focus areas are: fostering collaboration to safeguard public health; advancing the development of regulatory approaches that support innovation; promoting the development of standards, guidelines, best practices, and tools for the medical product life cycle; and supporting research related to evaluating and monitoring AI performance.
In pursuit of the primary goal of protecting public health, the centers will collaborate with developers, patient groups, academia, global regulators, and others to establish patient-centered regulatory approaches that advance health equity. As part of this goal, the agencies will solicit feedback on the transparency, explainability, governance, bias, cybersecurity, and quality assurance of AI-enabled medical products, and will develop educational initiatives promoting the safe and responsible use of AI in these products. They will also work with international partners to create standards, guidelines, and best practices for the consistent use and evaluation of AI tools across the medical landscape.
In the second focus area, the centers will develop policies that provide regulatory predictability and clarity for the use of AI. This includes monitoring trends and emerging issues to detect knowledge gaps and opportunities across the AI product life cycle; developing methodologies to evaluate algorithms, identify and mitigate bias, and ensure that AI models remain robust as clinical inputs and conditions change; and building on existing initiatives for the assessment and regulation of AI, according to the paper.
In addition, the centers will publish guidance on the use of AI in medical products, including, but not limited to, final guidance on marketing submission recommendations for predetermined change control plans for AI-enabled device software functions; draft guidance on life cycle management considerations and premarket submission recommendations for AI-enabled device software functions; and draft guidance on considerations for using AI to support regulatory decision-making for drugs and biologics, the paper said.
The centers will also refine and expand considerations for the safe and ethical use of AI, including promoting transparency, addressing safety and cybersecurity issues, and identifying best practices for long-term, real-world, and global performance monitoring. Best practices will also include documenting and ensuring that the data used to train models are appropriate for the intended patient population. AI tools will be monitored on an ongoing basis to ensure that standards are met and that performance and reliability are maintained.
In the final focus area, the centers aim to support projects that identify points at which bias may be introduced into the AI development life cycle and determine how best to address it, according to the paper. They will also support projects that account for health inequities associated with AI in order to promote equity and ensure the representativeness of training data. Finally, the centers will support the ongoing monitoring of AI tools, the paper states.
In conclusion, the paper states that the agencies will adapt their regulatory approaches for the use of AI in medical products to ensure the safety of patients and health care professionals.
Reference
FDA. Artificial intelligence and medical products: how CBER, CDER, CDRH, and OCP are working together. March 2024. Accessed March 15, 2024. https://www.fda.gov/media/177030/download
(This article was originally published by our sister publication, Pharmacy Times.)