Artificial intelligence (AI) is not new, but the emergence of generative AI has opened new perspectives and ethical considerations for this technology. Clinicians, computer scientists, and ethicists are working at the University of Rochester to integrate trustworthy and ethical AI into medical diagnosis and treatment.
Caroline Easton, Ph.D., professor of psychiatry at the University of Rochester Medical Center (URMC), used AI to refine an app that uses avatar coaches to guide patients through cognitive behavioral therapy. Used as a complement to clinician-centered therapy, the app allows users to customize their avatar coaches to meet their specific needs.
AI tools can also serve as a second set of eyes for radiologists, but URMC’s chair of imaging sciences, Dr. Jennifer Harvey, says AI cannot replace radiologists.
“Radiologists are still much better at synthesizing findings that AI tools can’t do,” said Harvey, who is also the Dr. Stanley M. Rogoff and Dr. Raymond Gramiak Professor of Radiology. “For example, a chest CT scan might have one or two findings that AI flags, but the radiologist has to put all of the findings together to generate likely diagnoses.”
To use algorithms to detect and treat diseases, clinicians must have a high level of confidence in their accuracy. But generative AI can sometimes “get it wrong,” according to Michael Hasselberg, NP, MS, Ph.D., associate professor of psychiatry, clinical nursing, and data science, and the university’s first chief digital health officer.
AI is only as reliable as the data it is trained on. Nationally recognized AI ethics expert Jonathan Herington, Ph.D., assistant professor of philosophy and bioethics, warns that AI can perpetuate social and cultural biases. One way to address these biases is to be more thoughtful about the data used to train the system.
Another solution is to “always have a human in the loop” — whether that’s a radiologist synthesizing AI results on a CT scan, or an FDA regulator evaluating whether an AI tool is safe and effective.
Although FDA certification is not currently required for AI tools, Dr. Chris Kanan, associate professor of computer science, can attest to the benefits of this additional step. Kanan worked with Paige.AI to develop Paige Prostate, the first FDA-approved AI-assisted pathology tool. According to Kanan, FDA certification increases hospitals’ and clinics’ confidence in a product and the likelihood that a product will be covered by insurance.
Want to know more? Read the full article in the University News Center.