Most surveyed oncologists reported that patients should consent to the use of AI tools in cancer treatment decisions and that AI models should be explainable by oncologists, according to the results of a cross-sectional survey published in JAMA Network Open.
The vast majority of respondents (n = 204) said they would benefit from dedicated AI training (93.1%), although 75% did not know of appropriate resources for such training. Only 13.8% and 7.8% said AI prognostic and clinical decision models, respectively, could be used clinically if only researchers could explain them. Additionally, 81.3% and 84.8%, respectively, said these models should be explainable by oncologists, and 13.8% and 23.0%, respectively, said they should be explainable by patients. When clinicians were presented with a scenario in which an FDA-approved AI decision-making model selected a different regimen than the oncologist had originally planned to recommend, the most common response was to present both options and let the patient decide (36.8%).
Additionally, most respondents indicated that patients should consent to the use of AI tools in treatment decisions (81.4%), and 56.4% said consent was necessary for diagnostic decisions. Most respondents (90.7%) said that AI developers should be responsible for legal problems caused by the technology, and a majority responded that oncologists should protect patients from biased AI (76.5%). Only 27.9% of respondents indicated they were confident in their ability to identify how representative the data used in an AI model were, including 66.0% of those who considered it the clinician’s responsibility to protect patients from biased AI tools.
“American oncologists have reported that AI must be explainable by oncologists, but not necessarily by patients, and that patients must consent to the use of AI for cancer treatment decisions,” the study authors wrote. “Less than half of oncologists viewed medico-legal issues related to AI use as the responsibility of physicians, and although most reported feeling responsible for protecting patients from biased AI, few reported confidence in their ability to do so.”
Investigators conducted a population-based survey of practicing oncologists in the United States from November 15, 2022, to July 31, 2023. The study authors created a survey instrument comprising 24 questions in domains such as AI familiarity, predictions, explainability, bias, deference, and responsibilities. Clinicians received paper surveys by mail, followed by reminder letters that included an electronic survey option and telephone calls for nonrespondents.
The aim of the survey was “to assess oncologists’ views on the ethical areas of using AI in clinical care, including familiarity, predictions, explainability, bias, deference and responsibilities.” The main outcome was respondents’ opinions on the need for patients to give informed consent for the use of an AI model when making treatment decisions.
Additional results indicated that the survey response rate was 52.7% (n = 204/387). Respondents were from 37 states and were mostly male (63.7%), non-Hispanic White (62.7%), had no prior training in AI (53.4%), were familiar with at least 2 AI models (69.1%), and were medical oncologists (61.8%). Among the 202 clinicians who indicated their practice setting, 60 practiced in an academic setting and 142 practiced in another setting.
Additional survey data showed that respondents from academic settings were more likely to choose the AI recommendation over their initial recommendation (OR, 2.99; 95% CI, 1.39-6.47; P = 0.004) or to defer the decision to the patient (OR, 2.56; 95% CI, 1.19-5.51; P = 0.02) when presented with the conflicting recommendation scenario. Additionally, results from a multivariable logistic regression model revealed that clinicians outside of academia (OR, 1.72; 95% CI, 0.77-3.82; P = 0.19) and those without prior training in AI (OR, 2.62; 95% CI, 1.15-1.15; P = 0.02) were more likely than their counterparts to prefer that patients consent to the use of an AI treatment decision model. Compared with those in other settings, clinicians in academic practices were also more likely to report that they could explain AI models (OR, 2.08; 95% CI, 1.06-4.12) and to predict that AI would improve the management of adverse effects (OR, 1.93; 95% CI, 1.01-3.73) and end-of-life decision making (OR, 2.06; 95% CI, 1.11-3.84).
“Ethical AI in cancer care requires consideration of stakeholder positions,” the study authors wrote in conclusion. “This cross-sectional study highlights potential issues related to responsibility and deference to AI as well as associations with practice setting. Our findings suggest that implementation of AI in oncology must include rigorous evaluations of its effects on care decisions, as well as decisional responsibility when problems related to AI use arise.”
Reference
Hantel A, Walsh TP, Marron JM, et al. Perspectives of oncologists on the ethical implications of using artificial intelligence for cancer care. JAMA Network Open. 2024;7(3):e244077. doi:10.1001/jamanetworkopen.2024.4077