The Impact of Expectation Management and Model Transparency on Radiologists’ Trust and Utilization of AI Recommendations for Lung Nodule Assessment on Computed Tomography: Simulated Use Study

JMIR AI · Pub Date: 2024-03-13 · DOI: 10.2196/52211
Lotte J S Ewals, Lynn J J Heesterbeek, Bin Yu, Kasper van der Wulp, Dimitrios Mavroeidis, M. Funk, Chris C P Snijders, Igor Jacobs, Joost Nederend, J. Pluyter
{"title":"The Impact of Expectation Management and Model Transparency on Radiologists’ Trust and Utilization of AI Recommendations for Lung Nodule Assessment on Computed Tomography: Simulated Use Study","authors":"Lotte J S Ewals, Lynn J J Heesterbeek, Bin Yu, Kasper van der Wulp, Dimitrios Mavroeidis, M. Funk, Chris C P Snijders, Igor Jacobs, Joost Nederend, J. Pluyter","doi":"10.2196/52211","DOIUrl":null,"url":null,"abstract":"\n \n Many promising artificial intelligence (AI) and computer-aided detection and diagnosis systems have been developed, but few have been successfully integrated into clinical practice. This is partially owing to a lack of user-centered design of AI-based computer-aided detection or diagnosis (AI-CAD) systems.\n \n \n \n We aimed to assess the impact of different onboarding tutorials and levels of AI model explainability on radiologists’ trust in AI and the use of AI recommendations in lung nodule assessment on computed tomography (CT) scans.\n \n \n \n In total, 20 radiologists from 7 Dutch medical centers performed lung nodule assessment on CT scans under different conditions in a simulated use study as part of a 2×2 repeated-measures quasi-experimental design. Two types of AI onboarding tutorials (reflective vs informative) and 2 levels of AI output (black box vs explainable) were designed. The radiologists first received an onboarding tutorial that was either informative or reflective. Subsequently, each radiologist assessed 7 CT scans, first without AI recommendations. AI recommendations were shown to the radiologist, and they could adjust their initial assessment. Half of the participants received the recommendations via black box AI output and half received explainable AI output. Mental model and psychological trust were measured before onboarding, after onboarding, and after assessing the 7 CT scans. We recorded whether radiologists changed their assessment on found nodules, malignancy prediction, and follow-up advice for each CT assessment. In addition, we analyzed whether radiologists’ trust in their assessments had changed based on the AI recommendations.\n \n \n \n Both variations of onboarding tutorials resulted in a significantly improved mental model of the AI-CAD system (informative P=.01 and reflective P=.01). After using AI-CAD, psychological trust significantly decreased for the group with explainable AI output (P=.02). On the basis of the AI recommendations, radiologists changed the number of reported nodules in 27 of 140 assessments, malignancy prediction in 32 of 140 assessments, and follow-up advice in 12 of 140 assessments. The changes were mostly an increased number of reported nodules, a higher estimated probability of malignancy, and earlier follow-up. The radiologists’ confidence in their found nodules changed in 82 of 140 assessments, in their estimated probability of malignancy in 50 of 140 assessments, and in their follow-up advice in 28 of 140 assessments. These changes were predominantly increases in confidence. The number of changed assessments and radiologists’ confidence did not significantly differ between the groups that received different onboarding tutorials and AI outputs.\n \n \n \n Onboarding tutorials help radiologists gain a better understanding of AI-CAD and facilitate the formation of a correct mental model. If AI explanations do not consistently substantiate the probability of malignancy across patient cases, radiologists’ trust in the AI-CAD system can be impaired. 
Radiologists’ confidence in their assessments was improved by using the AI recommendations.\n","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"2013 6","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR AI","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/52211","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Many promising artificial intelligence (AI) and computer-aided detection and diagnosis systems have been developed, but few have been successfully integrated into clinical practice. This is partly due to a lack of user-centered design of AI-based computer-aided detection or diagnosis (AI-CAD) systems.

Objective: We aimed to assess the impact of different onboarding tutorials and levels of AI model explainability on radiologists’ trust in AI and their use of AI recommendations in lung nodule assessment on computed tomography (CT) scans.

Methods: In total, 20 radiologists from 7 Dutch medical centers performed lung nodule assessments on CT scans under different conditions in a simulated use study, following a 2×2 repeated-measures quasi-experimental design. Two types of AI onboarding tutorial (reflective vs informative) and 2 levels of AI output (black box vs explainable) were designed. Each radiologist first received an onboarding tutorial that was either informative or reflective and then assessed 7 CT scans, initially without AI recommendations. The AI recommendations were then shown, and the radiologists could adjust their initial assessments. Half of the participants received the recommendations via black box AI output and half via explainable AI output. Mental model and psychological trust were measured before onboarding, after onboarding, and after assessment of the 7 CT scans. For each CT assessment, we recorded whether radiologists changed their assessment of found nodules, their malignancy prediction, or their follow-up advice. In addition, we analyzed whether radiologists’ trust in their assessments changed on the basis of the AI recommendations.

Results: Both variations of the onboarding tutorial resulted in a significantly improved mental model of the AI-CAD system (informative: P=.01; reflective: P=.01). After using AI-CAD, psychological trust decreased significantly for the group with explainable AI output (P=.02). On the basis of the AI recommendations, radiologists changed the number of reported nodules in 27 of 140 assessments, the malignancy prediction in 32 of 140, and the follow-up advice in 12 of 140. The changes were mostly an increased number of reported nodules, a higher estimated probability of malignancy, and earlier follow-up. The radiologists’ confidence in their found nodules changed in 82 of 140 assessments, in their estimated probability of malignancy in 50 of 140, and in their follow-up advice in 28 of 140; these changes were predominantly increases in confidence. Neither the number of changed assessments nor radiologists’ confidence differed significantly between the groups that received different onboarding tutorials and AI outputs.

Conclusions: Onboarding tutorials help radiologists gain a better understanding of AI-CAD and facilitate the formation of a correct mental model. If AI explanations do not consistently substantiate the probability of malignancy across patient cases, radiologists’ trust in the AI-CAD system can be impaired. Radiologists’ confidence in their assessments improved when using the AI recommendations.
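The denominators in the Results follow from the design: 20 radiologists each assessed 7 CT scans, giving 140 AI-assisted assessments. The short Python sketch below tabulates the reported counts as proportions; the dictionary labels and variable names are ours for illustration and do not reflect the study’s actual analysis code.

```python
# Sanity check of the counts reported in the Results section.
# The counts come from the abstract; the tabulation itself is illustrative.

n_radiologists = 20        # participants from 7 Dutch medical centers
scans_per_radiologist = 7  # CT scans assessed by each radiologist
n_assessments = n_radiologists * scans_per_radiologist  # 20 * 7 = 140

# Assessments changed after seeing the AI recommendations.
changed_assessments = {
    "reported nodules": 27,
    "malignancy prediction": 32,
    "follow-up advice": 12,
}

# Assessments in which the radiologist's confidence changed
# (predominantly increases, per the abstract).
changed_confidence = {
    "reported nodules": 82,
    "malignancy prediction": 50,
    "follow-up advice": 28,
}

for outcome, count in changed_assessments.items():
    print(f"assessment changed for {outcome}: "
          f"{count}/{n_assessments} ({count / n_assessments:.1%})")

for outcome, count in changed_confidence.items():
    print(f"confidence changed for {outcome}: "
          f"{count}/{n_assessments} ({count / n_assessments:.1%})")
```

Expressed as proportions, the pattern is easy to see: confidence changed far more often (eg, 82/140, about 59%, for found nodules) than the assessments themselves (eg, 27/140, about 19%, for reported nodules).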