Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review

Victoria Tucci, J. Saary, Thomas E. Doyle
Journal: Journal of medical artificial intelligence
DOI: 10.21037/jmai-21-25
Publication date: 2021
Citations: 10

Abstract

Objective: We performed a comprehensive review of the literature to better understand the trust dynamics between medical artificial intelligence (AI) and healthcare expert end-users. We explored the factors that influence trust in these technologies and how they compare to established concepts of trust in the engineering discipline. By identifying the qualitatively and quantitatively assessed factors that influence trust in medical AI, we gain insight into how autonomous systems can be optimized during the development phase to improve decision-making support and clinician-machine teaming. This facilitates an enhanced understanding of the qualities that healthcare professional users seek in AI to consider it trustworthy. We also highlight key considerations for promoting ongoing improvement of trust in autonomous medical systems to support the adoption of medical technologies into practice.

Background: The introduction of AI-based decision support systems presents challenges and barriers to their adoption and implementation in clinical practice.

Methods: We searched databases including Ovid MEDLINE, Ovid EMBASE, Clarivate Web of Science, and Google Scholar, as well as gray literature, for publications from 2000 to July 15, 2021, that reported features of AI-based diagnostic and clinical decision support systems that contribute to enhanced end-user trust. Papers discussing implications and applications of medical AI in clinical practice were also recorded. Results were based on the number of papers that discussed each trust concept, either quantitatively or qualitatively, using frequency of concept commentary as a proxy for the importance of a respective concept.

Conclusions: Explainability, transparency, interpretability, usability, and education are among the key identified factors thought to influence healthcare professionals' trust in medical AI and to enhance clinician-machine teaming in critical decision-making healthcare environments. We also identified the need to better evaluate and incorporate other critical factors to promote trust by consulting medical professionals when developing AI systems for clinical decision-making and diagnostic support.