Examining Physicians’ Explanatory Reasoning in Re-Diagnosis Scenarios for Improving AI Diagnostic Systems

Journal of Cognitive Engineering and Decision Making (IF 2.2, Q3: Engineering, Industrial) · Pub Date: 2022-04-21 · DOI: 10.1177/15553434221085114
Lamia Alam, Shane T. Mueller
Volume: 16(1) · Pages: 63–78 · Citations: 3

Abstract

AI systems are increasingly being developed to provide the first point of contact for patients. These systems are typically focused on question-answering and integrating chat systems with diagnostic algorithms, but are likely to suffer from many of the same deficiencies in explanation that have plagued medical diagnostic systems since the 1970s (Shortliffe, 1979). To provide better guidance about how such systems should approach explanations, we report on an interview study in which we identified explanations that physicians used in the context of re-diagnosis or a change in diagnosis. Seven current and former physicians with a variety of specialties and experience were recruited to take part in the interviews. Several high-level observations were made by reviewing the interview notes. Nine broad categories of explanation emerged from the thematic analysis of the explanation contents. We also present these in a diagnosis meta-timeline that encapsulates many of the commonalities we saw across diagnoses during the interviews. Based on the results, we provided some design recommendations to consider for developing diagnostic AI systems. Altogether, this study suggests explanation strategies, approaches, and methods that might be used by medical diagnostic AI systems to improve user trust and satisfaction with these systems.