Be careful what you explain: Benefits and costs of explainable AI in a simulated medical task

Tobias Rieger, Dietrich Manzey, Benigna Meussling, Linda Onnasch, Eileen Roesler
{"title":"Be careful what you explain: Benefits and costs of explainable AI in a simulated medical task","authors":"Tobias Rieger ,&nbsp;Dietrich Manzey ,&nbsp;Benigna Meussling ,&nbsp;Linda Onnasch ,&nbsp;Eileen Roesler","doi":"10.1016/j.chbah.2023.100021","DOIUrl":null,"url":null,"abstract":"<div><p>We investigated the impact of explainability instructions with respect to system limitations on trust behavior and trust attitude when using an artificial intelligence (AI) support agent to perform a simulated medical task. In an online experiment (<em>N</em> = 128), participants performed a visual estimation task in a simulated medical setting (i.e., estimate the percentage of bacteria in a visual stimulus). All participants were supported by an AI that gave perfect recommendations for all but one color of bacteria (i.e., error-prone color with 50% reliability). We manipulated between-subjects whether participants knew about the error-prone color (XAI condition) or not (nonXAI condition). The analyses revealed that participants showed higher trust behavior (i.e., lower deviation from the AI recommendation) for the non-error-prone trials in the XAI condition. Moreover, participants showed lower trust behavior for the error-prone color in the XAI condition than in the nonXAI condition. However, this behavioral adaptation only applied to the subset of error-prone trials in which the AI gave correct recommendations, and not to the actual erroneous trials. Thus, designing explainable AI systems can also come with inadequate behavioral adaptations, as explainability was associated with benefits (i.e., more adequate behavior in non-error-prone trials), but also costs (stronger changes to the AI recommendations in correct error-prone trials).</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100021"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294988212300021X/pdfft?md5=221d729df96546eae8913e787fa04ac8&pid=1-s2.0-S294988212300021X-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S294988212300021X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

We investigated the impact of explainability instructions with respect to system limitations on trust behavior and trust attitude when using an artificial intelligence (AI) support agent to perform a simulated medical task. In an online experiment (N = 128), participants performed a visual estimation task in a simulated medical setting (i.e., estimate the percentage of bacteria in a visual stimulus). All participants were supported by an AI that gave perfect recommendations for all but one color of bacteria (i.e., error-prone color with 50% reliability). We manipulated between-subjects whether participants knew about the error-prone color (XAI condition) or not (nonXAI condition). The analyses revealed that participants showed higher trust behavior (i.e., lower deviation from the AI recommendation) for the non-error-prone trials in the XAI condition. Moreover, participants showed lower trust behavior for the error-prone color in the XAI condition than in the nonXAI condition. However, this behavioral adaptation only applied to the subset of error-prone trials in which the AI gave correct recommendations, and not to the actual erroneous trials. Thus, designing explainable AI systems can also come with inadequate behavioral adaptations, as explainability was associated with benefits (i.e., more adequate behavior in non-error-prone trials), but also costs (stronger changes to the AI recommendations in correct error-prone trials).
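The abstract operationalizes trust behavior as the deviation between a participant's estimate and the AI's recommendation, compared across the XAI and nonXAI conditions and across error-prone versus non-error-prone trials. The exact analysis pipeline is not given in this excerpt; the following is a minimal Python sketch of how such a deviation-based measure could be summarized, using hypothetical column names (estimate, ai_recommendation, condition, trial_type, ai_correct) rather than the authors' actual variables.

```python
import pandas as pd

# Hypothetical trial-level data; column names and values are illustrative only.
trials = pd.DataFrame({
    "condition":         ["XAI", "XAI", "nonXAI", "nonXAI"],   # between-subjects factor
    "trial_type":        ["non-error-prone", "error-prone",
                          "non-error-prone", "error-prone"],
    "ai_correct":        [True, True, True, False],            # was the AI recommendation correct?
    "estimate":          [42.0, 55.0, 47.0, 60.0],             # participant's estimated bacteria percentage
    "ai_recommendation": [40.0, 50.0, 40.0, 50.0],             # percentage recommended by the AI
})

# Trust behavior operationalized as absolute deviation from the AI recommendation:
# smaller deviation = higher behavioral trust.
trials["deviation"] = (trials["estimate"] - trials["ai_recommendation"]).abs()

# Mean deviation by condition, trial type, and AI correctness, mirroring the
# benefit/cost pattern described in the abstract.
summary = (
    trials
    .groupby(["condition", "trial_type", "ai_correct"])["deviation"]
    .mean()
    .rename("mean_abs_deviation")
)
print(summary)
```

Splitting the error-prone trials by whether the AI was actually correct is the key comparison here: the reported cost of explainability shows up only in the correct error-prone trials, not in the genuinely erroneous ones.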
