Exploring the Effect of Explanation Content and Format on User Comprehension and Trust

Antonio Rago, Bence Palfi, Purin Sukpanichnant, Hannibal Nabli, Kavyesh Vivek, Olga Kostopoulou, James Kinross, Francesca Toni
arXiv:2408.17401 · arXiv - CS - Artificial Intelligence · Published 2024-08-30 · Citation count: 0

Abstract

In recent years, various methods have been introduced for explaining the outputs of "black-box" AI models. However, it is not well understood whether users actually comprehend and trust these explanations. In this paper, we focus on explanations for a regression tool for assessing cancer risk and examine the effect of the explanations' content and format on the user-centric metrics of comprehension and trust. Regarding content, we experiment with two explanation methods: the popular SHAP, based on game-theoretic notions and thus potentially complex for everyday users to comprehend, and occlusion-1, based on feature occlusion which may be more comprehensible. Regarding format, we present SHAP explanations as charts (SC), as is conventional, and occlusion-1 explanations as charts (OC) as well as text (OT), to which their simpler nature also lends itself. The experiments amount to user studies questioning participants, with two different levels of expertise (the general population and those with some medical training), on their subjective and objective comprehension of and trust in explanations for the outputs of the regression tool. In both studies we found a clear preference in terms of subjective comprehension and trust for occlusion-1 over SHAP explanations in general, when comparing based on content. However, direct comparisons of explanations when controlling for format only revealed evidence for OT over SC explanations in most cases, suggesting that the dominance of occlusion-1 over SHAP explanations may be driven by a preference for text over charts as explanations. Finally, we found no evidence of a difference between the explanation types in terms of objective comprehension. Thus overall, the choice of the content and format of explanations needs careful attention, since in some contexts format, rather than content, may play the critical role in improving user experience.
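The occlusion-1 method described above attributes a model's output to individual features by "occluding" each feature in turn (replacing it with a baseline value) and recording how much the prediction changes. The following is a minimal illustrative sketch, not the authors' implementation: the function name, the toy linear risk model, and the choice of a zero baseline are all assumptions made for the example.

```python
def occlusion_1(predict, x, baseline):
    """Per-feature occlusion-1 attributions for a single input x.

    predict  -- callable mapping a list of feature values to one score
    x        -- list of feature values to explain
    baseline -- list of "occluded" values (e.g. per-feature means)
    """
    original = predict(x)
    attributions = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline[i]  # occlude exactly one feature
        # Positive attribution: feature i pushed the score up.
        attributions.append(original - predict(occluded))
    return attributions

# Toy (hypothetical) linear risk score over three normalised features.
model = lambda f: 0.5 * f[0] + 0.3 * f[1] + 0.2 * f[2]
attrs = occlusion_1(model, [1.0, 0.8, 0.5], [0.0, 0.0, 0.0])
print(attrs)
```

For a linear model with a zero baseline this recovers each term's contribution directly, which is what makes occlusion-1 straightforward to verbalise as text (the OT format): each attribution maps to one sentence of the form "feature i increased the predicted risk by v". SHAP attributions, by contrast, average a feature's marginal contribution over many feature coalitions, which is harder to state in a single sentence per feature.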