Exploring the Effect of Explanation Content and Format on User Comprehension and Trust
Antonio Rago, Bence Palfi, Purin Sukpanichnant, Hannibal Nabli, Kavyesh Vivek, Olga Kostopoulou, James Kinross, Francesca Toni
arXiv:2408.17401, 30 August 2024
{"title":"探索说明内容和形式对用户理解和信任的影响","authors":"Antonio Rago, Bence Palfi, Purin Sukpanichnant, Hannibal Nabli, Kavyesh Vivek, Olga Kostopoulou, James Kinross, Francesca Toni","doi":"arxiv-2408.17401","DOIUrl":null,"url":null,"abstract":"In recent years, various methods have been introduced for explaining the\noutputs of \"black-box\" AI models. However, it is not well understood whether\nusers actually comprehend and trust these explanations. In this paper, we focus\non explanations for a regression tool for assessing cancer risk and examine the\neffect of the explanations' content and format on the user-centric metrics of\ncomprehension and trust. Regarding content, we experiment with two explanation\nmethods: the popular SHAP, based on game-theoretic notions and thus potentially\ncomplex for everyday users to comprehend, and occlusion-1, based on feature\nocclusion which may be more comprehensible. Regarding format, we present SHAP\nexplanations as charts (SC), as is conventional, and occlusion-1 explanations\nas charts (OC) as well as text (OT), to which their simpler nature also lends\nitself. The experiments amount to user studies questioning participants, with\ntwo different levels of expertise (the general population and those with some\nmedical training), on their subjective and objective comprehension of and trust\nin explanations for the outputs of the regression tool. In both studies we\nfound a clear preference in terms of subjective comprehension and trust for\nocclusion-1 over SHAP explanations in general, when comparing based on content.\nHowever, direct comparisons of explanations when controlling for format only\nrevealed evidence for OT over SC explanations in most cases, suggesting that\nthe dominance of occlusion-1 over SHAP explanations may be driven by a\npreference for text over charts as explanations. Finally, we found no evidence\nof a difference between the explanation types in terms of objective\ncomprehension. Thus overall, the choice of the content and format of\nexplanations needs careful attention, since in some contexts format, rather\nthan content, may play the critical role in improving user experience.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring the Effect of Explanation Content and Format on User Comprehension and Trust\",\"authors\":\"Antonio Rago, Bence Palfi, Purin Sukpanichnant, Hannibal Nabli, Kavyesh Vivek, Olga Kostopoulou, James Kinross, Francesca Toni\",\"doi\":\"arxiv-2408.17401\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, various methods have been introduced for explaining the\\noutputs of \\\"black-box\\\" AI models. However, it is not well understood whether\\nusers actually comprehend and trust these explanations. In this paper, we focus\\non explanations for a regression tool for assessing cancer risk and examine the\\neffect of the explanations' content and format on the user-centric metrics of\\ncomprehension and trust. Regarding content, we experiment with two explanation\\nmethods: the popular SHAP, based on game-theoretic notions and thus potentially\\ncomplex for everyday users to comprehend, and occlusion-1, based on feature\\nocclusion which may be more comprehensible. 
Regarding format, we present SHAP\\nexplanations as charts (SC), as is conventional, and occlusion-1 explanations\\nas charts (OC) as well as text (OT), to which their simpler nature also lends\\nitself. The experiments amount to user studies questioning participants, with\\ntwo different levels of expertise (the general population and those with some\\nmedical training), on their subjective and objective comprehension of and trust\\nin explanations for the outputs of the regression tool. In both studies we\\nfound a clear preference in terms of subjective comprehension and trust for\\nocclusion-1 over SHAP explanations in general, when comparing based on content.\\nHowever, direct comparisons of explanations when controlling for format only\\nrevealed evidence for OT over SC explanations in most cases, suggesting that\\nthe dominance of occlusion-1 over SHAP explanations may be driven by a\\npreference for text over charts as explanations. Finally, we found no evidence\\nof a difference between the explanation types in terms of objective\\ncomprehension. Thus overall, the choice of the content and format of\\nexplanations needs careful attention, since in some contexts format, rather\\nthan content, may play the critical role in improving user experience.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.17401\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.17401","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Exploring the Effect of Explanation Content and Format on User Comprehension and Trust
In recent years, various methods have been introduced for explaining the outputs of "black-box" AI models. However, it is not well understood whether users actually comprehend and trust these explanations. In this paper, we focus on explanations for a regression tool that assesses cancer risk, and we examine the effect of the explanations' content and format on the user-centric metrics of comprehension and trust. Regarding content, we experiment with two explanation methods: the popular SHAP, which is based on game-theoretic notions and thus potentially complex for everyday users to comprehend, and occlusion-1, which is based on feature occlusion and may be more comprehensible.
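The abstract gives no implementation details, but the difference in content between the two methods can be sketched. Below is a minimal, hypothetical Python illustration, assuming a toy scikit-learn regressor and the shap library; since the occlusion mechanism is not specified here, occluding a feature is approximated by replacing it with its training-set mean.

```python
# Hypothetical sketch: SHAP vs. occlusion-1 attributions for one prediction
# of a toy regressor (a stand-in for the paper's cancer-risk tool).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                                  # toy features
y = X @ np.array([0.5, -0.3, 0.8, 0.1]) + rng.normal(scale=0.1, size=200)
model = RandomForestRegressor(random_state=0).fit(X, y)
x = X[:1]                                                      # instance to explain

# SHAP: game-theoretic attributions, averaging each feature's contribution
# over coalitions of the remaining features.
explainer = shap.Explainer(model.predict, X[:100])
shap_values = explainer(x).values[0]

# occlusion-1: the change in the model's output when a single feature is
# occluded (here, an assumption: replaced by its training-set mean).
baseline = X.mean(axis=0)
occlusion_values = np.empty(X.shape[1])
for j in range(X.shape[1]):
    x_occluded = x.copy()
    x_occluded[0, j] = baseline[j]
    occlusion_values[j] = model.predict(x)[0] - model.predict(x_occluded)[0]

print("SHAP:       ", np.round(shap_values, 3))
print("occlusion-1:", np.round(occlusion_values, 3))
```

Both methods yield one attribution per feature, but SHAP averages over feature coalitions while occlusion-1 asks a single, simpler counterfactual question per feature, which is what may make it easier to comprehend.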
Regarding format, we present SHAP explanations as charts (SC), as is conventional, and occlusion-1 explanations both as charts (OC) and as text (OT), a format to which their simpler nature lends itself. The experiments take the form of user studies questioning participants at two different levels of expertise (the general population and those with some medical training) on their subjective and objective comprehension of, and trust in, explanations for the outputs of the regression tool.
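Returning to format: the following sketch shows how the same occlusion-1 attributions might be rendered as text (in the spirit of OT) versus as a chart (in the spirit of OC). The feature names, values, and wording are invented for illustration and are not taken from the study's materials.

```python
# Hypothetical rendering of occlusion-1 attributions in the two formats the
# study compares: text (OT) and a chart (OC). Names, values, and phrasing
# are invented for illustration.
import matplotlib.pyplot as plt

feature_names = ["age", "BMI", "smoking", "family history"]   # hypothetical
occlusion_values = [0.12, -0.05, 0.31, 0.02]                  # hypothetical

def occlusion_to_text(names, values):
    """OT-style: one plain-language sentence per attribution, largest first."""
    lines = []
    for name, v in sorted(zip(names, values), key=lambda p: -abs(p[1])):
        direction = "raises" if v > 0 else "lowers"
        lines.append(f"'{name}' {direction} the predicted risk by {abs(v):.2f}.")
    return "\n".join(lines)

print(occlusion_to_text(feature_names, occlusion_values))

# OC-style: the same attributions as a horizontal bar chart.
plt.barh(feature_names, occlusion_values)
plt.xlabel("change in predicted risk when the feature is occluded")
plt.tight_layout()
plt.show()
```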
In both studies, when comparing based on content, we found a clear overall preference for occlusion-1 over SHAP explanations in terms of subjective comprehension and trust.
However, direct comparisons of explanations, when controlling for format, only revealed evidence for OT over SC explanations in most cases, suggesting that the dominance of occlusion-1 over SHAP explanations may be driven by a preference for text over charts as explanations. Finally, we found no evidence of a difference between the explanation types in terms of objective comprehension. Thus, overall, the choice of the content and format of explanations needs careful attention, since in some contexts format, rather than content, may play the critical role in improving user experience.