Categorical and Continuous Features in Counterfactual Explanations of AI Systems

Greta Warren, R. Byrne, Mark T. Keane
{"title":"人工智能系统反事实解释中的分类和连续特征","authors":"Greta Warren, R. Byrne, Markt. Keane","doi":"10.1145/3581641.3584090","DOIUrl":null,"url":null,"abstract":"Recently, eXplainable AI (XAI) research has focused on the use of counterfactual explanations to address interpretability, algorithmic recourse, and bias in AI system decision-making. The proponents of these algorithms claim they meet users’ requirements for counterfactual explanations. For instance, many claim that the output of their algorithms work as explanations because they prioritise \"plausible\", \"actionable\" or \"causally important\" features in their generated counterfactuals. However, very few of these claims have been tested in controlled psychological studies, and we know very little about which aspects of counterfactual explanations help users to understand AI system decisions. Furthermore, we do not know whether counterfactual explanations are an advance on more traditional causal explanations that have a much longer history in AI (in explaining expert systems and decision trees). Accordingly, we carried out two user studies to (i) test a fundamental distinction in feature-types, between categorical and continuous features, and (ii) compare the relative effectiveness of counterfactual and causal explanations. The studies used a simulated, automated decision-making app that determined safe driving limits after drinking alcohol, based on predicted blood alcohol content, and user responses were measured objectively (users’ predictive accuracy) and subjectively (users’ satisfaction and trust judgments). Study 1 (N=127) showed that users understand explanations referring to categorical features more readily than those referring to continuous features. It also discovered a dissociation between objective and subjective measures: counterfactual explanations elicited higher accuracy of predictions than no-explanation control descriptions but no higher accuracy than causal explanations, yet counterfactual explanations elicited greater satisfaction and trust judgments than causal explanations. Study 2 (N=211) found that users were more accurate for categorically-transformed features compared to continuous ones, and also replicated the results of Study 1. The findings delineate important boundary conditions for current and future counterfactual explanation methods in XAI.","PeriodicalId":118159,"journal":{"name":"Proceedings of the 28th International Conference on Intelligent User Interfaces","volume":"58 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Categorical and Continuous Features in Counterfactual Explanations of AI Systems\",\"authors\":\"Greta Warren, R. Byrne, Markt. Keane\",\"doi\":\"10.1145/3581641.3584090\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recently, eXplainable AI (XAI) research has focused on the use of counterfactual explanations to address interpretability, algorithmic recourse, and bias in AI system decision-making. The proponents of these algorithms claim they meet users’ requirements for counterfactual explanations. For instance, many claim that the output of their algorithms work as explanations because they prioritise \\\"plausible\\\", \\\"actionable\\\" or \\\"causally important\\\" features in their generated counterfactuals. 
However, very few of these claims have been tested in controlled psychological studies, and we know very little about which aspects of counterfactual explanations help users to understand AI system decisions. Furthermore, we do not know whether counterfactual explanations are an advance on more traditional causal explanations that have a much longer history in AI (in explaining expert systems and decision trees). Accordingly, we carried out two user studies to (i) test a fundamental distinction in feature-types, between categorical and continuous features, and (ii) compare the relative effectiveness of counterfactual and causal explanations. The studies used a simulated, automated decision-making app that determined safe driving limits after drinking alcohol, based on predicted blood alcohol content, and user responses were measured objectively (users’ predictive accuracy) and subjectively (users’ satisfaction and trust judgments). Study 1 (N=127) showed that users understand explanations referring to categorical features more readily than those referring to continuous features. It also discovered a dissociation between objective and subjective measures: counterfactual explanations elicited higher accuracy of predictions than no-explanation control descriptions but no higher accuracy than causal explanations, yet counterfactual explanations elicited greater satisfaction and trust judgments than causal explanations. Study 2 (N=211) found that users were more accurate for categorically-transformed features compared to continuous ones, and also replicated the results of Study 1. The findings delineate important boundary conditions for current and future counterfactual explanation methods in XAI.\",\"PeriodicalId\":118159,\"journal\":{\"name\":\"Proceedings of the 28th International Conference on Intelligent User Interfaces\",\"volume\":\"58 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 28th International Conference on Intelligent User Interfaces\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3581641.3584090\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 28th International Conference on Intelligent User Interfaces","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3581641.3584090","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

Recently, eXplainable AI (XAI) research has focused on the use of counterfactual explanations to address interpretability, algorithmic recourse, and bias in AI system decision-making. The proponents of these algorithms claim they meet users’ requirements for counterfactual explanations. For instance, many claim that the output of their algorithms works as explanations because they prioritise "plausible", "actionable" or "causally important" features in their generated counterfactuals. However, very few of these claims have been tested in controlled psychological studies, and we know very little about which aspects of counterfactual explanations help users to understand AI system decisions. Furthermore, we do not know whether counterfactual explanations are an advance on more traditional causal explanations that have a much longer history in AI (in explaining expert systems and decision trees). Accordingly, we carried out two user studies to (i) test a fundamental distinction in feature-types, between categorical and continuous features, and (ii) compare the relative effectiveness of counterfactual and causal explanations. The studies used a simulated, automated decision-making app that determined safe driving limits after drinking alcohol, based on predicted blood alcohol content, and user responses were measured objectively (users’ predictive accuracy) and subjectively (users’ satisfaction and trust judgments). Study 1 (N=127) showed that users understand explanations referring to categorical features more readily than those referring to continuous features. It also discovered a dissociation between objective and subjective measures: counterfactual explanations elicited higher accuracy of predictions than no-explanation control descriptions but no higher accuracy than causal explanations, yet counterfactual explanations elicited greater satisfaction and trust judgments than causal explanations. Study 2 (N=211) found that users were more accurate for categorically-transformed features compared to continuous ones, and also replicated the results of Study 1. The findings delineate important boundary conditions for current and future counterfactual explanation methods in XAI.
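
To make the contrast between the two explanation styles concrete, below is a minimal, hypothetical Python sketch of a toy blood-alcohol-content (BAC) style decision that produces both a causal explanation (report the factors behind the outcome) and a counterfactual explanation (the smallest change to a feature that would flip the outcome). The Scenario fields, the LEGAL_LIMIT threshold, and the predict_bac formula are illustrative assumptions for this sketch, not the authors' actual app or model.

from dataclasses import dataclass


@dataclass
class Scenario:
    units_drunk: float   # continuous feature
    weight_kg: float     # continuous feature
    stomach: str         # categorical feature: "empty" or "full"


LEGAL_LIMIT = 0.05  # illustrative threshold, not the study's actual value


def predict_bac(s: Scenario) -> float:
    """Toy stand-in for the app's predictive model: estimated BAC."""
    absorption = 1.0 if s.stomach == "empty" else 0.7
    return round(absorption * s.units_drunk * 2.0 / s.weight_kg, 4)


def decision(s: Scenario) -> str:
    return "over the limit" if predict_bac(s) > LEGAL_LIMIT else "under the limit"


def causal_explanation(s: Scenario) -> str:
    """Causal style: state the factors that produced the outcome."""
    return (f"You are {decision(s)} because {s.units_drunk} units on an "
            f"{s.stomach} stomach at {s.weight_kg} kg gives an estimated "
            f"BAC of {predict_bac(s)}.")


def counterfactual_explanation(s: Scenario) -> str:
    """Counterfactual style: smallest change to one feature that flips the outcome."""
    units = s.units_drunk
    while units > 0:
        units = round(units - 0.25, 2)  # vary the continuous feature in small steps
        changed = Scenario(units, s.weight_kg, s.stomach)
        if decision(changed) != decision(s):
            return (f"If you had drunk {units} units instead of {s.units_drunk}, "
                    f"you would have been {decision(changed)}.")
    return "No counterfactual found by varying units drunk."


scenario = Scenario(units_drunk=3.0, weight_kg=70.0, stomach="empty")
print(causal_explanation(scenario))          # causal explanation of the decision
print(counterfactual_explanation(scenario))  # counterfactual explanation of the decision

Run as-is, the causal explanation cites the inputs and the estimated BAC, while the counterfactual explanation names a changed value of the continuous feature ("If you had drunk 1.75 units instead of 3.0 ...") that would have put the user under the limit.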
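Study 2's manipulation, transforming continuous features into categorical ones, can likewise be illustrated with a small hypothetical sketch: a continuous feature (units drunk) is binned into ordered categories before it appears in the explanation text. The bin edges and labels in to_category are assumptions made for illustration only, not the materials used in the study.

def to_category(units: float) -> str:
    """Bin a continuous 'units drunk' value into a coarse ordinal category."""
    if units <= 1:
        return "one drink or fewer"
    if units <= 3:
        return "a few drinks"
    return "many drinks"


def categorical_counterfactual(actual_units: float, cf_units: float) -> str:
    """Phrase the counterfactual in categorical rather than numeric terms."""
    return (f"If you had had {to_category(cf_units)} instead of "
            f"{to_category(actual_units)}, you would have been under the limit.")


print(categorical_counterfactual(3.0, 1.0))
# If you had had one drink or fewer instead of a few drinks,
# you would have been under the limit.

Phrased this way, the counterfactual refers to a category ("a few drinks") rather than a precise number, which is the kind of categorically-transformed feature the paper reports users responding to more accurately than continuous ones.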