Expl(Ai)Ned: The Impact of Explainable Artificial Intelligence on Cognitive Processes

Kevin Bauer, Moritz von Zahn, O. Hinz
{"title":"Expl(Ai)Ned: The Impact of Explainable Artificial Intelligence on Cognitive Processes","authors":"Kevin Bauer, Moritz von Zahn, O. Hinz","doi":"10.2139/ssrn.3872711","DOIUrl":null,"url":null,"abstract":"This paper explores the interplay of feature-based explainable AI (XAI) tech- niques, information processing, and human beliefs. Using a novel experimental protocol, we study the impact of providing users with explanations about how an AI system weighs inputted information to produce individual predictions (LIME) on users’ weighting of information and beliefs about the task-relevance of information. On the one hand, we find that feature-based explanations cause users to alter their mental weighting of available information according to observed explanations. On the other hand, explanations lead to asymmetric belief adjustments that we inter- pret as a manifestation of the confirmation bias. Trust in the prediction accuracy plays an important moderating role for XAI-enabled belief adjustments. Our results show that feature-based XAI does not only superficially influence decisions but re- ally change internal cognitive processes, bearing the potential to manipulate human beliefs and reinforce stereotypes. Hence, the current regulatory efforts that aim at enhancing algorithmic transparency may benefit from going hand in hand with measures ensuring the exclusion of sensitive personal information in XAI systems. Overall, our findings put assertions that XAI is the silver bullet solving all of AI systems’ (black box) problems into perspective.","PeriodicalId":158556,"journal":{"name":"Leibniz Institute for Financial Research SAFE Working Paper Series","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Leibniz Institute for Financial Research SAFE Working Paper Series","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3872711","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

This paper explores the interplay of feature-based explainable AI (XAI) techniques, information processing, and human beliefs. Using a novel experimental protocol, we study the impact of providing users with explanations about how an AI system weighs inputted information to produce individual predictions (LIME) on users' weighting of information and beliefs about the task-relevance of information. On the one hand, we find that feature-based explanations cause users to alter their mental weighting of available information according to observed explanations. On the other hand, explanations lead to asymmetric belief adjustments that we interpret as a manifestation of confirmation bias. Trust in the prediction accuracy plays an important moderating role for XAI-enabled belief adjustments. Our results show that feature-based XAI does not merely influence decisions superficially but genuinely changes internal cognitive processes, bearing the potential to manipulate human beliefs and reinforce stereotypes. Hence, current regulatory efforts aimed at enhancing algorithmic transparency may benefit from going hand in hand with measures ensuring the exclusion of sensitive personal information from XAI systems. Overall, our findings put assertions that XAI is the silver bullet solving all of AI systems' (black box) problems into perspective.
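
The kind of explanation studied here can be made concrete with a short sketch. The snippet below, a minimal illustration rather than the authors' experimental setup, uses the open-source Python `lime` package to produce the per-feature weights that LIME assigns to a single prediction; the dataset and model are placeholder assumptions.

```python
# Minimal LIME sketch (illustrative; not the paper's protocol).
# Assumed setup: a scikit-learn classifier on a public tabular dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME fits a local linear surrogate around one instance and reports
# per-feature weights, i.e. the "weighting of inputted information"
# that the experiment shows to users.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Each printed line pairs a feature condition (e.g., a thresholded feature value) with the weight it carried in the local surrogate model for this one prediction, which is the feature-based format of explanation the study presents to users.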