The Effects of Meaningful and Meaningless Explanations on Trust and Perceived System Accuracy in Intelligent Systems

Mahsan Nourani, Samia Kabir, Sina Mohseni, E. Ragan
{"title":"The Effects of Meaningful and Meaningless Explanations on Trust and Perceived System Accuracy in Intelligent Systems","authors":"Mahsan Nourani, Samia Kabir, Sina Mohseni, E. Ragan","doi":"10.1609/hcomp.v7i1.5284","DOIUrl":null,"url":null,"abstract":"Machine learning and artificial intelligence algorithms can assist human decision making and analysis tasks. While such technology shows promise, willingness to use and rely on intelligent systems may depend on whether people can trust and understand them. To address this issue, researchers have explored the use of explainable interfaces that attempt to help explain why or how a system produced the output for a given input. However, the effects of meaningful and meaningless explanations (determined by their alignment with human logic) are not properly understood, especially with users who are non-experts in data science. Additionally, we wanted to explore how explanation inclusion and level of meaningfulness would affect the user’s perception of accuracy. We designed a controlled experiment using an image classification scenario with local explanations to evaluate and better understand these issues. Our results show that whether explanations are human-meaningful can significantly affect perception of a system’s accuracy independent of the actual accuracy observed from system usage. Participants significantly underestimated the system’s accuracy when it provided weak, less human-meaningful explanations. Therefore, for intelligent systems with explainable interfaces, this research demonstrates that users are less likely to accurately judge the accuracy of algorithms that do not operate based on human-understandable rationale.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"62","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/hcomp.v7i1.5284","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 62

Abstract

Machine learning and artificial intelligence algorithms can assist humans in decision-making and analysis tasks. While such technology shows promise, willingness to use and rely on intelligent systems may depend on whether people can trust and understand them. To address this issue, researchers have explored explainable interfaces that attempt to explain why or how a system produced its output for a given input. However, the effects of meaningful and meaningless explanations (determined by their alignment with human logic) are not well understood, especially for users who are not experts in data science. We also explored how the inclusion of explanations and their level of meaningfulness affect users’ perception of accuracy. To evaluate and better understand these issues, we designed a controlled experiment using an image classification scenario with local explanations. Our results show that whether explanations are human-meaningful can significantly affect perception of a system’s accuracy, independent of the actual accuracy observed from system usage. Participants significantly underestimated the system’s accuracy when it provided weak, less human-meaningful explanations. For intelligent systems with explainable interfaces, this research therefore demonstrates that users are less likely to accurately judge the accuracy of algorithms that do not operate based on a human-understandable rationale.
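The "local explanations" studied here are per-prediction evidence for an image classifier, such as highlighted image regions. As a minimal illustrative sketch of the concept (not the authors' implementation, which the abstract does not describe), the code below computes an occlusion-based saliency map; the `model.predict` interface, patch size, and baseline value are assumptions made for the example.

```python
# Minimal sketch: occlusion-based local explanation for an image classifier.
# Assumes a Keras-style `model.predict(batch)` returning class probabilities.
import numpy as np

def occlusion_saliency(model, image, target_class, patch=16, baseline=0.0):
    """Score each region by how much occluding it lowers the target-class
    probability; high-scoring regions are the local "evidence" for the label."""
    h, w = image.shape[:2]
    base_prob = model.predict(image[np.newaxis])[0, target_class]
    saliency = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline  # mask one patch
            prob = model.predict(occluded[np.newaxis])[0, target_class]
            saliency[y:y + patch, x:x + patch] = base_prob - prob
    return saliency  # overlay on the image to visualize the explanation
```

Intuitively, a human-meaningful explanation of this kind highlights regions a person would also rely on for the label, whereas a meaningless one highlights regions with no apparent connection to it; that alignment with human logic is the contrast the study manipulates.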