Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability

International Journal of Information Management · IF 20.1 · JCR Q1 (Information Science & Library Science) · CAS Tier 1 (Management Science) · Pub Date: 2023-04-01 · DOI: 10.1016/j.ijinfomgt.2022.102538
Lukas-Valentin Herm , Kai Heinrich , Jonas Wanner , Christian Janiesch
Citations: 27

Abstract

Machine learning algorithms enable advanced decision making in contemporary intelligent systems. Research indicates that there is a tradeoff between their model performance and explainability. Machine learning models with higher performance are often based on more complex algorithms and therefore lack explainability, and vice versa. However, there is little to no empirical evidence of this tradeoff from an end user perspective. We aim to provide empirical evidence by conducting two user experiments. Using two distinct datasets, we first measure the tradeoff for five common classes of machine learning algorithms. Second, we address the problem of end user perceptions of explainable artificial intelligence augmentations aimed at increasing the understanding of the decision logic of high-performing complex models. Our results diverge from the widespread assumption of a tradeoff curve and indicate that the tradeoff between model performance and explainability is much less gradual in the end user's perception. This is a stark contrast to assumed inherent model interpretability. Further, we found the tradeoff to be situational, for example due to data complexity. Results of our second experiment show that while explainable artificial intelligence augmentations can be used to increase explainability, the type of explanation plays an essential role in end user perception.

Source journal

International Journal of Information Management (Information Science & Library Science)

CiteScore: 53.10
Self-citation rate: 6.20%
Articles per year: 111
Review time: 24 days
About the journal: The International Journal of Information Management (IJIM) is a distinguished, international, peer-reviewed journal dedicated to providing its readers with top-notch analysis and discussion within the evolving field of information management. Key features of the journal include:

Comprehensive coverage: IJIM keeps readers informed with major papers, reports, and reviews.

Topical relevance: The journal remains current and relevant through Viewpoint articles and regular features such as Research Notes, Case Studies, and a Reviews section, ensuring readers stay updated on contemporary issues.

Focus on quality: IJIM prioritizes high-quality papers that address contemporary issues in information management.