Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability
Lukas-Valentin Herm, Kai Heinrich, Jonas Wanner, Christian Janiesch
International Journal of Information Management, Volume 69, Article 102538, April 2023
DOI: 10.1016/j.ijinfomgt.2022.102538
URL: https://www.sciencedirect.com/science/article/pii/S026840122200072X
Citations: 27
Abstract
Machine learning algorithms enable advanced decision making in contemporary intelligent systems. Research indicates that there is a tradeoff between their model performance and explainability: machine learning models with higher performance are often based on more complex algorithms and therefore lack explainability, and vice versa. However, there is little to no empirical evidence of this tradeoff from an end user perspective. We aim to provide empirical evidence by conducting two user experiments. Using two distinct datasets, we first measure the tradeoff for five common classes of machine learning algorithms. Second, we address the problem of end user perceptions of explainable artificial intelligence augmentations aimed at increasing the understanding of the decision logic of high-performing complex models. Our results diverge from the widespread assumption of a tradeoff curve and indicate that the tradeoff between model performance and explainability is much less gradual in the end user's perception. This is a stark contrast to assumed inherent model interpretability. Further, we found the tradeoff to be situational, for example due to data complexity. Results of our second experiment show that while explainable artificial intelligence augmentations can be used to increase explainability, the type of explanation plays an essential role in end user perception.
About the journal:
The International Journal of Information Management (IJIM) is a distinguished, international, peer-reviewed journal dedicated to providing its readers with high-quality analysis and discussion within the evolving field of information management. Key features of the journal include:
Comprehensive Coverage:
IJIM keeps readers informed with major papers, reports, and reviews.
Topical Relevance:
The journal remains current and relevant through Viewpoint articles and regular features like Research Notes, Case Studies, and a Reviews section, ensuring readers are updated on contemporary issues.
Focus on Quality:
IJIM prioritizes high-quality papers that address contemporary issues in information management.