Why does my model fail?: contrastive local explanations for retail forecasting

Ana Lucic, H. Haned, M. de Rijke
{"title":"Why does my model fail?: contrastive local explanations for retail forecasting","authors":"Ana Lucic, H. Haned, M. de Rijke","doi":"10.1145/3351095.3372824","DOIUrl":null,"url":null,"abstract":"In various business settings, there is an interest in using more complex machine learning techniques for sales forecasting. It is difficult to convince analysts, along with their superiors, to adopt these techniques since the models are considered to be \"black boxes,\" even if they perform better than current models in use. We examine the impact of contrastive explanations about large errors on users' attitudes towards a \"black-box\" model. We propose an algorithm, Monte Carlo Bounds for Reasonable Predictions. Given a large error, MC-BRP determines (1) feature values that would result in a reasonable prediction, and (2) general trends between each feature and the target, both based on Monte Carlo simulations. We evaluate on a real dataset with real users by conducting a user study with 75 participants to determine if explanations generated by MC-BRP help users understand why a prediction results in a large error, and if this promotes trust in an automatically-learned model. Our study shows that users are able to answer objective questions about the model's predictions with overall 81.1% accuracy when provided with these contrastive explanations. We show that users who saw MC-BRP explanations understand why the model makes large errors in predictions significantly more than users in the control group. We also conduct an in-depth analysis of the difference in attitudes between Practitioners and Researchers, and confirm that our results hold when conditioning on the users' background.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"47","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3351095.3372824","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 47

Abstract

In various business settings, there is an interest in using more complex machine learning techniques for sales forecasting. It is difficult to convince analysts, along with their superiors, to adopt these techniques, since the models are considered "black boxes," even when they outperform the models currently in use. We examine the impact of contrastive explanations about large errors on users' attitudes towards a "black-box" model. We propose an algorithm, Monte Carlo Bounds for Reasonable Predictions (MC-BRP). Given a large error, MC-BRP determines (1) feature values that would result in a reasonable prediction, and (2) general trends between each feature and the target, both based on Monte Carlo simulations. We evaluate MC-BRP on a real dataset with real users through a user study with 75 participants, to determine whether the explanations it generates help users understand why a prediction results in a large error, and whether this promotes trust in an automatically learned model. Our study shows that, when provided with these contrastive explanations, users are able to answer objective questions about the model's predictions with an overall accuracy of 81.1%. Users who saw MC-BRP explanations understand why the model makes large prediction errors significantly better than users in the control group. We also conduct an in-depth analysis of the difference in attitudes between Practitioners and Researchers, and confirm that our results hold when conditioning on users' background.
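Since the abstract describes MC-BRP only at a high level, the following is a minimal illustrative sketch in Python of the general idea: perturb the mispredicted instance via Monte Carlo sampling, keep perturbations whose prediction would have been "reasonable," and summarize per-feature bounds and trends. This is not the authors' reference implementation; the perturbation scheme, the relative-error definition of a "reasonable" prediction, and all names (`mc_brp_sketch`, `feature_ranges`, `tolerance`, `p_perturb`) are assumptions made for illustration.

```python
# Illustrative sketch in the spirit of MC-BRP. NOT the authors' reference
# implementation: the perturbation scheme, the relative-error definition of
# a "reasonable" prediction, and all names below are assumptions.
import numpy as np


def mc_brp_sketch(model, x, y_true, feature_ranges,
                  n_samples=10_000, p_perturb=0.5, tolerance=0.1, seed=0):
    """Explain a large prediction error on instance `x` by Monte Carlo search.

    model          -- any regressor with a .predict(X) method (e.g. scikit-learn)
    x              -- 1-D array, the instance whose prediction had a large error
    y_true         -- observed target value for x
    feature_ranges -- (n_features, 2) array of plausible per-feature bounds
    tolerance      -- relative error below which a prediction counts as
                      "reasonable" (an assumption standing in for the paper's
                      actual criterion)

    Returns per-feature (min, max) bounds over the reasonable samples and the
    sign of each feature's trend against the prediction, or None if no
    reasonable sample was found.
    """
    rng = np.random.default_rng(seed)
    n_features = len(x)

    # Start from n_samples copies of x, then resample each feature
    # independently with probability p_perturb, uniformly within its range.
    samples = np.tile(x, (n_samples, 1))
    resample = rng.random((n_samples, n_features)) < p_perturb
    draws = rng.uniform(feature_ranges[:, 0], feature_ranges[:, 1],
                        size=(n_samples, n_features))
    samples = np.where(resample, draws, samples)

    preds = model.predict(samples)

    # (1) Keep samples whose prediction would have been "reasonable" and
    # report the range of feature values among them.
    reasonable = samples[np.abs(preds - y_true) <= tolerance * abs(y_true)]
    if len(reasonable) == 0:
        return None
    bounds = np.stack([reasonable.min(axis=0), reasonable.max(axis=0)], axis=1)

    # (2) General trend between each feature and the prediction, summarized
    # as the sign of the Pearson correlation over all Monte Carlo samples.
    trends = np.sign([np.corrcoef(samples[:, j], preds)[0, 1]
                      for j in range(n_features)])
    return bounds, trends
```

Called with a trained regressor and one mispredicted instance, the returned bounds read as a contrastive statement ("the prediction would have been reasonable had feature j been between a and b"), and the trend signs correspond to the general feature-target trends the abstract mentions.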