From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence

David Alvarez-Melis, Harmanpreet Kaur, Hal Daumé III, Hanna M. Wallach, Jennifer Wortman Vaughan
{"title":"From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence","authors":"David Alvarez-Melis, Harmanpreet Kaur, Hal Daum'e, Hanna M. Wallach, Jennifer Wortman Vaughan","doi":"10.1609/hcomp.v9i1.18938","DOIUrl":null,"url":null,"abstract":"We take inspiration from the study of human explanation to inform the design and evaluation of interpretability methods in machine learning. First, we survey the literature on human explanation in philosophy, cognitive science, and the social sciences, and propose a list of design principles for machine-generated explanations that are meaningful to humans. Using the concept of weight of evidence from information theory, we develop a method for generating explanations that adhere to these principles. We show that this method can be adapted to handle high-dimensional, multi-class settings, yielding a flexible framework for generating explanations. We demonstrate that these explanations can be estimated accurately from finite samples and are robust to small perturbations of the inputs. We also evaluate our method through a qualitative user study with machine learning practitioners, where we observe that the resulting explanations are usable despite some participants struggling with background concepts like prior class probabilities. Finally, we conclude by surfacing design implications for interpretability tools in general.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/hcomp.v9i1.18938","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 15

Abstract

We take inspiration from the study of human explanation to inform the design and evaluation of interpretability methods in machine learning. First, we survey the literature on human explanation in philosophy, cognitive science, and the social sciences, and propose a list of design principles for machine-generated explanations that are meaningful to humans. Using the concept of weight of evidence from information theory, we develop a method for generating explanations that adhere to these principles. We show that this method can be adapted to handle high-dimensional, multi-class settings, yielding a flexible framework for generating explanations. We demonstrate that these explanations can be estimated accurately from finite samples and are robust to small perturbations of the inputs. We also evaluate our method through a qualitative user study with machine learning practitioners, where we observe that the resulting explanations are usable despite some participants struggling with background concepts like prior class probabilities. Finally, we conclude by surfacing design implications for interpretability tools in general.
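For context on the central concept: the weight of evidence, in the information-theoretic sense due to I. J. Good, that evidence e lends to a hypothesis h is standardly defined as the log-likelihood ratio

    woe(h : e) = log [ P(e | h) / P(e | not-h) ] = log-odds(h | e) - log-odds(h),

that is, the amount by which observing e shifts the log-odds of h away from its prior log-odds. This is the standard textbook formulation, included here only as background; the paper's own high-dimensional, multi-class generalization is developed in the full text.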