AI explainability 360: hands-on tutorial

Vijay Arya, R. Bellamy, Pin-Yu Chen, Amit Dhurandhar, M. Hind, Samuel C. Hoffman, Stephanie Houde, Q. Liao, Ronny Luss, A. Mojsilovic, Sami Mourad, Pablo Pedemonte, R. Raghavendra, John T. Richards, P. Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
{"title":"AI explainability 360: hands-on tutorial","authors":"Vijay Arya, R. Bellamy, Pin-Yu Chen, Amit Dhurandhar, M. Hind, Samuel C. Hoffman, Stephanie Houde, Q. Liao, Ronny Luss, A. Mojsilovic, Sami Mourad, Pablo Pedemonte, R. Raghavendra, John T. Richards, P. Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang","doi":"10.1145/3351095.3375667","DOIUrl":null,"url":null,"abstract":"This tutorial will teach participants to use and contribute to a new open-source Python package named AI Explainability 360 (AIX360) (https://aix360.mybluemix.net), a comprehensive and extensible toolkit that supports interpretability and explainability of data and machine learning models. Motivation for the toolkit. The AIX360 toolkit illustrates that there is no single approach to explainability that works best for all situations. There are many ways to explain: data vs. model, direct vs. post-hoc explanation, local vs. global, etc. The toolkit includes ten state of the art algorithms that cover different dimensions of explanations along with proxy explainability metrics. Moreover, one of our prime objectives is for AIX360 to serve as an educational tool even for non-machine learning experts (viz. social scientists, healthcare experts). To this end, the toolkit has an interactive demonstration, highly descriptive Jupyter notebooks covering diverse real-world use cases, and guidance materials, all helping one navigate the complex explainability space. Compared to existing open-source efforts on AI explainability, AIX360 takes a step forward in focusing on a greater diversity of ways of explaining, usability in industry, and software engineering. By integrating these three aspects, we hope that AIX360 will attract researchers in AI explainability and help translate our collective research results for practicing data scientists and developers deploying solutions in a variety of industries. Regarding the first aspect of diversity, Table 1 in [1] compares AIX360 to existing toolkits in terms of the types of explainability methods offered. The table shows that AIX360 not only covers more types of methods but also has metrics which can act as proxies for judging the quality of explanations. Regarding the second aspect of industry usage, AIX360 illustrates how these explainability algorithms can be applied in specific contexts (please see Audience, goals, and outcomes below). In just a few months since its initial release, the AIX360 toolkit already has a vibrant slack community with over 120 members and has been forked almost 80 times accumulating over 400 stars. This response leads us to believe that there is significant interest in the community in learning more about the toolkit and explainability in general. Audience, goals, and outcomes. The presentations in the tutorial will be aimed at an audience with different backgrounds and computer science expertise levels. For all audience members and especially those unfamiliar with Python programming, AIX360 provides an interactive experience (http://aix360.mybluemix.net/data) centered around a credit approval scenario as a gentle and grounded introduction to the concepts and capabilities of the toolkit. We will also teach all participants which type of explainability algorithm is most appropriate for a given use case, not only for those in the toolkit but also from the broader explainability literature. 
Knowing which explainability algorithms apply to which contexts and understanding when to use them can benefit most people, regardless of their technical background. The second part of the tutorial will consist of three use cases featuring different industry domains and explanation methods. Data scientists and developers can gain hands-on experience with the toolkit by running and modifying Jupyter notebooks, while others will be able to follow along by viewing rendered versions of the notebooks. Here is a rough agenda of the tutorial: 1) Overture: Provide a brief introduction to the area of explainability as well as introduce common terms. 2) Interactive Web Experience: The AIX360 interactive web experience (http://aix360.mybluemix.net/data) is intended to show a non-computer science audience how different explainability methods may suit different stakeholders in a credit approval scenario (data scientists, loan officers, and bank customers). 3) Taxonomy: We will next present a taxonomy that we have created for organizing the space of explanations and guiding practitioners toward an appropriate choice for their applications. 4) Installation: We will transition into a Python environment and ask participants to install the AIX360 package on their machines using provided instructions. 5) Example Use Cases in Finance, Government, and Healthcare: We will take participants through three use-cases in various application domains in the form of Jupyter notebooks. 6) Metrics: We will briefly showcase the two explainability metrics currently available through the toolkit. 7) Future Directions: The final segment will be to discuss future directions and how participants can contribute to the toolkit.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3351095.3375667","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

This tutorial will teach participants to use and contribute to a new open-source Python package named AI Explainability 360 (AIX360) (https://aix360.mybluemix.net), a comprehensive and extensible toolkit that supports interpretability and explainability of data and machine learning models.

Motivation for the toolkit. The AIX360 toolkit illustrates that no single approach to explainability works best in all situations. There are many ways to explain: data vs. model, direct vs. post-hoc explanation, local vs. global, and so on. The toolkit includes ten state-of-the-art algorithms that cover different dimensions of explanation, along with proxy explainability metrics. Moreover, one of our prime objectives is for AIX360 to serve as an educational tool even for non-machine-learning experts (e.g., social scientists and healthcare experts). To this end, the toolkit offers an interactive demonstration, highly descriptive Jupyter notebooks covering diverse real-world use cases, and guidance materials, all of which help one navigate the complex explainability space. Compared to existing open-source efforts on AI explainability, AIX360 takes a step forward by focusing on a greater diversity of ways of explaining, usability in industry, and software engineering. By integrating these three aspects, we hope that AIX360 will attract researchers in AI explainability and help translate our collective research results for practicing data scientists and developers deploying solutions in a variety of industries. Regarding the first aspect, diversity, Table 1 in [1] compares AIX360 to existing toolkits in terms of the types of explainability methods offered. The table shows that AIX360 not only covers more types of methods but also provides metrics that can act as proxies for judging the quality of explanations. Regarding the second aspect, industry usage, AIX360 illustrates how these explainability algorithms can be applied in specific contexts (see Audience, goals, and outcomes below). In just a few months since its initial release, the AIX360 toolkit already has a vibrant Slack community with over 120 members and has been forked almost 80 times, accumulating over 400 stars. This response leads us to believe that there is significant interest in the community in learning more about the toolkit and about explainability in general.

Audience, goals, and outcomes. The presentations in the tutorial are aimed at an audience with varied backgrounds and levels of computer science expertise. For all audience members, and especially those unfamiliar with Python programming, AIX360 provides an interactive experience (http://aix360.mybluemix.net/data) centered around a credit approval scenario as a gentle and grounded introduction to the concepts and capabilities of the toolkit. We will also teach all participants which type of explainability algorithm is most appropriate for a given use case, drawing not only on the algorithms in the toolkit but also on the broader explainability literature. Knowing which explainability algorithms apply in which contexts, and when to use them, can benefit most people regardless of their technical background. The second part of the tutorial consists of three use cases featuring different industry domains and explanation methods. Data scientists and developers can gain hands-on experience with the toolkit by running and modifying Jupyter notebooks, while others can follow along by viewing rendered versions of the notebooks.
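To make the notebook-style usage concrete, here is a minimal sketch of one of the toolkit's explainers, Protodash, which selects a small set of representative prototypes from a dataset. The class and method names follow the released aix360 package, but the toy data is hypothetical and the exact call signature and return values should be verified against the installed version.

```python
# Minimal AIX360 sketch: select prototypes with Protodash.
# Assumes `pip install aix360`; data here is synthetic stand-in data.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # hypothetical feature matrix

explainer = ProtodashExplainer()
# Select m=5 prototypes from X that best summarize X itself.
# Per the AIX360 docs, explain(X, Y, m) returns weights, selected
# indices, and objective values; confirm against your version.
W, S, _ = explainer.explain(X, X, m=5)

print("Prototype row indices:", S)
print("Normalized importance weights:", W / W.sum())
```

In the tutorial notebooks this same pattern is applied to real datasets, with the prototypes then inspected to understand what the data (or a model's training set) looks like.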
Here is a rough agenda for the tutorial:

1) Overture: Provide a brief introduction to the area of explainability and introduce common terms.
2) Interactive Web Experience: The AIX360 interactive web experience (http://aix360.mybluemix.net/data) is intended to show a non-computer-science audience how different explainability methods may suit different stakeholders in a credit approval scenario (data scientists, loan officers, and bank customers).
3) Taxonomy: We will next present a taxonomy that we have created for organizing the space of explanations and guiding practitioners toward an appropriate choice for their applications.
4) Installation: We will transition into a Python environment and ask participants to install the AIX360 package on their machines using the provided instructions (see the sketch after this list).
5) Example Use Cases in Finance, Government, and Healthcare: We will take participants through three use cases in various application domains in the form of Jupyter notebooks.
6) Metrics: We will briefly showcase the two explainability metrics currently available through the toolkit (also exercised in the sketch below).
7) Future Directions: The final segment will discuss future directions and how participants can contribute to the toolkit.
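The following sketch illustrates steps 4 and 6: installing the package and calling the two proxy metrics the toolkit ships, faithfulness and monotonicity. The metric names and the (model, x, coefs, base) signature follow the aix360.metrics module as documented; the classifier, toy data, and the use of global feature importances as a stand-in attribution vector are illustrative assumptions, not the tutorial's actual use cases.

```python
# Step 4 (installation): pip install aix360
# Step 6 (metrics): evaluate an attribution vector for one example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from aix360.metrics import faithfulness_metric, monotonicity_metric

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical toy labels

model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                            # single example to evaluate
coefs = model.feature_importances_  # stand-in attribution vector; normally
                                    # a per-instance explanation would go here
base = np.zeros_like(x)             # values used to "remove" a feature

# Faithfulness correlates attribution weights with the prediction drop
# when each feature is replaced by its base value; monotonicity checks
# that adding features in order of importance monotonically helps.
print("Faithfulness:", faithfulness_metric(model, x, coefs, base))
print("Monotonicity:", monotonicity_metric(model, x, coefs, base))
```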