Visual Auditor: Interactive Visualization for Detection and Summarization of Model Biases

David Munechika, Zijie J. Wang, Jack Reidy, Josh Rubin, Krishna Gade, K. Kenthapadi, Duen Horng Chau
DOI: 10.1109/VIS54862.2022.00018
Published in: 2022 IEEE Visualization and Visual Analytics (VIS), 2022-06-25
Citations: 9

Abstract

As machine learning (ML) systems become increasingly widespread, it is necessary to audit these systems for biases prior to their deployment. Recent research has developed algorithms for effectively identifying intersectional bias in the form of interpretable, underperforming subsets (or slices) of the data. However, these solutions and their insights are limited without a tool for visually understanding and interacting with the results of these algorithms. We propose Visual Auditor, an interactive visualization tool for auditing and summarizing model biases. Visual Auditor assists model validation by providing an interpretable overview of intersectional bias (bias that is present when examining populations defined by multiple features), details about relationships between problematic data slices, and a comparison between underperforming and overperforming data slices in a model. Our open-source tool runs directly in both computational notebooks and web browsers, making model auditing accessible and easily integrated into current ML development workflows. An observational user study in collaboration with domain experts at Fiddler AI highlights that our tool can help ML practitioners identify and understand model biases.
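The abstract frames intersectional bias as interpretable, underperforming data slices — subsets defined by combinations of feature values whose error rate is worse than the model's overall error rate. A minimal sketch of that idea (a naive exhaustive enumeration with an error-rate threshold; this is an illustration, not the paper's actual slice-finding algorithm, and all names and data here are hypothetical) might look like:

```python
from itertools import combinations, product

def find_underperforming_slices(rows, labels, preds, features,
                                max_order=2, min_size=5):
    """Flag feature-value slices whose error rate exceeds the overall
    error rate, checking combinations of up to max_order features."""
    overall_err = sum(l != p for l, p in zip(labels, preds)) / len(labels)
    flagged = []
    for order in range(1, max_order + 1):
        for combo in combinations(features, order):
            # Distinct values observed for each feature in this combination.
            value_sets = [sorted({row[f] for row in rows}) for f in combo]
            for values in product(*value_sets):
                idx = [i for i, row in enumerate(rows)
                       if all(row[f] == v for f, v in zip(combo, values))]
                if len(idx) < min_size:  # skip slices too small to trust
                    continue
                err = sum(labels[i] != preds[i] for i in idx) / len(idx)
                if err > overall_err:
                    flagged.append((dict(zip(combo, values)), len(idx), err))
    flagged.sort(key=lambda s: s[2], reverse=True)  # worst slices first
    return flagged

# Toy data: the model fails on every young female example.
rows = ([{"sex": "F", "age": "young"}] * 5 + [{"sex": "F", "age": "old"}] * 5
        + [{"sex": "M", "age": "young"}] * 5 + [{"sex": "M", "age": "old"}] * 5)
labels = [1] * 20
preds = [0] * 5 + [1] * 15
slices = find_underperforming_slices(rows, labels, preds, ["sex", "age"])
# The intersectional slice {sex: F, age: young} surfaces at the top.
```

This illustrates why interactive visualization helps: even two features over a tiny dataset yield many overlapping slices ({sex: F}, {age: young}, and their intersection all get flagged here), and a practitioner needs an overview of how those slices relate to decide which ones reflect genuine model bias.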