Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty

Umang Bhatt, Yunfeng Zhang, Javier Antorán, Q. Liao, P. Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, R. Krishnan, Jason Stanley, Omesh Tickoo, L. Nachman, R. Chunara, Adrian Weller, Alice Xiang
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3461702.3462571
Published: 2020-11-15
Citations: 151

Abstract

Algorithmic transparency entails exposing system properties to various stakeholders for purposes that include understanding, improving, and contesting predictions. Until now, most research into algorithmic transparency has predominantly focused on explainability. Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders. However, understanding a model's specific behavior alone might not be enough for stakeholders to gauge whether the model is wrong or lacks sufficient knowledge to solve the task at hand. In this paper, we argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions. First, we discuss methods for assessing uncertainty. Then, we characterize how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems. Finally, we outline methods for displaying uncertainty to stakeholders and recommend how to collect information required for incorporating uncertainty into existing ML pipelines. This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness. We aim to encourage researchers and practitioners to measure, communicate, and use uncertainty as a form of transparency.
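To make the abstract's notion of "assessing uncertainty" concrete, here is a minimal, hedged sketch (not taken from the paper) of one common approach it surveys: treating the disagreement among an ensemble of models as a measure of predictive uncertainty, summarized by the entropy of the averaged class probabilities. The function name and the example arrays are hypothetical.

```python
# Minimal sketch of deep-ensemble-style predictive uncertainty.
# Assumption: each ensemble member outputs a softmax distribution
# over classes; the inputs below are hypothetical, not from the paper.
import numpy as np

def predictive_entropy(member_probs):
    """Entropy of the mean predictive distribution across ensemble members.

    member_probs: array of shape (n_members, n_classes), each row a
    softmax output from one independently trained model.
    """
    mean_probs = member_probs.mean(axis=0)
    # Small epsilon guards against log(0) for confident members.
    return float(-(mean_probs * np.log(mean_probs + 1e-12)).sum())

# Members that agree on the prediction -> low uncertainty.
agree = np.array([[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]])
# Members that disagree -> high uncertainty, a signal a system could
# surface to stakeholders or use to defer to a human decision-maker.
disagree = np.array([[0.90, 0.10], [0.20, 0.80], [0.50, 0.50]])
```

A score like this could feed the uses the paper characterizes, e.g. abstaining from low-confidence predictions or flagging them for human review.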