Extrapolation and AI transparency: Why machine learning models should reveal when they make decisions beyond their training

IF 6.5 | CAS Tier 1 (Sociology) | JCR Q1 (Social Sciences, Interdisciplinary) | Big Data & Society | Pub Date: 2023-01-01 | DOI: 10.1177/20539517231169731
Xuenan Cao, Roozbeh Yousefzadeh
{"title":"外推和人工智能透明度:为什么机器学习模型应该揭示它们何时做出超出训练范围的决策","authors":"Xuenan Cao, Roozbeh Yousefzadeh","doi":"10.1177/20539517231169731","DOIUrl":null,"url":null,"abstract":"The right to artificial intelligence (AI) explainability has consolidated as a consensus in the research community and policy-making. However, a key component of explainability has been missing: extrapolation, which can reveal whether a model is making inferences beyond the boundaries of its training. We report that AI models extrapolate outside their range of familiar data, frequently and without notifying the users and stakeholders. Knowing whether a model has extrapolated or not is a fundamental insight that should be included in explaining AI models in favor of transparency, accountability, and fairness. Instead of dwelling on the negatives, we offer ways to clear the roadblocks in promoting AI transparency. Our commentary accompanies practical clauses useful to include in AI regulations such as the AI Bill of Rights, the National AI Initiative Act in the United States, and the AI Act by the European Commission.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":null,"pages":null},"PeriodicalIF":6.5000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Extrapolation and AI transparency: Why machine learning models should reveal when they make decisions beyond their training\",\"authors\":\"Xuenan Cao, Roozbeh Yousefzadeh\",\"doi\":\"10.1177/20539517231169731\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The right to artificial intelligence (AI) explainability has consolidated as a consensus in the research community and policy-making. However, a key component of explainability has been missing: extrapolation, which can reveal whether a model is making inferences beyond the boundaries of its training. We report that AI models extrapolate outside their range of familiar data, frequently and without notifying the users and stakeholders. Knowing whether a model has extrapolated or not is a fundamental insight that should be included in explaining AI models in favor of transparency, accountability, and fairness. Instead of dwelling on the negatives, we offer ways to clear the roadblocks in promoting AI transparency. 
Our commentary accompanies practical clauses useful to include in AI regulations such as the AI Bill of Rights, the National AI Initiative Act in the United States, and the AI Act by the European Commission.\",\"PeriodicalId\":47834,\"journal\":{\"name\":\"Big Data & Society\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Big Data & Society\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1177/20539517231169731\",\"RegionNum\":1,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"SOCIAL SCIENCES, INTERDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Big Data & Society","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1177/20539517231169731","RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
Citations: 1

Abstract

The right to artificial intelligence (AI) explainability has consolidated as a consensus in the research community and policy-making. However, a key component of explainability has been missing: extrapolation, which can reveal whether a model is making inferences beyond the boundaries of its training. We report that AI models extrapolate outside their range of familiar data, frequently and without notifying the users and stakeholders. Knowing whether a model has extrapolated or not is a fundamental insight that should be included in explaining AI models in favor of transparency, accountability, and fairness. Instead of dwelling on the negatives, we offer ways to clear the roadblocks in promoting AI transparency. Our commentary accompanies practical clauses useful to include in AI regulations such as the AI Bill of Rights, the National AI Initiative Act in the United States, and the AI Act by the European Commission.
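The commentary's central ask, that a model disclose when an input lies beyond the data it was trained on, can be made concrete with a small sketch. The snippet below is illustrative only and not the authors' method: it flags inputs that fall outside the per-feature range of the training set, a deliberately coarse proxy for the stricter question of whether a point lies inside the convex hull of the training data. The function names `fit_training_range` and `is_extrapolating` are hypothetical.

```python
import numpy as np

def fit_training_range(X_train):
    """Record the per-feature min and max observed during training."""
    return X_train.min(axis=0), X_train.max(axis=0)

def is_extrapolating(x, bounds):
    """Return True if any feature of x falls outside the training range.

    This bounding-box test is a coarse necessary condition: a point
    outside the box is certainly outside the convex hull of the
    training data, but a point inside the box may still lie outside
    the hull, so this check can under-report extrapolation.
    """
    lo, hi = bounds
    return bool(np.any(x < lo) or np.any(x > hi))

# Illustrative use: surface the flag alongside each prediction.
X_train = np.array([[0.0, 1.0], [2.0, 3.0], [1.0, 2.0]])
bounds = fit_training_range(X_train)
for x in (np.array([1.0, 1.5]), np.array([5.0, 0.5])):
    status = "extrapolating" if is_extrapolating(x, bounds) else "within training range"
    print(x, "->", status)
```

In a deployed system such a flag would travel with each prediction, so users, stakeholders, and auditors could see when a decision rests on extrapolation rather than interpolation, which is the kind of disclosure the commentary proposes writing into regulation.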
Source journal
Big Data & Society
Big Data & Society SOCIAL SCIENCES, INTERDISCIPLINARY-
CiteScore: 10.90
Self-citation rate: 10.60%
Publication volume: 59
Review time: 11 weeks
About the journal: Big Data & Society (BD&S) is an open access, peer-reviewed scholarly journal that publishes interdisciplinary work principally in the social sciences, humanities, and computing and their intersections with the arts and natural sciences. The journal focuses on the implications of Big Data for societies and aims to connect debates about Big Data practices and their effects on various sectors such as academia, social life, industry, business, and government. BD&S considers Big Data as an emerging field of practices, not solely defined by but generative of unique data qualities such as high volume, granularity, data linking, and mining. The journal pays attention to digital content generated both online and offline, encompassing social media, search engines, closed networks (e.g., commercial or government transactions), and open networks like digital archives, open government, and crowdsourced data. Rather than providing a fixed definition of Big Data, BD&S encourages interdisciplinary inquiries, debates, and studies on various topics and themes related to Big Data practices. BD&S seeks contributions that analyze Big Data practices, involve empirical engagements and experiments with innovative methods, and reflect on the consequences of these practices for the representation, realization, and governance of societies. As a digital-only journal, BD&S's platform can accommodate multimedia formats such as complex images, dynamic visualizations, videos, and audio content. The contents of the journal encompass peer-reviewed research articles, colloquia, bookcasts, think pieces, state-of-the-art methods, and work by early career researchers.
Latest articles from this journal
From rules to examples: Machine learning's type of authority
Outlier bias: AI classification of curb ramps, outliers, and context
Artificial intelligence and skills in the workplace: An integrative research agenda
Redress and worldmaking: Differing approaches to algorithmic reparations for housing justice
The promises and challenges of addressing artificial intelligence with human rights