Who is Accountable for Data Bias?

C. Parkey
{"title":"谁对数据偏差负责?","authors":"C. Parkey","doi":"10.1080/09332480.2021.1915032","DOIUrl":null,"url":null,"abstract":"39 Accountability for misuse of data is a big question in using data science and machine learning (ML) to advance society. Are the data collectors, model builders, or users ultimately accountable? The benefits of data sharing are widely recognized by the scientific community, but headlines can also be seen in the news about models that are released with known bias or without any impact monitoring and reporting in place. Examples include “Florida scientist says she was fired for not manipulating COVID-19 Data” and “Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.” after a paper by Timnit Gebru that highlighted the risk of large language models was accepted. Organizations such as the World Health Organization (WHO) have pages of policies qualifying how data were collected, the limitations, and restrictions on use. At the same time, whistleblowers and researchers alike are pushing back, attempting to hold companies and states accountable for their misuse of data. While there is no clear answer, the question of accountability at multiple levels can be explored, as well as how to begin implementing systems of accountability now instead of waiting for regulations to provide guidance.","PeriodicalId":88226,"journal":{"name":"Chance (New York, N.Y.)","volume":"29 1","pages":"39 - 43"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Who is Accountable for Data Bias?\",\"authors\":\"C. Parkey\",\"doi\":\"10.1080/09332480.2021.1915032\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"39 Accountability for misuse of data is a big question in using data science and machine learning (ML) to advance society. Are the data collectors, model builders, or users ultimately accountable? The benefits of data sharing are widely recognized by the scientific community, but headlines can also be seen in the news about models that are released with known bias or without any impact monitoring and reporting in place. Examples include “Florida scientist says she was fired for not manipulating COVID-19 Data” and “Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.” after a paper by Timnit Gebru that highlighted the risk of large language models was accepted. Organizations such as the World Health Organization (WHO) have pages of policies qualifying how data were collected, the limitations, and restrictions on use. At the same time, whistleblowers and researchers alike are pushing back, attempting to hold companies and states accountable for their misuse of data. 
While there is no clear answer, the question of accountability at multiple levels can be explored, as well as how to begin implementing systems of accountability now instead of waiting for regulations to provide guidance.\",\"PeriodicalId\":88226,\"journal\":{\"name\":\"Chance (New York, N.Y.)\",\"volume\":\"29 1\",\"pages\":\"39 - 43\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-04-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Chance (New York, N.Y.)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/09332480.2021.1915032\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chance (New York, N.Y.)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/09332480.2021.1915032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Accountability for the misuse of data is a central question in using data science and machine learning (ML) to advance society. Are the data collectors, the model builders, or the users ultimately accountable? The benefits of data sharing are widely recognized by the scientific community, yet the news also carries headlines about models released with known bias or without any impact monitoring and reporting in place. Examples include “Florida scientist says she was fired for not manipulating COVID-19 Data” and “Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.,” the latter after a paper by Timnit Gebru highlighting the risks of large language models was accepted. Organizations such as the World Health Organization (WHO) publish pages of policies qualifying how data were collected, their limitations, and restrictions on use. At the same time, whistleblowers and researchers alike are pushing back, attempting to hold companies and states accountable for their misuse of data. While there is no clear answer, accountability can be explored at multiple levels, along with how to begin implementing systems of accountability now rather than waiting for regulations to provide guidance.