{"title":"谁对数据偏差负责?","authors":"C. Parkey","doi":"10.1080/09332480.2021.1915032","DOIUrl":null,"url":null,"abstract":"39 Accountability for misuse of data is a big question in using data science and machine learning (ML) to advance society. Are the data collectors, model builders, or users ultimately accountable? The benefits of data sharing are widely recognized by the scientific community, but headlines can also be seen in the news about models that are released with known bias or without any impact monitoring and reporting in place. Examples include “Florida scientist says she was fired for not manipulating COVID-19 Data” and “Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.” after a paper by Timnit Gebru that highlighted the risk of large language models was accepted. Organizations such as the World Health Organization (WHO) have pages of policies qualifying how data were collected, the limitations, and restrictions on use. At the same time, whistleblowers and researchers alike are pushing back, attempting to hold companies and states accountable for their misuse of data. While there is no clear answer, the question of accountability at multiple levels can be explored, as well as how to begin implementing systems of accountability now instead of waiting for regulations to provide guidance.","PeriodicalId":88226,"journal":{"name":"Chance (New York, N.Y.)","volume":"29 1","pages":"39 - 43"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Who is Accountable for Data Bias?\",\"authors\":\"C. Parkey\",\"doi\":\"10.1080/09332480.2021.1915032\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"39 Accountability for misuse of data is a big question in using data science and machine learning (ML) to advance society. Are the data collectors, model builders, or users ultimately accountable? The benefits of data sharing are widely recognized by the scientific community, but headlines can also be seen in the news about models that are released with known bias or without any impact monitoring and reporting in place. Examples include “Florida scientist says she was fired for not manipulating COVID-19 Data” and “Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.” after a paper by Timnit Gebru that highlighted the risk of large language models was accepted. Organizations such as the World Health Organization (WHO) have pages of policies qualifying how data were collected, the limitations, and restrictions on use. At the same time, whistleblowers and researchers alike are pushing back, attempting to hold companies and states accountable for their misuse of data. 
While there is no clear answer, the question of accountability at multiple levels can be explored, as well as how to begin implementing systems of accountability now instead of waiting for regulations to provide guidance.\",\"PeriodicalId\":88226,\"journal\":{\"name\":\"Chance (New York, N.Y.)\",\"volume\":\"29 1\",\"pages\":\"39 - 43\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-04-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Chance (New York, N.Y.)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/09332480.2021.1915032\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chance (New York, N.Y.)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/09332480.2021.1915032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Accountability for the misuse of data is a central question in using data science and machine learning (ML) to advance society. Are the data collectors, the model builders, or the users ultimately accountable? The benefits of data sharing are widely recognized by the scientific community, yet headlines also appear about models released with known bias or without any impact monitoring and reporting in place. Examples include "Florida scientist says she was fired for not manipulating COVID-19 data" and "Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.," the latter appearing after a paper by Timnit Gebru that highlighted the risks of large language models was accepted. Organizations such as the World Health Organization (WHO) maintain pages of policies specifying how data were collected, their limitations, and restrictions on their use. At the same time, whistleblowers and researchers alike are pushing back, attempting to hold companies and states accountable for their misuse of data. While there is no clear answer, accountability can be explored at multiple levels, as can ways to begin implementing systems of accountability now rather than waiting for regulation to provide guidance.