Auditing Algorithms: On Lessons Learned and the Risks of Data Minimization

G. G. Clavell, M. M. Zamorano, C. Castillo, Oliver Smith, A. Matic
{"title":"审计算法:经验教训和数据最小化的风险","authors":"G. G. Clavell, M. M. Zamorano, C. Castillo, Oliver Smith, A. Matic","doi":"10.1145/3375627.3375852","DOIUrl":null,"url":null,"abstract":"In this paper, we present the Algorithmic Audit (AA) of REM!X, a personalized well-being recommendation app developed by Telefónica Innovación Alpha. The main goal of the AA was to identify and mitigate algorithmic biases in the recommendation system that could lead to the discrimination of protected groups. The audit was conducted through a qualitative methodology that included five focus groups with developers and a digital ethnography relying on users comments reported in the Google Play Store. To minimize the collection of personal information, as required by best practice and the GDPR [1], the REM!X app did not collect gender, age, race, religion, or other protected attributes from its users. This limited the algorithmic assessment and the ability to control for different algorithmic biases. Indirect evidence was thus used as a partial mitigation for the lack of data on protected attributes, and allowed the AA to identify four domains where bias and discrimination were still possible, even without direct personal identifiers. Our analysis provides important insights into how general data ethics principles such as data minimization, fairness, non-discrimination and transparency can be operationalized via algorithmic auditing, their potential and limitations, and how the collaboration between developers and algorithmic auditors can lead to better technologies","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"21","resultStr":"{\"title\":\"Auditing Algorithms: On Lessons Learned and the Risks of Data Minimization\",\"authors\":\"G. G. Clavell, M. M. Zamorano, C. Castillo, Oliver Smith, A. Matic\",\"doi\":\"10.1145/3375627.3375852\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we present the Algorithmic Audit (AA) of REM!X, a personalized well-being recommendation app developed by Telefónica Innovación Alpha. The main goal of the AA was to identify and mitigate algorithmic biases in the recommendation system that could lead to the discrimination of protected groups. The audit was conducted through a qualitative methodology that included five focus groups with developers and a digital ethnography relying on users comments reported in the Google Play Store. To minimize the collection of personal information, as required by best practice and the GDPR [1], the REM!X app did not collect gender, age, race, religion, or other protected attributes from its users. This limited the algorithmic assessment and the ability to control for different algorithmic biases. Indirect evidence was thus used as a partial mitigation for the lack of data on protected attributes, and allowed the AA to identify four domains where bias and discrimination were still possible, even without direct personal identifiers. 
Our analysis provides important insights into how general data ethics principles such as data minimization, fairness, non-discrimination and transparency can be operationalized via algorithmic auditing, their potential and limitations, and how the collaboration between developers and algorithmic auditors can lead to better technologies\",\"PeriodicalId\":93612,\"journal\":{\"name\":\"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-02-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"21\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3375627.3375852\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3375627.3375852","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 21

Abstract

In this paper, we present the Algorithmic Audit (AA) of REM!X, a personalized well-being recommendation app developed by Telefónica Innovación Alpha. The main goal of the AA was to identify and mitigate algorithmic biases in the recommendation system that could lead to discrimination against protected groups. The audit was conducted through a qualitative methodology that included five focus groups with developers and a digital ethnography relying on users' comments posted in the Google Play Store. To minimize the collection of personal information, as required by best practice and the GDPR [1], the REM!X app did not collect gender, age, race, religion, or other protected attributes from its users. This limited the algorithmic assessment and the ability to control for different algorithmic biases. Indirect evidence was thus used as a partial mitigation for the lack of data on protected attributes, and allowed the AA to identify four domains where bias and discrimination were still possible, even without direct personal identifiers. Our analysis provides important insights into how general data ethics principles such as data minimization, fairness, non-discrimination and transparency can be operationalized via algorithmic auditing, their potential and limitations, and how the collaboration between developers and algorithmic auditors can lead to better technologies.
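The abstract describes relying on indirect evidence when protected attributes are not collected. As a rough, hypothetical illustration of that kind of check (not code or data from the REM!X audit), the sketch below compares how often a recommendation category is shown across proxy groups derived from usage patterns; the group labels, log format, and helper functions `recommendation_rates` and `disparity_ratio` are illustrative assumptions only.

```python
# Hypothetical sketch: checking for outcome disparity across an indirect/proxy
# attribute when protected attributes are unavailable. Groups, categories, and
# the log structure are made up for illustration, not taken from the paper.
from collections import defaultdict

def recommendation_rates(log, category):
    """Share of users in each proxy group who received `category` recommendations."""
    shown = defaultdict(int)
    totals = defaultdict(int)
    for user in log:
        group = user["proxy_group"]          # e.g. inferred from time-of-use patterns
        totals[group] += 1
        if category in user["recommended_categories"]:
            shown[group] += 1
    return {g: shown[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest group rate (1.0 = parity)."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi > 0 else 1.0

# Toy usage with made-up interaction logs.
log = [
    {"proxy_group": "late-night users", "recommended_categories": {"sleep", "social"}},
    {"proxy_group": "late-night users", "recommended_categories": {"sleep"}},
    {"proxy_group": "daytime users",    "recommended_categories": {"exercise"}},
    {"proxy_group": "daytime users",    "recommended_categories": {"exercise", "sleep"}},
]
rates = recommendation_rates(log, "sleep")
print(rates, disparity_ratio(rates))  # a ratio far below 1.0 would be flagged for review
```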