
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

Towards Equity and Algorithmic Fairness in Student Grade Prediction
Pub Date: 2021-05-14 DOI: 10.1145/3461702.3462623
Weijie Jiang, Z. Pardos
Equity of educational outcome and fairness of AI with respect to race have been topics of increasing importance in education. In this work, we address both with empirical evaluations of grade prediction in higher education, an important task to improve curriculum design, plan interventions for academic support, and offer course guidance to students. With fairness as the aim, we trial several strategies for both label and instance balancing to attempt to minimize differences in algorithm performance with respect to race. We find that an adversarial learning approach, combined with grade label balancing, achieved by far the fairest results. With equity of educational outcome as the aim, we trial strategies for boosting predictive performance on historically underserved groups and find success in sampling those groups in inverse proportion to their historic outcomes. With AI-infused technology supports increasingly prevalent on campuses, our methodologies fill a need for frameworks to consider performance trade-offs with respect to sensitive student attributes and allow institutions to instrument their AI resources in ways that are attentive to equity and fairness.
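The equity strategy named above, sampling underserved groups in inverse proportion to their historic outcomes, is straightforward to prototype. Below is a minimal, illustrative sketch in Python, assuming a pandas DataFrame with hypothetical columns `group` (sensitive attribute) and `historic_outcome`; it is a sketch of the idea, not the authors' implementation.

```python
import pandas as pd

def inverse_outcome_sample(df: pd.DataFrame, n: int, seed: int = 0) -> pd.DataFrame:
    """Resample rows so that groups with worse historic outcomes are drawn
    more often, in inverse proportion to those outcomes.

    Assumes hypothetical columns: 'group' (sensitive attribute) and
    'historic_outcome' (a value in (0, 1], e.g. a historic pass rate).
    """
    # Per-row weight = 1 / (mean historic outcome of the row's group),
    # so groups with lower historic outcomes are oversampled.
    group_mean = df.groupby("group")["historic_outcome"].transform("mean")
    weights = 1.0 / group_mean.clip(lower=1e-6)  # guard against division by zero
    return df.sample(n=n, replace=True, weights=weights, random_state=seed)
```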
Citations: 16
On the Validity of Arrest as a Proxy for Offense: Race and the Likelihood of Arrest for Violent Crimes
Pub Date: 2021-05-11 DOI: 10.1145/3461702.3462538
Riccardo Fogliato, Alice Xiang, Z. Lipton, D. Nagin, A. Chouldechova
Re-offense risk is considered in decision-making at many stages of the criminal justice system, from pre-trial, to sentencing, to parole. To aid decision-makers in their assessments, institutions increasingly rely on algorithmic risk assessment instruments (RAIs). These tools assess the likelihood that an individual will be arrested for a new criminal offense within some time window following their release. However, since not all crimes result in arrest, RAIs do not directly assess the risk of re-offense. Furthermore, disparities in the likelihood of arrest can potentially lead to biases in the resulting risk scores. Several recent validations of RAIs have therefore focused on arrests for violent offenses, which are viewed as being more accurate and less biased reflections of offending behavior. In this paper, we investigate biases in violent arrest data by analysing racial disparities in the likelihood of arrest for White and Black violent offenders. We focus our study on 2007–2016 incident-level data of violent offenses from 16 US states as recorded in the National Incident-Based Reporting System (NIBRS). Our analysis shows that the magnitude and direction of the racial disparities depend on various characteristics of the crimes. In addition, our investigation reveals large variations in arrest rates across geographical locations and offense types. We discuss the implications of the observed disconnect between re-arrest and re-offense in the context of RAIs and the challenges around the use of data from NIBRS to correct for the sampling bias.
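As a rough illustration of the kind of disparity tabulation the paper describes, the sketch below computes arrest rates by offender race and offense type from incident-level records. The column names ('offense_type', 'offender_race', 'arrest_made') are hypothetical placeholders; actual NIBRS extracts use different field names and codings.

```python
import pandas as pd

def arrest_rates_by_race(incidents: pd.DataFrame) -> pd.DataFrame:
    """Arrest rate per (offense type, offender race), with a Black/White
    rate ratio. All column names here are illustrative placeholders."""
    rates = (
        incidents.groupby(["offense_type", "offender_race"])["arrest_made"]
        .mean()                    # fraction of incidents ending in arrest
        .unstack("offender_race")  # one column per race category
    )
    rates["black_white_ratio"] = rates["Black"] / rates["White"]
    return rates.sort_values("black_white_ratio")
```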
Citations: 33
Unpacking the Expressed Consequences of AI Research in Broader Impact Statements
Pub Date: 2021-05-11 DOI: 10.1145/3461702.3462608
Priyanka Nanayakkara, J. Hullman, N. Diakopoulos
The computer science research community and the broader public have become increasingly aware of negative consequences of algorithmic systems. In response, the top-tier Neural Information Processing Systems (NeurIPS) conference for machine learning and artificial intelligence research required that authors include a statement of broader impact to reflect on potential positive and negative consequences of their work. We present the results of a qualitative thematic analysis of a sample of statements written for the 2020 conference. The themes we identify broadly fall into categories related to how consequences are expressed (e.g., valence, specificity, uncertainty), areas of impacts expressed (e.g., bias, the environment, labor, privacy), and researchers' recommendations for mitigating negative consequences in the future. In light of our results, we offer perspectives on how the broader impact statement can be implemented in future iterations to better align with potential goals.
Citations: 25
The Theory, Practice, and Ethical Challenges of Designing a Diversity-Aware Platform for Social Relations
Pub Date: 2021-05-11 DOI: 10.1145/3461702.3462595
Laura Schelenz, Ivano Bison, Matteo Busso, Amalia de Götzen, D. Gática-Pérez, Fausto Giunchiglia, L. Meegahapola, S. Ruiz-Correa
Diversity-aware platform design is a paradigm that responds to the ethical challenges of existing social media platforms. Available platforms have been criticized for minimizing users' autonomy, marginalizing minorities, and exploiting users' data for profit maximization. This paper presents a design solution that centers the well-being of users. It presents the theory and practice of designing a diversity-aware platform for social relations. In this approach, the diversity of users is leveraged in a way that allows like-minded individuals to pursue similar interests or diverse individuals to complement each other in a complex activity. The end users of the envisioned platform are students, who participate in the design process. Diversity-aware platform design involves numerous steps, of which two are highlighted in this paper: 1) defining a framework and operationalizing the "diversity" of students, 2) collecting "diversity" data to build diversity-aware algorithms. The paper further reflects on the ethical challenges encountered during the design of a diversity-aware platform.
Citations: 17
Accounting for Model Uncertainty in Algorithmic Discrimination
Pub Date: 2021-05-10 DOI: 10.1145/3461702.3462630
Junaid Ali, Preethi Lahoti, K. Gummadi
Traditional approaches to ensure group fairness in algorithmic decision making aim to equalize "total" error rates for different subgroups in the population. In contrast, we argue that the fairness approaches should instead focus only on equalizing errors arising due to model uncertainty (a.k.a. epistemic uncertainty), caused by a lack of knowledge about the best model or by a lack of data. In other words, our proposal calls for ignoring the errors that occur due to uncertainty inherent in the data, i.e., aleatoric uncertainty. We draw a connection between predictive multiplicity and model uncertainty and argue that the techniques from predictive multiplicity could be used to identify errors made due to model uncertainty. We propose scalable convex proxies to come up with classifiers that exhibit predictive multiplicity and empirically show that our methods are comparable in performance and up to four orders of magnitude faster than the current state-of-the-art. We further propose methods to achieve our goal of equalizing group error rates arising due to model uncertainty in algorithmic decision making and demonstrate the effectiveness of these methods using synthetic and real-world datasets.
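One simple way to approximate the distinction the authors draw is to treat disagreement among several competing, near-equivalent models as a proxy for epistemic uncertainty, and then compare group error rates only on those contested points. The sketch below does this with a bootstrap ensemble; it illustrates the idea, not the paper's convex-proxy method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample

def ambiguous_group_error_rates(X, y, groups, n_models: int = 10, seed: int = 0):
    """Flag points where bootstrap-trained models disagree (a rough proxy
    for epistemic uncertainty / predictive multiplicity), then report, per
    sensitive group, the share of points that are contested and misclassified.

    X, y, groups are numpy arrays; y holds binary 0/1 labels.
    """
    rng = np.random.RandomState(seed)
    preds = []
    for _ in range(n_models):
        # Each bootstrap resample yields a competing, near-equivalent model.
        Xb, yb = resample(X, y, random_state=rng)
        model = RandomForestClassifier(n_estimators=50, random_state=rng).fit(Xb, yb)
        preds.append(model.predict(X))
    preds = np.array(preds)                      # shape: (n_models, n_samples)
    ambiguous = (preds != preds[0]).any(axis=0)  # models disagree on these points
    majority = (preds.mean(axis=0) >= 0.5).astype(int)
    errors = (majority != y) & ambiguous         # uncertainty-driven errors
    return {g: errors[groups == g].mean() for g in np.unique(groups)}
```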
Citations: 17
Who Gets What, According to Whom? An Analysis of Fairness Perceptions in Service Allocation
Pub Date: 2021-05-10 DOI: 10.1145/3461702.3462568
Jacqueline Hannan, H. Chen, K. Joseph
Algorithmic fairness research has traditionally been linked to the disciplines of philosophy, ethics, and economics, where notions of fairness are prescriptive and seek objectivity. Increasingly, however, scholars are turning to the study of what different people perceive to be fair, and how these perceptions can or should help to shape the design of machine learning, particularly in the policy realm. The present work experimentally explores five novel research questions at the intersection of the "Who," "What," and "How" of fairness perceptions. Specifically, we present the results of a multi-factor conjoint analysis study that quantifies the effects of the specific context in which a question is asked, the framing of the given question, and who is answering it. Our results broadly suggest that the "Who" and "What," at least, matter in ways that 1) are not easily explained by any one theoretical perspective and 2) have critical implications for how perceptions of fairness should be measured and/or integrated into algorithmic decision-making systems.
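In a randomized conjoint design, factor-level effects like those the authors quantify are commonly estimated with a linear model over dummy-coded attributes, clustering standard errors by respondent. A minimal sketch under assumed column names ('fairness_rating', 'context', 'framing', 'respondent_group', 'respondent_id'); the study's actual variables and data format differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-choice data: one row per rated scenario, with a
# numeric fairness rating and categorical factors. Names are illustrative.
df = pd.read_csv("conjoint_responses.csv")

model = smf.ols(
    "fairness_rating ~ C(context) + C(framing) + C(respondent_group)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})

# Under a randomized design, the dummy coefficients roughly correspond to
# average marginal component effects of each factor level.
print(model.summary())
```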
Citations: 8
Reconfiguring Diversity and Inclusion for AI Ethics
Pub Date: 2021-05-06 DOI: 10.1145/3461702.3462622
Nicole Chi, Emma Lurie, D. Mulligan
Activists, journalists, and scholars have long raised critical questions about the relationship between diversity, representation, and structural exclusions in data-intensive tools and services. We build on work mapping the emergent landscape of corporate AI ethics to center one outcome of these conversations: the incorporation of diversity and inclusion in corporate AI ethics activities. Using interpretive document analysis and analytic tools from the values in design field, we examine how diversity and inclusion work is articulated in public-facing AI ethics documentation produced by three companies that create application and services layer AI infrastructure: Google, Microsoft, and Salesforce. We find that as these documents make diversity and inclusion more tractable to engineers and technical clients, they reveal a drift away from civil rights justifications that resonates with the "managerialization of diversity" by corporations in the mid-1980s. The focus on technical artifacts - such as diverse and inclusive datasets - and the replacement of equity with fairness make ethical work more actionable for everyday practitioners. Yet, they appear divorced from broader DEI initiatives and relevant subject matter experts that could provide needed context to nuanced decisions around how to operationalize these values and new solutions. Finally, diversity and inclusion, as configured by engineering logic, positions firms not as "ethics owners" but as ethics allocators; while these companies claim expertise on AI ethics, the responsibility of defining who diversity and inclusion are meant to protect and where it is relevant is pushed downstream to their customers.
Citations: 15
Digital Voodoo Dolls
Pub Date: 2021-05-06 DOI: 10.1145/3461702.3462626
M. Slavkovik, Clemens Stachl, Caroline Pitman, Jon Askonas
An institution, be it a body of government, commercial enterprise, or a service, cannot interact directly with a person. Instead, a model is created to represent us. We argue the existence of a new high-fidelity type of person model which we call a digital voodoo doll. We conceptualize it and compare its features with existing models of persons. Digital voodoo dolls are distinguished by existing completely beyond the influence and control of the person they represent. We discuss the ethical issues that such a lack of accountability creates and argue how these concerns can be mitigated.
Citations: 4
A Step Toward More Inclusive People Annotations for Fairness
Pub Date: 2021-05-05 DOI: 10.1145/3461702.3462594
Candice Schumann, Susanna Ricco, Utsav Prabhu, V. Ferrari, C. Pantofaru
The Open Images Dataset contains approximately 9 million images and is a widely accepted dataset for computer vision research. As is common practice for large datasets, the annotations are not exhaustive, with bounding boxes and attribute labels for only a subset of the classes in each image. In this paper, we present a new set of annotations on a subset of the Open Images dataset called the MIAP (More Inclusive Annotations for People) subset, containing bounding boxes and attributes for all of the people visible in those images. The attributes and labeling methodology for the MIAP subset were designed to enable research into model fairness. In addition, we analyze the original annotation methodology for the person class and its subclasses, discussing the resulting patterns in order to inform future annotation efforts. By considering both the original and exhaustive annotation sets, researchers can also now study how systematic patterns in training annotations affect modeling.
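For readers who want to explore the annotations, the released MIAP files are CSVs of per-person bounding boxes with perceived-presentation attributes. Below is a minimal sketch of a coverage audit; the file name and column names ('ImageID', 'AgePresentation', 'GenderPresentation') are assumed from the Open Images CSV convention and should be verified against the actual release.

```python
import pandas as pd

# Count distinct images per (age, gender) presentation bucket to see how
# annotation coverage is distributed. File and column names are assumptions,
# not confirmed against the MIAP release.
boxes = pd.read_csv("open_images_extended_miap_boxes_train.csv")
coverage = (
    boxes.groupby(["AgePresentation", "GenderPresentation"])["ImageID"]
    .nunique()
    .sort_values(ascending=False)
)
print(coverage)
```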
Citations: 38
Towards Accountability in the Use of Artificial Intelligence for Public Administrations
Pub Date: 2021-05-04 DOI: 10.1145/3461702.3462631
M. Loi, M. Spielkamp
We argue that the phenomena of distributed responsibility, induced acceptance, and acceptance through ignorance constitute instances of imperfect delegation when tasks are delegated to computationally-driven systems. Imperfect delegation challenges human accountability. We hold that both direct public accountability via public transparency and indirect public accountability via transparency to auditors in public organizations can be instrumentally ethically valuable and required as a matter of deontology by the principle of democratic self-government. We analyze the regulatory content of 16 guideline documents about the use of AI in the public sector, by mapping their requirements to those of our philosophical account of accountability, and conclude that while some guidelines refer to processes that amount to auditing, the debate would benefit from more clarity about the nature of auditors' entitlements and the goals of auditing, not least in order to develop ethically meaningful standards against which different forms of auditing can be evaluated and compared.
Citations: 22