
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

Ethical Obligations to Provide Novelty
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462555
Paige Golden, D. Danks
TikTok is a popular platform that enables users to see tailored content feeds, particularly short videos with novel content. In recent years, TikTok has been criticized at times for presenting users with overly homogenous feeds, thereby reducing the diversity of content with which each user engages. In this paper, we consider whether TikTok has an ethical obligation to employ a novelty bias in its content recommendation engine. We explicate the principal morally relevant values and interests of key stakeholders, and observe that key empirical questions must be answered before a precise recommendation can be provided. We argue that TikTok's own values and interests mean that its actions should be largely driven by the values and interests of its users and creators. Unlike some other content platforms, TikTok's ethical obligations are not at odds with the values of its users, and so whether it is obligated to include a novelty bias depends on what will actually advance its users' interests.
Citations: 1
Designing Shapelets for Interpretable Data-Agnostic Classification
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462553
Riccardo Guidotti, A. Monreale
Time series shapelets are discriminatory subsequences which are representative of a class, and their similarity to a time series can be used for successfully tackling the time series classification problem. The literature shows that Artificial Intelligence (AI) systems adopting classification models based on time series shapelets can be interpretable, more accurate, and significantly faster. Thus, in order to design a data-agnostic and interpretable classification approach, in this paper we first extend the notion of shapelets to different types of data, i.e., images, tabular and textual data. Then, based on this extended notion of shapelets we propose an interpretable data-agnostic classification method. Since shapelet discovery can be time consuming, especially for data types more complex than time series, we exploit a notion of prototypes for finding candidate shapelets, reducing both the time required to find a solution and the variance of shapelets. Extensive experimentation on datasets of different types shows that the data-agnostic prototype-based shapelets returned by the proposed method empower an interpretable classification which is also fast, accurate, and stable. In addition, we show and prove that shapelets can form the basis of explainable AI methods.
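The core computation behind shapelet-based classification, as the abstract describes it, is the similarity between a shapelet and a longer sequence: the minimum distance over all equal-length subsequences. A minimal sketch of that distance, assuming a plain sliding-window Euclidean formulation (the function name and toy data are illustrative, not taken from the paper):

```python
import numpy as np

def shapelet_distance(series: np.ndarray, shapelet: np.ndarray) -> float:
    """Minimum Euclidean distance between the shapelet and any
    equal-length subsequence of the series (sliding-window form)."""
    m = len(shapelet)
    windows = np.array([series[i:i + m] for i in range(len(series) - m + 1)])
    return float(np.min(np.linalg.norm(windows - shapelet, axis=1)))

# Toy usage: the candidate shapelet occurs exactly inside the series.
series = np.array([0.1, 0.2, 1.5, 1.6, 1.4, 0.2, 0.1])
shapelet = np.array([1.5, 1.6, 1.4])
print(shapelet_distance(series, shapelet))  # -> 0.0
```

These distances can then feed any standard classifier; the paper's contribution of extending shapelets beyond time series and using prototypes to prune the candidate search is not reproduced in this sketch.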
Citations: 2
Person, Human, Neither: The Dehumanization Potential of Automated Image Tagging
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462567
Pinar Barlas, K. Kyriakou, S. Kleanthous, Jahna Otterbacher
Following the literature on dehumanization via technology, we audit six proprietary image tagging algorithms (ITAs) for their potential to perpetuate dehumanization. We examine the ITAs' outputs on a controlled dataset of images depicting a diverse group of people for tags that indicate the presence of a human in the image. Through an analysis of the (mis)use of these tags, we find that there are some individuals whose 'humanness' is not recognized by an ITA, and that these individuals are often from marginalized social groups. Finally, we compare these findings with the use of the 'face' tag, which can be used for surveillance, revealing that people's faces are often recognized by an ITA even when their 'humanness' is not. Overall, we highlight the subtle ways in which ITAs may inflict widespread, disparate harm, and emphasize the importance of considering the social context of the resulting application.
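The audit described above reduces to a comparison over the tag sets each ITA returns: does any human-indicating tag appear, and does a 'face' tag appear without one? A minimal sketch of that comparison, assuming an illustrative tag vocabulary and made-up image identifiers (the audited services' actual tag schemas are not specified here):

```python
# Illustrative set of tags taken to indicate the presence of a human;
# the real audit's vocabulary and the services' tag schemas differ.
HUMAN_TAGS = {"person", "human", "people", "man", "woman"}

def audit_tags(image_tags: dict[str, set[str]]) -> dict[str, list[str]]:
    """Flag images with no human-indicating tag, and images tagged 'face'
    without any human-indicating tag."""
    no_human = [img for img, tags in image_tags.items() if not tags & HUMAN_TAGS]
    face_only = [img for img, tags in image_tags.items()
                 if "face" in tags and not tags & HUMAN_TAGS]
    return {"no_human_tag": no_human, "face_but_no_human_tag": face_only}

example = {
    "img_001": {"person", "outdoor", "smile"},
    "img_002": {"face", "hair", "fashion"},   # face detected, humanness not
}
print(audit_tags(example))
```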
Citations: 4
Fairness and Data Protection Impact Assessments
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462528
A. Kasirzadeh, Damian Clifford
In this paper, we critically examine the effectiveness of the requirement to conduct a Data Protection Impact Assessment (DPIA) in Article 35 of the General Data Protection Regulation (GDPR) in light of fairness metrics. Through this analysis, we explore the role of the fairness principle as introduced in Article 5(1)(a) and its multifaceted interpretation in the obligation to conduct a DPIA. Our paper argues that although there is a significant theoretical role for the considerations of fairness in the DPIA process, an analysis of the various guidance documents issued by data protection authorities on the obligation to conduct a DPIA reveals that they rarely mention the fairness principle in practice. Our analysis questions this omission, and assesses the capacity of fairness metrics to be truly operationalized within DPIAs. We conclude by exploring the practical effectiveness of DPIA with particular reference to (1) technical challenges that have an impact on the usefulness of DPIAs irrespective of a controller's willingness to actively engage in the process, (2) the context dependent nature of the fairness principle, and (3) the key role played by data controllers in the determination of what is fair.
Citations: 6
Governing Algorithmic Systems with Impact Assessments: Six Observations
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462580
E. A. Watkins, E. Moss, Jacob Metcalf, Ranjit Singh, M. C. Elish
Algorithmic decision-making and decision-support systems (ADS) are gaining influence over how society distributes resources, administers justice, and provides access to opportunities. Yet collectively we do not adequately study how these systems affect people or document the actual or potential harms resulting from their integration with important social functions. This is a significant challenge for computational justice efforts of measuring and governing AI systems. Impact assessments are often used as instruments to create accountability relationships and grant some measure of agency and voice to communities affected by projects with environmental, financial, and human rights ramifications. Applying these tools, through Algorithmic Impact Assessments (AIAs), is a plausible way to establish accountability relationships for ADSs. At the same time, what an AIA would entail remains under-specified; AIAs raise as many questions as they answer. Choices about the methods, scope, and purpose of AIAs structure the conditions of possibility for AI governance. In this paper, we present our research on the history of impact assessments across diverse domains, through a sociotechnical lens, to present six observations on how they co-constitute accountability. Decisions about what type of effects count as an impact; when impacts are assessed; whose interests are considered; who is invited to participate; who conducts the assessment; how assessments are made publicly available, and what the outputs of the assessment might be; all shape the forms of accountability that AIAs engender. Because AIAs are still an incipient governance strategy, approaching them as social constructions that do not require a single or universal approach offers a chance to produce interventions that emerge from careful deliberation.
Citations: 8
Fairness and Machine Fairness
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462577
Clinton Castro, David R. O'Brien, Ben Schwan
Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take "fairness" in this context to be a placeholder for a variety of normative egalitarian considerations. We explore a few fairness measures to suss out their egalitarian roots and evaluate them, both as formalizations of egalitarian ideas and as assertions of what fairness demands of predictive systems. We pay special attention to a recent and popular fairness measure, counterfactual fairness, which holds that a prediction about an individual is fair if it is the same in the actual world and any counterfactual world where the individual belongs to a different demographic group (cf. Kusner et al. 2018).
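For reference, the counterfactual fairness criterion the paper examines can be stated formally. Following the formulation in the Kusner et al. work cited above (notation adapted here, not copied from this paper): a predictor is counterfactually fair if, for every observed context (X = x, A = a), every outcome y, and every alternative value a' of the protected attribute A,

```latex
% Counterfactual fairness (after Kusner et al.): the predicted distribution is
% unchanged under a counterfactual intervention on the protected attribute A.
P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big)
  \;=\;
P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big)
```

that is, switching the individual's demographic group in the causal model, while holding everything else fixed, should not change the prediction's distribution.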
Citations: 1
Machine Learning and the Meaning of Equal Treatment
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462556
J. Simons, Sophia Adams Bhatti, Adrian Weller
Approaches to non-discrimination are generally informed by two principles: striving for equality of treatment, and advancing various notions of equality of outcome. We consider when and why there are trade-offs in machine learning between respecting formalistic interpretations of equal treatment and advancing equality of outcome. Exploring a hypothetical discrimination suit against Facebook, we argue that interpretations of equal treatment which require blindness to difference may constrain how machine learning can be deployed to advance equality of outcome. When machine learning models predict outcomes that are unevenly distributed across racial groups, using those models to advance racial justice will often require deliberately taking race into account. We then explore the normative stakes of this tension. We describe three pragmatic policy options underpinned by distinct interpretations and applications of equal treatment. A status quo approach insists on blindness to difference, permitting the design of machine learning models that compound existing patterns of disadvantage. An industry-led approach would specify a narrow set of domains in which institutions were permitted to use protected characteristics to actively reduce inequalities of outcome. A government-led approach would impose positive duties that require institutions to consider how best to advance equality of outcomes and permit the use of protected characteristics to achieve that goal. We argue that while machine learning offers significant possibilities for advancing racial justice and outcome-based equality, harnessing those possibilities will require a shift in the normative commitments that underpin the interpretation and application of equal treatment in non-discrimination law and the governance of machine learning.
Citations: 10
The Ethics of Datasets: Moving Forward Requires Stepping Back
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462643
Arvind Narayanan
Machine learning research culture is driven by benchmark datasets to a greater degree than most other research fields. But the centrality of datasets also amplifies the harms associated with data, including privacy violation and underrepresentation or erasure of some populations. This has stirred a much-needed debate on the ethical responsibilities of dataset creators and users. I argue that clarity on this debate requires taking a step back to better understand the benefits of the dataset-driven approach. I show that benchmark datasets play at least six different roles and that the potential harms depend on the roles a dataset plays. By understanding this relationship, we can mitigate the harms while preserving what is scientifically valuable about the prevailing approach.
Citations: 2
Monitoring AI Services for Misuse
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462566
S. A. Javadi, Chris Norval, Richard Cloete, Jatinder Singh
Given the surge in interest in AI, we now see the emergence of Artificial Intelligence as a Service (AIaaS). AIaaS entails service providers offering remote access to ML models and capabilities 'at arm's length', through networked APIs. Such services will grow in popularity, as they enable access to state-of-the-art ML capabilities, 'on demand', 'out of the box', at low cost and without requiring training data or ML expertise. However, there is much public concern regarding AI. AIaaS raises particular considerations, given there is much potential for such services to be used to underpin and drive problematic, inappropriate, undesirable, controversial, or possibly even illegal applications. A key way forward is through service providers monitoring their AI services to identify potential situations of problematic use. Towards this, we elaborate the potential for 'misuse indicators' as a mechanism for uncovering patterns of usage behaviour warranting consideration or further investigation. We introduce a taxonomy for describing these indicators and their contextual considerations, and use exemplars to demonstrate the feasibility of analysing AIaaS usage to highlight situations of possible concern. We also seek to draw more attention to AI services and the issues they raise, given AIaaS' increasing prominence, and the general calls for the more responsible and accountable use of AI.
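One concrete way to read the 'misuse indicator' idea is as a named rule evaluated over a client's recent usage records for a service's API, whose firing prompts human review rather than automated action. A minimal sketch under that reading (the record fields, threshold, endpoint name, and example rule are assumptions for illustration, not the paper's taxonomy):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UsageRecord:
    client_id: str
    endpoint: str        # e.g. "face_search" (hypothetical endpoint name)
    timestamp: datetime
    request_count: int

def high_volume_face_search(records: list[UsageRecord],
                            threshold: int = 10_000) -> bool:
    """Indicator: unusually high face-search volume in the observed window."""
    total = sum(r.request_count for r in records if r.endpoint == "face_search")
    return total > threshold

# Registry of indicators; a provider would maintain many such rules.
INDICATORS = {"high_volume_face_search": high_volume_face_search}

def flag_clients(usage_log: dict[str, list[UsageRecord]]) -> dict[str, list[str]]:
    """Per client, return the names of indicators that fired, for human review."""
    return {client: [name for name, rule in INDICATORS.items() if rule(records)]
            for client, records in usage_log.items()}
```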
Citations: 9
Face Mis-ID: An Interactive Pedagogical Tool Demonstrating Disparate Accuracy Rates in Facial Recognition
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462627
Daniella Raz, Corinne Bintz, Vivian Guetler, Aaron Tam, Michael A. Katell, Dharma Dailey, Bernease Herman, P. Krafft, Meg Young
This paper reports on the making of an interactive demo to illustrate algorithmic bias in facial recognition. Facial recognition technology has been demonstrated to be more likely to misidentify women and minoritized people. This risk, among others, has elevated facial recognition into policy discussions across the country, where many jurisdictions have already passed bans on its use. Whereas scholarship on the disparate impacts of algorithmic systems is growing, general public awareness of this set of problems is limited in part by the illegibility of machine learning systems to non-specialists. Inspired by discussions with community organizers advocating for tech fairness issues, we created the Face Mis-ID Demo to reveal the algorithmic functions behind facial recognition technology and to demonstrate its risks to policymakers and members of the community. In this paper, we share the design process behind this interactive demo, its form and function, and the design decisions that honed its accessibility, toward its use for improving legibility of algorithmic systems and awareness of the sources of their disparate impacts.
Citations: 8