
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society - Latest Publications

Data Augmentation for Discrimination Prevention and Bias Disambiguation
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375865
Shubham Sharma, Yunfeng Zhang, J. Aliaga, Djallel Bouneffouf, Vinod Muthusamy, Kush R. Varshney
Machine learning models are prone to biased decisions due to biases in the datasets they are trained on. In this paper, we introduce a novel data augmentation technique to create a fairer dataset for model training that could also lend itself to understanding the type of bias existing in the dataset, i.e., whether bias arises from a lack of representation for a particular group (sampling bias) or from human bias reflected in the labels (prejudice-based bias). Given a dataset involving a protected attribute with a privileged and unprivileged group, we create an "ideal world" dataset: for every data sample, we create a new sample having the same features (except the protected attribute(s)) and label as the original sample but with the opposite protected attribute value. The synthetic data points are sorted in order of their proximity to the original training distribution and added successively to the real dataset to create intermediate datasets. We theoretically show that two different notions of fairness, statistical parity difference (independence) and average odds difference (separation), always change in the same direction under such an augmentation. We also show submodularity of the proposed fairness-aware augmentation approach, which enables an efficient greedy algorithm. We empirically study the effect of training models on the intermediate datasets and show that this technique reduces the two bias measures while keeping the accuracy nearly constant for three datasets. We then discuss the implications of this study for the disambiguation of sampling bias and prejudice-based bias, and discuss how pre-processing techniques should be evaluated in general. Policy makers who want to train machine learning models on unbiased datasets can use the proposed method to add a subset of synthetic points, to an extent they are comfortable with, to mitigate unwanted bias.
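The counterfactual augmentation the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a single binary protected attribute stored as a feature column, and it approximates "proximity to the original training distribution" by Euclidean distance to the feature mean (the paper's exact proximity measure may differ).

```python
import numpy as np

def ideal_world_augment(X, y, protected_col):
    """Create counterfactual samples with the protected attribute flipped,
    sorted by (approximate) proximity to the real training distribution."""
    X_syn = X.copy()
    # Flip the binary protected attribute; all other features and the label
    # are kept identical to the original sample.
    X_syn[:, protected_col] = 1 - X_syn[:, protected_col]
    y_syn = y.copy()
    # Proximity proxy (assumption): Euclidean distance to the feature mean
    # of the real data; closer synthetic points are added first.
    dists = np.linalg.norm(X_syn - X.mean(axis=0), axis=1)
    order = np.argsort(dists)
    return X_syn[order], y_syn[order]

# Toy data: second column is the binary protected attribute.
X = np.array([[0.2, 0.0], [0.9, 1.0], [0.4, 0.0], [0.7, 1.0]])
y = np.array([0, 1, 0, 1])
X_syn, y_syn = ideal_world_augment(X, y, protected_col=1)

# An intermediate dataset: the real data plus the k closest synthetic points.
k = 2
X_aug = np.vstack([X, X_syn[:k]])
y_aug = np.concatenate([y, y_syn[:k]])
```

Sweeping k from 0 to len(X) yields the sequence of intermediate datasets whose bias measures and accuracy the paper evaluates.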
Citations: 54
Biased Priorities, Biased Outcomes: Three Recommendations for Ethics-oriented Data Annotation Practices
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375809
Gunay Kazimzade, Milagros Miceli
In this paper, we analyze the relation between data-related biases and practices of data annotation by placing them in the context of the market economy. We understand annotation as a praxis related to the sensemaking of data and investigate annotation practices for vision models by focusing on the values that are prioritized by industrial decision-makers and practitioners. The quality of data is critical for machine learning models, as it holds the power to (mis-)represent the population it is intended to analyze. For autonomous systems to be able to make sense of the world, humans first need to make sense of the data these systems will be trained on. This paper addresses this issue, guided by the following research questions: Which goals are prioritized by decision-makers at the data annotation stage? How do these priorities correlate with data-related bias issues? Focusing on work practices and their context, our research aims to understand the logics driving companies and their impact on the annotations performed. The study follows a qualitative design and is based on 24 interviews with relevant actors and extensive participatory observations, including several weeks of fieldwork at two companies dedicated to data annotation for vision models in Buenos Aires, Argentina and Sofia, Bulgaria. The prevalence of market-oriented values over socially responsible approaches is argued based on three corporate priorities that inform work practices in this field and directly shape the annotations performed: profit (short deadlines connected to the strive for profit are prioritized over alternative approaches that could prevent biased outcomes), standardization (the strive for standardized and, in many cases, reductive or biased annotations to make data fit the products and revenue plans of clients), and opacity (related to clients' power to impose their criteria on the annotations performed, criteria that most of the time remain opaque due to corporate confidentiality). Finally, we introduce three elements aimed at developing ethics-oriented practices of data annotation that could help prevent biased outcomes: transparency (regarding the documentation of data transformations, including information on responsibilities and criteria for decision-making), education (training on the potential harms caused by AI and its ethical implications, which could help data annotators and related roles adopt a more critical approach towards the interpretation and labeling of data), and regulations (clear guidelines for ethical AI developed at the governmental level and applied in both private and public organizations).
Citations: 15
When Trusted Black Boxes Don't Agree: Incentivizing Iterative Improvement and Accountability in Critical Software Systems
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375807
Jeanna Neefe Matthews, G. Northup, Isabella Grasso, Stephen Lorenz, M. Babaeianjelodar, Hunter Bashaw, Sumona Mondal, Abigail V. Matthews, Mariama Njie, Jessica Goldthwaite
Software increasingly plays a key role in regulated areas like housing, hiring, and credit, as well as major public functions such as criminal justice and elections. Unintended defects can easily arise, with a large impact on the lives of individuals and society as a whole. Preventing, finding, and fixing software defects is a key focus of both industrial software development and academic research in software engineering. In this paper, we discuss flaws in the larger socio-technical decision-making processes in which critical black-box software systems are developed, deployed, and trusted. We use criminal justice software, specifically probabilistic genotyping (PG) software, as a concrete example. We describe how PG software systems, designed to do the same job, produce different results. We highlight the under-appreciated impact of changes in key parameters and the disparate impact that one such parameter can have on different racial/ethnic groups. We propose concrete changes to the socio-technical decision-making processes surrounding the use of PG software that could be used to incentivize iterative improvements in the accuracy, fairness, reliability, and accountability of these systems.
Citations: 11
Diversity and Inclusion Metrics in Subset Selection
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375832
Margaret Mitchell, Dylan Baker, Nyalleng Moorosi, Emily L. Denton, B. Hutchinson, A. Hanna, Timnit Gebru, Jamie Morgenstern
The ethical concept of fairness has recently been applied in machine learning (ML) settings to describe a wide range of constraints and objectives. When considering the relevance of ethical concepts to subset selection problems, the concepts of diversity and inclusion are additionally applicable in order to create outputs that account for social power and access differentials. We introduce metrics based on these concepts, which can be applied together, separately, and in tandem with additional fairness constraints. Results from human subject experiments lend support to the proposed criteria. Social choice methods can additionally be leveraged to aggregate and choose preferable sets, and we detail how these may be applied.
Citations: 69
Artificial Intelligence and Indigenous Perspectives: Protecting and Empowering Intelligent Human Beings
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375845
Suvradip Maitra
As 'control' is increasingly ceded to AI systems, potentially Artificial General Intelligence (AGI), humanity may be facing an identity crisis sooner rather than later, whereby the notion of 'intelligence' no longer remains solely our own. This paper characterizes the problem in terms of an impending loss of control and proposes a relational shift in our attitude towards AI. The shortcomings of value alignment as a solution to the problem are outlined, which necessitates an extension of these principles. One such approach is considering strongly relational Indigenous epistemologies. The value of Indigenous perspectives has not been canvassed widely in the literature. Their utility becomes clear when considering the existence of well-developed epistemologies adept at accounting for the non-human, a task that defies Western anthropocentrism. Accommodating AI by considering it as part of our network is a step towards building a symbiotic relationship. Given that AGI questions our fundamental notions of what it means to have human rights, it is argued that, in order to co-exist, we find assistance in Indigenous traditions such as the Hawaiian and Lakota ontologies. Lakota rituals provide comfort with the conception of a non-human soul-bearer, while Hawaiian stories provide possible relational schemas to frame our relationship with AI.
Citations: 8
The Problem with Intelligence: Its Value-Laden History and the Future of AI
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375813
S. Cave
This paper argues that the concept of intelligence is highly value-laden in ways that impact on the field of AI and debates about its risks and opportunities. This value-ladenness stems from the historical use of the concept of intelligence in the legitimation of dominance hierarchies. The paper first provides a brief overview of the history of this usage, looking at the role of intelligence in patriarchy, the logic of colonialism and scientific racism. It then highlights five ways in which this ideological legacy might be interacting with debates about AI and its risks and opportunities: 1) how some aspects of the AI debate perpetuate the fetishization of intelligence; 2) how the fetishization of intelligence impacts on diversity in the technology industry; 3) how certain hopes for AI perpetuate notions of technology and the mastery of nature; 4) how the association of intelligence with the professional class misdirects concerns about AI; and 5) how the equation of intelligence and dominance fosters fears of superintelligence. This paper therefore takes a first step in bringing together the literature on intelligence testing, eugenics and colonialism from a range of disciplines with that on the ethics and societal impact of AI.
Citations: 25
More Than "If Time Allows": The Role of Ethics in AI Education
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375868
Natalie Garrett, Nathan Beard, Casey Fiesler
Even as public pressure mounts for technology companies to consider the societal impacts of products, industries and governments in the AI race are demanding technical talent. To meet this demand, universities clamor to add technical artificial intelligence (AI) and machine learning (ML) courses into computing curricula, but how are societal and ethical considerations part of this landscape? We explore two pathways for ethics content in AI education: (1) standalone AI ethics courses, and (2) integrating ethics into technical AI courses. For both pathways, we ask: What is being taught? As we train computer scientists who will build and deploy AI tools, how are we training them to consider the consequences of their work? In this exploratory work, we qualitatively analyzed 31 standalone AI ethics classes from 22 U.S. universities and 20 AI/ML technical courses from 12 U.S. universities to understand which ethics-related topics instructors include in courses. We identify and categorize topics in AI ethics education, share notable practices, and note omissions. Our analysis will help AI educators identify what topics should be taught and create scaffolding for developing future AI ethics education.
Citations: 62
From Bad Users and Failed Uses to Responsible Technologies: A Call to Expand the AI Ethics Toolkit
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3377141
Gina Neff
Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have been targeted largely at technology designers and concerned with helping to inform the building of better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use and will not be adequate for describing or redressing the problems that will arise as more types of AI technologies are more widely used. Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organises, stores and distributes information, where and how people interact with one another, and how people's work is valued and compensated. And yet, our ethical attention has looked at a fairly narrow range of questions about expanding the access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should develop much broader questions about the reconfiguration of societal power, for which AI technologies form a crucial component. This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, using approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or "imagined affordances" [1], shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions for shaping technologies-in-practice.
Citations: 9
Arbiter
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375858
Julian Zucker, Myraeka d'Leeuwen
The widespread deployment of machine learning models in high-stakes decision making scenarios requires a code of ethics for machine learning practitioners. We identify four of the primary components required for the ethical practice of machine learning: transparency, fairness, accountability, and reproducibility. We introduce Arbiter, a domain-specific programming language for machine learning practitioners that is designed for ethical machine learning. Arbiter provides a notation for recording how machine learning models will be trained, and we show how this notation can encourage the four described components of ethical machine learning.
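The abstract does not reproduce Arbiter's notation. As a purely hypothetical sketch of what a declarative training record covering the four described components might look like (none of the names or fields below are the actual Arbiter syntax; `TrainingSpec` and `check_spec` are invented for illustration):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical illustration only: NOT actual Arbiter syntax. A declarative
# record of how a model will be trained, making the four components of
# ethical ML practice explicit and machine-checkable.
@dataclass
class TrainingSpec:
    # Transparency: where the data comes from and what the model is for.
    dataset_source: str
    intended_use: str
    # Fairness: attributes the pipeline must audit for disparate impact.
    protected_attributes: List[str] = field(default_factory=list)
    # Accountability: a named owner for the deployed model.
    responsible_owner: str = ""
    # Reproducibility: a pinned seed so the training run can be replayed.
    random_seed: Optional[int] = None

def check_spec(spec: TrainingSpec) -> List[str]:
    """Return the ethical-practice components the spec leaves unaddressed."""
    missing = []
    if not spec.dataset_source or not spec.intended_use:
        missing.append("transparency")
    if not spec.protected_attributes:
        missing.append("fairness")
    if not spec.responsible_owner:
        missing.append("accountability")
    if spec.random_seed is None:
        missing.append("reproducibility")
    return missing

spec = TrainingSpec(
    dataset_source="census-income-2019.csv",
    intended_use="loan pre-screening",
    protected_attributes=["sex", "race"],
    responsible_owner="ml-team@example.org",
    random_seed=42,
)
print(check_spec(spec))  # [] -> all four components are addressed
```

The point of such a notation is that the ethical metadata lives in the same artifact that drives training, so an incomplete record can be rejected before any model is fit.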
Citations: 8
Ethics of Food Recommender Applications
Pub Date : 2020-02-03 DOI: 10.1145/3375627.3375874
Daniel Karpati, A. Najjar, Diego Agustín Ambrossio
The recent unprecedented popularity of food recommender applications has raised several issues related to the ethical, societal and legal implications of relying on these applications. In this paper, in order to assess the relevant ethical issues, we draw on the emerging principles across the AI & Ethics community and define them in a context-specific way. Considering that the popular Food Recommender Systems (henceforth F-RS) on the European market cannot be regarded as personalised F-RS, we show how this lack of personalisation alone shifts the relevance of the focal ethical concerns. We identify the major challenges and propose a scheme for how explicit ethical agendas should be explained. We also argue that a multi-stakeholder approach is indispensable for ensuring long-term benefits for all stakeholders. After proposing eight ethical desiderata points for F-RS, we present a case study and assess it against the proposed desiderata.
Citations: 5
Journal
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society