
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

Epistemic Reasoning for Machine Ethics with Situation Calculus
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462586
M. Pagnucco, D. Rajaratnam, Raynaldio Limarga, Abhaya C. Nayak, Yang Song
With the rapid development of autonomous machines such as self-driving vehicles and social robots, there is increasing realisation that machine ethics is important for the widespread acceptance of autonomous machines. Our objective is to encode ethical reasoning into autonomous machines following well-defined ethical principles and behavioural norms. We provide an approach to reasoning about actions that incorporates ethical considerations. It builds on Scherl and Levesque's [29, 30] approach to knowledge in the situation calculus. We show how reasoning about knowledge in a dynamic setting can be used to guide ethical and moral choices, aligned with consequentialist and deontological approaches to ethics. We apply our approach to autonomous driving and social robot scenarios, and provide an implementation framework.
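For readers unfamiliar with the formalism, Scherl and Levesque model knowledge as a fluent over accessible situations. A paraphrased LaTeX sketch of their standard definitions (the paper's own axiomatization may differ in detail):

```latex
% Knowledge as truth in all K-accessible situations:
\mathbf{Knows}(\phi, s) \doteq \forall s'.\; K(s', s) \supset \phi[s']
% Successor-state axiom for K under an ordinary (non-sensing) action a;
% sensing actions additionally require the sensed condition to agree in s and s':
K(s'', do(a, s)) \equiv \exists s'.\; s'' = do(a, s') \land K(s', s) \land Poss(a, s')
```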
Citations: 4
Comparing Equity and Effectiveness of Different Algorithms in an Application for the Room Rental Market
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462600
David Solans, Francesco Fabbri, Caterina Calsamiglia, Carlos Castillo, F. Bonchi
Machine Learning (ML) techniques have been increasingly adopted by the real estate market in the last few years. Applications include, among many others, predicting the market value of a property or an area, advanced systems for managing marketing and ad campaigns, and recommendation systems based on user preferences. While these techniques can provide important benefits to the business owners and the users of the platforms, algorithmic biases can result in inequalities and loss of opportunities for groups of people who are already disadvantaged in their access to housing. In this work, we present a comprehensive and independent algorithmic evaluation of a recommender system for the real estate market, designed specifically for finding shared apartments in metropolitan areas. We were granted full access to the internals of the platform, including details of the algorithms and usage data covering a period of two years. We analyze the performance of the various algorithms deployed in the recommender system and assess their effects across different population groups. Our analysis reveals that introducing a recommender system algorithm facilitates finding an appropriate tenant or a desirable room to rent, but at the same time, it strengthens performance inequalities between groups, further reducing the opportunities of certain minorities to find a rental.
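The group-level comparison described here can be illustrated with a minimal sketch; the column names, the success measure, and the min/max equity ratio are assumptions for illustration, not the paper's actual pipeline:

```python
import pandas as pd

def equity_and_effectiveness(df: pd.DataFrame, algo_col: str = "algorithm",
                             group_col: str = "group",
                             success_col: str = "matched") -> pd.DataFrame:
    """For each algorithm, report effectiveness (overall success rate) and
    equity (worst-off group's success rate over best-off group's)."""
    rows = []
    for algo, sub in df.groupby(algo_col):
        by_group = sub.groupby(group_col)[success_col].mean()
        rows.append({
            "algorithm": algo,
            "effectiveness": sub[success_col].mean(),
            "equity": by_group.min() / by_group.max(),  # 1.0 = perfect parity
        })
    return pd.DataFrame(rows)
```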
Citations: 1
Situated Accountability: Ethical Principles, Certification Standards, and Explanation Methods in Applied AI
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462564
A. Henriksen, Simon Enni, A. Bechmann
Artificial intelligence (AI) has the potential to benefit humans and society through its deployment in important sectors. However, the risks of negative consequences have underscored the importance of accountability for AI systems, their outcomes, and the users of such systems. In recent years, various accountability mechanisms have been put forward in pursuit of the responsible design, development, and use of AI. In this article, we provide an in-depth study of three such mechanisms, as we analyze Scandinavian AI developers' encounters with (1) ethical principles, (2) certification standards, and (3) explanation methods. By doing so, we contribute to closing a gap in the literature between discussions of accountability on the research and policy level, and accountability as a responsibility put on the shoulders of developers in practice. Our study illustrates important flaws in the current enactment of accountability as an ethical and social value which, if left unchecked, risks undermining the pursuit of responsible AI. By bringing attention to these flaws, the article signals where further work is needed in order to build effective accountability systems for AI.
Citations: 16
Blind Justice: Algorithmically Masking Race in Charging Decisions
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462524
Alex Chohlas-Wood, Joe Nudell, Zhiyuan Jerry Lin, Julian Nyarko, Sharad Goel
A prosecutor's decision to charge or dismiss a criminal case is a particularly high-stakes choice. There is concern, however, that these judgements may suffer from explicit or implicit racial bias, as with many other such actions in the criminal justice system. To reduce potential bias in charging decisions, we designed a system that algorithmically redacts race-related information from free-text case narratives. In a first-of-its-kind initiative, we deployed this system at a large American district attorney's office to help prosecutors make race-obscured charging decisions, where it was used to review many incoming felony cases. We report on the design, efficacy, and impact of our tool for aiding equitable decision-making. We demonstrate that our redaction algorithm is able to accurately obscure race-related information, making it difficult for a human reviewer to guess the race of a suspect while preserving other information from the case narrative. In the jurisdiction we study, we found little evidence of disparate treatment in charging decisions even prior to deployment of our intervention. Thus, as expected, our tool did not substantially alter charging rates. Nevertheless, our study demonstrates the feasibility of race-obscured charging, and more generally highlights the promise of algorithms to bolster equitable decision-making in the criminal justice system.
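The underlying idea can be caricatured in a few lines; the deployed system is far more sophisticated (it also needs to mask names, locations, and other proxy features), and the term list and placeholder below are illustrative assumptions:

```python
import re

# Hypothetical, deliberately incomplete lexicon of explicit race mentions.
RACE_TERMS = ["white", "black", "hispanic", "latino", "asian"]
PLACEHOLDER = "[REDACTED]"

def redact_narrative(text: str) -> str:
    """Mask explicit race-related tokens in a free-text case narrative,
    leaving the remaining narrative intact for the reviewing prosecutor."""
    pattern = re.compile(r"\b(" + "|".join(RACE_TERMS) + r")\b", re.IGNORECASE)
    return pattern.sub(PLACEHOLDER, text)

print(redact_narrative("Officer observed a Black male suspect fleeing on foot."))
# -> Officer observed a [REDACTED] male suspect fleeing on foot.
```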
Citations: 14
The Dangers of Drowsiness Detection: Differential Performance, Downstream Impact, and Misuses
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462593
Jakub Grzelak, Martim Brandao
Drowsiness and fatigue are important factors in driving safety and work performance. This has motivated academic research into detecting drowsiness, and sparked interest in the deployment of related products in the insurance and work-productivity sectors. In this paper we elaborate on the potential dangers of using such algorithms. We first report on an audit of performance bias across subject gender and ethnicity, identifying which groups would be disparately harmed by the deployment of a state-of-the-art drowsiness detection algorithm. We discuss some of the sources of the bias, such as the lack of robustness of facial analysis algorithms to face occlusions, facial hair, or skin tone. We then identify potential downstream harms of this performance bias, as well as potential misuses of drowsiness detection technology---focusing on driving safety and experience, insurance cream-skimming and coverage-avoidance, worker surveillance, and job precarity.
Citations: 0
Moral Disagreement and Artificial Intelligence
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462534
P. Robinson
Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without consensus about the relevant moral facts. I argue that what makes moral disagreement especially challenging is that there are two different ways of handling it: political solutions, which aim to find a fair compromise, and epistemic solutions, which aim at moral truth.
Citations: 2
A Multi-Agent Approach to Combine Reasoning and Learning for an Ethical Behavior
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462515
R. Chaput, Jérémy Duval, O. Boissier, Mathieu Guillermin, S. Hassas
The recent field of Machine Ethics is experiencing rapid growth to answer the societal need for Artificial Intelligence (AI) algorithms imbued with ethical considerations, such as benevolence toward human users and actors. Several approaches already exist for this purpose, mostly either by reasoning over a set of predefined ethical principles (Top-Down) or by learning new principles (Bottom-Up). While both methods have their own advantages and drawbacks, only a few works have explored hybrid approaches that combine the advantages of each, for instance by using symbolic rules to guide the learning process. This paper draws upon existing works to propose a novel hybrid method using symbolic judging agents to evaluate the ethics of learning agents' behaviors, and accordingly improve their ability to behave ethically in dynamic multi-agent environments. Multiple benefits ensue from this separation between judging and learning agents: agents can evolve (or be updated by human designers) separately, benefiting from co-construction processes; judging agents can act as accessible proxies for non-expert human stakeholders or regulators; and finally, multiple points of view (one per judging agent) can be adopted to judge the behavior of the same agent, which produces richer feedback. Our proposed approach is applied to an energy distribution problem, in the context of a Smart Grid simulator, with continuous and multi-dimensional states and actions. The experiments and results show the ability of learning agents to correctly adapt their behaviors to comply with the judging agents' rules, including when rules evolve over time.
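One way to picture the separation of concerns is a judging agent that turns symbolic rules into a scalar feedback signal for the learner; the rule, data layout, and averaging scheme below are illustrative assumptions rather than the paper's design:

```python
from typing import Callable, Dict

# Each symbolic rule scores an (observation, action) pair in [0, 1].
Rule = Callable[[dict, dict], float]

class JudgingAgent:
    def __init__(self, rules: Dict[str, Rule]):
        self.rules = rules

    def judge(self, observation: dict, action: dict) -> float:
        """Aggregate per-rule ethical scores into a feedback signal that a
        learning agent can fold into its reward."""
        scores = [rule(observation, action) for rule in self.rules.values()]
        return sum(scores) / len(scores)

# Example rule for a smart-grid setting: do not exceed a fair share of
# energy when the grid is under-supplied.
def frugality(obs: dict, act: dict) -> float:
    return 1.0 if act["consumed"] <= obs["fair_share"] else 0.0

judge = JudgingAgent({"frugality": frugality})
feedback = judge.judge({"fair_share": 5.0}, {"consumed": 4.2})  # -> 1.0
```

Because the judge is a separate component, its rule set can be edited by human designers without rebuilding the learner, which is the co-construction benefit the abstract highlights.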
Citations: 5
Ethical Data Curation for AI: An Approach based on Feminist Epistemology and Critical Theories of Race
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462598
Susan Leavy, E. Siapera, B. O’Sullivan
The potential for bias embedded in data to lead to the perpetuation of social injustice through Artificial Intelligence (AI) necessitates an urgent reform of data curation practices for AI systems, especially those based on machine learning. Without appropriate ethical and regulatory frameworks there is a risk that decades of advances in human rights and civil liberties may be undermined. This paper proposes an approach to data curation for AI, grounded in feminist epistemology and informed by critical theories of race and feminist principles. The objective of this approach is to support critical evaluation of the social dynamics of power embedded in data for AI systems. We propose a set of fundamental guiding principles for ethical data curation that address the social construction of knowledge, call for inclusion of subjugated and new forms of knowledge, support critical evaluation of theoretical concepts within data and recognise the reflexive nature of knowledge. In developing this ethical framework for data curation, we aim to contribute to a virtue ethics for AI and ensure protection of fundamental and human rights.
Citations: 17
Participatory Algorithmic Management: Elicitation Methods for Worker Well-Being Models
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462628
Min Kyung Lee, Ishan Nigam, Angie Zhang, J. Afriyie, Zhizhen Qin, Sicun Gao
Artificial intelligence is increasingly being used to manage the workforce. Algorithmic management promises organizational efficiency, but often undermines worker well-being. How can we computationally model worker well-being so that algorithmic management can be optimized for and assessed in terms of worker well-being? Toward this goal, we propose a participatory approach for worker well-being models. We first define worker well-being models: Work preference models---preferences about work and working conditions, and managerial fairness models---beliefs about fair resource allocation among multiple workers. We then propose elicitation methods to enable workers to build their own well-being models leveraging pairwise comparisons and ranking. As a case study, we evaluate our methods in the context of algorithmic work scheduling with 25 shift workers and 3 managers. The findings show that workers expressed idiosyncratic work preference models and more uniform managerial fairness models, and the elicitation methods helped workers discover their preferences and gave them a sense of empowerment. Our work provides a method and initial evidence for enabling participatory algorithmic management for worker well-being.
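The abstract does not specify the aggregation model, but preference weights can be recovered from pairwise comparisons with, for example, a Bradley-Terry model; the minorization-maximization fit below is a generic sketch, not the paper's elicitation method:

```python
import numpy as np

def bradley_terry(n_items: int, comparisons: list[tuple[int, int]],
                  iters: int = 200) -> np.ndarray:
    """Estimate preference weights from (winner, loser) pairs via simple
    MM updates; a higher weight means a more preferred item."""
    w = np.ones(n_items)
    wins = np.zeros(n_items)
    for winner, _ in comparisons:
        wins[winner] += 1
    for _ in range(iters):
        denom = np.zeros(n_items)
        for i, j in comparisons:          # every comparison involves both items
            contrib = 1.0 / (w[i] + w[j])
            denom[i] += contrib
            denom[j] += contrib
        w = wins / np.maximum(denom, 1e-12)
        w /= w.sum()
    return w

# Three shift attributes compared by one worker: 0 beat 1 twice, 1 beat 2 once.
weights = bradley_terry(3, [(0, 1), (0, 1), (1, 2)])
```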
Citations: 16
Beyond Reasonable Doubt: Improving Fairness in Budget-Constrained Decision Making using Confidence Thresholds
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462575
Michiel A. Bakker, Duy Patrick Tu, K. Gummadi, A. Pentland, Kush R. Varshney, Adrian Weller
Prior work on fairness in machine learning has focused on settings where all the information needed about each individual is readily available. However, in many applications, further information may be acquired at a cost. For example, when assessing a customer's creditworthiness, a bank initially has access to a limited set of information but progressively improves the assessment by acquiring additional information before making a final decision. In such settings, we posit that a fair decision maker may want to ensure that decisions for all individuals are made with a similar expected error rate, even if the features acquired for the individuals are different. We show that a set of carefully chosen confidence thresholds can not only effectively redistribute an information budget according to each individual's needs, but also serve to address individual and group fairness concerns simultaneously. Finally, using two public datasets, we confirm the effectiveness of our methods and investigate their limitations.
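The acquisition process described here amounts to buying features until the model is confident enough; a minimal sketch, assuming a classifier `predict_proba` that tolerates missing (NaN) features, and omitting the paper's key step of choosing the thresholds to equalize error rates:

```python
import numpy as np

def acquire_until_confident(x_full: np.ndarray, feature_order: list,
                            predict_proba, threshold: float = 0.9):
    """Reveal features one at a time and stop as soon as the classifier's
    confidence clears the threshold; returns (decision, features bought)."""
    revealed = np.full(x_full.shape, np.nan)
    p, bought = 0.5, 0
    for idx in feature_order:
        revealed[idx] = x_full[idx]     # pay the cost of one more feature
        bought += 1
        p = predict_proba(revealed)     # P(y = 1) given the revealed subset
        if max(p, 1.0 - p) >= threshold:
            break
    return int(p >= 0.5), bought
```

Varying the threshold is what lets the method redistribute the acquisition budget: individuals whose early features leave the model uncertain receive more of it.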
Citations: 8