
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

Training for Implicit Norms in Deep Reinforcement Learning Agents through Adversarial Multi-Objective Reward Optimization
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462473
M. Peschl
We propose a deep reinforcement learning algorithm that employs an adversarial training strategy for adhering to implicit human norms alongside optimizing for a narrow goal objective. Previous methods which incorporate human values into reinforcement learning algorithms either scale poorly or assume hand-crafted state features. Our algorithm drops these assumptions and is able to automatically infer norms from human demonstrations, which allows for integrating it into existing agents in the form of multi-objective optimization. We benchmark our approach in a search-and-rescue grid world and show that, conditioned on respecting human norms, our agent maintains optimal performance with respect to the predefined goal.
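The abstract describes inferring a norm-adherence signal from human demonstrations and folding it into the agent's objective alongside the task reward. As a rough illustration only (not the paper's actual implementation), the sketch below scalarizes a task reward with an AIRL-style norm reward derived from a hypothetical discriminator probability; all names and the weighting scheme are assumptions.

```python
import numpy as np

def norm_reward(disc_prob: float) -> float:
    """Illustrative AIRL-style reward: log-odds that the discriminator assigns
    to a transition being drawn from the human demonstrations (assumed setup)."""
    eps = 1e-8
    return float(np.log(disc_prob + eps) - np.log(1.0 - disc_prob + eps))

def combined_reward(task_r: float, disc_prob: float, weight: float = 0.5) -> float:
    """Scalarize the two objectives: task performance vs. inferred norm adherence.
    `weight` controls how strongly norm violations are penalized."""
    return (1.0 - weight) * task_r + weight * norm_reward(disc_prob)

# A transition that helps the task but looks norm-violating is penalized,
# while a norm-conforming one keeps most of its task reward.
print(combined_reward(task_r=1.0, disc_prob=0.2))   # negative overall
print(combined_reward(task_r=1.0, disc_prob=0.9))   # positive overall
```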
Citations: 5
How Do the Score Distributions of Subpopulations Influence Fairness Notions?
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462601
Carmen Mazijn, J. Danckaert, V. Ginis
Automated decisions based on trained algorithms influence human life in an increasingly far-reaching way. In recent years, it has become clear that these decisions are often accompanied by bias and unfair treatment of different subpopulations. Meanwhile, several notions of fairness circulate in the scientific literature, with trade-offs between profit and fairness and between fairness metrics among themselves. Based on both analytical calculations and numerical simulations, we show in this study that some profit-fairness trade-offs and fairness-fairness trade-offs depend substantially on the underlying score distributions given to subpopulations and we present two complementary perspectives to visualize this influence. We further show that higher symmetry in scores of subpopulations can significantly reduce the trade-offs between fairness notions within a given acceptable strictness, even when sacrificing expressiveness. Our exploratory study may help to understand how to overcome the strict mathematical statements about the statistical incompatibility of certain fairness notions.
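To make the dependence on score distributions concrete, here is a small synthetic sketch (the distributions, threshold, and labels are invented for illustration, not taken from the paper): it computes a demographic-parity gap and an equal-opportunity gap for two groups whose scores are skewed in opposite directions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fairness_gaps(scores_a, labels_a, scores_b, labels_b, threshold=0.5):
    """Demographic-parity gap (selection-rate difference) and
    equal-opportunity gap (TPR difference) at a fixed score threshold."""
    sel_a, sel_b = scores_a >= threshold, scores_b >= threshold
    dp_gap = abs(sel_a.mean() - sel_b.mean())
    tpr_gap = abs(sel_a[labels_a == 1].mean() - sel_b[labels_b == 1].mean())
    return dp_gap, tpr_gap

# Two subpopulations with calibrated but oppositely skewed score distributions.
scores_a = rng.beta(5, 2, 10_000)                       # skewed toward high scores
scores_b = rng.beta(2, 5, 10_000)                       # skewed toward low scores
labels_a = (rng.random(10_000) < scores_a).astype(int)  # labels drawn from the scores
labels_b = (rng.random(10_000) < scores_b).astype(int)

print(fairness_gaps(scores_a, labels_a, scores_b, labels_b))
```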
Citations: 1
Platform Power and AI: The Case of Content
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462443
Seth Lazar, Taina Bucher, A. Korolova, Cailin O’Connor, Nicolas Suzor
This panel brings experts together from law, philosophy, computer science and media studies to explore how digital platforms exercise power over which content is visible online, and which content is promoted to users, with a special focus on the use of algorithmic systems to achieve these ends.
Citations: 0
Designing Effective and Accessible Consumer Protections against Unfair Treatment in Markets where Automated Decision Making is used to Determine Access to Essential Services: A Case Study in Australia's Housing Market
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462468
Linda Przhedetsky
The use of data-driven Automated Decision Making (ADM) to determine access to products or services in competitive markets can enhance or limit access to equality and fair treatment. In cases where essential services such as housing, energy and telecommunications are accessed through a competitive market, consumers who are denied access to one or more of these services may not be able to access a suitable alternative if there are none available to match their needs, budget, and unique circumstances. Being denied access to an essential service such as electricity or housing can be an issue of life or death. Competitive essential services markets therefore illuminate the ways that using ADM to determine access to products or services, if not balanced by appropriate consumer protections, can cause significant harm. My research explores existing and emerging consumer protections that are effective in preventing consumers being harmed by ADM-facilitated decisions in essential services markets.
Citations: 1
Feeding the Beast: Superintelligence, Corporate Capitalism and the End of Humanity
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462581
Dominic Leggett
Scientists and philosophers have warned of the possibility that humans, in the future, might create a 'superintelligent' machine that could, in some scenarios, form an existential threat to humanity. This paper argues that such a machine may already exist, and that, if so, it does, in fact, represent such a threat.
Citations: 0
Algorithmic Hiring in Practice: Recruiter and HR Professional's Perspectives on AI Use in Hiring
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462531
Lan Li, T. Lassiter, Joohee Oh, Min Kyung Lee
The use of AI-enabled hiring software raises questions about the practice of Human Resource (HR) professionals' use of the software and its consequences. We interviewed 15 recruiters and HR professionals about their experiences around two decision-making processes during hiring: sourcing and assessment. For both, AI-enabled software allowed the efficient processing of candidate data, thus providing the ability to introduce or advance candidates from broader and more diverse pools. For sourcing, it can serve as a useful learning resource to find candidates. However, a lack of trust in data accuracy and an inadequate level of control over algorithmic candidate matches can create reluctance to embrace it. For assessment, its implementation varied across companies depending on the industry and the hiring scenario. Its inclusion may redefine HR professionals' job content as it automates or augments pieces of the existing hiring process. Finally, we discuss how candidate roles that recruiters and HR professionals support drive the use of algorithmic hiring software.
Citations: 33
Modeling and Guiding the Creation of Ethical Human-AI Teams
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462573
Christopher Flathmann, Beau G. Schelble, Rui Zhang, Nathan J. Mcneese
With artificial intelligence continuing to advance, so too do the ethical concerns that can potentially negatively impact humans and the greater society. When these systems begin to interact with humans, these concerns become much more complex and much more important. The field of human-AI teaming provides a relevant example of how AI ethics can have significant and continued effects on humans. This paper reviews research in ethical artificial intelligence, as well as ethical teamwork through the lens of the rapidly advancing field of human-AI teaming, resulting in a model demonstrating the requirements and outcomes of building ethical human-AI teams. The model is created to guide the prioritization of ethics in human-AI teaming by outlining the ethical teaming process, outcomes of ethical teams, and external requirements necessary to ensure ethical human-AI teams. A final discussion is presented on how the developed model will influence the implementation of AI teammates, as well as the development of policy and regulation surrounding the domain in the coming years.
Citations: 16
Blacklists and Redlists in the Chinese Social Credit System: Diversity, Flexibility, and Comprehensiveness
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462535
Severin Engelmann, Mo Chen, Lorenz Dang, Jens Grossklags
The Chinese Social Credit System (SCS) is a novel digital socio-technical credit system. The SCS aims to regulate societal behavior by reputational and material devices. Scholarship on the SCS has offered a variety of legal and theoretical perspectives. However, little is known about its actual implementation. Here, we provide the first comprehensive empirical study of digital blacklists (listing "bad" behavior) and redlists (listing "good" behavior) in the Chinese SCS. Based on a unique data set of reputational blacklists and redlists in 30 Chinese provincial-level administrative divisions (ADs), we show the diversity, flexibility, and comprehensiveness of the SCS listing infrastructure. First, our results demonstrate that the Chinese SCS unfolds in a highly diversified manner: we find differences in accessibility, interface design and credit information across provincial-level SCS blacklists and redlists. Second, SCS listings are flexible. During the COVID-19 outbreak, we observe a swift addition of blacklists and redlists that helps strengthen the compliance with coronavirus-related norms and regulations. Third, the SCS listing infrastructure is comprehensive. Overall, we identify 273 blacklists and 154 redlists across provincial-level ADs. Our blacklist and redlist taxonomy highlights that the SCS listing infrastructure prioritizes law enforcement and industry regulations. We also identify redlists that reward political and moral behavior. Our study substantiates the enormous scale and diversity of the Chinese SCS and puts the debate on its reach and societal impact on firmer ground. Finally, we initiate a discussion on the ethical dimensions of data-driven research on the SCS.
Citations: 11
Examining Religion Bias in AI Text Generators
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462469
D. Muralidhar
One of the biggest reasons artificial intelligence (AI) faces backlash is the inherent biases in AI software. Deep learning algorithms use data fed into the systems to find patterns to draw conclusions used to make application decisions. Patterns in data fed into machine learning algorithms have revealed that AI software decisions have biases embedded within them. Algorithmic audits can certify that the software is making responsible decisions. These audits verify standards centered around the various AI principles such as explainability, accountability, and human-centered values such as fairness and transparency, to increase trust in the algorithm and the software systems that implement AI algorithms.
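As a purely illustrative sketch of the kind of black-box audit the abstract alludes to (the prompt template, word list, and `generate` stand-in are assumptions, not any particular vendor's API), one could compare a crude negativity score of a generator's completions across prompts that differ only in the religion term:

```python
# Crude lexicon for illustration only; a real audit would use a validated
# toxicity or sentiment model rather than a handful of words.
NEGATIVE_WORDS = {"violent", "dangerous", "evil", "terrorist", "criminal"}

def negativity(text: str) -> float:
    tokens = text.lower().split()
    return sum(t.strip(".,!?") in NEGATIVE_WORDS for t in tokens) / max(len(tokens), 1)

def audit(generate, religions, template="The {} man walked into the room.", n=50):
    """Mean negativity of n sampled completions per religion term,
    using any callable `generate(prompt) -> str`."""
    return {
        r: sum(negativity(generate(template.format(r))) for _ in range(n)) / n
        for r in religions
    }

# Stand-in generator so the sketch runs; swap in a real text-generation call.
fake_generate = lambda prompt: prompt + " He seemed friendly."
print(audit(fake_generate, ["Christian", "Muslim", "Jewish", "Hindu", "Buddhist"]))
```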
Citations: 4
Age Bias in Emotion Detection: An Analysis of Facial Emotion Recognition Performance on Young, Middle-Aged, and Older Adults
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462609
E. Kim, De'Aira G. Bryant, Deepak Srikanth, A. Howard
The growing potential for facial emotion recognition (FER) technology has encouraged expedited development at the cost of rigorous validation. Many of its use-cases may also impact the diverse global community as FER becomes embedded into domains ranging from education to security to healthcare. Yet, prior work has highlighted that FER can exhibit both gender and racial biases like other facial analysis techniques. As a result, bias-mitigation research efforts have mainly focused on tackling gender and racial disparities, while other demographic related biases, such as age, have seen less progress. This work seeks to examine the performance of state of the art commercial FER technology on expressive images of men and women from three distinct age groups. We utilize four different commercial FER systems in a black box methodology to evaluate how six emotions - anger, disgust, fear, happiness, neutrality, and sadness - are correctly detected by age group. We further investigate how algorithmic changes over the last year have affected system performance. Our results found that all four commercial FER systems most accurately perceived emotion in images of young adults and least accurately in images of older adults. This trend was observed for analyses conducted in 2019 and 2020. However, little to no gender disparities were observed in either year. While older adults may not have been the initial target consumer of FER technology, statistics show the demographic is quickly growing more keen to applications that use such systems. Our results demonstrate the importance of considering various demographic subgroups during FER system validation and the need for inclusive, intersectional algorithmic developmental practices.
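The evaluation described above boils down to comparing per-age-group accuracy of each system's predictions; the toy records in the sketch below are fabricated just to show the shape of that computation, not the study's data.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (age_group, true_emotion, predicted_emotion) tuples.
    Returns emotion-recognition accuracy per age group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Fabricated example records for one hypothetical FER system.
records = [
    ("young", "happiness", "happiness"),
    ("young", "anger", "anger"),
    ("middle-aged", "fear", "fear"),
    ("middle-aged", "sadness", "neutrality"),
    ("older", "disgust", "neutrality"),
    ("older", "happiness", "happiness"),
]
acc = per_group_accuracy(records)
print(acc, "largest group gap:", max(acc.values()) - min(acc.values()))
```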
Citations: 22