
Latest Publications: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society

Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing
Pub Date : 2020-01-03 DOI: 10.1145/3375627.3375820
Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, Emily L. Denton
Although essential to revealing biased performance, well-intentioned attempts at algorithmic auditing can have effects that may harm the very populations these measures are meant to protect. This concern is even more salient while auditing biometric systems such as facial recognition, where the data is sensitive and the technology is often used in ethically questionable manners. We demonstrate a set of five ethical concerns in the particular case of auditing commercial facial processing technology, highlighting additional design considerations and ethical tensions the auditor needs to be aware of so as not to exacerbate or complement the harms propagated by the audited system. We go further to provide tangible illustrations of these concerns, and conclude by reflecting on what these concerns mean for the role of the algorithmic audit and the fundamental product limitations they reveal.
Citations: 189
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
Pub Date : 2020-01-01 DOI: 10.1145/3375627
{"title":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","authors":"","doi":"10.1145/3375627","DOIUrl":"https://doi.org/10.1145/3375627","url":null,"abstract":"","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72818403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
U.S. Public Opinion on the Governance of Artificial Intelligence
Pub Date : 2019-12-30 DOI: 10.1145/3375627.3375827
Baobao Zhang, A. Dafoe
Artificial intelligence (AI) has widespread societal implications, yet social scientists are only beginning to study public attitudes toward the technology. Existing studies find that the public's trust in institutions can play a major role in shaping the regulation of emerging technologies. Using a large-scale survey (N=2000), we examined Americans' perceptions of 13 AI governance challenges as well as their trust in governmental, corporate, and multistakeholder institutions to responsibly develop and manage AI. While Americans perceive all of the AI governance issues to be important for tech companies and governments to manage, they have only low to moderate trust in these institutions to manage AI applications.
Citations: 30
The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?
Pub Date : 2019-12-27 DOI: 10.1145/3375627.3375815
Toby Shevlane, A. Dafoe
There is growing concern over the potential misuse of artificial intelligence (AI) research. Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse. This paper addresses the balance between these two effects. Our theoretical framework elucidates the factors governing whether the published research will be more useful for attackers or defenders, such as the possibility for adequate defensive measures, or the independent discovery of the knowledge outside of the scientific community. The balance will vary across scientific fields. However, we show that the existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this cannot be assumed for AI research. The AI research community should consider concepts and policies from a broad set of adjacent fields, and ultimately needs to craft policy well-suited to its particular challenges.
Citations: 24
The Windfall Clause: Distributing the Benefits of AI for the Common Good
Pub Date : 2019-12-25 DOI: 10.1145/3375627.3375842
Cullen O'Keefe, P. Cihon, Ben Garfinkel, Carrick Flynn, Jade Leung, A. Dafoe
As the transformative potential of AI has become increasingly salient as a matter of public and political interest, there has been growing discussion about the need to ensure that AI broadly benefits humanity. This in turn has spurred debate on the social responsibilities of large technology companies to serve the interests of society at large. In response, ethical principles and codes of conduct have been proposed to meet the escalating demand for this responsibility to be taken seriously. As yet, however, few institutional innovations have been suggested to translate this responsibility into legal commitments which apply to companies positioned to reap large financial gains from the development and use of AI. This paper offers one potentially attractive tool for addressing such issues: the Windfall Clause, which is an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits. By this we mean an early commitment that profits that a firm could not earn without achieving fundamental, economically transformative breakthroughs in AI capabilities will be donated to benefit humanity broadly, with particular attention towards mitigating any downsides from deployment of windfall-generating AI.
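To make the mechanics concrete, here is a minimal Python sketch of a progressive windfall function in the spirit of the clause: donation obligations accrue at marginal rates on profits measured as a share of gross world product (GWP). The bracket boundaries, rates, and GWP figure below are illustrative assumptions, not the schedule proposed in the paper.

```python
GWP = 100e12  # rough gross world product in USD (an assumption)

# (lower share of GWP, upper share of GWP, marginal donation rate) --
# hypothetical brackets for illustration only.
BRACKETS = [(0.000, 0.001, 0.00),
            (0.001, 0.010, 0.01),
            (0.010, 0.100, 0.20),
            (0.100, 1.000, 0.50)]

def windfall_obligation(profit: float) -> float:
    """Donation owed on `profit` (USD) under the hypothetical schedule."""
    owed = 0.0
    for lo, hi, rate in BRACKETS:
        lo_abs, hi_abs = lo * GWP, hi * GWP
        if profit > lo_abs:
            owed += (min(profit, hi_abs) - lo_abs) * rate
    return owed

# A firm whose annual profit reaches 2% of GWP:
print(f"${windfall_obligation(0.02 * GWP):,.0f}")  # -> $209,000,000,000
```

Under this hypothetical schedule, ordinary profits owe nothing, while windfall-scale profits trigger steeply increasing obligations.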
Citations: 24
Defining AI in Policy versus Practice
Pub Date : 2019-12-23 DOI: 10.1145/3375627.3375835
P. Krafft, Meg Young, Michael A. Katell, Karen Huang, Ghislain Bugingo
Recent concern about harms of information technologies motivates consideration of regulatory action to forestall or constrain certain developments in the field of artificial intelligence (AI). However, definitional ambiguity hampers the possibility of conversation about this urgent topic of public concern. Legal and regulatory interventions require agreed-upon definitions, but consensus around a definition of AI has been elusive, especially in policy conversations. With an eye towards practical working definitions and a broader understanding of positions on these issues, we survey experts and review published policy documents to examine researcher and policy-maker conceptions of AI. We find that while AI researchers favor definitions of AI that emphasize technical functionality, policy-makers instead use definitions that compare systems to human thinking and behavior. We point out that definitions adhering closely to the functionality of AI systems are more inclusive of technologies in use today, whereas definitions that emphasize human-like capabilities are most applicable to hypothetical future technologies. As a result of this gap, ethical and regulatory efforts may overemphasize concern about future technologies at the expense of pressing issues with existing deployed technologies.
Citations: 62
Exploring AI Futures Through Role Play
Pub Date : 2019-12-19 DOI: 10.1145/3375627.3375817
S. Avin, Ross Gruetzemacher, J. Fox
We present an innovative methodology for studying and teaching the impacts of AI through a role-play game. The game serves two primary purposes: 1) training AI developers and AI policy professionals to reflect on and prepare for future social and ethical challenges related to AI and 2) exploring possible futures involving AI technology development, deployment, social impacts, and governance. While the game currently focuses on the inter-relations between short-, mid- and long-term impacts of AI, it has potential to be adapted for a broad range of scenarios, exploring in greater depth issues of AI policy research and affording training within organizations. The game presented here has undergone two years of development and has been tested through over 30 events involving between 3 and 70 participants. The game is under active development, but preliminary findings suggest that role-play is a promising methodology for both exploring AI futures and training individuals and organizations in thinking about, and reflecting on, the impacts of AI and strategic mistakes that can be avoided today.
Citations: 6
Meta Decision Trees for Explainable Recommendation Systems
Pub Date : 2019-12-19 DOI: 10.1145/3375627.3375876
Eyal Shulman, Lior Wolf
We tackle the problem of building explainable recommendation systems that are based on a per-user decision tree, with decision rules that are based on single attribute values. We build the trees by applying learned regression functions to obtain the decision rules as well as the values at the leaf nodes. The regression functions receive as input the embedding of the user's training set, as well as the embedding of the samples that arrive at the current node. The embedding and the regressors are learned end-to-end with a loss that encourages the decision rules to be sparse. By applying our method, we obtain a collaborative filtering solution that provides a direct explanation to every rating it provides. With regards to accuracy, it is competitive with other algorithms. However, as expected, explainability comes at a cost and the accuracy is typically slightly lower than the state of the art result reported in the literature. Our code is available at https://github.com/shulmaneyal/metatrees.
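As a rough illustration of the architecture (a sketch under assumed shapes and untrained parameters; the authors' actual implementation is at the repository above): a regression function maps the embedding of a user's ratings to a single-attribute split rule, and the leaf values are likewise regressed from the embedding, so every predicted rating carries its own human-readable explanation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ITEMS, N_ATTRS = 100, 8
item_attrs = rng.random((N_ITEMS, N_ATTRS))   # single-attribute item features

def embed_user(rated_items, ratings):
    """Embed a user's training set (stand-in for the learned embedding)."""
    return (item_attrs[rated_items] * ratings[:, None]).mean(axis=0)

def rule_from_embedding(user_emb, W_attr, w_thresh):
    """Stand-in for the learned regressor: pick one attribute (sparsity)
    and a threshold for the root split of this user's tree."""
    attr = int(np.argmax(W_attr @ user_emb))
    thresh = float(w_thresh @ user_emb)
    return attr, thresh

def predict(item, user_emb, W_attr, w_thresh, leaf_w):
    """Rate `item` with a one-level tree; return rating plus explanation."""
    attr, thresh = rule_from_embedding(user_emb, W_attr, w_thresh)
    leaf = 0 if item_attrs[item, attr] <= thresh else 1
    rating = float(leaf_w[leaf] @ user_emb)   # leaf value regressed from embedding
    op = "<=" if leaf == 0 else ">"
    return rating, f"because attribute {attr} {op} {thresh:.2f}"

# Untrained toy parameters; in the paper, the embedding and regressors are
# learned end-to-end with a loss that encourages sparse decision rules.
W_attr = rng.standard_normal((N_ATTRS, N_ATTRS))
w_thresh = rng.standard_normal(N_ATTRS)
leaf_w = rng.standard_normal((2, N_ATTRS))

u = embed_user(np.array([1, 5, 9]), np.array([4.0, 2.0, 5.0]))
print(predict(7, u, W_attr, w_thresh, leaf_w))
```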
Citations: 11
What's Next for AI Ethics, Policy, and Governance? A Global Overview
Pub Date : 2019-12-18 DOI: 10.1145/3375627.3375804
Daniel S. Schiff, J. Biddle, J. Borenstein, Kelly Laas
Since 2016, more than 80 AI ethics documents - including codes, principles, frameworks, and policy strategies - have been produced by corporations, governments, and NGOs. In this paper, we examine three topics of importance related to our ongoing empirical study of ethics and policy issues in these emerging documents. First, we review possible challenges associated with the relative homogeneity of the documents' creators. Second, we provide a novel typology of motivations to characterize both obvious and less obvious goals of the documents. Third, we discuss the varied impacts these documents may have on the AI governance landscape, including what factors are relevant to assessing whether a given document is likely to be successful in achieving its goals.
Citations: 78
Balancing the Tradeoff between Profit and Fairness in Rideshare Platforms during High-Demand Hours
Pub Date : 2019-12-18 DOI: 10.1145/3375627.3375818
Vedant Nanda, Pan Xu, Karthik Abinav Sankararaman, John P. Dickerson, A. Srinivasan
Rideshare platforms, when assigning requests to drivers, tend to maximize profit for the system and/or minimize waiting time for riders. Such platforms can exacerbate biases that drivers may have over certain types of requests. We consider the case of peak hours when the demand for rides is more than the supply of drivers. Drivers are well aware of their advantage during the peak hours and can choose to be selective about which rides to accept. Moreover, if in such a scenario, the assignment of requests to drivers (by the platform) is made only to maximize profit and/or minimize wait time for riders, requests of a certain type (e.g., from a non-popular pickup location, or to a non-popular drop-off location) might never be assigned to a driver. Such a system can be highly unfair to riders. However, increasing fairness might come at a cost of the overall profit made by the rideshare platform. To balance these conflicting goals, we present a flexible, non-adaptive algorithm, NAdap, that allows the platform designer to control the profit and fairness of the system via parameters α and β, respectively. We model the matching problem as an online bipartite matching where the set of drivers is offline and requests arrive online. Upon the arrival of a request, we use NAdap to assign it to a driver (the driver might then choose to accept or reject it) or reject the request. We formalize the measures of profit and fairness in our setting and show that by using NAdap, the competitive ratios for profit and fairness measures would be no worse than α/e and β/e respectively. Extensive experimental results on both real-world and synthetic datasets confirm the validity of our theoretical lower bounds. Additionally, they show that NAdap under some choice of (α, β) can beat two natural heuristics, Greedy and Uniform, on both fairness and profit. Code is available at: https://github.com/nvedant07/rideshare-fairness-peak/. The full paper can be found in the proceedings of AAAI 2020 and on arXiv: http://arxiv.org/abs/1912.08388.
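A minimal sketch of what a non-adaptive policy of this shape could look like (an illustration, not the authors' code, which is at the repository above): each request type has two precomputed assignment distributions over drivers, one profit-oriented and one fairness-oriented; an arriving request samples a driver from their α/β mixture, with leftover probability mass meaning rejection. The distributions below are invented placeholders; in the actual algorithm they would come from an offline optimization.

```python
import random

drivers = ["d1", "d2", "d3"]
available = set(drivers)

# Hypothetical per-request-type distributions over drivers; each row sums
# to <= 1, and the leftover mass corresponds to rejecting the request.
profit_dist = {"popular":   {"d1": 0.7, "d2": 0.2, "d3": 0.1},
               "unpopular": {"d1": 0.1, "d2": 0.1, "d3": 0.1}}
fair_dist   = {"popular":   {"d1": 0.4, "d2": 0.3, "d3": 0.3},
               "unpopular": {"d1": 0.3, "d2": 0.3, "d3": 0.4}}

def assign(request_type, alpha=0.5, beta=0.5):
    """Sample a driver from the alpha*profit + beta*fair mixture
    (alpha + beta <= 1); return None to reject the request."""
    mix = {d: alpha * profit_dist[request_type][d] +
              beta * fair_dist[request_type][d] for d in drivers}
    r, acc = random.random(), 0.0
    for d, p in mix.items():
        acc += p
        if r < acc:
            if d in available:       # simplification: skip busy drivers
                available.discard(d)
                return d
            return None
    return None  # leftover probability mass: reject the request

print(assign("unpopular", alpha=0.3, beta=0.7))
```

Raising β shifts probability mass toward the fairness-oriented distribution, so unpopular request types are accepted more often at some cost in expected profit.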
Citations: 43