{"title":"Robot Eyes Wide Shut: Understanding Dishonest Anthropomorphism","authors":"Brenda Leong, Evan Selinger","doi":"10.1145/3287560.3287591","DOIUrl":null,"url":null,"abstract":"The goal of this paper is to advance design, policy, and ethics scholarship on how engineers and regulators can protect consumers from deceptive robots and artificial intelligences that exhibit the problem of dishonest anthropomorphism. The analysis expands upon ideas surrounding the principle of honest anthropomorphism originally formulated by Margot Kaminsky, Mathew Ruben, William D. Smart, and Cindy M. Grimm in their groundbreaking Maryland Law Review article, \"Averting Robot Eyes.\" Applying boundary management theory and philosophical insights into prediction and perception, we create a new taxonomy that identifies fundamental types of dishonest anthropomorphism and pinpoints harms that they can cause. To demonstrate how the taxonomy can be applied as well as clarify the scope of the problems that it can cover, we critically consider a representative series of ethical issues, proposals, and questions concerning whether the principle of honest anthropomorphism has been violated.","PeriodicalId":20573,"journal":{"name":"Proceedings of the Conference on Fairness, Accountability, and Transparency","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"33","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Conference on Fairness, Accountability, and Transparency","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3287560.3287591","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 33
Abstract
The goal of this paper is to advance design, policy, and ethics scholarship on how engineers and regulators can protect consumers from deceptive robots and artificial intelligences that exhibit the problem of dishonest anthropomorphism. The analysis expands upon ideas surrounding the principle of honest anthropomorphism originally formulated by Margot Kaminski, Matthew Rueben, William D. Smart, and Cindy M. Grimm in their groundbreaking Maryland Law Review article, "Averting Robot Eyes." Applying boundary management theory and philosophical insights into prediction and perception, we create a new taxonomy that identifies fundamental types of dishonest anthropomorphism and pinpoints the harms they can cause. To demonstrate how the taxonomy can be applied, and to clarify the scope of problems it covers, we critically consider a representative series of ethical issues, proposals, and questions concerning whether the principle of honest anthropomorphism has been violated.