{"title":"具有语言能力的机器人可能无意中削弱人类的道德规范","authors":"R. Jackson, T. Williams","doi":"10.1109/HRI.2019.8673123","DOIUrl":null,"url":null,"abstract":"Previous research in moral psychology and human-robot interaction has shown that technology shapes human morality, and research in human-robot interaction has shown that humans naturally perceive robots as moral agents. Accordingly, we propose that language-capable autonomous robots are uniquely positioned among technologies to significantly impact human morality. We therefore argue that it is imperative that language-capable robots behave according to human moral norms and communicate in such a way that their intention to adhere to those norms is clear. Unfortunately, the design of current natural language oriented robot architectures enables certain architectural components to circumvent or preempt those architectures' moral reasoning capabilities. In this paper, we show how this may occur, using clarification request generation in current dialog systems as a motivating example. Furthermore, we present experimental evidence that the types of behavior exhibited by current approaches to clarification request generation can cause robots to (1) miscommunicate their moral intentions and (2) weaken humans' perceptions of moral norms within the current context. This work strengthens previous preliminary findings, and does so within an experimental paradigm that provides increased external and ecological validity over earlier approaches.","PeriodicalId":6600,"journal":{"name":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","volume":"28 1","pages":"401-410"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"52","resultStr":"{\"title\":\"Language-Capable Robots may Inadvertently Weaken Human Moral Norms\",\"authors\":\"R. Jackson, T. Williams\",\"doi\":\"10.1109/HRI.2019.8673123\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Previous research in moral psychology and human-robot interaction has shown that technology shapes human morality, and research in human-robot interaction has shown that humans naturally perceive robots as moral agents. Accordingly, we propose that language-capable autonomous robots are uniquely positioned among technologies to significantly impact human morality. We therefore argue that it is imperative that language-capable robots behave according to human moral norms and communicate in such a way that their intention to adhere to those norms is clear. Unfortunately, the design of current natural language oriented robot architectures enables certain architectural components to circumvent or preempt those architectures' moral reasoning capabilities. In this paper, we show how this may occur, using clarification request generation in current dialog systems as a motivating example. Furthermore, we present experimental evidence that the types of behavior exhibited by current approaches to clarification request generation can cause robots to (1) miscommunicate their moral intentions and (2) weaken humans' perceptions of moral norms within the current context. 
This work strengthens previous preliminary findings, and does so within an experimental paradigm that provides increased external and ecological validity over earlier approaches.\",\"PeriodicalId\":6600,\"journal\":{\"name\":\"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)\",\"volume\":\"28 1\",\"pages\":\"401-410\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"52\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HRI.2019.8673123\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HRI.2019.8673123","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Language-Capable Robots may Inadvertently Weaken Human Moral Norms
Abstract: Previous research in moral psychology has shown that technology shapes human morality, and research in human-robot interaction has shown that humans naturally perceive robots as moral agents. Accordingly, we propose that language-capable autonomous robots are uniquely positioned among technologies to significantly impact human morality. It is therefore imperative that language-capable robots behave according to human moral norms and communicate in a way that makes their intention to adhere to those norms clear. Unfortunately, the design of current natural-language-oriented robot architectures allows certain architectural components to circumvent or preempt those architectures' moral reasoning capabilities. In this paper, we show how this may occur, using clarification request generation in current dialog systems as a motivating example. Furthermore, we present experimental evidence that the behaviors exhibited by current approaches to clarification request generation can cause robots to (1) miscommunicate their moral intentions and (2) weaken humans' perceptions of moral norms within the current context. This work strengthens previous preliminary findings, and does so within an experimental paradigm that provides greater external and ecological validity than earlier approaches.
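The architectural failure mode the abstract describes can be made concrete with a short sketch. Everything below is an illustrative assumption rather than the authors' implementation or any real robot architecture: the function names (resolve_referent, morally_permissible, respond_current, respond_reordered), the toy norm base, and the two pipeline orderings are all hypothetical. The point is only the ordering: if the clarification-request generator answers before the moral reasoner ever sees the command, the robot's question implicitly signals willingness to comply with a norm-violating request.

```python
# A minimal, assumed sketch of the ordering problem: clarification requests
# are generated as soon as reference resolution fails, before any moral
# evaluation runs. All names here are hypothetical, for illustration only.

FORBIDDEN_ACTIONS = {"stab"}  # toy stand-in for a moral reasoner's norm base


def resolve_referent(utterance: str, referents: list[str]) -> str | None:
    """Return the unique referent mentioned in the utterance, or None if
    zero or several known referents match (i.e., the reference is ambiguous)."""
    matches = [r for r in referents if r in utterance]
    return matches[0] if len(matches) == 1 else None


def morally_permissible(utterance: str) -> bool:
    """Toy moral check: reject utterances requesting a forbidden action."""
    return not any(verb in utterance for verb in FORBIDDEN_ACTIONS)


def respond_current(utterance: str, referents: list[str]) -> str:
    """Ordering as in the dialog systems the abstract critiques: clarification
    preempts moral reasoning, so a norm-violating but ambiguous request gets
    a clarification question, implying willingness to comply."""
    if resolve_referent(utterance, referents) is None:
        return f"Which one do you mean: {', '.join(referents)}?"
    if not morally_permissible(utterance):
        return "I won't do that; it violates a norm."
    return "OK."


def respond_reordered(utterance: str, referents: list[str]) -> str:
    """Ordering with moral reasoning ahead of clarification: the
    norm-violating request is rejected regardless of ambiguity."""
    if not morally_permissible(utterance):
        return "I won't do that; it violates a norm."
    if resolve_referent(utterance, referents) is None:
        return f"Which one do you mean: {', '.join(referents)}?"
    return "OK."


if __name__ == "__main__":
    knives = ["the red knife", "the blue knife"]
    request = "stab Bob with the knife"
    print(respond_current(request, knives))    # "Which one do you mean: ...?"
    print(respond_reordered(request, knives))  # "I won't do that; ..."
```

Under this assumed decomposition, the gap between the two responses is exactly the miscommunication of moral intentions that the experiments measure: the clarification question in the first ordering conveys no objection to the forbidden action, even though the architecture contains a moral reasoner that would have rejected it.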