{"title":"Exploring First Impressions of the Perceived Social Intelligence and Construal Level of Robots that Disclose their Ability to Deceive","authors":"Kantwon Rogers, A. Howard","doi":"10.1109/RO-MAN53752.2022.9900857","DOIUrl":null,"url":null,"abstract":"If a robot tells you it can lie for your benefit, how would that change how you perceive it? This paper presents a mixed-methods empirical study that investigates how disclosure of deceptive or honest capabilities influences the perceived social intelligence and construal level of a robot. We first conduct a study with 198 Mechanical Turk participants, and then a replication of it with 15 undergraduate students in order to gain qualitative data. Our results show that how a robot introduces itself can have noticeable effects on how it is perceived–even from just one exposure. In particular, when revealing having ability to lie when it believes it is in the best interest of a human, people noticeably find the robot to be less trustworthy than a robot that conceals any honesty aspects or reveals total truthfulness. Moreover, robots that are forthcoming with their truthful abilities are seen in a lower construal than one that is transparent about its deceptive abilities. These results add much needed knowledge to the understudied area of robot deception and could inform designers and policy makers of future practices when considering deploying robots that deceive.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN53752.2022.9900857","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
If a robot tells you it can lie for your benefit, how would that change how you perceive it? This paper presents a mixed-methods empirical study that investigates how disclosure of deceptive or honest capabilities influences the perceived social intelligence and construal level of a robot. We first conduct a study with 198 Mechanical Turk participants, and then replicate it with 15 undergraduate students in order to gather qualitative data. Our results show that how a robot introduces itself can have noticeable effects on how it is perceived, even after just one exposure. In particular, when a robot reveals that it has the ability to lie when it believes doing so is in a human's best interest, people find it noticeably less trustworthy than a robot that says nothing about its honesty or one that discloses complete truthfulness. Moreover, robots that are forthcoming about their truthful abilities are seen at a lower construal level than robots that are transparent about their deceptive abilities. These results add much-needed knowledge to the understudied area of robot deception and could inform the future practices of designers and policy makers considering the deployment of robots that deceive.