A matter of consequences
Alessandra Rossi, Kerstin Dautenhahn, Kheng Lee Koay, Michael L. Walters
Interaction Studies, published 2023-12-31. DOI: https://doi.org/10.1075/is.21025.ros
Abstract: A review of the literature on acceptance and trust in human-robot interaction (HRI) reveals a number of open questions that need to be addressed in order to establish effective collaborations between humans and robots in real-world applications. In particular, we identified four principal open areas that should be investigated to create guidelines for the successful deployment of robots in the wild: (1) the robot’s abilities and limitations, in particular when it makes errors with consequences of differing severity; (2) individual differences; (3) the dynamics of human-robot trust; and (4) the interaction between humans and robots over time. In this paper, we present two very similar studies, one with a virtual robot with human-like abilities and one with a physical Care-O-bot 4 robot. In the first study, we created an immersive narrative using an interactive storyboard to collect responses from 154 participants. In the second study, 6 participants had repeated interactions with a physical robot over three weeks. We summarise and discuss the findings of our investigations into the effects of robots’ errors on people’s trust in robots, with a view to designing mechanisms that allow robots to recover from a breach of trust. In particular, we observed that robots’ errors had a greater impact on people’s trust in the robot when the errors were made at the beginning of the interaction and had severe consequences. Our results also provide insights into how the effects of these errors vary according to individuals’ personalities, expectations and previous experiences.
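The central finding — that errors hurt trust more when they occur early in an interaction and when their consequences are severe — can be illustrated with a deliberately simple evidence-counting sketch. This is our own toy model, not the model or analysis used in the studies; every parameter (the uniform prior, the severity weights, the number of steps) is invented for illustration.

```python
# Toy trust model (illustrative only, not the authors' method): trust is the
# running proportion of positive evidence about the robot, starting from a
# uniform Beta(1, 1) prior. One error contributes `severity` units of
# negative evidence; every other step is a successful interaction.

def trust_trajectory(error_step, severity, n_steps=10):
    """Return the per-step trust values for an interaction of n_steps,
    with a single error of the given severity at error_step."""
    pos, neg = 1.0, 1.0  # uniform prior: one pseudo-observation each way
    trajectory = []
    for step in range(n_steps):
        if step == error_step:
            neg += severity   # a severe error counts as more negative evidence
        else:
            pos += 1.0        # a successful interaction adds positive evidence
        trajectory.append(pos / (pos + neg))
    return trajectory

def mean_trust(error_step, severity):
    """Average trust experienced across the whole interaction."""
    traj = trust_trajectory(error_step, severity)
    return sum(traj) / len(traj)

# An early error depresses trust for the rest of the interaction, because it
# arrives when little positive evidence has accumulated; a late error barely
# moves the well-established estimate.
early = mean_trust(error_step=0, severity=2.0)
late = mean_trust(error_step=9, severity=2.0)
```

In this sketch, `early < late` (the early error yields lower average trust), and increasing `severity` lowers trust further — qualitatively mirroring the pattern the abstract reports, though with none of the study’s actual mechanics.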
Journal description:
This international peer-reviewed journal aims to advance knowledge in the growing and strongly interdisciplinary area of Interaction Studies in biological and artificial systems. Understanding social behaviour and communication in biological and artificial systems requires knowledge of the evolutionary, developmental and neurobiological aspects of social behaviour and communication; the embodied nature of interactions; the origins and characteristics of social and narrative intelligence; perception, action and communication in dynamic and social environments; and social learning.