Christiana Tsiourti, A. Weiss, K. Wac, M. Vincze
Proceedings of the 5th International Conference on Human Agent Interaction, 2017-10-17
DOI: 10.1145/3125739.3125744 · Citations: 28
Designing Emotionally Expressive Robots: A Comparative Study on the Perception of Communication Modalities
Socially assistive agents, be they virtual avatars or robots, need to engage in social interactions with humans and express their internal emotional states, goals, and desires. In this work, we conducted a comparative study to investigate how humans perceive emotional cues expressed by humanoid robots through five communication modalities (face, head, body, voice, locomotion) and examined whether the degree of a robot's human-like embodiment affects this perception. In an online survey, we asked people to identify emotions communicated by Pepper, a highly human-like robot, and Hobbit, a robot with abstract human-like features. A qualitative and quantitative data analysis confirmed the expressive power of the face, but also demonstrated that body expressions or even simple head and locomotion movements could convey emotional information. These findings suggest that emotion recognition accuracy varies as a function of the modality, and that a higher degree of anthropomorphism does not necessarily lead to a higher level of recognition accuracy. Our results further the understanding of how people respond to single communication modalities and have implications for designing recognizable multimodal expressions for robots.
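The central quantitative measure in the abstract, recognition accuracy as a function of modality, could be tabulated from survey responses along the following lines. This is a hypothetical sketch: the record layout, the `accuracy_by_modality` helper, and all data values are illustrative assumptions, not the paper's actual dataset or results.

```python
# Hypothetical sketch of a per-modality accuracy tabulation for a
# survey like the one described. All data below is made up for
# illustration; it is not the study's data.
from collections import defaultdict

# Each record: (robot, modality, intended_emotion, perceived_emotion)
responses = [
    ("Pepper", "face", "happiness", "happiness"),
    ("Pepper", "face", "sadness", "sadness"),
    ("Pepper", "locomotion", "happiness", "surprise"),
    ("Hobbit", "head", "sadness", "sadness"),
    ("Hobbit", "body", "happiness", "happiness"),
    ("Hobbit", "locomotion", "sadness", "fear"),
]

def accuracy_by_modality(records):
    """Fraction of responses per modality where the perceived
    emotion matches the intended one."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for _robot, modality, intended, perceived in records:
        totals[modality] += 1
        hits[modality] += intended == perceived
    return {m: hits[m] / totals[m] for m in totals}

print(accuracy_by_modality(responses))
# With the illustrative data above, "face" scores 1.0 and
# "locomotion" scores 0.0, mirroring the qualitative finding that
# accuracy varies across modalities.
```

Grouping additionally by the `robot` field would support the paper's second comparison, whether the more human-like embodiment (Pepper) yields higher accuracy than the abstract one (Hobbit).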