{"title":"Multisensory Emotion Perception to Humanoid Robot","authors":"Misako Kawahara, Y. Sawada, A. Tanaka","doi":"10.5057/jjske.tjske-d-21-00015","DOIUrl":null,"url":null,"abstract":"We investigated the multisensory emotion perception from humanoid-robot. In the experiment, participants were presented with video clips containing emotional colored eyes and voice (Task 1) or body gesture and voice (Task 2) of the robot, which were either congruent or incongruent in terms of emotional content (e.g., a happy body gesture paired with a sad voice on an incongruent trial). Participants were instructed to judge the emotion of the robot as either happiness or sadness. We examined the proportion of responses based on visual or auditory cues for the robot’s expression. Results showed that participants relied more on auditory cues than on the visual cues in Task 1. However, this vocal superiority was not observed in Task 2. These results suggest that the multisensory emotion perception from the robot is different whether the cues are natural or artificial. We proposed a model for multisensory emotion perception from a robot.","PeriodicalId":127268,"journal":{"name":"Transactions of Japan Society of Kansei Engineering","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions of Japan Society of Kansei Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5057/jjske.tjske-d-21-00015","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We investigated multisensory emotion perception from a humanoid robot. In the experiment, participants were presented with video clips of the robot containing emotionally colored eyes and voice (Task 1) or body gestures and voice (Task 2), which were either congruent or incongruent in emotional content (e.g., a happy body gesture paired with a sad voice on an incongruent trial). Participants were instructed to judge the robot's emotion as either happiness or sadness. We examined the proportion of responses based on the visual or the auditory cue to the robot's expression. Results showed that participants relied more on auditory cues than on visual cues in Task 1. However, this vocal superiority was not observed in Task 2. These results suggest that multisensory emotion perception from a robot differs depending on whether the cues are natural or artificial. We propose a model of multisensory emotion perception from a robot.
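As a rough illustration of the analysis described above, the sketch below (not from the paper; the column names and example data are hypothetical) shows one way the proportion of auditory-based responses on incongruent trials could be computed per task in Python with pandas.

```python
import pandas as pd

# Hypothetical trial-level data: each row is one incongruent trial.
# "visual_emotion" / "auditory_emotion" are the emotions conveyed by each cue,
# and "response" is the participant's forced-choice judgment (happy / sad).
trials = pd.DataFrame({
    "task": ["eyes+voice", "eyes+voice", "gesture+voice", "gesture+voice"],
    "visual_emotion": ["happy", "sad", "happy", "sad"],
    "auditory_emotion": ["sad", "happy", "sad", "happy"],
    "response": ["sad", "happy", "happy", "happy"],
})

# On an incongruent trial, a response matching the auditory cue counts as an
# auditory-based judgment; one matching the visual cue counts as visual-based.
trials["auditory_based"] = trials["response"] == trials["auditory_emotion"]

# Proportion of auditory-based responses per task: values above 0.5 indicate
# reliance on the voice, values below 0.5 indicate reliance on the visual cue.
print(trials.groupby("task")["auditory_based"].mean())
```

Under this scheme, a vocal-superiority result such as the one reported for Task 1 would appear as a proportion well above 0.5 for the eyes-plus-voice condition.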