As advances in animation and voice synthesis find an expanding range of applications, opportunities for interaction with animated virtual humans continue to grow. Such interactions may be shaped by improved portrayals of character features such as emotion and realism. The present study examined how variations in animated facial detail and vocal prosody shape user perception of emotion in virtual characters. Perception was assessed via facial electromyography and eye tracking, together with self-reports of state empathy and character appeal. Results indicate that emotional valence influenced participants' zygomaticus major and corrugator supercilii muscle activation. Survey data appear to show greater empathy in conditions with increased facial detail and more human-like vocal prosody. Moreover, eye-tracking results suggest a preference for eye contact regardless of detail or prosody, with participants fixating more on facial areas of interest overall in the positively valenced conditions. Finally, there is evidence that trait empathy, as well as mismatches between higher facial detail and lower vocal human-likeness, may influence zygomaticus major activity in response to positively valenced stimuli. These results are discussed in the context of virtual character design, contemporary understandings of empathy, and the phenomenon of the Uncanny Valley.
