Senaka Amarakeerthi, Rasika Ranaweera, Michael Cohen
{"title":"cve中基于语音的姿态和手势情感表征","authors":"Senaka Amarakeerthi, Rasika Ranaweera, Michael Cohen","doi":"10.1109/CW.2010.75","DOIUrl":null,"url":null,"abstract":"Collaborative Virtual Environments (CVEs) have become increasingly popular in the past two decades. MostCVEs use avatar systems to represent each user logged into aCVE session. Some avatar systems are capable of expressing emotions with postures, gestures, and facial expressions. Inprevious studies, various approaches have been explored to convey emotional states to the computer, including voice and facial movements. We propose a technique to detect emotions in the voice of a speaker and animate avatars to reflect extracted emotions in real-time. The system has been developed in \"Project Wonderland, \" a Java-based open-source framework for creating collaborative 3D virtual worlds. In our prototype, six primitive emotional states— anger, dislike, fear, happiness, sadness, and surprise— were considered. An emotion classification system which uses short time log frequency power coefficients (LFPC) to represent features and hidden Markov models (HMMs) as the classifier was modified to build an emotion classification unit. Extracted emotions were used to activate existing avatar postures and gestures in Wonderland.","PeriodicalId":410870,"journal":{"name":"2010 International Conference on Cyberworlds","volume":"54 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Speech-Based Emotion Characterization Using Postures and Gestures in CVEs\",\"authors\":\"Senaka Amarakeerthi, Rasika Ranaweera, Michael Cohen\",\"doi\":\"10.1109/CW.2010.75\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Collaborative Virtual Environments (CVEs) have become increasingly popular in the past two decades. MostCVEs use avatar systems to represent each user logged into aCVE session. 
Some avatar systems are capable of expressing emotions with postures, gestures, and facial expressions. Inprevious studies, various approaches have been explored to convey emotional states to the computer, including voice and facial movements. We propose a technique to detect emotions in the voice of a speaker and animate avatars to reflect extracted emotions in real-time. The system has been developed in \\\"Project Wonderland, \\\" a Java-based open-source framework for creating collaborative 3D virtual worlds. In our prototype, six primitive emotional states— anger, dislike, fear, happiness, sadness, and surprise— were considered. An emotion classification system which uses short time log frequency power coefficients (LFPC) to represent features and hidden Markov models (HMMs) as the classifier was modified to build an emotion classification unit. Extracted emotions were used to activate existing avatar postures and gestures in Wonderland.\",\"PeriodicalId\":410870,\"journal\":{\"name\":\"2010 International Conference on Cyberworlds\",\"volume\":\"54 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-10-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2010 International Conference on Cyberworlds\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CW.2010.75\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 International Conference on 
Cyberworlds","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CW.2010.75","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Speech-Based Emotion Characterization Using Postures and Gestures in CVEs
Collaborative Virtual Environments (CVEs) have become increasingly popular in the past two decades. Most CVEs use avatar systems to represent each user logged into a CVE session. Some avatar systems are capable of expressing emotions with postures, gestures, and facial expressions. In previous studies, various approaches have been explored to convey emotional states to the computer, including voice and facial movements. We propose a technique to detect emotions in the voice of a speaker and animate avatars to reflect the extracted emotions in real time. The system has been developed in "Project Wonderland," a Java-based open-source framework for creating collaborative 3D virtual worlds. In our prototype, six primitive emotional states (anger, dislike, fear, happiness, sadness, and surprise) were considered. An existing emotion classification system, which uses short-time log frequency power coefficients (LFPC) as features and hidden Markov models (HMMs) as the classifier, was modified to build an emotion classification unit. Extracted emotions were used to activate existing avatar postures and gestures in Wonderland.
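The classification scheme the abstract describes (one HMM per emotion, with the speaker's feature sequence scored against each model) can be sketched as follows. This is a minimal illustration with hypothetical, hand-set parameters and discrete observation symbols standing in for quantized LFPC feature vectors; it is not the paper's implementation. Each candidate emotion's HMM scores the observation sequence with the forward algorithm, and the emotion whose model assigns the highest log-likelihood wins.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log P(obs | model) via the scaled forward algorithm."""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    log_lik = 0.0
    for t in range(1, len(obs)):
        # Rescale at each step to avoid numerical underflow on long sequences.
        scale = sum(alpha)
        log_lik += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [
            emit[j][obs[t]] * sum(alpha[i] * trans[i][j] for i in range(n))
            for j in range(n)
        ]
    log_lik += math.log(sum(alpha))
    return log_lik

def classify(obs, models):
    """Return the emotion whose HMM best explains the observation sequence."""
    return max(models, key=lambda e: forward_log_likelihood(obs, *models[e]))

# Toy two-state, two-symbol models (start probs, transition matrix,
# emission matrix). A real system would train one HMM per emotion on
# LFPC feature sequences, continuous or vector-quantized.
models = {
    "anger":   ([0.6, 0.4], [[0.7, 0.3], [0.4, 0.6]], [[0.9, 0.1], [0.2, 0.8]]),
    "sadness": ([0.5, 0.5], [[0.6, 0.4], [0.5, 0.5]], [[0.1, 0.9], [0.3, 0.7]]),
}

print(classify([0, 0, 1, 0, 0], models))  # symbol 0 dominates -> "anger"
```

In the full system, the winning label would then be mapped to one of the six avatar postures or gestures already available in Wonderland.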