Effect of sensor fusion for recognition of emotional states using voice, face image and thermal image of face

T. Kitazoe, Sung-Ill Kim, Y. Yoshitomi, Tatsuhiko Ikeda

Proceedings 9th IEEE International Workshop on Robot and Human Interactive Communication (IEEE RO-MAN 2000), September 27, 2000. DOI: 10.1109/ROMAN.2000.892491
A new integrated method is presented for recognizing human emotional states from both voice and facial expressions. For voice, prosodic parameters such as pitch, energy, and their derivatives are used, and hidden Markov models (HMMs) are trained for recognition. For facial expressions, feature parameters are extracted from thermal images in addition to visible images and fed to neural networks trained for recognition. The thermal images are captured in the infrared band and are therefore unaffected by lighting conditions. The fused recognition rates outperform those obtained from each single-modality experiment, and the results are compared with human recognition rates collected by questionnaire.
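The abstract describes combining per-modality recognizers (an HMM for prosody, neural networks for visible and thermal face images) into a single decision. It does not state the fusion rule, so the sketch below assumes a simple score-level fusion: each recognizer outputs a score per emotion class, and a weighted average of the three score vectors picks the winning label. The emotion list and weights are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical emotion classes; the paper's actual category set may differ.
EMOTIONS = ["neutral", "happy", "sad", "surprised", "angry"]

def fuse_scores(voice_scores, face_scores, thermal_scores,
                weights=(1.0, 1.0, 1.0)):
    """Score-level sensor fusion: weighted average of the per-emotion
    score vectors from the three modalities, then argmax."""
    stacked = np.stack([voice_scores, face_scores, thermal_scores])
    fused = np.average(stacked, axis=0, weights=weights)
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: each recognizer emits a normalized score per emotion class.
label, fused = fuse_scores(
    np.array([0.1, 0.6, 0.1, 0.1, 0.1]),  # HMM scores from prosody
    np.array([0.2, 0.4, 0.1, 0.2, 0.1]),  # NN scores, visible image
    np.array([0.1, 0.5, 0.2, 0.1, 0.1]),  # NN scores, thermal image
)
# label -> "happy"
```

A single weak modality (e.g. a voice recognizer confused by noise) is outvoted by the other two, which is the intuition behind the fused rates beating each single-modality experiment.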