{"title":"基于特权信息的情感识别","authors":"Shangfei Wang, Yachen Zhu, Lihua Yue, Q. Ji","doi":"10.1109/TAMD.2015.2463113","DOIUrl":null,"url":null,"abstract":"In this article, we propose a novel approach to recognize emotions with the help of privileged information, which is only available during training, but not available during testing. Such additional information can be exploited during training to construct a better classifier. Specifically, we recognize audience's emotion from EEG signals with the help of the stimulus videos, and tag videos' emotions with the aid of electroencephalogram (EEG) signals. First, frequency features are extracted from EEG signals and audio/visual features are extracted from video stimulus. Second, features are selected by statistical tests. Third, a new EEG feature space and a new video feature space are constructed simultaneously using canonical correlation analysis (CCA). Finally, two support vector machines (SVM) are trained on the new EEG and video feature spaces respectively. During emotion recognition from EEG, only EEG signals are available, and the SVM classifier obtained on EEG feature space is used; while for video emotion tagging, only video clips are available, and the SVM classifier constructed on video feature space is adopted. Experiments of EEG-based emotion recognition and emotion video tagging are conducted on three benchmark databases, demonstrating that video content, as the context, can improve the emotion recognition from EEG signals and EEG signals available during training can enhance emotion video tagging.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"189-200"},"PeriodicalIF":0.0000,"publicationDate":"2015-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2463113","citationCount":"33","resultStr":"{\"title\":\"Emotion Recognition with the Help of Privileged Information\",\"authors\":\"Shangfei Wang, Yachen Zhu, Lihua Yue, Q. Ji\",\"doi\":\"10.1109/TAMD.2015.2463113\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this article, we propose a novel approach to recognize emotions with the help of privileged information, which is only available during training, but not available during testing. Such additional information can be exploited during training to construct a better classifier. Specifically, we recognize audience's emotion from EEG signals with the help of the stimulus videos, and tag videos' emotions with the aid of electroencephalogram (EEG) signals. First, frequency features are extracted from EEG signals and audio/visual features are extracted from video stimulus. Second, features are selected by statistical tests. Third, a new EEG feature space and a new video feature space are constructed simultaneously using canonical correlation analysis (CCA). Finally, two support vector machines (SVM) are trained on the new EEG and video feature spaces respectively. During emotion recognition from EEG, only EEG signals are available, and the SVM classifier obtained on EEG feature space is used; while for video emotion tagging, only video clips are available, and the SVM classifier constructed on video feature space is adopted. 
Experiments of EEG-based emotion recognition and emotion video tagging are conducted on three benchmark databases, demonstrating that video content, as the context, can improve the emotion recognition from EEG signals and EEG signals available during training can enhance emotion video tagging.\",\"PeriodicalId\":49193,\"journal\":{\"name\":\"IEEE Transactions on Autonomous Mental Development\",\"volume\":\"7 1\",\"pages\":\"189-200\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-07-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1109/TAMD.2015.2463113\",\"citationCount\":\"33\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Autonomous Mental Development\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TAMD.2015.2463113\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Autonomous Mental Development","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TAMD.2015.2463113","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Emotion Recognition with the Help of Privileged Information
In this article, we propose a novel approach to emotion recognition that exploits privileged information, i.e., information that is available only during training but not during testing. Such additional information can be exploited during training to construct a better classifier. Specifically, we recognize the audience's emotions from electroencephalogram (EEG) signals with the help of the stimulus videos, and tag the emotions of videos with the aid of EEG signals. First, frequency features are extracted from the EEG signals and audio/visual features are extracted from the video stimuli. Second, features are selected by statistical tests. Third, a new EEG feature space and a new video feature space are constructed simultaneously using canonical correlation analysis (CCA). Finally, two support vector machine (SVM) classifiers are trained on the new EEG and video feature spaces, respectively. For emotion recognition from EEG, only EEG signals are available, so the SVM classifier obtained on the EEG feature space is used; for video emotion tagging, only video clips are available, so the SVM classifier constructed on the video feature space is adopted. Experiments on EEG-based emotion recognition and emotion video tagging are conducted on three benchmark databases, demonstrating that video content, as context, can improve emotion recognition from EEG signals, and that EEG signals available during training can enhance video emotion tagging.
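
The abstract describes a concrete training pipeline: band-power EEG features, statistical feature selection, CCA projection, and one SVM per modality. The sketch below illustrates that scheme in Python with SciPy and scikit-learn. It is a minimal illustration under stated assumptions, not the authors' implementation: the frequency bands, sampling rate, SVM hyperparameters, and helper names (eeg_band_power, train_privileged, predict) are illustrative, and the statistical-test feature-selection step is omitted for brevity.

```python
import numpy as np
from scipy.signal import welch
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC


def eeg_band_power(eeg, fs=128, bands=((4, 8), (8, 13), (13, 30), (30, 45))):
    """Per-channel band power (theta/alpha/beta/gamma here; bands are
    illustrative) from Welch's PSD. `eeg` has shape (n_channels, n_samples)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in bands]
    return np.concatenate(feats)  # flat vector, length n_channels * n_bands


def train_privileged(X_eeg, X_vid, y, n_components=10):
    """Pair EEG and video features with CCA, then train one SVM per projected
    view. Two CCA objects are fit with swapped argument order because sklearn's
    CCA.transform projects only the first view when given a single input.
    n_components must not exceed min(n_samples, either feature dimension)."""
    cca_eeg = CCA(n_components=n_components).fit(X_eeg, X_vid)
    cca_vid = CCA(n_components=n_components).fit(X_vid, X_eeg)
    svm_eeg = SVC(kernel="rbf", C=1.0).fit(cca_eeg.transform(X_eeg), y)
    svm_vid = SVC(kernel="rbf", C=1.0).fit(cca_vid.transform(X_vid), y)
    return (cca_eeg, svm_eeg), (cca_vid, svm_vid)


def predict(model, X):
    """At test time each classifier sees only its own modality: the EEG model
    for EEG-based recognition, the video model for video emotion tagging."""
    cca, svm = model
    return svm.predict(cca.transform(X))
```

The key property of the scheme is visible in predict: the other modality is consumed only while fitting the CCA projection, so it acts purely as privileged training-time information and is never required at test time.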