Valence-Arousal Model based Emotion Recognition using EEG, peripheral physiological signals and Facial Expression
Qi Zhu, G. Lu, Jingjie Yan
Proceedings of the 4th International Conference on Machine Learning and Soft Computing, 2020-01-17
DOI: 10.1145/3380688.3380694
Citations: 7
Abstract
Emotion recognition plays a particularly important role in artificial intelligence. However, past EEG-based emotion recognition has typically been unimodal, or at most bimodal with EEG as one of the modalities. This paper uses deep learning to perform emotion recognition in the valence-arousal dimension by fusing three modalities: EEG, peripheral physiological signals, and facial expressions. The experiments use the complete data of 18 participants from the Database for Emotion Analysis Using Physiological Signals (DEAP) to classify EEG, peripheral physiological signals, and facial expression video under unimodal and multimodal fusion settings. The results demonstrate that the accuracy of multimodal fusion exceeds that of unimodal and bimodal fusion: the multimodal approach compensates for the shortcomings of unimodal and bimodal information sources.
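The abstract does not specify the fusion architecture, so the following is only a minimal, hypothetical sketch of the general idea of feature-level (early) fusion across the three modalities. All feature dimensions, the synthetic data, and the nearest-centroid classifier are illustrative assumptions, not the paper's deep-learning method.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 40
# Synthetic per-modality feature vectors (dimensions are arbitrary choices).
eeg = rng.normal(size=(n_trials, 32))        # e.g. band-power features per EEG channel
peripheral = rng.normal(size=(n_trials, 8))  # e.g. GSR / respiration / plethysmograph statistics
face = rng.normal(size=(n_trials, 16))       # e.g. facial-expression descriptors per video clip

# Binary high/low valence labels, in the spirit of DEAP's valence-arousal ratings.
labels = rng.integers(0, 2, size=n_trials)
# Shift class-1 trials so the two classes are separable in every modality.
for feat in (eeg, peripheral, face):
    feat[labels == 1] += 1.5

def fuse(*modalities):
    """Feature-level fusion: concatenate each trial's modality features."""
    return np.concatenate(modalities, axis=1)

def nearest_centroid_fit_predict(x_train, y_train, x_test):
    """Toy classifier standing in for the paper's deep network."""
    centroids = np.stack([x_train[y_train == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(x_test[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

train, test = slice(0, 30), slice(30, 40)
fused = fuse(eeg, peripheral, face)
pred_multi = nearest_centroid_fit_predict(fused[train], labels[train], fused[test])
pred_eeg = nearest_centroid_fit_predict(eeg[train], labels[train], eeg[test])

acc_multi = (pred_multi == labels[test]).mean()
acc_eeg = (pred_eeg == labels[test]).mean()
print(f"unimodal (EEG) accuracy: {acc_eeg:.2f}")
print(f"multimodal accuracy:     {acc_multi:.2f}")
```

The fused vector simply stacks all modality features side by side, so a single classifier can exploit complementary information that no single modality carries alone; decision-level (late) fusion, which combines per-modality predictions instead, is the usual alternative.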