{"title":"ER-MRL: Emotion Recognition based on Multimodal Representation Learning","authors":"Xiaoding Guo, Yadi Wang, Zhijun Miao, Xiaojin Yang, Jinkai Guo, Xianhong Hou, Feifei Zao","doi":"10.1109/ICIST55546.2022.9926848","DOIUrl":null,"url":null,"abstract":"In recent years, emotion recognition technology has been widely used in emotion change perception and mental illness diagnosis. Previous methods are mainly based on single-task learning strategies, which are unable to fuse multimodal features and remove redundant information. This paper proposes an emotion recognition model ER-MRL, which is based on multimodal representation learning. ER-MRL vectorizes the multimodal emotion data through encoders based on neural networks. The gate mechanism is used for multimodal feature selection. On this basis, ER-MRL calculates the modality specific and modality invariant representation for each emotion category. The Transformer model and multihead self-attention layer are applied to multimodal feature fusion. ER-MRL figures out the prediction result through the tower layer based on fully connected neural networks. Experimental results on the CMU-MOSI dataset show that ER-MRL has better performance on emotion recognition than previous methods.","PeriodicalId":211213,"journal":{"name":"2022 12th International Conference on Information Science and Technology (ICIST)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 12th International Conference on Information Science and Technology (ICIST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIST55546.2022.9926848","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In recent years, emotion recognition technology has been widely used in perceiving emotional changes and diagnosing mental illness. Previous methods are mainly based on single-task learning strategies, which can neither fuse multimodal features nor remove redundant information. This paper proposes ER-MRL, an emotion recognition model based on multimodal representation learning. ER-MRL vectorizes the multimodal emotion data through neural-network-based encoders, and a gate mechanism performs multimodal feature selection. On this basis, ER-MRL computes modality-specific and modality-invariant representations for each emotion category. A Transformer model with multi-head self-attention layers is applied to multimodal feature fusion, and ER-MRL produces the final prediction through a tower layer built from fully connected neural networks. Experimental results on the CMU-MOSI dataset show that ER-MRL outperforms previous methods on emotion recognition.
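
To make the described pipeline concrete, below is a minimal PyTorch sketch of the architecture outlined in the abstract: per-modality encoders, a gated feature-selection step, modality-specific and modality-invariant projections, Transformer-based fusion with multi-head self-attention, and a fully connected tower layer. All names, feature dimensions (e.g., the 300/74/35 input sizes, hidden width 128, 4 attention heads), and pooling choices are illustrative assumptions; the abstract does not specify the actual hyperparameters or layer structure.

```python
# Hypothetical sketch of the ER-MRL pipeline; dimensions and module
# details are assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn

class ERMRLSketch(nn.Module):
    def __init__(self, dims=None, hidden=128, num_classes=2):
        super().__init__()
        # Assumed utterance-level feature sizes for CMU-MOSI-style input.
        dims = dims or {"text": 300, "audio": 74, "video": 35}
        # Per-modality encoders vectorize the raw emotion features.
        self.encoders = nn.ModuleDict(
            {m: nn.Linear(d, hidden) for m, d in dims.items()})
        # Gate mechanism: a learned sigmoid mask selects informative features.
        self.gates = nn.ModuleDict(
            {m: nn.Sequential(nn.Linear(hidden, hidden), nn.Sigmoid())
             for m in dims})
        # Projections into modality-specific and (shared) modality-invariant spaces.
        self.specific = nn.ModuleDict(
            {m: nn.Linear(hidden, hidden) for m in dims})
        self.invariant = nn.Linear(hidden, hidden)  # shared across modalities
        # Transformer encoder with multi-head self-attention fuses the
        # per-modality representations.
        self.fusion = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                       batch_first=True),
            num_layers=1)
        # Tower layer: fully connected network producing the prediction.
        self.tower = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes))

    def forward(self, inputs):
        reps = []
        for m, x in inputs.items():
            h = torch.relu(self.encoders[m](x))
            h = h * self.gates[m](h)            # gated feature selection
            reps.append(self.specific[m](h))    # modality-specific view
            reps.append(self.invariant(h))      # modality-invariant view
        fused = self.fusion(torch.stack(reps, dim=1))  # (B, 2*M, hidden)
        return self.tower(fused.mean(dim=1))    # pool tokens, then predict

# Toy forward pass with random utterance-level features.
model = ERMRLSketch()
batch = {"text": torch.randn(8, 300),
         "audio": torch.randn(8, 74),
         "video": torch.randn(8, 35)}
print(model(batch).shape)  # torch.Size([8, 2])
```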