{"title":"Audio-Visual Emotion Recognition System Using Multi-Modal Features","authors":"Anand Handa, Rashi Agarwal, Narendra Kohli","doi":"10.4018/IJCINI.20211001.OA34","DOIUrl":null,"url":null,"abstract":"Due to the highly variant face geometry and appearances, facial expression recognition (FER) is still a challenging problem. CNN can characterize 2D signals. Therefore, for emotion recognition in a video, the authors propose a feature selection model in AlexNet architecture to extract and filter facial features automatically. Similarly, for emotion recognition in audio, the authors use a deep LSTM-RNN. Finally, they propose a probabilistic model for the fusion of audio and visual models using facial features and speech of a subject. The model combines all the extracted features and use them to train the linear SVM (support vector machine) classifiers. The proposed model outperforms the other existing models and achieves state-of-the-art performance for audio, visual, and fusion models. The model classifies the seven known facial expressions, namely anger, happy, surprise, fear, disgust, sad, and neutral, on the eNTERFACE’05 dataset with an overall accuracy of 76.61%.","PeriodicalId":43637,"journal":{"name":"International Journal of Cognitive Informatics and Natural Intelligence","volume":"40 1","pages":"1-14"},"PeriodicalIF":0.6000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Cognitive Informatics and Natural Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4018/IJCINI.20211001.OA34","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Facial expression recognition (FER) remains a challenging problem because face geometry and appearance vary widely. Since CNNs are well suited to characterizing 2D signals, the authors propose, for emotion recognition in video, a feature selection model within the AlexNet architecture that automatically extracts and filters facial features. Similarly, for emotion recognition in audio, they use a deep LSTM-RNN. Finally, they propose a probabilistic model that fuses the audio and visual models using a subject's facial features and speech. The model combines all the extracted features and uses them to train linear SVM (support vector machine) classifiers. The proposed model outperforms other existing models and achieves state-of-the-art performance for the audio, visual, and fusion models. It classifies the seven known facial expressions, namely anger, happiness, surprise, fear, disgust, sadness, and neutral, on the eNTERFACE'05 dataset with an overall accuracy of 76.61%.
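The fusion step described in the abstract can be sketched as a simple late-fusion pipeline: per-modality feature vectors are concatenated and used to train a linear SVM. This is a minimal sketch, not the authors' implementation; the random arrays below are placeholders standing in for the AlexNet-derived visual features and the LSTM-RNN audio features.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Placeholder features: in the paper these would come from the
# AlexNet-based visual model and the deep LSTM-RNN audio model.
n_samples, visual_dim, audio_dim = 200, 128, 64
visual_feats = rng.normal(size=(n_samples, visual_dim))
audio_feats = rng.normal(size=(n_samples, audio_dim))
labels = rng.integers(0, 7, size=n_samples)  # 7 emotion classes

# Late fusion: concatenate the per-modality feature vectors,
# then train a linear SVM on the fused representation.
fused = np.concatenate([visual_feats, audio_feats], axis=1)
clf = LinearSVC(max_iter=5000).fit(fused, labels)
preds = clf.predict(fused)
```

The key design choice illustrated here is that fusion happens at the feature level (concatenation before classification) rather than by averaging per-modality predictions; the paper's probabilistic fusion of model outputs is a refinement of this basic idea.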
Journal description:
The International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) encourages submissions that transcend disciplinary boundaries and is devoted to the rapid publication of high-quality papers. The themes of IJCINI are natural intelligence, autonomic computing, and neuroinformatics. IJCINI aims to provide a leading forum and platform for researchers, practitioners, and graduate students to investigate the cognitive mechanisms and processes of human information processing, and to stimulate transdisciplinary efforts in cognitive informatics and natural intelligence research and engineering applications.