{"title":"RF-GCN:利用多模态的残差融合图卷积网络进行面部情绪识别","authors":"D. Vishnu Sakthi, P. Ezhumalai","doi":"10.1002/ett.5031","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>The emotional state of individuals is difficult to identify and it is developing now a days because of vast interest in recognition. Many technologies have been developed to identify this emotional expression based on facial expressions, vocal expressions, physiological signals, and body expressions. Among these, facial emotion is very expressive for recognition using multimodalities. Understanding facial emotions has applications in mental well-being, decision-making, and even social change, as emotions play a crucial role in our lives. This recognition is complicated by the high dimensionality of data and non-linear interactions across modalities. Moreover, the way emotion is expressed by people varies and these feature identification remains challenging, where these limitations are overcome by Deep learning models.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>This research work aims at facial emotion recognition through the utilization of a deep learning model, named the proposed Residual Fused-Graph Convolution Network (RF-GCN). Here, multimodal data included is video as well as an Electroencephalogram (EEG) signal. Also, the Non-Local Means (NLM) filter is used for pre-processing input video frames. Here, the feature selection process is carried out using chi-square, after feature extraction, which is done in both pre-processed video frames and input EEG signals. 
Finally, facial emotion recognition and its types are determined by RF-GCN, which is a combination of both the Deep Residual Network (DRN) and Graph Convolutional Network (GCN).</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>Further, RF-GCN is evaluated for performance by metrics such as accuracy, recall, and precision, with superior values of 91.6%, 96.5%, and 94.7%.</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>RF-GCN captures the nuanced relationships between different emotional states and improves recognition accuracy. The model is trained and evaluated on the dataset and reflects real-world conditions.</p>\n </section>\n </div>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"35 9","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"RF-GCN: Residual fused-graph convolutional network using multimodalities for facial emotion recognition\",\"authors\":\"D. Vishnu Sakthi, P. Ezhumalai\",\"doi\":\"10.1002/ett.5031\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>The emotional state of individuals is difficult to identify and it is developing now a days because of vast interest in recognition. Many technologies have been developed to identify this emotional expression based on facial expressions, vocal expressions, physiological signals, and body expressions. Among these, facial emotion is very expressive for recognition using multimodalities. Understanding facial emotions has applications in mental well-being, decision-making, and even social change, as emotions play a crucial role in our lives. This recognition is complicated by the high dimensionality of data and non-linear interactions across modalities. 
Moreover, the way emotion is expressed by people varies and these feature identification remains challenging, where these limitations are overcome by Deep learning models.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>This research work aims at facial emotion recognition through the utilization of a deep learning model, named the proposed Residual Fused-Graph Convolution Network (RF-GCN). Here, multimodal data included is video as well as an Electroencephalogram (EEG) signal. Also, the Non-Local Means (NLM) filter is used for pre-processing input video frames. Here, the feature selection process is carried out using chi-square, after feature extraction, which is done in both pre-processed video frames and input EEG signals. Finally, facial emotion recognition and its types are determined by RF-GCN, which is a combination of both the Deep Residual Network (DRN) and Graph Convolutional Network (GCN).</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>Further, RF-GCN is evaluated for performance by metrics such as accuracy, recall, and precision, with superior values of 91.6%, 96.5%, and 94.7%.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusions</h3>\\n \\n <p>RF-GCN captures the nuanced relationships between different emotional states and improves recognition accuracy. 
The model is trained and evaluated on the dataset and reflects real-world conditions.</p>\\n </section>\\n </div>\",\"PeriodicalId\":23282,\"journal\":{\"name\":\"Transactions on Emerging Telecommunications Technologies\",\"volume\":\"35 9\",\"pages\":\"\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2024-09-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Transactions on Emerging Telecommunications Technologies\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ett.5031\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions on Emerging Telecommunications Technologies","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ett.5031","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
RF-GCN: Residual fused-graph convolutional network using multimodalities for facial emotion recognition
Background
The emotional state of an individual is difficult to identify, and interest in automatic emotion recognition has grown rapidly in recent years. Many technologies have been developed to identify emotional expression from facial expressions, vocal cues, physiological signals, and body gestures. Among these modalities, facial expression is especially informative, particularly when combined with other signals. Understanding facial emotions has applications in mental well-being, decision-making, and even social change, as emotions play a crucial role in our lives. Recognition is complicated by the high dimensionality of the data and by non-linear interactions across modalities. Moreover, people express emotion in varied ways, so identifying discriminative features remains challenging; deep learning models help overcome these limitations.
Methods
This work addresses facial emotion recognition with a deep learning model, the proposed Residual Fused-Graph Convolutional Network (RF-GCN). The multimodal input comprises video together with an electroencephalogram (EEG) signal. A Non-Local Means (NLM) filter pre-processes the input video frames. Features are extracted from both the pre-processed video frames and the input EEG signals, after which feature selection is performed using the chi-square test. Finally, RF-GCN, a combination of a Deep Residual Network (DRN) and a Graph Convolutional Network (GCN), classifies the facial emotion and its type.
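The chi-square feature-selection step described above can be sketched with scikit-learn's chi-square scorer. This is a minimal illustration, not the paper's implementation: the sample count, feature dimensionality, number of emotion classes, and the random "fused" feature matrix are all placeholder assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(0)

# Hypothetical fused feature matrix: 200 samples x 64 features
# (e.g., concatenated video-frame and EEG descriptors; purely illustrative).
# chi2 requires non-negative feature values, hence uniform [0, 1) features.
X = rng.random((200, 64))
y = rng.integers(0, 7, size=200)  # 7 placeholder emotion classes

# Keep the 16 features with the highest chi-square scores against the labels.
selector = SelectKBest(score_func=chi2, k=16)
X_selected = selector.fit_transform(X, y)
print(X_selected.shape)  # reduced feature matrix
```

In practice the selected features would then be fed to the downstream DRN/GCN classifier; the value of `k` is a tunable choice, not one specified here.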
Results
RF-GCN is evaluated using metrics such as accuracy, recall, and precision, achieving superior values of 91.6%, 96.5%, and 94.7%, respectively.
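For reference, the three reported metrics can be computed with scikit-learn. The label vectors below are toy values chosen only to show the calls; they are not the study's predictions, and macro averaging is one common choice for multi-class recall and precision.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Toy ground-truth and predicted emotion labels (illustrative only).
y_true = [0, 1, 2, 2, 1, 0, 2]
y_pred = [0, 1, 2, 1, 1, 0, 2]

acc = accuracy_score(y_true, y_pred)
# Macro averaging: compute the metric per class, then take the unweighted mean.
rec = recall_score(y_true, y_pred, average="macro")
prec = precision_score(y_true, y_pred, average="macro")
print(f"accuracy={acc:.3f} recall={rec:.3f} precision={prec:.3f}")
```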
Conclusions
RF-GCN captures the nuanced relationships between different emotional states and improves recognition accuracy. The model is trained and evaluated on a dataset that reflects real-world conditions.
Journal overview:
Transactions on Emerging Telecommunications Technologies (ETT), formerly known as European Transactions on Telecommunications (ETT), has the following aims:
- to attract cutting-edge publications from leading researchers and research groups around the world
- to become a highly cited source of timely research findings in emerging fields of telecommunications
- to limit revision and publication cycles to a few months, making the journal significantly more attractive to authors
- to become the leading journal for publishing the latest developments in telecommunications