Real-time microexpression recognition in educational scenarios using a dual-branch continuous attention network
Yan Lv, Meng Ning, Fan Zhou, Pengfei Lv, Peiying Zhang, Jian Wang
The Journal of Supercomputing, published 2024-08-30. DOI: 10.1007/s11227-024-06455-5
Abstract
Facial microexpressions (MEs) are involuntary, fleeting, and subtle facial muscle movements that reveal a person's true emotional state and inner experiences. Microexpression recognition has been applied across many disciplines and fields, particularly in educational settings, where it can help educators better understand students' emotional states and learning experiences and thus provide personalized teaching support and guidance. However, existing microexpression recognition datasets tailored to educational scenarios are limited. Moreover, microexpression recognition classifiers for educational settings require not only high recognition accuracy but also real-time performance. To this end, we provide a student behavior dataset specifically for research on microexpression and action recognition in educational scenarios, and we propose a lightweight dual-branch continuous attention network for microexpression recognition. For the student behavior dataset, we collect data on students' behaviors in real classroom scenarios. We categorize student microexpressions into two types, serious and non-serious, and classify student classroom behaviors into several categories: attentive listening, note-taking, yawning, looking around, and nodding. Regarding the dual-branch continuous attention network, unlike most methods that extract features directly from entire video frames, which contain abundant identity information, we model subtle information from facial regions by using optical flow and motion information from keyframes as input. We extensively evaluate the proposed method on the publicly available CASME II and SAMM datasets as well as on our dataset. The experimental results demonstrate that the proposed method achieves state-of-the-art performance in microexpression recognition and that our dataset is a competitive resource for analyzing student classroom behavior in educational scenarios. We will provide the GitHub link upon acceptance of the paper and will make the dataset available to any applicant under a license agreement.
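The abstract does not include implementation details, but the core idea it describes (optical flow between keyframes feeding one branch, the keyframe itself feeding a second branch, with the two branches fused for classification) can be illustrated with a minimal sketch. The Python code below is a hypothetical example, not the authors' published architecture: the Farneback flow estimator, the layer sizes, and the attention-based fusion gate are assumptions made only to show how such a dual-branch pipeline fits together.

```python
# Hypothetical sketch of a dual-branch microexpression pipeline (not the paper's code):
# dense optical flow between two face keyframes (e.g., onset and apex) feeds one branch,
# the apex keyframe feeds the other, and a learned attention gate fuses the two.
import cv2
import numpy as np
import torch
import torch.nn as nn

def keyframe_optical_flow(onset_gray: np.ndarray, apex_gray: np.ndarray) -> np.ndarray:
    """Dense Farneback flow between two grayscale keyframes; (H, W) -> (2, H, W)."""
    flow = cv2.calcOpticalFlowFarneback(onset_gray, apex_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.transpose(2, 0, 1).astype(np.float32)

class Branch(nn.Module):
    """Small convolutional feature extractor used by both branches (sizes are illustrative)."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> (B, 32)

    def forward(self, x):
        return self.net(x)

class DualBranchNet(nn.Module):
    """Two branches (optical flow + keyframe) fused by learned attention weights."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.flow_branch = Branch(in_ch=2)   # 2-channel optical-flow input
        self.frame_branch = Branch(in_ch=1)  # grayscale apex-keyframe input
        self.attn = nn.Sequential(nn.Linear(64, 2), nn.Softmax(dim=1))
        self.head = nn.Linear(32, num_classes)

    def forward(self, flow, frame):
        f1 = self.flow_branch(flow)                  # (B, 32)
        f2 = self.frame_branch(frame)                # (B, 32)
        w = self.attn(torch.cat([f1, f2], dim=1))    # (B, 2) per-branch weights
        fused = w[:, :1] * f1 + w[:, 1:] * f2        # attention-weighted fusion
        return self.head(fused)

# Shape check on random data for the two-class setting (serious vs. non-serious).
onset = np.random.randint(0, 255, (128, 128), dtype=np.uint8)
apex = np.random.randint(0, 255, (128, 128), dtype=np.uint8)
flow = torch.from_numpy(keyframe_optical_flow(onset, apex)).unsqueeze(0)  # (1, 2, 128, 128)
frame = torch.from_numpy(apex.astype(np.float32) / 255.0)[None, None]     # (1, 1, 128, 128)
logits = DualBranchNet(num_classes=2)(flow, frame)                        # (1, 2)
```

Restricting the input to flow and motion cues from cropped facial keyframes, rather than whole video frames, is what removes most identity-specific appearance information and keeps the network small enough for real-time use; the sketch above mirrors that design choice at a toy scale.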