LCANet: a model for analysis of students real-time sentiment by integrating attention mechanism and joint loss function

Complex & Intelligent Systems · IF 5.0 · CAS Zone 2 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-11-13 · DOI: 10.1007/s40747-024-01608-8
Pengyun Hu, Xianpiao Tang, Liu Yang, Chuijian Kong, Daoxun Xia
Citations: 0

Abstract


By recognizing students’ facial expressions in actual classroom situations, their emotional states can be quickly uncovered, helping teachers gauge students’ learning progress and adjust their teaching strategies and methods accordingly, thus improving the quality and effectiveness of classroom teaching. However, most previous facial expression recognition methods suffer from problems such as missing key facial features and imbalanced class distributions in the dataset, resulting in low recognition accuracy. To address these challenges, this paper proposes LCANet, a model founded on a fused attention mechanism and a joint loss function, which enables the recognition of students’ emotions in real classroom scenarios. The model uses ConvNeXt V2 as the backbone network to optimize its global feature extraction capability, while also enabling the model to pay closer attention to the key regions of facial expressions. We incorporate an improved Channel Spatial Attention (CSA) module to extract more local feature information. Furthermore, to mitigate the class distribution imbalance problem in facial expression datasets, we introduce a joint loss function. The experimental results show that our LCANet model achieves good recognition rates on the public emotion datasets FERPlus, RAF-DB and AffectNet, with accuracies of 91.43%, 90.03% and 64.43%, respectively, along with good robustness and generalizability. Additionally, we conducted experiments with the model in real classroom scenarios, detecting and accurately predicting students’ classroom emotions in real time, which provides an important reference for improving teaching in smart teaching scenarios.
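The abstract does not detail the internals of the improved Channel Spatial Attention (CSA) module. As an illustration only, the sketch below shows a CBAM-style channel-then-spatial attention block in NumPy: channel attention from pooled descriptors passed through a shared two-layer MLP, followed by a spatial attention map built from channel-wise mean and max maps (the convolution that usually fuses the spatial maps is omitted for brevity). All function names, shapes, and the reduction design here are assumptions, not the paper's actual CSA.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """x: feature map of shape (C, H, W); w1: (C//r, C), w2: (C, C//r)."""
    avg = x.mean(axis=(1, 2))   # (C,) average-pooled descriptor
    mx = x.max(axis=(1, 2))     # (C,) max-pooled descriptor
    # shared two-layer MLP (ReLU bottleneck) applied to both descriptors
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return x * att[:, None, None]        # reweight channels

def spatial_attention(x):
    """Channel-wise mean and max maps; a fusing conv would normally follow."""
    avg = x.mean(axis=0)                 # (H, W)
    mx = x.max(axis=0)                   # (H, W)
    att = sigmoid(avg + mx)              # (H, W) spatial attention map
    return x * att[None, :, :]

def csa_block(x, w1, w2):
    """Channel attention followed by spatial attention, CBAM-style."""
    return spatial_attention(channel_attention(x, w1, w2))
```

In a real model the two branches would carry learned convolutions and be inserted after backbone stages; this sketch only conveys the reweighting structure.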
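The joint loss itself is not defined in the abstract. A common way to counter class imbalance in expression datasets is to combine class-weighted cross-entropy with a focal term that down-weights easy examples; the sketch below is a hypothetical stand-in along those lines, not the paper's formulation (the mixing weight `lam`, focusing parameter `gamma`, and class weights are illustrative).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def joint_loss(logits, labels, class_weights, gamma=2.0, lam=0.5):
    """Convex combination of weighted cross-entropy and focal loss.

    logits: (N, K), labels: (N,) int class ids, class_weights: (K,).
    lam=0 gives pure weighted CE; lam=1 gives pure focal loss.
    """
    p = softmax(logits)
    pt = p[np.arange(len(labels)), labels]        # prob. of the true class
    w = class_weights[labels]                     # per-sample class weight
    ce = -w * np.log(pt + 1e-12)
    focal = -w * (1 - pt) ** gamma * np.log(pt + 1e-12)
    return float(np.mean((1 - lam) * ce + lam * focal))
```

For confident predictions the focal term shrinks toward zero, so gradient signal concentrates on hard or minority-class samples, which is the usual motivation for such a combination.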

Source journal: Complex & Intelligent Systems (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
CiteScore: 9.60 · Self-citation rate: 10.30% · Annual publication volume: 297
Journal description: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.