An Efficient Illumination-Invariant Dynamic Facial Expression Recognition for Driving Scenarios

IET Intelligent Transport Systems · Impact Factor 2.3 · JCR Q2 (Engineering, Electrical & Electronic) · CAS Tier 4 (Engineering & Technology) · Publication date: 2025-03-04 · DOI: 10.1049/itr2.70009
Ercheng Pei, Man Guo, Abel Díaz Berenguer, Lang He, HaiFeng Chen
{"title":"An Efficient Illumination-Invariant Dynamic Facial Expression Recognition for Driving Scenarios","authors":"Ercheng Pei,&nbsp;Man Guo,&nbsp;Abel Díaz Berenguer,&nbsp;Lang He,&nbsp;HaiFeng Chen","doi":"10.1049/itr2.70009","DOIUrl":null,"url":null,"abstract":"<p>Facial expression recognition (FER) is significant in many application scenarios, such as driving scenarios with very different lighting conditions between day and night. Existing methods primarily focus on eliminating the negative effects of pose and identity information on FER, but overlook the challenges posed by lighting variations. So, this work proposes an efficient illumination-invariant dynamic FER method. To augment the robustness of FER methods to illumination variance, contrast normalisation is introduced to form a low-level illumination-invariant expression features learningmodule. In addition, to extract dynamic and salient expression features, a two-stage temporal attention mechanism is introduced to form a high-level dynamic salient expression features learning module deciphering dynamic facial expression patterns. Furthermore, additive angular margin loss is incorporated into the training of the proposed model to increase the distances between samples of different categories while reducing the distances between samples belonging to the same category. We conducted comprehensive experiments using the Oulu-CASIA and DFEW datasets. On the Oulu-CASIA VIS and NIR subsets in the normal illumination, the proposed method achieved accuracies of 92.08% and 91.46%, which are 1.01 and 7.06 percentage points higher than the SOTA results from the DCBLSTM and CELDL method, respectively. Based on the Oulu-CASIA NIR subset in the dark illumination, the proposed method achieved an accuracies of 91.25%, which are 4.54 percentage points higher than the SOTA result from the CDLLNet method. On the DFEW dataset, the proposed method achieved a UAR of 60.67% and a WAR of 71.48% with 12M parameters, approaching the SOTA result from the VideoMAE model with 86M parameters. The outcomes of our experiments validate the effectiveness of the proposed dynamic FER method, affirming its ability in addressing the challenges posed by diverse illumination conditions in driving scenarios.</p>","PeriodicalId":50381,"journal":{"name":"IET Intelligent Transport Systems","volume":"19 1","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/itr2.70009","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Intelligent Transport Systems","FirstCategoryId":"5","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/itr2.70009","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Facial expression recognition (FER) is significant in many application scenarios, such as driving, where lighting conditions differ greatly between day and night. Existing methods primarily focus on eliminating the negative effects of pose and identity information on FER but overlook the challenges posed by lighting variations. Therefore, this work proposes an efficient illumination-invariant dynamic FER method. To improve the robustness of FER methods to illumination variance, contrast normalisation is introduced to form a low-level illumination-invariant expression feature learning module. In addition, to extract dynamic and salient expression features, a two-stage temporal attention mechanism is introduced to form a high-level dynamic salient expression feature learning module that deciphers dynamic facial expression patterns. Furthermore, additive angular margin loss is incorporated into the training of the proposed model to increase the distances between samples of different categories while reducing the distances between samples of the same category. We conducted comprehensive experiments on the Oulu-CASIA and DFEW datasets. On the Oulu-CASIA VIS and NIR subsets under normal illumination, the proposed method achieved accuracies of 92.08% and 91.46%, which are 1.01 and 7.06 percentage points higher than the SOTA results of the DCBLSTM and CELDL methods, respectively. On the Oulu-CASIA NIR subset under dark illumination, the proposed method achieved an accuracy of 91.25%, which is 4.54 percentage points higher than the SOTA result of the CDLLNet method. On the DFEW dataset, the proposed method achieved a UAR of 60.67% and a WAR of 71.48% with 12M parameters, approaching the SOTA result of the VideoMAE model with 86M parameters. The outcomes of our experiments validate the effectiveness of the proposed dynamic FER method, affirming its ability to address the challenges posed by diverse illumination conditions in driving scenarios.
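
The paper page does not include code; the following is a minimal PyTorch-style sketch of two of the ingredients the abstract names, per-frame contrast normalisation for illumination robustness and an additive angular margin (ArcFace-style) loss. The function and class names, tensor layout, scale and margin values are illustrative assumptions rather than the authors' implementation, and the two-stage temporal attention module is omitted because its exact design is specific to the paper.

```python
# Sketch only, not the authors' code. Assumed tensor layout: (batch, time, channel, H, W).
import torch
import torch.nn as nn
import torch.nn.functional as F

def contrast_normalise(frames, eps=1e-6):
    """Remove per-frame brightness/contrast: zero mean, unit std over each
    frame's spatial dimensions, so day/night intensity shifts are suppressed."""
    mean = frames.mean(dim=(-2, -1), keepdim=True)
    std = frames.std(dim=(-2, -1), keepdim=True)
    return (frames - mean) / (std + eps)

class AdditiveAngularMarginLoss(nn.Module):
    """ArcFace-style loss: adds an angular margin to the target-class angle so
    same-class embeddings are pulled together and different classes pushed apart.
    scale and margin values below are common defaults, not the paper's settings."""

    def __init__(self, embed_dim, num_classes, scale=30.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.scale, self.margin = scale, margin

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised embeddings and class weights.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the margin only to the ground-truth class angle.
        one_hot = F.one_hot(labels, num_classes=self.weight.shape[0]).float()
        logits = torch.cos(theta + self.margin * one_hot) * self.scale
        return F.cross_entropy(logits, labels)
```

In a pipeline of this kind, contrast_normalise would be applied to each input clip before the feature extractor, and the margin loss would replace plain softmax cross-entropy during training.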


Source Journal
IET Intelligent Transport Systems (Engineering & Technology – Transportation Science & Technology)
CiteScore: 6.50
Self-citation rate: 7.40%
Articles per year: 159
Review time: 3 months
Journal Description: IET Intelligent Transport Systems is an interdisciplinary journal devoted to research into the practical applications of ITS and infrastructures. The scope of the journal includes the following:
Sustainable traffic solutions
Deployments with enabling technologies
Pervasive monitoring
Applications, demonstrations and evaluation
Economic and behavioural analyses of ITS services and scenarios
Data integration and analytics
Information collection and processing; image processing applications in ITS
ITS aspects of electric vehicles
Autonomous vehicles; connected vehicle systems; in-vehicle ITS, safety and vulnerable road user aspects
Mobility as a service systems
Traffic management and control
Public transport systems technologies
Fleet and public transport logistics
Emergency and incident management
Demand management and electronic payment systems
Traffic-related air pollution management
Policy and institutional issues
Interoperability, standards and architectures
Funding scenarios
Enforcement
Human machine interaction
Education, training and outreach
Current special issue calls for papers:
Intelligent Transportation Systems in Smart Cities for Sustainable Environment - https://digital-library.theiet.org/files/IET_ITS_CFP_ITSSCSE.pdf
Sustainably Intelligent Mobility (SIM) - https://digital-library.theiet.org/files/IET_ITS_CFP_SIM.pdf
Traffic Theory and Modelling in the Era of Artificial Intelligence and Big Data (in collaboration with World Congress for Transport Research, WCTR 2019) - https://digital-library.theiet.org/files/IET_ITS_CFP_WCTR.pdf