FERMixNet: An Occlusion Robust Facial Expression Recognition Model With Facial Mixing Augmentation and Mid-Level Representation Learning

IEEE Transactions on Affective Computing · Impact Factor 9.8 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · Publication Date: 2024-09-03 · DOI: 10.1109/TAFFC.2024.3454102
Yansong Huang;Junjie Peng;Wenqiang Zhang;Tong Zhao;Gan Chen;Shuhua Tan;Fen Yi;Lu Wang
Volume 16, Issue 2, pp. 639–654. Available at: https://ieeexplore.ieee.org/document/10663852/
Citations: 0

Abstract

Facial expressions offer insight into people’s mental states and their attitudes towards specific things. However, facial occlusion in the real world greatly degrades the performance of facial expression recognition models. Recent works addressing the occlusion problem have relied primarily on attention mechanisms or occlusion-discarding methods that focus on non-occluded regions of the face. However, these methods have not achieved a good balance between occlusion robustness and model efficiency. In this paper, we propose a simple and efficient model, called FERMixNet, for occluded facial expression recognition. The model incorporates a novel facial mixing augmentation strategy (FERMix) that generates new training samples by simulating real-world facial occlusion while preserving highly expression-related semantic information. By co-training on the original and newly generated samples, the model’s occlusion robustness is improved without increasing its complexity during inference. Additionally, to further enhance occlusion robustness, we include mid-level representation learning in the network to learn discriminative non-occluded local features of the samples at low computational cost. Extensive experiments on the public facial occlusion datasets Occlusion-RAF-DB, Occlusion-FERPlus, and FED-RO show that the proposed model achieves state-of-the-art results, demonstrating the robustness of our method for occluded facial expression recognition. The proposed model also achieves state-of-the-art results on the in-the-wild facial expression datasets RAF-DB, AffectNet-8, and AffectNet-7, indicating good prospects for real-world application.
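The abstract does not specify how FERMix constructs its mixed training samples. As a rough illustration only, a CutMix-style region replacement (a common occlusion-simulating augmentation, not the authors' actual FERMix algorithm, which is semantics-aware) might look like the following sketch; the function name and parameters are hypothetical:

```python
import numpy as np

def facial_mix(face_a, face_b, region_frac=0.3, rng=None):
    """Illustrative occlusion-simulating augmentation (NOT the paper's FERMix):
    paste a random rectangular patch from face_b onto face_a so that part of
    face_a appears occluded by another face.

    Returns the mixed image and a label weight `lam` proportional to the
    surviving (unoccluded) area of face_a, as in CutMix-style co-training.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = face_a.shape[:2]
    ph, pw = int(h * region_frac), int(w * region_frac)
    # pick a random top-left corner so the patch fits inside the image
    top = int(rng.integers(0, h - ph + 1))
    left = int(rng.integers(0, w - pw + 1))
    mixed = face_a.copy()
    mixed[top:top + ph, left:left + pw] = face_b[top:top + ph, left:left + pw]
    lam = 1.0 - (ph * pw) / (h * w)
    return mixed, lam
```

In CutMix-style co-training, the original and mixed samples share one forward pass, and the mixed sample's loss is a `lam`-weighted combination of the two source labels, which matches the abstract's claim of adding no inference-time complexity.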
Source Journal: IEEE Transactions on Affective Computing
Categories: Computer Science, Artificial Intelligence; Computer Science, Cybernetics
CiteScore: 15.00
Self-citation rate: 6.20%
Articles per year: 174
About the Journal: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.
Latest Articles in This Journal:
Video-Based Cross-Domain Emotion Recognition Via Sample-Graph Relations Self-Distillation
EchoReason: a Two-stage Clinically Aligned Vision-Language Framework for Interpretable Diseases Diagnosis from Multi-Modal Ultrasound
Advancing Micro-Expression Recognition: a Task-Specific Framework Integrating Frequency Analysis and Structural Embedding
Facial Expression Recognition for Chinese Elderly Using Edge and Semantic Features Dual Path Network With Two-Step Transfer Learning
An EEG-Based Multi-Source Domain Knowledge Transfer Framework for Cross-Session and Cross-Subject Emotion Recognition