Expression Complementary Disentanglement Network for Facial Expression Recognition

IF 1.6 | CAS Tier 4 (Computer Science) | JCR Q3 (Engineering, Electrical & Electronic) | Chinese Journal of Electronics | Pub Date: 2024-03-31 | DOI: 10.23919/cje.2022.00.351
Shanmin Wang;Hui Shuai;Lei Zhu;Qingshan Liu
{"title":"用于面部表情识别的表情互补解缠网络","authors":"Shanmin Wang;Hui Shuai;Lei Zhu;Qingshan Liu","doi":"10.23919/cje.2022.00.351","DOIUrl":null,"url":null,"abstract":"Disentangling facial expressions from other disturbing facial attributes in face images is an essential topic for facial expression recognition. Previous methods only care about facial expression disentanglement (FED) itself, ignoring the negative effects of other facial attributes. Due to the annotations on limited facial attributes, it is difficult for existing FED solutions to disentangle all disturbance from the input face. To solve this issue, we propose an expression complementary disentanglement network (ECDNet). ECDNet proposes to finish the FED task during a face reconstruction process, so as to address all facial attributes during disentanglement. Different from traditional re-construction models, ECDNet reconstructs face images by progressively generating and combining facial appearance and matching geometry. It designs the expression incentive (EIE) and expression inhibition (EIN) mechanisms, inducing the model to characterize the disentangled expression and complementary parts precisely. Facial geometry and appearance, generated in the reconstructed process, are dealt with to represent facial expressions and complementary parts, respectively. The combination of distinctive reconstruction model, EIE, and EIN mechanisms ensures the completeness and exactness of the FED task. Experimental results on RAF-DB, AffectNet, and CAER-S datasets have proven the effectiveness and superiority of ECDNet.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":null,"pages":null},"PeriodicalIF":1.6000,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543219","citationCount":"0","resultStr":"{\"title\":\"Expression Complementary Disentanglement Network for Facial Expression Recognition\",\"authors\":\"Shanmin Wang;Hui Shuai;Lei Zhu;Qingshan Liu\",\"doi\":\"10.23919/cje.2022.00.351\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Disentangling facial expressions from other disturbing facial attributes in face images is an essential topic for facial expression recognition. Previous methods only care about facial expression disentanglement (FED) itself, ignoring the negative effects of other facial attributes. Due to the annotations on limited facial attributes, it is difficult for existing FED solutions to disentangle all disturbance from the input face. To solve this issue, we propose an expression complementary disentanglement network (ECDNet). ECDNet proposes to finish the FED task during a face reconstruction process, so as to address all facial attributes during disentanglement. Different from traditional re-construction models, ECDNet reconstructs face images by progressively generating and combining facial appearance and matching geometry. It designs the expression incentive (EIE) and expression inhibition (EIN) mechanisms, inducing the model to characterize the disentangled expression and complementary parts precisely. Facial geometry and appearance, generated in the reconstructed process, are dealt with to represent facial expressions and complementary parts, respectively. The combination of distinctive reconstruction model, EIE, and EIN mechanisms ensures the completeness and exactness of the FED task. 
Experimental results on RAF-DB, AffectNet, and CAER-S datasets have proven the effectiveness and superiority of ECDNet.\",\"PeriodicalId\":50701,\"journal\":{\"name\":\"Chinese Journal of Electronics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2024-03-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543219\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Chinese Journal of Electronics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10543219/\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chinese Journal of Electronics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10543219/","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Disentangling facial expressions from other, interfering facial attributes in face images is an essential topic in facial expression recognition. Previous methods address facial expression disentanglement (FED) in isolation, ignoring the negative effects of the remaining facial attributes. Because annotations cover only a limited set of facial attributes, existing FED solutions struggle to disentangle all sources of disturbance from the input face. To solve this issue, we propose an expression complementary disentanglement network (ECDNet). ECDNet performs the FED task within a face reconstruction process, so that all facial attributes are accounted for during disentanglement. Unlike traditional reconstruction models, ECDNet reconstructs face images by progressively generating and combining facial appearance and matching geometry. It introduces expression incentive (EIE) and expression inhibition (EIN) mechanisms, which induce the model to characterize the disentangled expression and its complementary parts precisely. The facial geometry and appearance generated during reconstruction are used to represent facial expressions and the complementary parts, respectively. The combination of the distinctive reconstruction model with the EIE and EIN mechanisms ensures the completeness and exactness of the FED task. Experimental results on the RAF-DB, AffectNet, and CAER-S datasets demonstrate the effectiveness and superiority of ECDNet.
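The abstract describes the architecture only at a high level. The sketch below is a minimal, hypothetical PyTorch rendering of the disentanglement-by-reconstruction idea it outlines: a shared encoder splits the face into an expression feature and a complementary feature, a decoder reconstructs the face from their combination, a classifier on the expression branch plays the role of the incentive (EIE) side, and an entropy penalty on the complementary branch crudely stands in for the inhibition (EIN) side. All module names, layer sizes, and loss terms are assumptions for illustration; the paper does not specify ECDNet's actual layers, and this sketch does not reproduce its progressive geometry/appearance generation.

```python
# Minimal, hypothetical sketch (PyTorch) of disentanglement by reconstruction.
# All names, layer sizes, and loss terms are illustrative assumptions; the
# actual ECDNet architecture and its EIE/EIN mechanisms are not given here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangleByReconSketch(nn.Module):
    def __init__(self, feat_dim=256, num_classes=7):
        super().__init__()
        # Shared encoder; its output is split into an expression feature
        # and a complementary feature (everything else about the face).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 2 * feat_dim),
        )
        # Decoder reconstructs a 32x32 face from the recombined features,
        # standing in for the progressive geometry+appearance generation.
        self.decoder = nn.Sequential(
            nn.Linear(2 * feat_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Expression classifier, applied to both branches (see forward).
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        expr_feat, comp_feat = self.encoder(x).chunk(2, dim=1)
        recon = self.decoder(torch.cat([expr_feat, comp_feat], dim=1))
        # "Incentive" side: the expression feature must predict the label.
        expr_logits = self.classifier(expr_feat)
        # "Inhibition" side: the complementary feature should NOT predict it.
        comp_logits = self.classifier(comp_feat)
        return expr_logits, comp_logits, recon

# Toy training step on a random 32x32 batch with 7 expression classes.
model = DisentangleByReconSketch()
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 7, (4,))
expr_logits, comp_logits, recon = model(x)
comp_probs = F.softmax(comp_logits, dim=1)
neg_entropy = (comp_probs * comp_probs.clamp_min(1e-8).log()).sum(1).mean()
loss = (F.mse_loss(recon, x)               # reconstruction keeps all attributes
        + F.cross_entropy(expr_logits, y)  # incentive: expression branch
        + neg_entropy)                     # inhibition: push comp branch to uniform
loss.backward()
```

Minimizing the negative-entropy term pushes the complementary branch's expression predictions toward uniform, a common adversarial-style stand-in for suppressing leaked expression information; the paper's actual EIN mechanism may differ.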
Source journal: Chinese Journal of Electronics (Engineering & Technology - Engineering: Electrical & Electronic)
CiteScore: 3.70
Self-citation rate: 16.70%
Annual articles: 342
Review time: 12.0 months
Journal introduction: CJE focuses on emerging fields of electronics, publishing innovative and transformative research papers. Most papers published in CJE come from universities and research institutes and present innovative research results. Both theoretical and practical contributions are encouraged, and original research papers reporting novel solutions to hot topics in electronics are strongly recommended.
Latest articles in this journal:
Front Cover
Contents
XPull: A Relay-Based Blockchain Intercommunication Framework Achieving Cross-Chain State Pulling
Sharper Hardy Uncertainty Relations on Signal Concentration in Terms of Linear Canonical Transform
An Efficient and Fast Area Optimization Approach for Mixed Polarity Reed-Muller Logic Circuits