POST: Prototype-oriented similarity transfer framework for cross-domain facial expression recognition

IF 0.9 · CAS Tier 4, Computer Science · JCR Q4, COMPUTER SCIENCE, SOFTWARE ENGINEERING · Computer Animation and Virtual Worlds, Vol. 35, No. 3 · Pub Date: 2024-05-17 · DOI: 10.1002/cav.2260
Zhe Guo, Bingxin Wei, Qinglin Cai, Jiayi Liu, Yi Wang
Citations: 0

Abstract

Facial expression recognition (FER) is one of the popular research topics in computer vision. Most deep learning expression recognition methods perform well on a single dataset but may struggle in cross-domain FER applications when applied to different datasets. Cross-dataset FER also suffers from difficulties such as feature distribution shift and discriminator degradation. To address these issues, we propose a prototype-oriented similarity transfer framework (POST) for cross-domain FER. The bidirectional cross-attention Swin Transformer (BCS Transformer) module is designed to aggregate local facial feature similarities across different domains, enabling the extraction of relevant cross-domain features. The dual learnable category prototypes are designed to represent latent-space samples for both the source and target domains, ensuring enhanced domain alignment by leveraging both cross-domain and domain-specific features. We further introduce the self-training resampling (STR) strategy to enhance similarity transfer. The experimental results, with the RAF-DB dataset as the source domain and the CK+, FER2013, JAFFE, and SFEW 2.0 datasets as the target domains, show that our approach achieves much higher performance than state-of-the-art cross-domain FER methods.
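The abstract names three mechanisms: bidirectional cross-attention between source and target features, learnable category prototypes for domain alignment, and similarity-based pseudo-labelling for self-training. The sketch below illustrates those ideas in generic form only; it assumes standard scaled dot-product cross-attention and cosine-similarity prototype assignment, and all function names, dimensions, and initializations are illustrative, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # Scaled dot-product attention: one domain's features query the other's.
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (Nq, Nk) similarity map
    return softmax(scores, axis=-1) @ keys_values   # (Nq, d) aggregated features

def cosine(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

rng = np.random.default_rng(0)
d = 8
src = rng.normal(size=(5, d))   # source-domain token features (illustrative)
tgt = rng.normal(size=(6, d))   # target-domain token features (illustrative)

# Bidirectional cross-attention: each domain attends to the other, so that
# locally similar facial features are aggregated across domains.
src_enriched = src + cross_attention(src, tgt)
tgt_enriched = tgt + cross_attention(tgt, src)

# Learnable category prototypes (random init here; trained in practice).
# Assigning each target sample to its most similar prototype yields a
# pseudo-label, which a self-training loop could then resample and train on.
K = 7  # e.g. seven basic expression categories
prototypes = rng.normal(size=(K, d))
pseudo_labels = cosine(tgt_enriched, prototypes).argmax(axis=1)  # shape (6,)
```

In a real pipeline the prototypes and attention weights would be learned jointly, and only confidently pseudo-labelled target samples would be kept for the next self-training round; this sketch shows only the data flow.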

Source Journal

Computer Animation and Virtual Worlds (Engineering & Technology – Computer Science: Software Engineering)

CiteScore: 2.20 · Self-citation rate: 0.00% · Articles per year: 90 · Review time: 6-12 weeks
Journal Description: With the advent of very powerful PCs and high-end graphics cards, there has been incredible development in Virtual Worlds, real-time computer animation and simulation, and games. At the same time, new and cheaper Virtual Reality devices have appeared, allowing interaction with these real-time Virtual Worlds and even with real worlds through Augmented Reality. Three-dimensional characters, especially Virtual Humans, are now of exceptional quality, which allows them to be used in the movie industry. But this is only a beginning: with the development of Artificial Intelligence and agent technology, these characters will become more and more autonomous and even intelligent. They will inhabit the Virtual Worlds in a Virtual Life together with animals and plants.
Latest Articles in this Journal

- Diverse Motions and Responses in Crowd Simulation
- A Facial Motion Retargeting Pipeline for Appearance Agnostic 3D Characters
- Enhancing Front-End Security: Protecting User Data and Privacy in Web Applications
- Virtual Roaming of Cultural Heritage Based on Image Processing
- PainterAR: A Self-Painting AR Interface for Mobile Devices