POST: Prototype-oriented similarity transfer framework for cross-domain facial expression recognition
Zhe Guo, Bingxin Wei, Qinglin Cai, Jiayi Liu, Yi Wang
Computer Animation and Virtual Worlds, vol. 35, no. 3, published 2024-05-17. DOI: 10.1002/cav.2260 (https://onlinelibrary.wiley.com/doi/10.1002/cav.2260)
Abstract
Facial expression recognition (FER) is one of the most popular research topics in computer vision. Most deep learning expression recognition methods perform well on a single dataset but may struggle in cross-domain FER, when applied to datasets other than the one they were trained on. Cross-dataset FER also suffers from difficulties such as feature distribution deviation and discriminator degradation. To address these issues, we propose a prototype-oriented similarity transfer framework (POST) for cross-domain FER. The bidirectional cross-attention Swin Transformer (BCS Transformer) module is designed to aggregate local facial feature similarities across different domains, enabling the extraction of relevant cross-domain features. The dual learnable category prototypes are designed to represent latent-space samples for both the source and target domains, improving domain alignment by leveraging both cross-domain and domain-specific features. We further introduce a self-training resampling (STR) strategy to enhance similarity transfer. Experimental results with the RAF-DB dataset as the source domain and the CK+, FER2013, JAFFE, and SFEW 2.0 datasets as the target domains show that our approach achieves much higher performance than state-of-the-art cross-domain FER methods.
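To make the two alignment ideas in the abstract concrete, the following is a minimal PyTorch sketch of (a) a bidirectional cross-attention block that mixes source and target patch features in both directions, and (b) learnable per-class prototypes scored by cosine similarity. All module names, dimensions, and the prototype loss are illustrative assumptions, not the paper's actual implementation; in particular, the full POST framework builds the cross-attention on a Swin Transformer backbone and uses pseudo-labels from self-training resampling on the target domain, which this sketch omits.

```python
# Hedged sketch: bidirectional cross-attention between source/target tokens
# plus learnable class prototypes. Shapes, names, and the loss are assumptions
# for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BidirectionalCrossAttention(nn.Module):
    """Source tokens attend to target tokens and vice versa."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.s2t = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.t2s = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, src_tokens: torch.Tensor, tgt_tokens: torch.Tensor):
        # Each domain's queries attend to the other domain's keys/values,
        # so locally similar facial features are aggregated across domains.
        src_out, _ = self.s2t(src_tokens, tgt_tokens, tgt_tokens)
        tgt_out, _ = self.t2s(tgt_tokens, src_tokens, src_tokens)
        return src_out + src_tokens, tgt_out + tgt_tokens


class CategoryPrototypes(nn.Module):
    """Learnable per-class prototypes; features are pulled toward their class prototype."""

    def __init__(self, num_classes: int = 7, dim: int = 256):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, dim))

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity to each prototype, treated as logits; cross-entropy
        # is one simple choice of prototype-alignment loss.
        sims = F.normalize(feats, dim=-1) @ F.normalize(self.prototypes, dim=-1).T
        return F.cross_entropy(sims / 0.1, labels)


if __name__ == "__main__":
    bca = BidirectionalCrossAttention()
    protos = CategoryPrototypes()
    src = torch.randn(8, 49, 256)   # e.g. 7x7 patch tokens per source image
    tgt = torch.randn(8, 49, 256)   # target-domain patch tokens
    src_mix, tgt_mix = bca(src, tgt)
    loss = protos(src_mix.mean(dim=1), torch.randint(0, 7, (8,)))
    print(src_mix.shape, tgt_mix.shape, loss.item())
```

For the target domain, the same prototype loss would typically be driven by pseudo-labels selected during self-training rather than ground-truth labels, which is where a resampling strategy such as STR would come in.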
Journal Introduction
With the advent of very powerful PCs and high-end graphics cards, there has been incredible development in Virtual Worlds, real-time computer animation and simulation, and games. At the same time, new and cheaper Virtual Reality devices have appeared, allowing interaction with these real-time Virtual Worlds, and even with real worlds through Augmented Reality. Three-dimensional characters, especially Virtual Humans, are now of exceptional quality, which allows them to be used in the movie industry. But this is only a beginning: with the development of Artificial Intelligence and Agent technology, these characters will become more and more autonomous and even intelligent. They will inhabit Virtual Worlds in a Virtual Life together with animals and plants.