{"title":"用于动态面部表情识别的细粒度时态增强变换器","authors":"Yaning Zhang;Jiahe Zhang;Linlin Shen;Zitong Yu;Zan Gao","doi":"10.1109/LSP.2024.3456668","DOIUrl":null,"url":null,"abstract":"Dynamic facial expression recognition (DFER) plays a vital role in understanding human emotions and behaviors. Existing efforts tend to fall into a single modality self-supervised pretraining learning paradigm, which limits the representation ability of models. Besides, coarse-grained temporal modeling struggles to capture subtle facial expression representations from various inputs. In this letter, we propose a novel method for DFER, termed fine-grained temporal-enhanced transformer (FTET-DFER), which consists of two stages. First, we employ the inherent correlation between visual and auditory modalities in real videos, to capture temporally dense representations such as facial movements and expressions, in a self-supervised audio-visual learning manner. Second, we utilize the learned embeddings as targets, to achieve the DFER. In addition, we design the FTET block to study fine-grained temporal-enhanced facial expression features based on intra-clip locally-enhanced relations as well as inter-clip locally-enhanced global relationships in videos. Extensive experiments show that FTET-DFER outperforms the state-of-the-arts through within-dataset and cross-dataset evaluation.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":null,"pages":null},"PeriodicalIF":3.2000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fine-Grained Temporal-Enhanced Transformer for Dynamic Facial Expression Recognition\",\"authors\":\"Yaning Zhang;Jiahe Zhang;Linlin Shen;Zitong Yu;Zan Gao\",\"doi\":\"10.1109/LSP.2024.3456668\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Dynamic facial expression recognition (DFER) plays a vital role in understanding human emotions and behaviors. Existing efforts tend to fall into a single modality self-supervised pretraining learning paradigm, which limits the representation ability of models. Besides, coarse-grained temporal modeling struggles to capture subtle facial expression representations from various inputs. In this letter, we propose a novel method for DFER, termed fine-grained temporal-enhanced transformer (FTET-DFER), which consists of two stages. First, we employ the inherent correlation between visual and auditory modalities in real videos, to capture temporally dense representations such as facial movements and expressions, in a self-supervised audio-visual learning manner. Second, we utilize the learned embeddings as targets, to achieve the DFER. In addition, we design the FTET block to study fine-grained temporal-enhanced facial expression features based on intra-clip locally-enhanced relations as well as inter-clip locally-enhanced global relationships in videos. 
Extensive experiments show that FTET-DFER outperforms the state-of-the-arts through within-dataset and cross-dataset evaluation.\",\"PeriodicalId\":13154,\"journal\":{\"name\":\"IEEE Signal Processing Letters\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Signal Processing Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10669998/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10669998/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Fine-Grained Temporal-Enhanced Transformer for Dynamic Facial Expression Recognition
Dynamic facial expression recognition (DFER) plays a vital role in understanding human emotions and behaviors. Existing efforts tend to fall into a single modality self-supervised pretraining learning paradigm, which limits the representation ability of models. Besides, coarse-grained temporal modeling struggles to capture subtle facial expression representations from various inputs. In this letter, we propose a novel method for DFER, termed fine-grained temporal-enhanced transformer (FTET-DFER), which consists of two stages. First, we employ the inherent correlation between visual and auditory modalities in real videos, to capture temporally dense representations such as facial movements and expressions, in a self-supervised audio-visual learning manner. Second, we utilize the learned embeddings as targets, to achieve the DFER. In addition, we design the FTET block to study fine-grained temporal-enhanced facial expression features based on intra-clip locally-enhanced relations as well as inter-clip locally-enhanced global relationships in videos. Extensive experiments show that FTET-DFER outperforms the state-of-the-arts through within-dataset and cross-dataset evaluation.
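The letter's implementation is not reproduced on this page, so the internals of the FTET block are unknown here. As a rough illustration of the kind of fine-grained temporal modeling the abstract names (intra-clip locally-enhanced relations, then inter-clip locally-enhanced global relationships), the PyTorch sketch below applies self-attention within each clip and then across clip-level summaries. It is a hedged reading of the abstract only; the class name, mean-pooling choice, shapes, and hyperparameters are all assumptions, not the authors' implementation.

# Minimal sketch of intra-clip local attention followed by inter-clip global
# attention, in the spirit of the abstract. NOT the authors' FTET block; every
# name, shape, and hyperparameter here is an illustrative assumption.
import torch
import torch.nn as nn

class FineGrainedTemporalSketch(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.intra_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_intra = nn.LayerNorm(dim)
        self.norm_inter = nn.LayerNorm(dim)

    def forward(self, x):
        # x: (batch, clips, frames, dim) frame-level features
        b, c, f, d = x.shape
        # Intra-clip: attend only among frames of the same clip (local relations).
        frames = x.reshape(b * c, f, d)
        frames = self.norm_intra(
            frames + self.intra_attn(frames, frames, frames, need_weights=False)[0])
        frames = frames.reshape(b, c, f, d)
        # Inter-clip: pool each clip to one token, attend across clips
        # (global relationships), then broadcast the summary back to frames.
        clips = frames.mean(dim=2)  # (b, c, d)
        clips = self.norm_inter(
            clips + self.inter_attn(clips, clips, clips, need_weights=False)[0])
        return frames + clips.unsqueeze(2)  # (b, c, f, d)

if __name__ == "__main__":
    feats = torch.randn(2, 4, 8, 512)  # 2 videos, 4 clips, 8 frames, 512-d
    print(FineGrainedTemporalSketch()(feats).shape)  # torch.Size([2, 4, 8, 512])

A quick shape check: features of shape (2 videos, 4 clips, 8 frames, 512 dims) come back at the same shape, with each frame token enriched first by its clip-local context and then by the cross-clip summary.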
About the journal:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP and ICIP, and also at several workshops organized by the Signal Processing Society.