Pub Date: 2024-12-24 | DOI: 10.1109/TMM.2024.3521800
Li Huang;Yaping Huang;Qingji Guan
Image inpainting aims to restore visually realistic content in a corrupted image, while inpainting forensics focuses on locating inpainted regions to counter inpainting-based manipulations. Motivated by these two mutually interdependent tasks, in this paper we propose a novel image inpainting network, the Adversarial Collaborative Network (AdvColabNet), which leverages the contradictory and collaborative information of the two tasks of image inpainting and inpainting forensics to improve the inpainting model through adversarial collaborative training. Specifically, the proposed AdvColabNet is a coarse-to-fine two-stage framework. In the coarse training stage, a U-Net-style network trained as a simple generative adversarial model produces initial coarse inpainting results. In the fine stage, the authenticity of the inpainting results is assessed using an estimated forensic mask, and a forensics-driven adaptive weighting refinement strategy emphasizes learning from pixels with higher probabilities of being inpainted, helping the network focus on challenging regions and yielding more plausible results. Comprehensive evaluations on the CelebA-HQ and Places2 datasets demonstrate that our method achieves state-of-the-art robustness in terms of PSNR, SSIM, MAE, FID, and LPIPS. We also show that our method deceives the proposed inpainting forensic method more effectively than state-of-the-art inpainting methods, further demonstrating its superiority.
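The abstract gives no formulas for the forensics-driven adaptive weighting; a minimal numpy sketch of the underlying idea, in which the per-pixel reconstruction loss is re-weighted by the forensic network's estimated inpainting probability (the function name, the L1 base loss, and the `1 + alpha * mask` weighting form are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def weighted_refinement_loss(pred, target, forensic_mask, alpha=1.0):
    """Per-pixel L1 loss re-weighted by the forensic estimate that each
    pixel was inpainted (higher probability -> larger weight)."""
    per_pixel = np.abs(pred - target)        # plain reconstruction error
    weights = 1.0 + alpha * forensic_mask    # emphasize suspicious pixels
    return float((weights * per_pixel).mean())

rng = np.random.default_rng(0)
pred = rng.random((8, 8))
target = rng.random((8, 8))
mask = np.zeros((8, 8))
mask[2:5, 2:5] = 0.9                         # region flagged as inpainted
loss = weighted_refinement_loss(pred, target, mask)
```

Because every weight is at least 1, the weighted loss never falls below the unweighted mean error, so training pressure on flagged regions only increases.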
Title: Improving Image Inpainting via Adversarial Collaborative Training
IEEE Transactions on Multimedia, vol. 27, pp. 356-370.
Pub Date: 2024-12-24 | DOI: 10.1109/TMM.2024.3521674
Zhenyu Shu;Shiyang Li;Shiqing Xin;Ligang Liu
3D shape segmentation is a crucial task in multimedia analysis and processing, and recent years have seen a surge of research on the topic. However, many existing methods consider only the geometric features of 3D shapes and fail to explore potential connections between faces, limiting their segmentation performance. In this paper, we propose a novel segmentation approach that mines and enhances the potential consistency of 3D shapes to overcome this limitation. The key idea is to mine the consistency between different partitions of 3D shapes and to apply a dedicated consistency enhancement strategy that continuously optimizes the consistency features for the network. Our method also includes a comprehensive set of network structures for mining and enhancing consistent features, enabling more effective feature extraction and better use of the contextual information around each face when processing complex shapes. Extensive experiments on public benchmarks demonstrate that our approach achieves higher accuracy than existing methods.
Title: 3D Shape Segmentation With Potential Consistency Mining and Enhancement
IEEE Transactions on Multimedia, vol. 27, pp. 133-144.
Pub Date: 2024-12-24 | DOI: 10.1109/TMM.2024.3521823
Meijing Zhang;Mengxue Chen;Qi Li;Yanchen Chen;Rui Lin;Xiaolian Li;Shengfeng He;Wenxi Liu
Crowd counting has drawn increasing attention across various fields. However, existing crowd counting tasks primarily focus on estimating the overall population, ignoring the behavioral and semantic information of different social groups within the crowd. In this paper, we address a newly proposed research problem, fine-grained crowd counting, which involves identifying different categories of individuals and accurately counting them in static images. To fully leverage the categorical information in static crowd images, we propose a two-tier salient feature propagation module that sequentially extracts semantic information from the crowd and its surrounding environment. Additionally, we introduce a category difference loss to refine the feature representation by highlighting the differences between crowd categories. Moreover, our framework adapts to a novel problem setup, few-example fine-grained crowd counting, which requires only a few exemplar point annotations instead of dense annotations over predefined categories, making it applicable in a wider range of scenarios. A baseline model for this task is established by substituting the loss function of our proposed model with a novel hybrid loss that integrates a point-oriented cross-entropy loss and a category contrastive loss. Through comprehensive experiments, we present results on both the formulation and the application of fine-grained crowd counting.
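The hybrid loss is described only at a high level; a minimal numpy sketch of its two ingredients, a cross-entropy evaluated at annotated exemplar points and a contrastive term over category labels (both functions, the temperature value, and the exact pooling are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def point_cross_entropy(logits, labels):
    """Cross-entropy computed only at annotated exemplar points."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

def category_contrastive(feats, labels, tau=0.1):
    """Pull features of the same category together, push others apart."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / tau
    n, loss = len(labels), 0.0
    for i in range(n):
        others = np.arange(n) != i
        pos = (labels == labels[i]) & others
        if not pos.any():
            continue
        denom = np.exp(sim[i][others]).sum()
        loss += -np.log(np.exp(sim[i][pos]) / denom).mean()
    return float(loss / n)
```

With well-separated category clusters the contrastive term is small; shuffling the labels of the same features makes it large, which is the behavior the loss relies on.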
Title: Category-Contrastive Fine-Grained Crowd Counting and Beyond
IEEE Transactions on Multimedia, vol. 27, pp. 477-488.
Pub Date: 2024-12-24 | DOI: 10.1109/TMM.2024.3521841
Kai Ye;Zepeng Huang;Yilei Xiong;Yu Gao;Jinheng Xie;Linlin Shen
Existing multi-dataset detection works mainly focus on the detector's performance on each individual dataset, each with its own label space. In real-world applications, however, a unified label space across multiple datasets is usually required. To address this gap, we propose a progressive pseudo labeling (PPL) approach that detects objects across different datasets over a unified label space. Specifically, we employ the widely used teacher-student architecture to jointly refine pseudo labels and train the unified object detector. The student model learns from both annotated labels and pseudo labels produced by the teacher model, which is updated as the exponential moving average (EMA) of the student. Three modules, i.e., an Entropy-guided Adaptive Threshold (EAT), a Global Classification Module (GCM), and a Scene-Aware Fusion (SAF) strategy, are proposed to handle the noise in pseudo labels and fit the overall distribution. Extensive experiments on different multi-dataset benchmarks demonstrate that our method significantly outperforms the state of the art and is even comparable with supervised methods trained on annotations for all labels.
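The EMA teacher update and the entropy-based filtering of pseudo labels can be sketched in a few lines of numpy; the fixed entropy threshold below is a stand-in for the paper's adaptive EAT module, and all names are illustrative:

```python
import numpy as np

def ema_update(teacher, student, decay=0.999):
    """Teacher parameters track an exponential moving average of the student's."""
    return {k: decay * teacher[k] + (1 - decay) * student[k] for k in teacher}

def entropy_filter(probs, max_entropy=0.5):
    """Keep pseudo labels whose predictive entropy is below a threshold
    (a fixed stand-in for the adaptive, entropy-guided threshold)."""
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return ent < max_entropy

teacher = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
for _ in range(5):
    teacher = ema_update(teacher, student, decay=0.9)
```

After five steps with decay 0.9, the teacher has moved a fraction 1 - 0.9^5 of the way toward the student, which is why EMA teachers change smoothly even when the student is noisy.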
Title: Progressive Pseudo Labeling for Multi-Dataset Detection Over Unified Label Space
IEEE Transactions on Multimedia, vol. 27, pp. 531-543.
Pub Date: 2024-12-24 | DOI: 10.1109/TMM.2024.3521825
Hefeng Wang;Jiale Cao;Jin Xie;Aiping Yang;Yanwei Pang
Text-to-image diffusion models have shown powerful capability in conditional image synthesis. With large-scale vision-language pre-training, diffusion models can generate high-quality images with rich textures and reasonable structures under different text prompts. However, adapting pre-trained diffusion models for visual perception remains an open problem. In this paper, we propose an implicit and explicit language guidance framework for diffusion-based visual perception, named IEDP. IEDP comprises an implicit language guidance branch and an explicit language guidance branch. The implicit branch employs a frozen CLIP image encoder to directly generate implicit text embeddings that are fed to the diffusion model without explicit text prompts. The explicit branch uses the ground-truth labels of the corresponding images as text prompts to condition feature extraction in the diffusion model. During training, the two branches share model weights and are trained jointly, so both jointly guide feature learning. During inference, only the implicit branch is employed for the final prediction, which requires no ground-truth labels. Experiments on two typical perception tasks, semantic segmentation and depth estimation, show promising performance on both. For semantic segmentation, IEDP achieves an mIoU^ss score of 55.9% on the ADE20K validation set, outperforming the baseline method VPD by 2.2%. For depth estimation, IEDP outperforms VPD with a relative gain of 11.0%.
Title: Implicit and Explicit Language Guidance for Diffusion-Based Visual Perception
IEEE Transactions on Multimedia, vol. 27, pp. 466-476.
Pub Date: 2024-12-24 | DOI: 10.1109/TMM.2024.3521796
Haojin Deng;Yimin Yang
Contrastive learning has gained popularity and pushed state-of-the-art performance across numerous large-scale benchmarks. In contrastive learning, the contrastive loss function plays a pivotal role in discerning similarities between samples generated through techniques such as rotation or cropping. However, this learning mechanism can also introduce information distortion from the augmented samples: the trained model may over-rely on information from samples with identical labels while neglecting positive pairs that originate from the same initial image, especially in large datasets. This paper proposes a context-enriched contrastive loss function that improves learning effectiveness and addresses this information distortion through two convergence targets. The first component, which is notably sensitive to label contrast, differentiates between features of identical and distinct classes, boosting contrastive training efficiency. The second component draws the augmented samples from the same source image closer and distances all other samples, as in self-supervised learning. We evaluate the proposed approach on image classification over eight widely used large-scale benchmark datasets: CIFAR10, CIFAR100, Caltech-101, Caltech-256, ImageNet, BiasedMNIST, UTKFace, and CelebA. Experimental results demonstrate improvements over 16 state-of-the-art contrastive learning methods in both generalization performance and convergence speed. Notably, our technique stands out on systematic distortion tasks, demonstrating a 22.9% improvement over the original contrastive loss functions on the downstream BiasedMNIST dataset and highlighting its promise for more efficient and equitable downstream training.
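The two convergence targets can be sketched as one loss with a label-sensitive term plus an InfoNCE-style term over augmented views of the same image; the exact combination below (unweighted sum, cosine similarities, temperature 0.5) is an illustrative assumption, not the paper's loss:

```python
import numpy as np

def two_target_loss(feats_a, feats_b, labels, tau=0.5):
    """Sketch of a loss with two targets: (1) a label-sensitive term separating
    classes, (2) a self-supervised term matching each sample to its own
    augmented view (row i of feats_a vs. row i of feats_b)."""
    fa = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    fb = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    # Target 1: same-class features should be more similar than cross-class ones.
    sim = fa @ fa.T
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)
    diff = labels[:, None] != labels[None, :]
    label_term = sim[diff].mean() - sim[same].mean()
    # Target 2: InfoNCE over matching augmented views.
    logits = fa @ fb.T / tau
    logits = logits - logits.max(axis=1, keepdims=True)
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    aug_term = -np.diag(log_p).mean()
    return float(label_term + aug_term)
```

When the second view is correctly paired with its source image the loss is lower than when the pairing is scrambled, which is exactly the positive-pair signal the abstract says plain label-based losses neglect.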
Title: Context-Enriched Contrastive Loss: Enhancing Presentation of Inherent Sample Connections in Contrastive Learning Framework
IEEE Transactions on Multimedia, vol. 27, pp. 429-441.
Synthetic faces have been extensively researched and applied in various fields, such as face parsing and recognition. Compared to real face images, synthetic faces provide more controllable and consistent experimental stimuli, since expression animations can be precisely merged onto the facial skeleton. Accordingly, we establish an eye-tracking database with 780 synthetic face images and fixation data collected from 22 participants. The use of synthetic images with consistent expressions provides reliable data support for exploring the database and yields the following findings: (1) a correlation study between saliency intensity and facial movement reveals that the variation of attention distribution within facial regions is mainly attributable to mouth movement; (2) a categorized analysis of demographic factors shows that the bias towards salient regions aligns with differences across some demographic categories of synthetic characters. In practice, inferring the facial saliency distribution is commonly used to predict regions of interest for facial video applications. We therefore propose a benchmark model that accurately predicts saliency maps closely matching the ground-truth annotations, achieved by channel alignment and progressive summation for feature fusion, together with Sinusoidal Position Encoding. An ablation experiment further demonstrates the effectiveness of the proposed model.
We hope that this paper will contribute to advancing the photorealism of generative digital humans.
Title: Explain Vision Focus: Blending Human Saliency Into Synthetic Face Images
Kaiwei Zhang;Dandan Zhu;Xiongkuo Min;Huiyu Duan;Guangtao Zhai
Pub Date: 2024-12-24 | DOI: 10.1109/TMM.2024.3521670
IEEE Transactions on Multimedia, vol. 27, pp. 489-502.
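The abstract credits part of the benchmark model's accuracy to Sinusoidal Position Encoding; a minimal numpy sketch of the standard fixed sinusoidal encoding (function name and shape conventions are illustrative; the paper does not specify its exact variant):

```python
import numpy as np

def sinusoidal_position_encoding(length, dim):
    """Fixed sinusoidal encoding as popularized by Transformers:
    even channels get sin, odd channels get cos, with geometrically
    spaced frequencies so each position maps to a unique pattern."""
    pos = np.arange(length)[:, None]
    i = np.arange(dim // 2)[None, :]
    angles = pos / (10000 ** (2 * i / dim))
    enc = np.zeros((length, dim))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc
```

Because the encoding is deterministic, it injects positional information without adding any learned parameters to the fusion pipeline.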
Classical continuous sign language recognition (CSLR) faces two main challenges in real-world scenarios: accurate inter-frame movement trajectories may fail to be captured by traditional RGB cameras due to motion blur, and valid information may be insufficient in low-illumination scenes. In this paper, we leverage an event camera, for the first time, to overcome these challenges. Event cameras are bio-inspired vision sensors that can efficiently record high-speed sign language movements under low illumination and capture human information while eliminating redundant background interference. To fully exploit the benefits of the event camera for CSLR, we propose a novel event-guided multi-modal CSLR framework that achieves significant performance under complex scenarios. Specifically, a time redundancy correction (TRCorr) module rectifies redundant information in the temporal sequences, directing the model to focus on distinctive features, and a multi-modal cross-attention interaction (MCAI) module facilitates information fusion between the event and frame domains. Furthermore, we construct the first event-based CSLR dataset, named EvCSLR, which will be released as the first event-based CSLR benchmark. Experimental results demonstrate that our method achieves state-of-the-art performance on the EvCSLR and PHOENIX-2014T datasets.
Title: EvCSLR: Event-Guided Continuous Sign Language Recognition and Benchmark
Yu Jiang;Yuehang Wang;Siqi Li;Yongji Zhang;Qianren Guo;Qi Chu;Yue Gao
Pub Date: 2024-12-24 | DOI: 10.1109/TMM.2024.3521750
IEEE Transactions on Multimedia, vol. 27, pp. 1349-1361.
The spectral shape holds crucial information for Audio Classification (AC), encompassing the spectrum's envelope, its details, and its dynamic changes over time. Conventional methods describe the spectral shape with cepstral coefficients but overlook the details of its variation. Deep-learning approaches capture some of these dynamics but demand substantial training or fine-tuning resources. The Learning in the Model Space (LMS) framework captures the dynamic information of temporal data precisely by model fitting, even when computational resources and data are limited. However, applying LMS to audio faces two challenges: 1) the high sampling rate of audio hinders efficient data fitting and the capture of dynamic information; 2) the Dynamic Information of Partial Spectral Shapes (DIPSS) may enhance classification, as only specific spectral shapes are relevant for AC. This paper extends an AC framework called Effective Dynamic Information Capture (EDIC) to tackle these issues. EDIC constructs Mel-Frequency Cepstral Coefficient (MFCC) sequences within different dimensional intervals as the data to be fitted, which both reduces the number of sequence sampling points and describes how different parts of the spectral shape change over time. EDIC enables a topology-based selection algorithm in the model space that selects the DIPSS effective for the current AC task. Performance on three tasks confirms the effectiveness of EDIC.
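The "learning in the model space" idea, fitting a small model to each coefficient sequence and treating the fitted parameters as a fixed-size descriptor of its dynamics, can be sketched with a least-squares autoregressive fit; the AR model, its order, and all names here are illustrative assumptions, not EDIC's actual fitting procedure:

```python
import numpy as np

def ar_coefficients(seq, order=3):
    """Fit a small autoregressive model seq[t] ~ sum_i c_i * seq[t-order+i]
    by least squares; the coefficients summarize the sequence's dynamics,
    so sequences live as points in a 'model space'."""
    X = np.stack([seq[i:len(seq) - order + i] for i in range(order)], axis=1)
    y = seq[order:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

# Two sequences with different dynamics map to different model-space points,
# even though both have the same length and amplitude.
t = np.linspace(0, 4 * np.pi, 200)
slow, fast = np.sin(t), np.sin(3 * t)
```

A classifier in the model space then works on these short coefficient vectors instead of the raw high-rate samples, which is the resource saving the LMS framework targets.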
{"title":"Investigating the Effective Dynamic Information of Spectral Shapes for Audio Classification","authors":"Liangwei Chen;Xiren Zhou;Qiuju Chen;Fang Xiong;Huanhuan Chen","doi":"10.1109/TMM.2024.3521837","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521837","url":null,"abstract":"The spectral shape holds crucial information for Audio Classification (AC), encompassing the spectrum's envelope, details, and dynamic changes over time. Conventional methods utilize cepstral coefficients to describe the spectral shape but overlook its variation details. Deep-learning approaches capture some dynamics but demand substantial training or fine-tuning resources. The Learning in the Model Space (LMS) framework precisely captures the dynamic information of temporal data through model fitting, even when computational resources and data are limited. However, applying LMS to audio faces two challenges: 1) The high sampling rate of audio hinders efficient data fitting and the capture of dynamic information. 2) The Dynamic Information of Partial Spectral Shapes (DIPSS) may enhance classification, as only specific spectral shapes are relevant for AC. This paper extends an AC framework called Effective Dynamic Information Capture (EDIC) to tackle these issues. EDIC constructs Mel-Frequency Cepstral Coefficients (MFCC) sequences within different dimensional intervals as the fitted data, which not only reduces the number of sequence sampling points but also describes how different parts of the spectral shape change over time. EDIC enables us to implement a topology-based selection algorithm in the model space, selecting effective DIPSS for the current AC task. 
The performance on three tasks confirms the effectiveness of EDIC.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"1114-1126"},"PeriodicalIF":8.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143594347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
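The EDIC abstract above describes constructing MFCC sequences "within different dimensional intervals" so that each interval traces how one part of the spectral shape evolves over time. A minimal sketch of that idea follows; the abstract does not specify how intervals are formed or aggregated, so the contiguous partitioning, the per-interval averaging, and the function name `dipss_sequences` are all illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def dipss_sequences(mfcc, n_intervals):
    """Split the MFCC coefficient axis into contiguous dimensional
    intervals and return one time series per interval (here, the mean
    of the coefficients in that interval at each frame).

    mfcc: array of shape (n_coeffs, n_frames)
    returns: array of shape (n_intervals, n_frames)
    """
    # Contiguous index groups along the coefficient dimension
    # (assumed partitioning; the paper's interval scheme may differ).
    groups = np.array_split(np.arange(mfcc.shape[0]), n_intervals)
    return np.stack([mfcc[idx].mean(axis=0) for idx in groups])

# Toy MFCC matrix: 13 coefficients x 100 frames.
rng = np.random.default_rng(0)
mfcc = rng.standard_normal((13, 100))
seqs = dipss_sequences(mfcc, n_intervals=4)
print(seqs.shape)  # each of the 4 interval sequences keeps all 100 frames
```

Each interval yields a far shorter description of the spectrogram than the raw waveform (4 sequences of 100 frames here), consistent with the abstract's point about reducing sequence sampling points while preserving per-part dynamics.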
Pub Date : 2024-12-24DOI: 10.1109/TMM.2024.3521702
Liangchen Liu;Nannan Wang;Dawei Zhou;Decheng Liu;Xi Yang;Xinbo Gao;Tongliang Liu
This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs), i.e., improving performance on unseen classes while maintaining performance on seen classes. Compared with existing generalizable methods, which neglect degradation on seen classes, this problem setting is stricter and fits practical applications more closely. To solve this problem, we start from the optimization perspective and leverage the relationship between loss landscape geometry and model generalization ability. By analyzing the loss landscapes of the state-of-the-art method and a vanilla Sharpness-Aware Minimization (SAM) based method, we conclude that the trade-off performance correlates with both loss value and loss sharpness, and that each of them is indispensable. However, we find that the optimizing gradient of existing methods cannot maintain high relevance to both loss value and loss sharpness during optimization, which severely affects their trade-off performance. To this end, we propose a novel SAM-based method for prompt learning, denoted Gradient Constrained Sharpness-aware Context Optimization (GCSCoOp), to dynamically constrain the optimizing gradient, thus achieving the above two-fold optimization objective simultaneously. Extensive experiments verify the effectiveness of GCSCoOp in the trade-off problem.
{"title":"Generalizable Prompt Learning via Gradient Constrained Sharpness-Aware Minimization","authors":"Liangchen Liu;Nannan Wang;Dawei Zhou;Decheng Liu;Xi Yang;Xinbo Gao;Tongliang Liu","doi":"10.1109/TMM.2024.3521702","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521702","url":null,"abstract":"This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs), i.e., improving performance on unseen classes while maintaining performance on seen classes. Compared with existing generalizable methods, which neglect degradation on seen classes, this problem setting is stricter and fits practical applications more closely. To solve this problem, we start from the optimization perspective and leverage the relationship between loss landscape geometry and model generalization ability. By analyzing the loss landscapes of the state-of-the-art method and a vanilla Sharpness-Aware Minimization (SAM) based method, we conclude that the trade-off performance correlates with both <b>loss value</b> and <b>loss sharpness</b>, and that each of them is indispensable. However, we find that the optimizing gradient of existing methods cannot maintain high relevance to both loss value and loss sharpness during optimization, which severely affects their trade-off performance. To this end, we propose a novel SAM-based method for prompt learning, denoted Gradient Constrained Sharpness-aware Context Optimization (GCSCoOp), to dynamically constrain the optimizing gradient, thus achieving the above two-fold optimization objective simultaneously. 
Extensive experiments verify the effectiveness of GCSCoOp in the trade-off problem.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"1100-1113"},"PeriodicalIF":8.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143594415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
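The GCSCoOp abstract above builds on vanilla Sharpness-Aware Minimization, whose update can be sketched in a few lines: ascend to the worst-case point inside a small L2 ball around the current weights, then descend using the gradient evaluated there. The toy numpy version below on a quadratic loss illustrates only this standard SAM step, not GCSCoOp's gradient constraint; the function name `sam_step` and the hyperparameter values are illustrative assumptions.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One vanilla SAM update (Foret et al.): perturb the weights to the
    approximate worst-case point within an L2 ball of radius rho, then
    apply gradient descent using the gradient at that perturbed point."""
    g = grad_fn(w)
    # First step: normalized ascent direction toward higher loss.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Second step: descend with the gradient taken at the perturbed weights.
    g_sharp = grad_fn(w + eps)
    return w - lr * g_sharp

# Toy loss L(w) = 0.5 * ||w||^2, whose gradient is simply w.
grad_fn = lambda w: w
w = np.array([2.0, -1.0])
for _ in range(50):
    w = sam_step(w, grad_fn)
print(np.linalg.norm(w))  # driven close to the flat minimum at the origin
```

The abstract's observation is that this two-step gradient does not, by itself, stay relevant to both loss value and sharpness throughout training, which is the gap GCSCoOp's gradient constraint is designed to close.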