Camouflaged object detection (COD) aims to segment target objects whose colors, textures, or shapes are similar to those of the background environment. Due to their limited ability to distinguish highly similar patterns, existing COD methods usually produce inaccurate predictions, especially around boundary areas, when coping with complex scenes. This paper proposes a Progressive Region-to-Boundary Exploration Network (PRBE-Net) to accurately detect camouflaged objects. PRBE-Net follows an encoder-decoder framework and includes three key modules. First, the high-level and low-level features of the encoder are integrated by a region and boundary exploration module that exploits their complementary information to extract the object's coarse region cues and fine boundary cues simultaneously. Second, taking the region cues as guidance, a Region Enhancement (RE) module adaptively localizes and enhances the region information at each layer of the encoder. Third, considering that camouflaged objects usually have blurry boundaries, a Boundary Refinement (BR) decoder follows the RE module to better detect boundary areas with the assistance of the boundary cues. Through top-down deep supervision, PRBE-Net progressively refines its predictions. Extensive experiments on four datasets indicate that PRBE-Net achieves superior results over 21 state-of-the-art COD methods. It also shows good results on polyp segmentation, a COD-related task in the medical field.
{"title":"Progressive Region-to-Boundary Exploration Network for Camouflaged Object Detection","authors":"Guanghui Yue;Shangjie Wu;Tianwei Zhou;Gang Li;Jie Du;Yu Luo;Qiuping Jiang","doi":"10.1109/TMM.2024.3521761","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521761","url":null,"abstract":"Camouflaged object detection (COD) aims to segment targeted objects that have similar colors, textures, or shapes to their background environment. Due to the limited ability in distinguishing highly similar patterns, existing COD methods usually produce inaccurate predictions, especially around the boundary areas, when coping with complex scenes. This paper proposes a Progressive Region-to-Boundary Exploration Network (PRBE-Net) to accurately detect camouflaged objects. PRBE-Net follows an encoder-decoder framework and includes three key modules. Specifically, firstly, both high-level and low-level features of the encoder are integrated by a region and boundary exploration module to explore their complementary information for extracting the object's coarse region and fine boundary cues simultaneously. Secondly, taking the region cues as the guidance information, a Region Enhancement (RE) module is used to adaptively localize and enhance the region information at each layer of the encoder. Subsequently, considering that camouflaged objects usually have blurry boundaries, a Boundary Refinement (BR) decoder is used after the RE module to better detect the boundary areas with the assistance of boundary cues. Through top-down deep supervision, PRBE-Net can progressively refine the prediction. Extensive experiments on four datasets indicate that our PRBE-Net achieves superior results over 21 state-of-the-art COD methods. Additionally, it also shows good results on polyp segmentation, a COD-related task in the medical field.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"236-248"},"PeriodicalIF":8.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-24. DOI: 10.1109/TMM.2024.3521683
Min Dang;Gang Liu;Hao Li;Di Wang;Rong Pan;Quan Wang
Oriented object detection typically adds a rotation angle to the regressed horizontal bounding box (HBB) to represent the oriented bounding box (OBB). However, existing oriented object detectors based on angle regression suffer from inconsistency between the metric and the loss, boundary discontinuity, or square-like problems. To solve these problems, we propose an anchor-free oriented object detector named PRA-Det, which assigns the center region of the object to regress OBBs represented by polar radius vectors. Specifically, PRA-Det introduces a diamond-shaped positive region with a category-wise attention factor to assign positive sample points for regressing the polar radius vectors. PRA-Det regresses the polar radius vectors of the edges from the assigned sample points as the regression target and suppresses low-quality predicted polar radius vectors through the category-wise attention factor. OBBs defined under different protocols are uniformly encoded by a polar radius encoding module into regression targets represented by polar radius vectors. Because the regression target represented by polar radius vectors contains no angle parameters during training, the angle-sensitive boundary discontinuity and square-like problems are avoided. To optimize the predicted polar radius vectors, we design a spatial geometry loss that improves detection accuracy. Furthermore, in the inference stage, the center offset score of the polar radius vector is combined with the classification score as the confidence to alleviate the inconsistency between classification and regression. Extensive experiments on public benchmarks demonstrate that PRA-Det is highly competitive with state-of-the-art oriented object detectors and outperforms other comparison methods.
{"title":"PRA-Det: Anchor-Free Oriented Object Detection With Polar Radius Representation","authors":"Min Dang;Gang Liu;Hao Li;Di Wang;Rong Pan;Quan Wang","doi":"10.1109/TMM.2024.3521683","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521683","url":null,"abstract":"Oriented object detection typically adds an additional rotation angle to the regressed horizontal bounding box (HBB) for representing the oriented bounding box (OBB). However, existing oriented object detectors based on regression angles face inconsistency between metric and loss, boundary discontinuity or square-like problems. To solve the above problems, we propose an anchor-free oriented object detector named PRA-Det, which assigns the center region of the object to regress OBBs represented by the polar radius vectors. Specifically, the proposed PRA-Det introduces a diamond-shaped positive region of category-wise attention factor to assign positive sample points to regress polar radius vectors. PRA-Det regresses the polar radius vector of the edges from the assigned sample points as the regression target and suppresses the predicted low-quality polar radius vectors through the category-wise attention factor. The OBBs defined for different protocols are uniformly encoded by the polar radius encoding module into regression targets represented by polar radius vectors. Therefore, the regression target represented by the polar radius vector does not have angle parameters during training, thus solving the angle-sensitive boundary discontinuity and square-like problems. To optimize the predicted polar radius vector, we design a spatial geometry loss to improve the detection accuracy. Furthermore, in the inference stage, the center offset score of the polar radius vector is combined with the classification score as the confidence to alleviate the inconsistency between classification and regression. The extensive experiments on public benchmarks demonstrate that the PRA-Det is highly competitive with state-of-the-art oriented object detectors and outperforms other comparison methods.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"145-157"},"PeriodicalIF":8.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, learning-based color and tone enhancement methods for photos have become increasingly popular. However, most learning-based image enhancement methods merely learn a mapping from one distribution to another based on a single dataset, lacking the ability to adjust images continuously and controllably. Enabling learning-based enhancement models to adjust an image continuously is important, since in many cases a slighter or stronger enhancement effect is preferred over a single fixed result. In this paper, we propose a quality-guided image enhancement paradigm that enables image enhancement models to learn the distribution of images with various quality ratings. By learning this distribution, image enhancement models can associate image features with their corresponding perceptual qualities, which can then be used to adjust images continuously according to different quality scores. To validate the effectiveness of the proposed method, a subjective quality assessment experiment is first conducted, focusing on skin tone adjustment in portrait photography. Guided by the subjective quality ratings obtained from this experiment, our method can adjust the skin tone according to different quality requirements. Furthermore, an experiment conducted on 10 natural raw images corroborates the effectiveness of our model in situations with fewer subjects and fewer shots, and also demonstrates its general applicability to natural images.
{"title":"Quality-Guided Skin Tone Enhancement for Portrait Photography","authors":"Shiqi Gao;Huiyu Duan;Xinyue Li;Kang Fu;Yicong Peng;Qihang Xu;Yuanyuan Chang;Jia Wang;Xiongkuo Min;Guangtao Zhai","doi":"10.1109/TMM.2024.3521829","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521829","url":null,"abstract":"In recent years, learning-based color and tone enhancement methods for photos have become increasingly popular. However, most learning-based image enhancement methods just learn a mapping from one distribution to another based on one dataset, lacking the ability to adjust images continuously and controllably. It is important to enable the learning-based enhancement models to adjust an image continuously, since in many cases we may want to get a slighter or stronger enhancement effect rather than one fixed adjusted result. In this paper, we propose a quality-guided image enhancement paradigm that enables image enhancement models to learn the distribution of images with various quality ratings. By learning this distribution, image enhancement models can associate image features with their corresponding perceptual qualities, which can be used to adjust images continuously according to different quality scores. To validate the effectiveness of our proposed method, a subjective quality assessment experiment is first conducted, focusing on skin tone adjustment in portrait photography. Guided by the subjective quality ratings obtained from this experiment, our method can adjust the skin tone corresponding to different quality requirements. Furthermore, an experiment conducted on 10 natural raw images corroborates the effectiveness of our model in situations with fewer subjects and fewer shots, and also demonstrates its general applicability to natural images.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"171-185"},"PeriodicalIF":8.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-24. DOI: 10.1109/TMM.2024.3521800
Li Huang;Yaping Huang;Qingji Guan
Image inpainting aims to restore visually realistic content in a corrupted image, while inpainting forensic methods focus on locating the inpainted regions to counter inpainting manipulations. Motivated by these two mutually interdependent tasks, in this paper we propose a novel image inpainting network called the Adversarial Collaborative Network (AdvColabNet), which leverages the contradictory and collaborative information from the two tasks of image inpainting and inpainting forensics to improve the inpainting model through adversarial collaborative training. Specifically, the proposed AdvColabNet is a coarse-to-fine two-stage framework. In the coarse training stage, a simple GAN-based U-Net-style network generates initial coarse inpainting results. In the fine stage, the authenticity of the inpainting results is assessed using the estimated forensic mask. A forensics-driven adaptive weighting refinement strategy emphasizes learning from pixels with higher probabilities of being inpainted, which helps the network focus on challenging regions and yields more plausible inpainting results. Comprehensive evaluations on the CelebA-HQ and Places2 datasets demonstrate that our method achieves state-of-the-art performance in terms of the PSNR, SSIM, MAE, FID, and LPIPS metrics. We also show that our method deceives the proposed inpainting forensic method more effectively than state-of-the-art inpainting methods, further demonstrating its superiority.
{"title":"Improving Image Inpainting via Adversarial Collaborative Training","authors":"Li Huang;Yaping Huang;Qingji Guan","doi":"10.1109/TMM.2024.3521800","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521800","url":null,"abstract":"Image inpainting aims to restore visually realistic contents from a corrupted image, while inpainting forensic methods focus on locating the inpainted regions to fight against inpainting manipulations. Motivated by these two mutually interdependent tasks, in this paper, we propose a novel image inpainting network called Adversarial Collaborative Network (AdvColabNet), which leverages the contradictory and collaborative information from the two tasks of image inpainting and inpainting forensics to enhance the progress of the inpainting model through adversarial collaborative training. Specifically, the proposed AdvColabNet is a coarse-to-fine two-stage framework. In the coarse training stage, a simple generative adversarial model-based U-Net-style network generates initial coarse inpainting results. In the fine stage, the authenticity of inpainting results is assessed using the estimated forensic mask. A forensics-driven adaptive weighting refinement strategy is developed to emphasize learning from pixels with higher probabilities of being inpainted, which helps the network to focus on the challenging regions, resulting in more plausible inpainting results. Comprehensive evaluations on the CelebA-HQ and Places2 datasets demonstrate that our method achieves state-of-the-art robustness performance in terms of PSNR, SSIM, MAE, FID, and LPIPS metrics. We also show that our method effectively deceives the proposed inpainting forensic method compared to state-of-the-art inpainting methods, further demonstrating the superiority of the proposed method.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"356-370"},"PeriodicalIF":8.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-24. DOI: 10.1109/TMM.2024.3521674
Zhenyu Shu;Shiyang Li;Shiqing Xin;Ligang Liu
3D shape segmentation is a crucial task in multimedia analysis and processing, and recent years have seen a surge in research on this topic. However, many existing methods consider only the geometric features of 3D shapes and fail to explore the potential connections between faces, limiting their segmentation performance. In this paper, we propose a novel segmentation approach that mines and enhances the potential consistency of 3D shapes to overcome this limitation. The key idea is to mine the consistency between different partitions of a 3D shape and to use a dedicated consistency enhancement strategy to continuously optimize the consistency features for the network. Our method also includes a comprehensive set of network structures for mining and enhancing consistent features, enabling more effective feature extraction and better use of the contextual information around each face when processing complex shapes. We evaluate our approach on public benchmarks through extensive experiments and demonstrate that it achieves higher accuracy than existing methods.
{"title":"3D Shape Segmentation With Potential Consistency Mining and Enhancement","authors":"Zhenyu Shu;Shiyang Li;Shiqing Xin;Ligang Liu","doi":"10.1109/TMM.2024.3521674","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521674","url":null,"abstract":"3D shape segmentation is a crucial task in the field of multimedia analysis and processing, and recent years have seen a surge in research on this topic. However, many existing methods only consider geometric features of 3D shapes and fail to explore the potential connections between faces, limiting their segmentation performance. In this paper, we propose a novel segmentation approach that mines and enhances the potential consistency of 3D shapes to overcome this limitation. The key idea is to mine the consistency between different partitions of 3D shapes and to use the unique consistency enhancement strategy to continuously optimize the consistency features for the network. Our method also includes a comprehensive set of network structures to mine and enhance consistent features, enabling more effective feature extraction and better utilization of contextual information around each face when processing complex shapes. We evaluate our approach on public benchmarks through extensive experiments and demonstrate its effectiveness in achieving higher accuracy than existing methods.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"133-144"},"PeriodicalIF":8.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-24. DOI: 10.1109/TMM.2024.3521823
Meijing Zhang;Mengxue Chen;Qi Li;Yanchen Chen;Rui Lin;Xiaolian Li;Shengfeng He;Wenxi Liu
Crowd counting has drawn increasing attention across various fields. However, existing crowd counting tasks primarily focus on estimating the overall population, ignoring the behavioral and semantic information of different social groups within the crowd. In this paper, we aim to address a newly proposed research problem, namely fine-grained crowd counting, which involves identifying different categories of individuals and accurately counting them in static images. In order to fully leverage the categorical information in static crowd images, we propose a two-tier salient feature propagation module designed to sequentially extract semantic information from both the crowd and its surrounding environment. Additionally, we introduce a category difference loss to refine the feature representation by highlighting the differences between various crowd categories. Moreover, our proposed framework can adapt to a novel problem setup called few-example fine-grained crowd counting. This setup, unlike the original fine-grained crowd counting, requires only a few exemplar point annotations instead of dense annotations from predefined categories, making it applicable in a wider range of scenarios. The baseline model for this task can be established by substituting the loss function in our proposed model with a novel hybrid loss function that integrates point-oriented cross-entropy loss and category contrastive loss. Through comprehensive experiments, we present results in both the formulation and application of fine-grained crowd counting.
{"title":"Category-Contrastive Fine-Grained Crowd Counting and Beyond","authors":"Meijing Zhang;Mengxue Chen;Qi Li;Yanchen Chen;Rui Lin;Xiaolian Li;Shengfeng He;Wenxi Liu","doi":"10.1109/TMM.2024.3521823","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521823","url":null,"abstract":"Crowd counting has drawn increasing attention across various fields. However, existing crowd counting tasks primarily focus on estimating the overall population, ignoring the behavioral and semantic information of different social groups within the crowd. In this paper, we aim to address a newly proposed research problem, namely fine-grained crowd counting, which involves identifying different categories of individuals and accurately counting them in static images. In order to fully leverage the categorical information in static crowd images, we propose a two-tier salient feature propagation module designed to sequentially extract semantic information from both the crowd and its surrounding environment. Additionally, we introduce a category difference loss to refine the feature representation by highlighting the differences between various crowd categories. Moreover, our proposed framework can adapt to a novel problem setup called few-example fine-grained crowd counting. This setup, unlike the original fine-grained crowd counting, requires only a few exemplar point annotations instead of dense annotations from predefined categories, making it applicable in a wider range of scenarios. The baseline model for this task can be established by substituting the loss function in our proposed model with a novel hybrid loss function that integrates point-oriented cross-entropy loss and category contrastive loss. Through comprehensive experiments, we present results in both the formulation and application of fine-grained crowd counting.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"477-488"},"PeriodicalIF":8.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-24. DOI: 10.1109/TMM.2024.3521825
Hefeng Wang;Jiale Cao;Jin Xie;Aiping Yang;Yanwei Pang
Text-to-image diffusion models have shown a powerful ability in conditional image synthesis. With large-scale vision-language pre-training, diffusion models are able to generate high-quality images with rich textures and reasonable structures under different text prompts. However, adapting pre-trained diffusion models for visual perception is an open problem. In this paper, we propose an implicit and explicit language guidance framework for diffusion-based visual perception, named IEDP. IEDP comprises an implicit language guidance branch and an explicit language guidance branch. The implicit branch employs a frozen CLIP image encoder to directly generate implicit text embeddings that are fed to the diffusion model without explicit text prompts. The explicit branch uses the ground-truth labels of the corresponding images as text prompts to condition feature extraction in the diffusion model. During training, we jointly train the diffusion model by sharing the model weights of these two branches, so that the implicit and explicit branches jointly guide feature learning. During inference, we employ only the implicit branch for the final prediction, which does not require any ground-truth labels. Experiments are performed on two typical perception tasks: semantic segmentation and depth estimation. IEDP achieves promising performance on both tasks. For semantic segmentation, IEDP achieves an mIoU$^{\text{ss}}$ score of 55.9% on the ADE20K validation set, outperforming the baseline method VPD by 2.2%. For depth estimation, IEDP outperforms VPD with a relative gain of 11.0%.
{"title":"Implicit and Explicit Language Guidance for Diffusion-Based Visual Perception","authors":"Hefeng Wang;Jiale Cao;Jin Xie;Aiping Yang;Yanwei Pang","doi":"10.1109/TMM.2024.3521825","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521825","url":null,"abstract":"Text-to-image diffusion models have shown powerful ability on conditional image synthesis. With large-scale vision-language pre-training, diffusion models are able to generate high-quality images with rich textures and reasonable structures under different text prompts. However, adapting pre-trained diffusion models for visual perception is an open problem. In this paper, we propose an implicit and explicit language guidance framework for diffusion-based visual perception, named IEDP. Our IEDP comprises an implicit language guidance branch and an explicit language guidance branch. The implicit branch employs a frozen CLIP image encoder to directly generate implicit text embeddings that are fed to the diffusion model without explicit text prompts. The explicit branch uses the ground-truth labels of corresponding images as text prompts to condition feature extraction in diffusion model. During training, we jointly train the diffusion model by sharing the model weights of these two branches. As a result, the implicit and explicit branches can jointly guide feature learning. During inference, we employ only implicit branch for final prediction, which does not require any ground-truth labels. Experiments are performed on two typical perception tasks, including semantic segmentation and depth estimation. Our IEDP achieves promising performance on both tasks. For semantic segmentation, our IEDP has the mIoU<inline-formula><tex-math>$^text{ss}$</tex-math></inline-formula> score of 55.9% on ADE20K validation set, which outperforms the baseline method VPD by 2.2%. For depth estimation, our IEDP outperforms the baseline method VPD with a relative gain of 11.0%.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"466-476"},"PeriodicalIF":8.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-24. DOI: 10.1109/TMM.2024.3521796
Haojin Deng;Yimin Yang
Contrastive learning has gained popularity and pushed state-of-the-art performance across numerous large-scale benchmarks. In contrastive learning, the contrastive loss function plays a pivotal role in discerning similarities between samples augmented through techniques such as rotation or cropping. However, this learning mechanism can also introduce information distortion from the augmented samples: the trained model may develop a significant overreliance on information from samples with identical labels while neglecting positive pairs that originate from the same initial image, especially on large datasets. This paper proposes a context-enriched contrastive loss function that improves learning effectiveness and addresses this information distortion by encompassing two convergence targets. The first component, which is sensitive to label contrast, differentiates between features of identical and distinct classes, which boosts contrastive training efficiency. The second component draws augmented samples from the same source image closer and pushes all other samples away, similar to self-supervised learning. We evaluate the proposed approach on image classification using 8 widely accepted large-scale benchmark datasets: CIFAR10, CIFAR100, Caltech-101, Caltech-256, ImageNet, BiasedMNIST, UTKFace, and CelebA. The experimental results demonstrate that the proposed method improves over 16 state-of-the-art contrastive learning methods in terms of both generalization performance and learning convergence speed. Interestingly, our technique stands out on systematic distortion tasks, demonstrating a 22.9% improvement over the original contrastive loss functions on the downstream BiasedMNIST dataset and highlighting its promise for more efficient and equitable downstream training.
{"title":"Context-Enriched Contrastive Loss: Enhancing Presentation of Inherent Sample Connections in Contrastive Learning Framework","authors":"Haojin Deng;Yimin Yang","doi":"10.1109/TMM.2024.3521796","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521796","url":null,"abstract":"Contrastive learning has gained popularity and pushes state-of-the-art performance across numerous large-scale benchmarks. In contrastive learning, the contrastive loss function plays a pivotal role in discerning similarities between samples through techniques such as rotation or cropping. However, this learning mechanism can also introduce information distortion from the augmented samples. This is because the trained model may develop a significant overreliance on information from samples with identical labels, while concurrently neglecting positive pairs that originate from the same initial image, especially in expansive datasets. This paper proposes a context-enriched contrastive loss function that concurrently improves learning effectiveness and addresses the information distortion by encompassing two convergence targets. The first component, which is notably sensitive to label contrast, differentiates between features of identical and distinct classes which boosts the contrastive training efficiency. Meanwhile, the second component draws closer the augmented samples from the same source image and distances all other samples, similar to self-supervised learning. We evaluate the proposed approach on image classification tasks, which are among the most widely accepted 8 recognition large-scale benchmark datasets: CIFAR10, CIFAR100, Caltech-101, Caltech-256, ImageNet, BiasedMNIST, UTKFace, and CelebA datasets. The experimental results demonstrate that the proposed method achieves improvements over 16 state-of-the-art contrastive learning methods in terms of both generalization performance and learning convergence speed. Interestingly, our technique stands out in addressing systematic distortion tasks. It demonstrates a 22.9% improvement compared to original contrastive loss functions in the downstream BiasedMNIST dataset, highlighting its promise for more efficient and equitable downstream training.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"429-441"},"PeriodicalIF":8.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synthetic faces have been extensively researched and applied in various fields, such as face parsing and recognition. Compared to real face images, synthetic faces provide more controllable and consistent experimental stimuli, because expression animations can be precisely merged onto the facial skeleton. Accordingly, we establish an eye-tracking database with 780 synthetic face images and fixation data collected from 22 participants. The use of synthetic images with consistent expressions ensures reliable data support for exploring the database and leads to the following findings: (1) a correlation study between saliency intensity and facial movement reveals that the variation of the attention distribution within facial regions is mainly attributed to the movement of the mouth; (2) a categorized analysis of different demographic factors demonstrates that the bias towards salient regions aligns with differences in some demographic categories of the synthetic characters. In practice, inference of the facial saliency distribution is commonly used to predict regions of interest for facial video-related applications. We therefore propose a benchmark model that accurately predicts saliency maps closely matching the ground-truth annotations. This is achieved by utilizing channel alignment and progressive summation for feature fusion, along with the incorporation of Sinusoidal Position Encoding. An ablation experiment also demonstrates the effectiveness of the proposed model. We hope that this paper will contribute to advancing the photorealism of generative digital humans.
{"title":"Explain Vision Focus: Blending Human Saliency Into Synthetic Face Images","authors":"Kaiwei Zhang;Dandan Zhu;Xiongkuo Min;Huiyu Duan;Guangtao Zhai","doi":"10.1109/TMM.2024.3521670","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521670","url":null,"abstract":"Synthetic faces have been extensively researched and applied in various fields, such as face parsing and recognition. Compared to real face images, synthetic faces engender more controllable and consistent experimental stimuli due to the ability to precisely merge expression animations onto the facial skeleton. Accordingly, we establish an eye-tracking database with 780 synthetic face images and fixation data collected from 22 participants. The use of synthetic images with consistent expressions ensures reliable data support for exploring the database and determining the following findings: (1) A correlation study between saliency intensity and facial movement reveals that the variation of attention distribution within facial regions is mainly attributed to the movement of the mouth. (2) A categorized analysis of different demographic factors demonstrates that the bias towards salient regions aligns with differences in some demographic categories of synthetic characters. In practice, inference of facial saliency distribution is commonly used to predict the regions of interest for facial video-related applications. Therefore, we propose a benchmark model that accurately predicts saliency maps, closely matching the ground truth annotations. This achievement is made possible by utilizing channel alignment and progressive summation for feature fusion, along with the incorporation of Sinusoidal Position Encoding. The ablation experiment also demonstrates the effectiveness of our proposed model. We hope that this paper will contribute to advancing the photorealism of generative digital humans.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"489-502"},"PeriodicalIF":8.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-24. DOI: 10.1109/TMM.2024.3521678
Yue Dai;Shihui Ying;Yue Gao
Rotation-invariant point cloud analysis is essential for many real-world applications where objects can appear in arbitrary orientations. Traditional local rotation-invariant methods rely on lossy region descriptors, limiting the global comprehension of 3D objects. Conversely, global features derived from pose alignment can capture complementary information. To leverage both local and global consistency for enhanced accuracy, we propose the Global-Local-Consistent Hypergraph Cross-Attention Network (GLC-HCAN). This framework includes a Global Consistent Feature (GCF) representation branch, a Local Consistent Feature (LCF) representation branch, and a Hypergraph Cross-Attention (HyperCA) network that models complex correlations through global-local-consistent hypergraph representation learning. Specifically, the GCF branch employs a PCA-based multi-pose grouping and aggregation strategy for improved global comprehension, while the LCF branch uses local farthest-reference-point features to enhance local region descriptions. To capture high-order and complex global-local correlations, we construct hypergraphs that integrate both features, mutually enhancing and fusing the representations. The inductive HyperCA module leverages attention techniques to better exploit these high-order relations for comprehensive understanding. Consequently, GLC-HCAN is an effective and robust rotation-invariant point cloud analysis network, suitable for object classification and shape retrieval tasks in SO(3). Experimental results on both synthetic and scanned point cloud datasets demonstrate that GLC-HCAN outperforms state-of-the-art methods.
{"title":"Exploring Local and Global Consistent Correlation on Hypergraph for Rotation Invariant Point Cloud Analysis","authors":"Yue Dai;Shihui Ying;Yue Gao","doi":"10.1109/TMM.2024.3521678","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521678","url":null,"abstract":"Rotation invariant point cloud analysis is essential for many real-world applications where objects can appear in arbitrary orientations. Traditional local rotation-invariant methods rely on lossy region descriptors, limiting the global comprehension of 3D objects. Conversely, global features derived from pose alignment can capture complementary information. To leverage both local and global consistency for enhanced accuracy, we propose the Global-Local-Consistent Hypergraph Cross-Attention Network (GLC-HCAN). This framework includes the Global Consistent Feature (GCF) representation branch, the Local Consistent Feature (LCF) representation branch, and the Hypergraph Cross-Attention (HyperCA) network to model complex correlations through the global-local-consistent hypergraph representation learning. Specifically, the GCF branch employs a multi-pose grouping and aggregation strategy based on PCA for improved global comprehension. Simultaneously, the LCF branch uses local farthest reference point features to enhance local region descriptions. To capture high-order and complex global-local correlations, we construct hypergraphs that integrate both features, mutually enhancing and fusing the representations. The inductive HyperCA module leverages attention techniques to better utilize these high-order relations for comprehensive understanding. Consequently, GLC-HCAN offers an effective and robust rotation-invariant point cloud analysis network, suitable for object classification and shape retrieval tasks in SO(3). Experimental results on both synthetic and scanned point cloud datasets demonstrate that GLC-HCAN outperforms state-of-the-art methods.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"186-197"},"PeriodicalIF":8.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}