Joint Transformer and Mamba fusion for multispectral object detection
Pub Date: 2025-02-27 | DOI: 10.1016/j.imavis.2025.105468
Chao Li, Xiaoming Peng
Multispectral object detection is generally considered superior to single-modality object detection due to the complementary properties of multispectral image pairs. However, how to integrate features from images of different modalities for object detection remains an open problem. In this paper, we propose a new multispectral object detection framework based on the Transformer and Mamba architectures, called joint Transformer and Mamba detection (JTMDet). Specifically, we divide the feature fusion process into two stages, an intra-scale fusion stage and an inter-scale fusion stage, to comprehensively exploit multi-modal features at different scales. To this end, we design cross-modal fusion (CMF) and cross-level fusion (CLF) modules, both of which contain JTMBlock modules. A JTMBlock interweaves Transformer and Mamba layers to robustly capture the useful information in multispectral image pairs while maintaining high inference speed. Extensive experiments on three publicly available datasets show that the proposed JTMDet framework achieves state-of-the-art multispectral object detection performance and compares favorably with current leading methods. Code and pre-trained models are publicly available at https://github.com/LiC2023/JTMDet.
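The abstract does not specify the internals of a JTMBlock beyond interleaving Transformer and Mamba layers. The sketch below illustrates that interleaving pattern on concatenated RGB/thermal tokens; the GatedConvMixer is a simplified gated-convolution stand-in for a true Mamba (selective state-space) layer, and the token/channel dimensions are assumptions.

```python
import torch
import torch.nn as nn

class GatedConvMixer(nn.Module):
    """Simplified stand-in for a Mamba-style sequence mixer (gated causal depthwise conv)."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=2, groups=dim)
        self.gate = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (B, N, C) token sequence
        h = self.norm(x)
        g = torch.sigmoid(self.gate(h))
        h = self.conv(h.transpose(1, 2))[..., : x.shape[1]].transpose(1, 2)  # causal mixing
        return x + self.proj(h * g)            # residual update

class JTMBlockSketch(nn.Module):
    """Interleaves a Transformer layer with a (stand-in) Mamba-style layer."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.ssm = GatedConvMixer(dim)

    def forward(self, rgb_tokens, thermal_tokens):
        fused = torch.cat([rgb_tokens, thermal_tokens], dim=1)  # concatenate modality tokens
        fused = self.attn(fused)               # global cross-modal interaction
        fused = self.ssm(fused)                # efficient sequential mixing
        return fused.chunk(2, dim=1)           # split back into the two modalities

rgb, thermal = torch.randn(2, 196, 256), torch.randn(2, 196, 256)
out_rgb, out_thermal = JTMBlockSketch()(rgb, thermal)
```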
{"title":"Joint Transformer and Mamba fusion for multispectral object detection","authors":"Chao Li, Xiaoming Peng","doi":"10.1016/j.imavis.2025.105468","DOIUrl":"10.1016/j.imavis.2025.105468","url":null,"abstract":"<div><div>Multispectral object detection is generally considered better than single-modality-based object detection, due to the complementary properties of multispectral image pairs. However, how to integrate features from images of different modalities for object detection is still an open problem. In this paper, we propose a new multispectral object detection framework based on the Transformer and Mamba architectures, called the joint Transformer and Mamba detection (JTMDet). Specifically, we divide the feature fusion process into two stages, the intra-scale fusion stage and the inter-scale fusion stage, to comprehensively utilize the multi-modal features at different scales. To this end, we designed the so-called cross-modal fusion (CMF) and cross-level fusion (CLF) modules, both of which contain JTMBlock modules. A JTMBlock module interweaves the Transformer and Mamba layers to robustly capture the useful information in multispectral image pairs while maintaining high inference speed. Extensive experiments on three publicly available datasets conclusively show that the proposed JTMDet framework achieves state-of-the-art multispectral object detection performance, and is competitive with current leading methods. Code and pre-trained models are publicly available at <span><span>https://github.com/LiC2023/JTMDet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"156 ","pages":"Article 105468"},"PeriodicalIF":4.2,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143563581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A spatial-frequency domain multi-branch decoder method for real-time semantic segmentation
Pub Date: 2025-02-26 | DOI: 10.1016/j.imavis.2025.105483
Liwei Deng, Boda Wu, Songyu Chen, Dongxue Li, Yanze Fang
Semantic segmentation is crucial for the functionality of autonomous driving systems. However, most existing real-time semantic segmentation models focus on encoder design and underutilize spatial- and frequency-domain information in the decoder, limiting segmentation accuracy. To address this problem, this paper proposes a multi-branch decoder network that combines the spatial and frequency domains to meet the real-time and accuracy requirements of road-scene semantic segmentation for autonomous driving systems. First, the network introduces a novel multi-scale dilated fusion block that gradually enlarges the receptive field through three consecutive dilated convolutions and integrates features from different levels using skip connections; a strategy of gradually reducing the number of channels is adopted to remove redundant features. Second, we design three branches for the decoder. The global branch uses a lightweight Transformer architecture to extract global features and employs horizontal and vertical convolutions to enable interaction among global features. The multi-scale branch combines dilated convolution and adaptive pooling to perform multi-scale feature extraction through fusion and post-processing. The wavelet-transform feature converter maps spatial-domain features into low-frequency and high-frequency components, which are then fused with the global and multi-scale features to enhance the model representation. Finally, we conduct experiments on multiple datasets. The results show that the proposed method achieves the best balance between segmentation accuracy and inference speed.
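As a concrete illustration of the multi-scale dilated fusion block described above, the following PyTorch sketch chains three dilated convolutions, adds skip connections from earlier stages, and gradually reduces the channel count. The dilation rates (1, 2, 4) and the channel schedule are assumptions, since the abstract does not give them.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedFusionBlock(nn.Module):
    """Three consecutive dilated convolutions with skip connections and gradual channel reduction."""
    def __init__(self, in_ch=256, out_ch=64):
        super().__init__()
        mid = in_ch // 2
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, mid, 3, padding=1, dilation=1),
                                   nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(mid, mid // 2, 3, padding=2, dilation=2),
                                   nn.BatchNorm2d(mid // 2), nn.ReLU(inplace=True))
        self.conv3 = nn.Sequential(nn.Conv2d(mid // 2, out_ch, 3, padding=4, dilation=4),
                                   nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # skip connections bring earlier (higher-channel) features back into the fusion
        self.skip1 = nn.Conv2d(in_ch, out_ch, 1)
        self.skip2 = nn.Conv2d(mid, out_ch, 1)
        self.fuse = nn.Conv2d(out_ch * 3, out_ch, 1)

    def forward(self, x):
        f1 = self.conv1(x)          # receptive field 3x3
        f2 = self.conv2(f1)         # receptive field grows with dilation 2
        f3 = self.conv3(f2)         # and again with dilation 4
        return self.fuse(torch.cat([f3, self.skip1(x), self.skip2(f1)], dim=1))

y = MultiScaleDilatedFusionBlock()(torch.randn(1, 256, 64, 64))  # -> (1, 64, 64, 64)
```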
{"title":"A spatial-frequency domain multi-branch decoder method for real-time semantic segmentation","authors":"Liwei Deng , Boda Wu , Songyu Chen , Dongxue Li , Yanze Fang","doi":"10.1016/j.imavis.2025.105483","DOIUrl":"10.1016/j.imavis.2025.105483","url":null,"abstract":"<div><div>Semantic segmentation is crucial for the functionality of autonomous driving systems. However, most of the existing real-time semantic segmentation models focus on encoder design and underutilize spatial and frequency domain information in the decoder, limiting the segmentation accuracy of the model. To solve this problem, this paper proposes a multi-branch decoder network combining spatial domain and frequency domain to meet the real-time and accuracy requirements of the semantic segmentation task of road scenes for autonomous driving systems. Firstly, the network introduces a novel multi-scale dilated fusion block that gradually enlarges the receptive field through three consecutive dilated convolutions, and integrates features from different levels using skip connections. At the same time, a strategy of gradually reducing the number of channels is adopted to effectively remove redundant features. Secondly, we design three branches for the decoder. The global branch utilizes a lightweight Transformer architecture to extract global features and employs horizontal and vertical convolutions to achieve interaction among global features. The multi-scale branch combines dilated convolution and adaptive pooling to perform multi-scale feature extraction through fusion and post-processing. The wavelet transform feature converter maps spatial domain features into low-frequency and high-frequency components, which are then fused with global and multi-scale features to enhance the model representation. Finally, we conduct experiments on multiple datasets. The experimental results show that the proposed method best balances segmentation accuracy and inference speed.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"156 ","pages":"Article 105483"},"PeriodicalIF":4.2,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143527561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SFFEF-YOLO: Small object detection network based on fine-grained feature extraction and fusion for unmanned aerial images
Pub Date: 2025-02-26 | DOI: 10.1016/j.imavis.2025.105469
Chenxi Bai, Kexin Zhang, Haozhe Jin, Peng Qian, Rui Zhai, Ke Lu
Object detection in unmanned aerial vehicle (UAV) images has emerged as a research hotspot, yet it remains a significant challenge due to variable target scales and the high proportion of small objects caused by UAVs' diverse altitudes and viewing angles. To address these issues, we propose SFFEF-YOLO, a novel small object detection network based on fine-grained feature extraction and fusion. First, we introduce a tiny-object prediction head to replace the large-object prediction head, enhancing detection accuracy for tiny objects while reducing model complexity. Second, we design a Fine-Grained Information Extraction Module (FIEM) to replace standard convolutions; this module improves feature extraction and reduces information loss during downsampling by using multi-branch operations and SPD-Conv. Third, we develop a Multi-Scale Feature Fusion Module (MFFM), which adds an extra skip-connection branch to the bidirectional feature pyramid network (BiFPN) to preserve fine-grained information and improve multi-scale feature fusion. We evaluated SFFEF-YOLO on the VisDrone2019-DET and UAVDT datasets. Compared with YOLOv8, SFFEF-YOLO achieves a 9.9% mAP0.5 improvement on VisDrone2019-DET and a 3.6% mAP0.5 improvement on UAVDT.
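SPD-Conv, referenced above as part of FIEM, replaces strided downsampling with a space-to-depth rearrangement followed by a non-strided convolution, so that no pixel information is discarded. A minimal sketch is given below; the channel widths and activation are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth followed by a non-strided convolution: downsampling without
    discarding pixels, used here in place of a strided conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spd = nn.PixelUnshuffle(2)                    # (B, C, H, W) -> (B, 4C, H/2, W/2)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch * 4, out_ch, 3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(self.spd(x))                      # spatial detail kept in channels

x = torch.randn(1, 64, 80, 80)
print(SPDConv(64, 128)(x).shape)                           # torch.Size([1, 128, 40, 40])
```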
{"title":"SFFEF-YOLO: Small object detection network based on fine-grained feature extraction and fusion for unmanned aerial images","authors":"Chenxi Bai , Kexin Zhang , Haozhe Jin , Peng Qian , Rui Zhai , Ke Lu","doi":"10.1016/j.imavis.2025.105469","DOIUrl":"10.1016/j.imavis.2025.105469","url":null,"abstract":"<div><div>Unmanned aerial vehicles (UAVs) images object detection has emerged as a research hotspot, yet remains a significant challenge due to variable target scales and the high proportion of small objects caused by UAVs’ diverse altitudes and angles. To address these issues, we propose a novel Small Object Detection Network Based on Fine-Grained Feature Extraction and Fusion(SFFEF-YOLO). First, we introduce a tiny prediction head to replace the large prediction head, enhancing the detection accuracy for tiny objects while reducing model complexity. Second, we design a Fine-Grained Information Extraction Module (FIEM) to replace standard convolutions. This module improves feature extraction and reduces information loss during downsampling by utilizing multi-branch operations and SPD-Conv. Third, we develop a Multi-Scale Feature Fusion Module (MFFM), which adds an additional skip connection branch based on the bidirectional feature pyramid network (BiFPN) to preserve fine-grained information and improve multi-scale feature fusion. We evaluated SFFEF-YOLO on the VisDrone2019-DET and UAVDT datasets. Compared to YOLOv8, experimental results demonstrate that SFFEF-YOLO achieves a 9.9% mAP0.5 improvement on the VisDrone2019-DET dataset and a 3.6% mAP0.5 improvement on the UAVDT dataset.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"156 ","pages":"Article 105469"},"PeriodicalIF":4.2,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143519440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AF2CN: Towards effective demoiréing from multi-resolution images
Pub Date: 2025-02-24 | DOI: 10.1016/j.imavis.2025.105467
Shitan Asu, Yujin Dai, Shijie Li, Zheng Li
Recently, CNN-based methods have gained significant attention for the demoiréing task due to their powerful feature extraction capabilities. However, these methods are generally trained on datasets with fixed resolutions, limiting their applicability to diverse real-world scenarios. To address this limitation, we introduce a more general task: effective demoiréing across multiple resolutions. To facilitate this task, we construct MTADM, the first multi-resolution moiré dataset, designed to capture diverse real-world scenarios. Leveraging this dataset, we conduct extensive studies and introduce the Adaptive Fractional Calculus and Adjacency Fusion Convolution Network (AF2CN). Specifically, we employ fractional derivatives to develop an adaptive frequency enhancement module, which refines the spatial distribution and texture details of moiré patterns. Additionally, we design a spatial attention gate to enhance deep feature interaction. Extensive experiments demonstrate that AF2CN effectively handles multi-resolution moiré patterns; it significantly outperforms previous state-of-the-art methods on fixed-resolution benchmarks while requiring fewer parameters and incurring lower computational cost.
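The abstract does not state how the fractional derivative is discretized. A common choice is the Grünwald-Letnikov approximation, sketched below as a short 1-D filter; the order alpha and filter length are assumed values for illustration only.

```python
import numpy as np

def gl_fractional_diff_kernel(alpha, length=8):
    """Grünwald-Letnikov coefficients w_k = (-1)^k * C(alpha, k), built by recursion."""
    w = np.zeros(length)
    w[0] = 1.0
    for k in range(1, length):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def fractional_derivative_1d(signal, alpha=0.5, length=8):
    """Apply the fractional-order backward difference along one axis (unit spacing assumed)."""
    w = gl_fractional_diff_kernel(alpha, length)
    padded = np.pad(signal, (length - 1, 0), mode="edge")
    return np.convolve(padded, w, mode="valid")

row = np.sin(np.linspace(0, 4 * np.pi, 64))        # a toy 1-D image row
enhanced = fractional_derivative_1d(row, alpha=0.5)
print(enhanced.shape)                               # (64,)
```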
{"title":"AF2CN: Towards effective demoiréing from multi-resolution images","authors":"Shitan Asu, Yujin Dai, Shijie Li, Zheng Li","doi":"10.1016/j.imavis.2025.105467","DOIUrl":"10.1016/j.imavis.2025.105467","url":null,"abstract":"<div><div>Recently, CNN-based methods have gained significant attention for addressing the demoiré task due to their powerful feature extraction capabilities. However, these methods are generally trained on datasets with fixed resolutions, limiting their applicability to diverse real-world scenarios. To address this limitation, we introduce a more generalized task: effective demoiréing across multiple resolutions. To facilitate this task, we constructed MTADM, the first multi-resolution moiré dataset, designed to capture diverse real-world scenarios. Leveraging this dataset, we conducted extensive studies and introduced the Adaptive Fractional Calculus and Adjacency Fusion Convolution Network (AF2CN). Specifically, we employ fractional derivatives to develop an adaptive frequency enhancement module, which refines spatial distribution and texture details in moiré patterns. Additionally, we design a spatial attention gate to enhance deep feature interaction. Extensive experiments demonstrate that AF2CN effectively handles multi-resolution moiré patterns. It significantly outperforms previous state-of-the-art methods on fixed-resolution benchmarks while requiring fewer parameters and achieving lower computational costs.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"156 ","pages":"Article 105467"},"PeriodicalIF":4.2,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143487618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NPVForensics: Learning VA correlations in non-critical phoneme–viseme regions for deepfake detection
Pub Date: 2025-02-23 | DOI: 10.1016/j.imavis.2025.105461
Yu Chen, Yang Yu, Rongrong Ni, Haoliang Li, Wei Wang, Yao Zhao
Advanced deepfake technology enables the manipulation of visual and audio signals within videos, leading to visual–audio (VA) inconsistencies. Current multimodal detectors primarily rely on VA contrastive learning to identify such inconsistencies, particularly in critical phoneme–viseme (PV) regions. However, state-of-the-art deepfake techniques have aligned critical PV pairs, thereby reducing the inconsistency traces on which existing methods rely. Due to technical constraints, forgers cannot fully synchronize VA in non-critical phoneme–viseme (NPV) regions. Consequently, we exploit inconsistencies in NPV regions as a general cue for deepfake detection. We propose NPVForensics, a two-stage VA correlation learning framework specifically designed to detect VA inconsistencies in NPV regions of deepfake videos. First, to better extract VA unimodal features, we utilize the Swin Transformer to capture long-term global dependencies, and the Local Feature Aggregation (LFA) module aggregates local features from the spatial and channel dimensions, thus preserving more comprehensive and subtle information. Second, the VA Correlation Learning (VACL) module enhances intra-modal augmentation and inter-modal information interaction, exploring intrinsic correlations between the two modalities; Representation Alignment is introduced for real videos to narrow the modality gap and effectively extract VA correlations. Finally, our model is pre-trained on real videos using a self-supervised strategy and fine-tuned for the deepfake detection task. We conducted extensive experiments on six widely used deepfake datasets: FaceForensics++, FakeAVCeleb, Celeb-DF-v2, DFDC, FaceShifter, and DeeperForensics-1.0. Our method achieves state-of-the-art performance in cross-manipulation generalization and robustness. Notably, our approach demonstrates superior performance on VA-coordinated datasets such as A2V, T2V-L, and T2V-S. This indicates that VA inconsistencies in NPV regions serve as a general cue for deepfake detection.
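The paper's VACL module is not specified in detail here; as a generic illustration of visual-audio correlation learning, the sketch below uses a symmetric InfoNCE objective that aligns per-clip visual and audio embeddings. The embedding dimension and temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def va_alignment_loss(visual_emb, audio_emb, temperature=0.07):
    """Symmetric InfoNCE loss that pulls visual/audio embeddings of the same clip together.
    A generic stand-in for a VA correlation learning objective, not the paper's exact loss."""
    v = F.normalize(visual_emb, dim=-1)          # (B, D)
    a = F.normalize(audio_emb, dim=-1)           # (B, D)
    logits = v @ a.t() / temperature             # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = va_alignment_loss(torch.randn(8, 512), torch.randn(8, 512))
```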
{"title":"NPVForensics: Learning VA correlations in non-critical phoneme–viseme regions for deepfake detection","authors":"Yu Chen , Yang Yu , Rongrong Ni , Haoliang Li , Wei Wang , Yao Zhao","doi":"10.1016/j.imavis.2025.105461","DOIUrl":"10.1016/j.imavis.2025.105461","url":null,"abstract":"<div><div>Advanced deepfake technology enables the manipulation of visual and audio signals within videos, leading to visual–audio (VA) inconsistencies. Current multimodal detectors primarily rely on VA contrastive learning to identify such inconsistencies, particularly in critical phoneme–viseme (PV) regions. However, state-of-the-art deepfake techniques have aligned critical PV pairs, thereby reducing the inconsistency traces on which existing methods rely. Due to technical constraints, forgers cannot fully synchronize VA in non-critical phoneme–viseme (NPV) regions. Consequently, we exploit inconsistencies in NPV regions as a general cue for deepfake detection. We propose NPVForensics, a two-stage VA correlation learning framework specifically designed to detect VA inconsistencies in NPV regions of deepfake videos. Firstly, to better extract VA unimodal features, we utilize the Swin Transformer to capture long-term global dependencies. Additionally, the Local Feature Aggregation (LFA) module aggregates local features from spatial and channel dimensions, thus preserving more comprehensive and subtle information. Secondly, the VA Correlation Learning (VACL) module enhances intra-modal augmentation and inter-modal information interaction, exploring intrinsic correlations between the two modalities. Moreover, Representation Alignment is introduced for real videos to narrow the modal gap and effectively extract VA correlations. Finally, our model is pre-trained on real videos using a self-supervised strategy and fine-tuned for the deepfake detection task. We conducted extensive experiments on six widely used deepfake datasets: FaceForensics++, FakeAVCeleb, Celeb-DF-v2, DFDC, FaceShifter, and DeeperForensics-1.0. Our method achieves state-of-the-art performance in cross-manipulation generalization and robustness. Notably, our approach demonstrates superior performance on VA-coordinated datasets such as A2V, T2V-L, and T2V-S. It indicates that VA inconsistencies in NPV regions serve as a general cue for deepfake detection.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"156 ","pages":"Article 105461"},"PeriodicalIF":4.2,"publicationDate":"2025-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143519335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature Field Fusion for few-shot novel view synthesis
Pub Date: 2025-02-22 | DOI: 10.1016/j.imavis.2025.105465
Junting Li, Yanghong Zhou, Jintu Fan, Dahua Shou, Sa Xu, P.Y. Mok
Reconstructing neural radiance fields from limited or sparse views has shown very promising potential. Previous methods usually constrain the reconstruction process with additional priors, e.g., semantic-based or patch-based regularization. However, such regularization is applied to the synthesis of unseen views, which may not effectively assist the learning of the field, particularly when the training views are sparse. Instead, we propose a feature Field Fusion (FFusion) NeRF that learns structure and finer details from features extracted by pre-trained neural networks from the sparse training views and uses them as an extra guide for training the RGB field. With such feature guidance, FFusion predicts more accurate color and density when synthesizing novel views. Experimental results show that FFusion effectively improves the quality of synthesized novel views with only limited or sparse inputs.
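To make the idea of an extra feature guide concrete, the sketch below extends a NeRF-style MLP with a feature head whose output can be supervised by features from a pre-trained 2D network, alongside the usual density and color heads. All layer sizes, including the 384-dimensional feature output, are assumptions rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn

class FeatureGuidedNeRF(nn.Module):
    """NeRF-style MLP with an extra head predicting a feature vector per 3D sample,
    to be supervised by features from a pre-trained 2D network."""
    def __init__(self, pos_dim=63, hidden=256, feat_dim=384):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
        )
        self.sigma_head = nn.Linear(hidden, 1)        # volume density
        self.rgb_head = nn.Linear(hidden, 3)          # color
        self.feat_head = nn.Linear(hidden, feat_dim)  # distilled 2D feature (extra guide)

    def forward(self, encoded_xyz):
        h = self.trunk(encoded_xyz)
        return self.sigma_head(h), torch.sigmoid(self.rgb_head(h)), self.feat_head(h)

sigma, rgb, feat = FeatureGuidedNeRF()(torch.randn(1024, 63))
# Training would combine an RGB rendering loss with a feature rendering loss, e.g.
# loss = mse(rendered_rgb, gt_rgb) + lambda_f * mse(rendered_feat, teacher_feat)
```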
{"title":"Feature Field Fusion for few-shot novel view synthesis","authors":"Junting Li , Yanghong Zhou , Jintu Fan , Dahua Shou , Sa Xu , P.Y. Mok","doi":"10.1016/j.imavis.2025.105465","DOIUrl":"10.1016/j.imavis.2025.105465","url":null,"abstract":"<div><div>Reconstructing neural radiance fields from limited or sparse views has given very promising potential for this field of research. Previous methods usually constrain the reconstruction process with additional priors, e.g. semantic-based or patch-based regularization. Nevertheless, such regularization is given to the synthesis of unseen views, which may not effectively assist the field of learning, in particular when the training views are sparse. Instead, we propose a feature Field Fusion (FFusion) NeRF in this paper that can learn structure and more details from features extracted from pre-trained neural networks for the sparse training views, and use as extra guide for the training of the RGB field. With such extra feature guides, FFusion predicts more accurate color and density when synthesizing novel views. Experimental results have shown that FFusion can effectively improve the quality of the synthesized novel views with only limited or sparse inputs.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"156 ","pages":"Article 105465"},"PeriodicalIF":4.2,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143473931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PolarDETR: Polar Parametrization for vision-based surround-view 3D detection
Pub Date: 2025-02-21 | DOI: 10.1016/j.imavis.2025.105438
Shaoyu Chen, Xinggang Wang, Tianheng Cheng, Qian Zhang, Chang Huang, Wenyu Liu
3D detection based on a surround-view camera system is a critical and promising technique for autonomous driving. In this work, we exploit the view symmetry of the surround-view camera system as an inductive bias to improve optimization and boost performance. We parameterize each object's position in polar coordinates and decompose its velocity along the radial and tangential directions; the perception range, label assignment, and loss function are reformulated accordingly in the polar coordinate system. This Polar Parametrization scheme establishes explicit associations between image patterns and prediction targets. Based on it, we propose a surround-view 3D detection method, termed PolarDETR, which achieves competitive performance on the nuScenes dataset. Thorough ablation studies are provided to validate its effectiveness.
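The core of the Polar Parametrization, converting a bird's-eye-view position to range/azimuth and decomposing velocity into radial and tangential components, can be written in a few lines. The helper below is an illustrative sketch, not code from the paper.

```python
import numpy as np

def to_polar_parametrization(x, y, vx, vy):
    """Convert a BEV position and velocity from Cartesian to a polar parametrization:
    range r, azimuth theta, and velocity decomposed into radial/tangential components."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    radial = np.array([np.cos(theta), np.sin(theta)])        # unit vector away from ego
    tangential = np.array([-np.sin(theta), np.cos(theta)])   # unit vector, 90 deg CCW
    v = np.array([vx, vy])
    return r, theta, float(v @ radial), float(v @ tangential)

r, theta, v_r, v_t = to_polar_parametrization(x=10.0, y=10.0, vx=2.0, vy=0.0)
print(round(r, 2), round(np.degrees(theta), 1), round(v_r, 2), round(v_t, 2))
# prints roughly: 14.14 45.0 1.41 -1.41
```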
{"title":"PolarDETR: Polar Parametrization for vision-based surround-view 3D detection","authors":"Shaoyu Chen , Xinggang Wang , Tianheng Cheng , Qian Zhang , Chang Huang , Wenyu Liu","doi":"10.1016/j.imavis.2025.105438","DOIUrl":"10.1016/j.imavis.2025.105438","url":null,"abstract":"<div><div>3D detection based on surround-view camera system is a critical and promising technique in autopilot. In this work, we exploit the view symmetry of surround-view camera system as inductive bias to improve optimization and boost performance. We parameterize object’s position by polar coordinate and decompose velocity along radial and tangential direction. And the perception range, label assignment and loss function are correspondingly reformulated in polar coordinate system. This new Polar Parametrization scheme establishes explicit associations between image patterns and prediction targets. Based on it, we propose a surround-view 3D detection method, termed PolarDETR. PolarDETR achieves competitive performance on nuScenes dataset. Thorough ablation studies are provided to validate the effectiveness.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"156 ","pages":"Article 105438"},"PeriodicalIF":4.2,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143471219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multispectral images reconstruction using median filtering based spectral correlation
Pub Date: 2025-02-21 | DOI: 10.1016/j.imavis.2025.105462
Vishwas Rathi, Abhilasha Sharma, Amit Kumar Singh
Multispectral images are widely used in various computer vision applications because they capture more information than traditional color images. Multispectral imaging systems rely on a multispectral filter array (MFA), an extension of the color filter array found in standard RGB cameras, which provides an efficient, cost-effective, and practical way to capture multispectral images. The primary challenge with MFA-based systems is the severe undersampling of spectral bands in the mosaicked image: a multispectral mosaic contains more spectral bands than an RGB mosaic, so the sampling density per band is lower. A multispectral demosaicing algorithm is therefore required to reconstruct the complete multispectral image from the mosaicked image, and its effectiveness relies heavily on exploiting the spatial and spectral correlations inherent in mosaicked images. In the proposed method, a binary-tree-based MFA pattern is employed to capture the mosaicked image. Rather than directly leveraging spectral correlations between bands, median filtering is applied to the spectral differences to mitigate the impact of noise on these correlations. Experimental results show that the proposed method achieves average improvements of 1.03 dB and 0.92 dB for 5-band to 10-band multispectral images on the widely used TokyoTech and CAVE datasets, respectively.
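As a rough illustration of median filtering applied to spectral differences, the sketch below reconstructs one sparsely sampled band from a fully reconstructed guide band. The nearest-neighbor interpolation, window size, and toy sampling mask are assumptions; the paper's binary-tree MFA pattern and full pipeline are not reproduced.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import median_filter

def reconstruct_band(sparse_band, mask, guide_band, ksize=5):
    """Fill in one sparsely sampled spectral band from a fully reconstructed guide band.
    The spectral difference (band - guide) is interpolated from the sampled pixels,
    median-filtered to suppress noisy correlation estimates, and added back to the guide."""
    ys, xs = np.nonzero(mask)
    diff_samples = sparse_band[ys, xs] - guide_band[ys, xs]     # spectral differences
    grid_y, grid_x = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    diff_full = griddata((ys, xs), diff_samples, (grid_y, grid_x), method="nearest")
    diff_full = median_filter(diff_full, size=ksize)            # robust spectral correlation
    return guide_band + diff_full

mask = np.zeros((32, 32), dtype=bool); mask[::4, ::4] = True    # toy 1/16 sampling pattern
guide = np.random.rand(32, 32)
band = np.clip(guide + 0.1, 0, 1)
print(reconstruct_band(band, mask, guide).shape)                # (32, 32)
```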
{"title":"Multispectral images reconstruction using median filtering based spectral correlation","authors":"Vishwas Rathi , Abhilasha Sharma , Amit Kumar Singh","doi":"10.1016/j.imavis.2025.105462","DOIUrl":"10.1016/j.imavis.2025.105462","url":null,"abstract":"<div><div>Multispectral images are widely utilized in various computer vision applications because they capture more information than traditional color images. Multispectral imaging systems utilize a multispectral filter array (MFA), an extension of the color filter array found in standard RGB cameras. This approach provides an efficient, cost-effective, and practical method for capturing multispectral images. The primary challenge with multispectral imaging systems using an MFA is the significant undersampling of spectral bands in the mosaicked image. This occurs because a multispectral mosaic image contains a greater number of spectral bands compared to an RGB mosaicked image, leading to reduced sampling density per band. Now, multispectral demosaicing algorithm is required to generate the complete multispectral image from the mosaicked image. The effectiveness of demosaicing algorithms relies heavily on the efficient utilization of spatial and spectral correlations inherent in mosaicked images. In the proposed method, a binary tree-based MFA pattern is employed to capture the mosaicked image. Rather than directly leveraging spectral correlations between bands, median filtering is applied to the spectral differences to mitigate the impact of noise on these correlations. Experimental results demonstrate that the proposed method achieves an improvement of 1.03 dB and 0.92 dB on average from 5-band to 10-band multispectral images from the widely used TokyoTech and CAVE datasets, respectively.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"156 ","pages":"Article 105462"},"PeriodicalIF":4.2,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143473818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gait recognition via View-aware Part-wise Attention and Multi-scale Dilated Temporal Extractor
Pub Date: 2025-02-20 | DOI: 10.1016/j.imavis.2025.105464
Xu Song, Yang Wang, Yan Huang, Caifeng Shan
Gait recognition based on silhouette sequences has made significant strides in recent years through the extraction of body-shape and motion features. However, accurate gait recognition under covariate changes, such as variations in view and clothing, remains challenging. To tackle these issues, this paper introduces a novel methodology incorporating a View-aware Part-wise Attention (VPA) mechanism and a Multi-scale Dilated Temporal Extractor (MDTE) to enhance gait recognition. Unlike existing techniques, the VPA mechanism acknowledges the differential sensitivity of body parts to view changes, applying targeted attention weights at the feature level to strengthen view-aware constraints in regions of higher saliency or distinctiveness. Concurrently, the MDTE employs dilated convolutions across multiple scales to capture the temporal dynamics of gait at diverse levels, thereby refining the motion representation. Comprehensive experiments on the CASIA-B, OU-MVLP, and Gait3D datasets validate the superior performance of our approach. Remarkably, our method achieves a 91.0% accuracy rate under clothing-change conditions on the CASIA-B dataset using silhouette information alone, surpassing current state-of-the-art (SOTA) techniques. These results underscore the effectiveness and adaptability of the proposed strategy in handling the complexities of gait recognition under covariate changes.
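A minimal sketch of a multi-scale dilated temporal extractor is given below: parallel temporal convolutions with different dilation rates applied to per-frame gait features and then fused. The dilation rates and channel sizes are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedTemporalExtractor(nn.Module):
    """Parallel temporal 1D convolutions with different dilation rates over frame-wise
    gait features, concatenated and fused to capture motion at several temporal scales."""
    def __init__(self, channels=256, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(channels, channels, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm1d(channels),
                nn.LeakyReLU(inplace=True),
            )
            for d in dilations
        )
        self.fuse = nn.Conv1d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):                      # x: (B, C, T) per-frame features
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

feats = torch.randn(4, 256, 30)                # features of a 30-frame silhouette sequence
print(MultiScaleDilatedTemporalExtractor()(feats).shape)   # torch.Size([4, 256, 30])
```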
{"title":"Gait recognition via View-aware Part-wise Attention and Multi-scale Dilated Temporal Extractor","authors":"Xu Song , Yang Wang , Yan Huang , Caifeng Shan","doi":"10.1016/j.imavis.2025.105464","DOIUrl":"10.1016/j.imavis.2025.105464","url":null,"abstract":"<div><div>Gait recognition based on silhouette sequences has made significant strides in recent years through the extraction of body shape and motion features. However, challenges remain in achieving accurate gait recognition under covariate changes, such as variations in view and clothing. To tackle these issues, this paper introduces a novel methodology incorporating a View-aware Part-wise Attention (VPA) mechanism and a Multi-scale Dilated Temporal Extractor (MDTE) to enhance gait recognition. Distinct from existing techniques, VPA mechanism acknowledges the differential sensitivity of various body parts to view changes, applying targeted attention weights at the feature level to improve the efficacy of view-aware constraints in areas of higher saliency or distinctiveness. Concurrently, MDTE employs dilated convolutions across multiple scales to capture the temporal dynamics of gait at diverse levels, thereby refining the motion representation. Comprehensive experiments on the CASIA-B, OU-MVLP, and Gait3D datasets validate the superior performance of our approach. Remarkably, our method achieves a 91.0% accuracy rate under clothing-change conditions on the CASIA-B dataset using solely silhouette information, surpassing the current state-of-the-art (SOTA) techniques. These results underscore the effectiveness and adaptability of our proposed strategy in overcoming the complexities of gait recognition amidst covariate changes.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"156 ","pages":"Article 105464"},"PeriodicalIF":4.2,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143527562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FRoundation: Are foundation models ready for face recognition?
Pub Date: 2025-02-19 | DOI: 10.1016/j.imavis.2025.105453
Tahar Chettaoui, Naser Damer, Fadi Boutros
Foundation models are predominantly trained in an unsupervised or self-supervised manner on highly diverse, large-scale datasets, making them broadly applicable to various downstream tasks. In this work, we investigate for the first time whether such models are suitable for the specific domain of face recognition (FR). We further propose and demonstrate the adaptation of these models for FR across different levels of data availability, including synthetic data. Extensive experiments are conducted on multiple foundation models and datasets of varying scales for training and fine-tuning, with evaluation on a wide range of benchmarks. Our results indicate that, despite their versatility, pre-trained foundation models tend to underperform in FR compared with similar architectures trained specifically for this task. However, fine-tuning foundation models yields promising results, often surpassing models trained from scratch, particularly when training data is limited. For example, after fine-tuning on only 1K identities, DINOv2 ViT-S achieved an average verification accuracy of 87.10% on the LFW, CALFW, CPLFW, CFP-FP, and AgeDB30 benchmarks, compared to 64.70% achieved by the same model without fine-tuning, while training the same architecture, ViT-S, from scratch on 1K identities reached 69.96%. With access to larger-scale FR training datasets, these performances reach 96.03% and 95.59% for the DINOv2 and CLIP ViT-L models, respectively. Compared with ViT-based architectures trained from scratch for FR, fine-tuned foundation models of the same architecture achieve similar performance while requiring lower training computational costs and not relying on the assumption of extensive data availability. We further demonstrate the use of synthetic face data, showing improved performance over both pre-trained foundation models and ViT models. Additionally, we examine demographic biases, noting slightly higher biases in certain settings when using foundation models compared to models trained from scratch. We release our code and pre-trained model weights at github.com/TaharChettaoui/FRoundation.
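A typical way to fine-tune a pre-trained ViT backbone for face recognition, as studied above, is to attach a margin-based classification head such as ArcFace. The sketch below shows this recipe with a stand-in backbone; the backbone, embedding size, margin, and scale values are assumptions and do not reproduce the paper's exact training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """ArcFace-style margin classifier commonly used to fine-tune a backbone for face
    recognition. The margin m and scale s are typical defaults, not the paper's values."""
    def __init__(self, emb_dim, num_ids, s=64.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_ids, emb_dim))
        self.s, self.m = s, m

    def forward(self, emb, labels):
        cos = F.linear(F.normalize(emb), F.normalize(self.weight)).clamp(-1 + 1e-7, 1 - 1e-7)
        target = torch.cos(torch.acos(cos) + self.m)             # add angular margin
        onehot = F.one_hot(labels, cos.size(1)).float()
        return self.s * (onehot * target + (1 - onehot) * cos)   # margin only on true class

# Illustrative fine-tuning step with an assumed backbone mapping face crops to embeddings
# (in practice this would be a pre-trained DINOv2 or CLIP ViT wrapped to output (B, 384)).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 384))  # stand-in backbone
head = ArcFaceHead(emb_dim=384, num_ids=1000)
opt = torch.optim.AdamW(list(backbone.parameters()) + list(head.parameters()), lr=1e-4)
images, ids = torch.randn(8, 3, 112, 112), torch.randint(0, 1000, (8,))
loss = F.cross_entropy(head(backbone(images), ids), ids)
loss.backward()
opt.step()
```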
{"title":"FRoundation: Are foundation models ready for face recognition?","authors":"Tahar Chettaoui , Naser Damer , Fadi Boutros","doi":"10.1016/j.imavis.2025.105453","DOIUrl":"10.1016/j.imavis.2025.105453","url":null,"abstract":"<div><div>Foundation models are predominantly trained in an unsupervised or self-supervised manner on highly diverse and large-scale datasets, making them broadly applicable to various downstream tasks. In this work, we investigate for the first time whether such models are suitable for the specific domain of face recognition (FR). We further propose and demonstrate the adaptation of these models for FR across different levels of data availability, including synthetic data. Extensive experiments are conducted on multiple foundation models and datasets of varying scales for training and fine-tuning, with evaluation on a wide range of benchmarks. Our results indicate that, despite their versatility, pre-trained foundation models tend to underperform in FR in comparison with similar architectures trained specifically for this task. However, fine-tuning foundation models yields promising results, often surpassing models trained from scratch, particularly when training data is limited. For example, after fine-tuning only on 1K identities, DINOv2 ViT-S achieved average verification accuracy on LFW, CALFW, CPLFW, CFP-FP, and AgeDB30 benchmarks of 87.10%, compared to 64.70% achieved by the same model and without fine-tuning. While training the same model architecture, ViT-S, from scratch on 1k identities reached 69.96%. With access to larger-scale FR training datasets, these performances reach 96.03% and 95.59% for the DINOv2 and CLIP ViT-L models, respectively. In comparison to the ViT-based architectures trained from scratch for FR, fine-tuned same architectures of foundation models achieve similar performance while requiring lower training computational costs and not relying on the assumption of extensive data availability. We further demonstrated the use of synthetic face data, showing improved performances over both pre-trained foundation and ViT models. Additionally, we examine demographic biases, noting slightly higher biases in certain settings when using foundation models compared to models trained from scratch. We release our code and pre-trained models’ weights at <span><span>github.com/TaharChettaoui/FRoundation</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"156 ","pages":"Article 105453"},"PeriodicalIF":4.2,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143510399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}