Pixel integration from fine to coarse for lightweight image super-resolution
Pub Date : 2025-02-01 DOI: 10.1016/j.imavis.2024.105362
Yuxiang Wu , Xiaoyan Wang , Xiaoyan Liu , Yuzhao Gao , Yan Dou
Recently, Transformer-based methods have made significant progress on image super-resolution. They encode long-range dependencies between image patches through the self-attention mechanism. However, extracting tokens from the entire feature map incurs an expensive computational cost. In this paper, we propose a novel lightweight image super-resolution approach, the pixel integration network (PIN). Specifically, our method employs fine pixel integration and coarse pixel integration over local and global receptive fields. In particular, coarse pixel integration is implemented by retractable attention, consisting of dense and sparse self-attention. To enrich features with contextual information, a spatial-gate mechanism and depth-wise convolution are introduced into the multi-layer perceptron. Besides, a spatial frequency fusion block is adopted at the end of deep feature extraction to obtain more comprehensive, detailed, and stable information. Extensive experiments demonstrate that PIN achieves state-of-the-art performance with few parameters on lightweight super-resolution.
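For a concrete picture of a spatial-gate mechanism combined with depth-wise convolution, here is a minimal PyTorch sketch; it is not the authors' code, and the module layout, expansion ratio, and names are illustrative assumptions:

```python
# Hedged sketch of a spatial-gated feed-forward block with depth-wise convolution.
import torch
import torch.nn as nn

class SpatialGateFFN(nn.Module):
    def __init__(self, dim, expansion=2):
        super().__init__()
        hidden = dim * expansion
        self.fc1 = nn.Conv2d(dim, hidden, kernel_size=1)
        # depth-wise convolution enriches each channel with local spatial context
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.act = nn.GELU()
        # one half of the channels gates the other half (spatial-gate mechanism)
        self.fc2 = nn.Conv2d(hidden // 2, dim, kernel_size=1)

    def forward(self, x):                        # x: (B, C, H, W)
        x = self.act(self.dwconv(self.fc1(x)))
        gate, value = x.chunk(2, dim=1)          # channel split
        return self.fc2(gate * value)            # element-wise spatial gating

x = torch.randn(1, 64, 48, 48)
print(SpatialGateFFN(64)(x).shape)               # torch.Size([1, 64, 48, 48])
```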
{"title":"Pixel integration from fine to coarse for lightweight image super-resolution","authors":"Yuxiang Wu , Xiaoyan Wang , Xiaoyan Liu , Yuzhao Gao , Yan Dou","doi":"10.1016/j.imavis.2024.105362","DOIUrl":"10.1016/j.imavis.2024.105362","url":null,"abstract":"<div><div>Recently, Transformer-based methods have made significant progress on image super-resolution. They encode long-range dependencies between image patches through self-attention mechanism. However, when extracting all tokens from the entire feature map, the computational cost is expensive. In this paper, we propose a novel lightweight image super-resolution approach, pixel integration network(PIN). Specifically, our method employs fine pixel integration and coarse pixel integration from local and global receptive field. In particular, coarse pixel integration is implemented by a retractable attention, consisting of dense and sparse self-attention. In order to focus on enriching features with contextual information, spatial-gate mechanism and depth-wise convolution are introduced to multi-layer perception. Besides, a spatial frequency fusion block is adopted to obtain more comprehensive, detailed, and stable information at the end of deep feature extraction. Extensive experiments demonstrate that PIN achieves the state-of-the-art performance with small parameters on lightweight super-resolution.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105362"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding document images by introducing explicit semantic information and short-range information interaction
Pub Date : 2025-02-01 DOI: 10.1016/j.imavis.2024.105392
Yufeng Cheng , Dongxue Wang , Shuang Bai , Jingkai Ma , Chen Liang , Kailong Liu , Tao Deng
Methods for the document visual question answering (DocVQA) task have achieved great success by using pre-trained multimodal models. However, two issues limit further improvement of their performance. On the one hand, previous methods do not use explicit semantic information for answer prediction. On the other hand, they predict answers based only on global information interaction results, which leads to low-quality answers. To address these issues, we propose to utilize document semantic segmentation to introduce explicit semantic information of documents into the DocVQA task, and we design a star-shaped topology structure to enable the interaction of different tokens in short-range contexts. In this way, we obtain token representations with richer multimodal and contextual information for the DocVQA task. With these two strategies, our method achieves 0.8430 ANLS (Average Normalized Levenshtein Similarity) on the test set of the DocVQA dataset, demonstrating its effectiveness.
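ANLS, the metric quoted above, has a standard definition in the DocVQA benchmark; the sketch below computes it for reference. The 0.5 threshold follows the common benchmark convention and is not code released with this paper.

```python
# Hedged reference sketch of ANLS (Average Normalized Levenshtein Similarity).
def levenshtein(a: str, b: str) -> int:
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def anls(predictions, ground_truths, tau=0.5):
    # predictions: list of strings; ground_truths: list of lists of accepted answers
    total = 0.0
    for pred, answers in zip(predictions, ground_truths):
        best = 0.0
        for gt in answers:
            nl = levenshtein(pred.lower(), gt.lower()) / max(len(pred), len(gt), 1)
            best = max(best, 1.0 - nl if nl < tau else 0.0)
        total += best
    return total / len(predictions)

print(anls(["invoice 42"], [["Invoice 42", "inv. 42"]]))  # 1.0
```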
{"title":"Understanding document images by introducing explicit semantic information and short-range information interaction","authors":"Yufeng Cheng , Dongxue Wang , Shuang Bai , Jingkai Ma , Chen Liang , Kailong Liu , Tao Deng","doi":"10.1016/j.imavis.2024.105392","DOIUrl":"10.1016/j.imavis.2024.105392","url":null,"abstract":"<div><div>Methods on the document visual question answering (DocVQA) task have achieved great success by using pre-trained multimodal models. However, two issues are limiting their performances from further improvement. On the one hand, previous methods didn't use explicit semantic information for answer prediction. On the other hand, these methods predict answers only based on global information interaction results and generate low-quality answers. To address the above issues, in this paper, we propose to utilize document semantic segmentation to introduce explicit semantic information of documents into the DocVQA task and design a star-shaped topology structure to enable the interaction of different tokens in short-range contexts. This way, we can obtain token representations with richer multimodal and contextual information for the DocVQA task. With these two strategies, our method can achieve 0.8430 ANLS (Average Normalized Levenshtein Similarity) on the test set of the DocVQA dataset, demonstrating the effectiveness of our method.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105392"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TPSFusion: A Transformer-based pyramid screening fusion network for 6D pose estimation
Pub Date : 2025-02-01 DOI: 10.1016/j.imavis.2024.105402
Jiaqi Zhu , Bin Li , Xinhua Zhao
RGB-D based 6D pose estimation is a key technology for autonomous driving and robotics applications. Recently, methods based on dense correspondence have achieved huge progress. However, they still suffer from a heavy computational burden and an insufficient combination of the two modalities. In this paper, we propose a novel 6D pose estimation algorithm (TPSFusion) based on a Transformer and multi-level pyramid fusion features. We first introduce a Multi-modal Features Fusion module, composed of a Multi-modal Attention Fusion block (MAF) and a Multi-level Screening-feature Fusion block (MSF), to enable high-quality cross-modality information interaction. Subsequently, we introduce a new weight estimation branch to calculate the contribution of different keypoints. Finally, our method achieves competitive results on the YCB-Video, LineMOD, and Occlusion LineMOD datasets.
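As a rough illustration of what cross-modality attention fusion between RGB and depth streams can look like, here is a hedged PyTorch sketch; the actual MAF/MSF designs are not specified in the abstract, so the head count, shapes, and names are assumptions:

```python
# Hedged sketch of cross-modality attention fusion (not the paper's MAF block).
import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, rgb_feat, depth_feat):
        # rgb_feat, depth_feat: (B, N, C) token sequences from the two encoders
        fused, _ = self.attn(query=rgb_feat, key=depth_feat, value=depth_feat)
        fused = self.norm(rgb_feat + fused)                       # residual RGB stream
        return self.proj(torch.cat([fused, depth_feat], dim=-1))  # merge both modalities

rgb = torch.randn(2, 1024, 128)
depth = torch.randn(2, 1024, 128)
print(CrossModalAttentionFusion(128)(rgb, depth).shape)  # torch.Size([2, 1024, 128])
```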
{"title":"TPSFusion: A Transformer-based pyramid screening fusion network for 6D pose estimation","authors":"Jiaqi Zhu , Bin Li , Xinhua Zhao","doi":"10.1016/j.imavis.2024.105402","DOIUrl":"10.1016/j.imavis.2024.105402","url":null,"abstract":"<div><div>RGB-D based 6D pose estimation is a key technology for autonomous driving and robotics applications. Recently, methods based on dense correspondence have achieved huge progress. However, it still suffers from heavy computational burden and insufficient combination of two modalities. In this paper, we propose a novel 6D pose estimation algorithm (TPSFusion) which is based on Transformer and multi-level pyramid fusion features. We first introduce a Multi-modal Features Fusion module, which is composed of the Multi-modal Attention Fusion block (MAF) and Multi-level Screening-feature Fusion block (MSF) to enable high-quality cross-modality information interaction. Subsequently, we introduce a new weight estimation branch to calculate the contribution of different keypoints. Finally, our method has competitive results on YCB-Video, LineMOD, and Occlusion LineMOD datasets.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105402"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-set data augmentation for semi-supervised medical image segmentation
Pub Date : 2025-02-01 DOI: 10.1016/j.imavis.2024.105407
Qianhao Wu , Xixi Jiang , Dong Zhang , Yifei Feng , Jinhui Tang
Medical image semantic segmentation is a fundamental yet challenging research task. Training a fully supervised model for this task requires a substantial amount of pixel-level annotated data, which poses a significant burden on annotators because specialized medical expertise is required. To mitigate the labeling burden, semi-supervised medical image segmentation models that leverage a small quantity of labeled data together with a substantial amount of unlabeled data have attracted prominent attention. However, the performance of current methods is constrained by the distribution mismatch between the limited labeled and the unlabeled datasets. To address this issue, we propose a cross-set data augmentation strategy aimed at minimizing the feature divergence between labeled and unlabeled data. Our approach mixes labeled and unlabeled data and integrates ground truth with pseudo-labels to produce augmented samples. By employing three distinct cross-set data augmentation strategies, we enhance the diversity of the training dataset and fully exploit the perturbation space. Experimental results on COVID-19 CT data, spinal cord gray matter MRI data, and prostate T2-weighted MRI data substantiate the efficacy of the proposed approach. The code has been released at: CDA.
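One plausible instance of cross-set augmentation is a Mixup-style blend between a labeled image/ground-truth pair and an unlabeled image with its pseudo-label. The sketch below illustrates only that general idea under stated assumptions; it is not one of the paper's three specific strategies.

```python
# Hedged sketch of a cross-set Mixup between labeled and unlabeled samples.
import torch

def cross_set_mixup(x_l, y_l, x_u, y_pseudo, alpha=0.5):
    """x_l, x_u: (B, C, H, W) images; y_l, y_pseudo: (B, K, H, W) soft/one-hot masks."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_aug = lam * x_l + (1.0 - lam) * x_u        # mix labeled and unlabeled images
    y_aug = lam * y_l + (1.0 - lam) * y_pseudo   # mix ground truth and pseudo-labels
    return x_aug, y_aug

x_l, x_u = torch.rand(4, 1, 256, 256), torch.rand(4, 1, 256, 256)
y_l = torch.randint(0, 2, (4, 2, 256, 256)).float()
y_p = torch.rand(4, 2, 256, 256)
x_aug, y_aug = cross_set_mixup(x_l, y_l, x_u, y_p)
print(x_aug.shape, y_aug.shape)
```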
{"title":"Cross-set data augmentation for semi-supervised medical image segmentation","authors":"Qianhao Wu , Xixi Jiang , Dong Zhang , Yifei Feng , Jinhui Tang","doi":"10.1016/j.imavis.2024.105407","DOIUrl":"10.1016/j.imavis.2024.105407","url":null,"abstract":"<div><div>Medical image semantic segmentation is a fundamental yet challenging research task. However, training a fully supervised model for this task requires a substantial amount of pixel-level annotated data, which poses a significant challenge for annotators due to the necessity of specialized medical expert knowledge. To mitigate the labeling burden, a semi-supervised medical image segmentation model that leverages both a small quantity of labeled data and a substantial amount of unlabeled data has attracted prominent attention. However, the performance of current methods is constrained by the distribution mismatch problem between limited labeled and unlabeled datasets. To address this issue, we propose a cross-set data augmentation strategy aimed at minimizing the feature divergence between labeled and unlabeled data. Our approach involves mixing labeled and unlabeled data, as well as integrating ground truth with pseudo-labels to produce augmented samples. By employing three distinct cross-set data augmentation strategies, we enhance the diversity of the training dataset and fully exploit the perturbation space. Our experimental results on COVID-19 CT data, spinal cord gray matter MRI data and prostate T2-weighted MRI data substantiate the efficacy of our proposed approach. The code has been released at: <span><span>CDA</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105407"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AGSAM-Net: UAV route planning and visual guidance model for bridge surface defect detection
Pub Date : 2025-02-01 DOI: 10.1016/j.imavis.2025.105416
Rongji Li, Ziqian Wang
Crack width is a critical indicator of bridge structural health. This paper proposes a UAV-based method for detecting bridge surface defects and quantifying crack width, aiming to improve efficiency and accuracy. The system integrates a UAV with a visual navigation system to capture high-resolution images (7322 × 5102 pixels) and GPS data, followed by image resolution computation and plane correction. For crack detection and segmentation, we introduce AGSAM-Net, a multi-class semantic segmentation network enhanced with attention gating to accurately identify and segment cracks at the pixel level. The system processes 8064 × 6048 pixel images in 2.4 s, with a detection time of 0.5 s per 540 × 540 pixel crack bounding box. By incorporating distance data, the system achieves over 90% accuracy in crack width quantification across multiple datasets. The study also explores potential collaboration with robotic arms, offering new insights into automated bridge maintenance.
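As a hedged back-of-the-envelope example of how a crack width measured in pixels can be converted to a physical width once the imaging distance is known: the sketch below uses a simple pinhole-camera assumption, and all numbers (focal length, pixel pitch) are illustrative, not taken from the paper.

```python
# Hedged example: pixel-to-millimetre crack width conversion under a pinhole model.
def pixel_size_on_surface(distance_m, focal_length_mm, pixel_pitch_um):
    """Size of one pixel projected onto the bridge surface, in mm."""
    return (pixel_pitch_um * 1e-3) * (distance_m * 1e3) / focal_length_mm

def crack_width_mm(width_px, distance_m, focal_length_mm=35.0, pixel_pitch_um=2.4):
    return width_px * pixel_size_on_surface(distance_m, focal_length_mm, pixel_pitch_um)

# e.g. a crack 6 px wide, imaged from 3 m with a 35 mm lens and 2.4 um pixels
print(round(crack_width_mm(6, 3.0), 3), "mm")  # ~1.234 mm
```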
{"title":"AGSAM-Net: UAV route planning and visual guidance model for bridge surface defect detection","authors":"Rongji Li, Ziqian Wang","doi":"10.1016/j.imavis.2025.105416","DOIUrl":"10.1016/j.imavis.2025.105416","url":null,"abstract":"<div><div>Crack width is a critical indicator of bridge structural health. This paper proposes a UAV-based method for detecting bridge surface defects and quantifying crack width, aiming to improve efficiency and accuracy. The system integrates a UAV with a visual navigation system to capture high-resolution images (7322 × 5102 pixels) and GPS data, followed by image resolution computation and plane correction. For crack detection and segmentation, we introduce AGSAM-Net, a multi-class semantic segmentation network enhanced with attention gating to accurately identify and segment cracks at the pixel level. The system processes 8064 × 6048 pixel images in 2.4 s, with a detection time of 0.5 s per 540 × 540 pixel crack bounding box. By incorporating distance data, the system achieves over 90% accuracy in crack width quantification across multiple datasets. The study also explores potential collaboration with robotic arms, offering new insights into automated bridge maintenance.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105416"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Co-salient object detection with consensus mining and consistency cross-layer interactive decoding
Pub Date : 2025-02-01 DOI: 10.1016/j.imavis.2025.105414
Yanliang Ge , Jinghuai Pan , Junchao Ren , Min He , Hongbo Bi , Qiao Zhang
The main goal of co-salient object detection (CoSOD) is to extract the group of notable objects that appear together across a group of images. Existing methods face two major challenges: first, in complex scenes or in the presence of other interfering salient objects, the mining of consensus cues for co-salient objects is inadequate; second, other methods feed consensus cues into the decoder from top to bottom, which ignores the compactness of the consensus and lacks cross-layer interaction. To solve these problems, we propose a consensus mining and consistency cross-layer interactive decoding network, called CCNet, which consists of two key components: a consensus cue mining module (CCM) and a consistency cross-layer interactive decoder (CCID). Specifically, CCM fully mines the cross-consensus clues among the co-salient objects in the image group, so as to model group consistency across the images. Furthermore, CCID accepts features of different levels as input and receives group-consensus semantic information from CCM, which guides features of other levels to learn higher-level representations and to perform cross-layer interaction of group semantic consensus clues, thereby maintaining the consistency of group consensus cues and enabling accurate co-saliency map prediction. We evaluated the proposed CCNet using four widely accepted metrics on three challenging CoSOD datasets, and the experimental results demonstrate that our approach outperforms existing state-of-the-art CoSOD methods, particularly on the CoSal2015 and CoSOD3k datasets. The results of our method are available at https://github.com/jinghuaipan/CCNet.
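For intuition, a common way to model group consensus in CoSOD is to pool image-level embeddings into a shared consensus vector and correlate it with each feature map. The sketch below conveys only this generic idea and is not the actual CCM design; all names and shapes are assumptions.

```python
# Hedged sketch of generic group-consensus re-weighting for CoSOD features.
import torch
import torch.nn.functional as F

def consensus_attention(feats):
    """feats: (N, C, H, W) deep features of the N images in one group."""
    embeddings = F.adaptive_avg_pool2d(feats, 1).flatten(1)    # (N, C) image embeddings
    consensus = F.normalize(embeddings.mean(dim=0), dim=0)     # shared consensus cue (C,)
    feats_n = F.normalize(feats, dim=1)
    sim = torch.einsum("nchw,c->nhw", feats_n, consensus)      # cosine similarity maps
    return feats * sim.unsqueeze(1)                            # emphasise co-salient regions

group = torch.randn(5, 256, 32, 32)
print(consensus_attention(group).shape)  # torch.Size([5, 256, 32, 32])
```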
{"title":"Co-salient object detection with consensus mining and consistency cross-layer interactive decoding","authors":"Yanliang Ge , Jinghuai Pan , Junchao Ren , Min He , Hongbo Bi , Qiao Zhang","doi":"10.1016/j.imavis.2025.105414","DOIUrl":"10.1016/j.imavis.2025.105414","url":null,"abstract":"<div><div>The main goal of co-salient object detection (CoSOD) is to extract a group of notable objects that appear together in the image. The existing methods face two major challenges: the first is that in some complex scenes or in the case of interference by other salient objects, the mining of consensus cues for co-salient objects is inadequate; the second is that other methods input consensus cues from top to bottom into the decoder, which ignores the compactness of the consensus and lacks cross-layer interaction. To solve the above problems, we propose a consensus mining and consistency cross-layer interactive decoding network, called CCNet, which consists of two key components, namely, a consensus cue mining module (CCM) and a consistency cross-layer interactive decoder (CCID). Specifically, the purpose of CCM is to fully mine the cross-consensus clues among the co-salient objects in the image group, so as to achieve the group consistency modeling of the group of images. Furthermore, CCID accepts features of different levels as input and receives semantic information of group consensus from CCM, which is used to guide features of other levels to learn higher-level feature representations and cross-layer interaction of group semantic consensus clues, thereby maintaining the consistency of group consensus cues and enabling accurate co-saliency map prediction. We evaluated the proposed CCNet using four widely accepted metrics across three challenging CoSOD datasets and the experimental results demonstrate that our proposed approach outperforms other existing state-of-the-art CoSOD methods, particularly on the CoSal2015 and CoSOD3k datasets. The results of our method are available at <span><span>https://github.com/jinghuaipan/CCNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105414"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transparency and privacy measures of biometric patterns for data processing with synthetic data using explainable artificial intelligence
Pub Date : 2025-02-01 DOI: 10.1016/j.imavis.2025.105429
Achyut Shankar , Hariprasath Manoharan , Adil O. Khadidos , Alaa O. Khadidos , Shitharth Selvarajan , S.B. Goyal
In this paper, the need for biometric authentication with synthetic data is analyzed with the aim of increasing data security in transmission systems. As more biometric patterns are represented, the complexity of recognition changes, and only low-security features are enabled in the transmission process. Hence, security is increased using image-based biometric patterns, where synthetic data is created with an explainable artificial intelligence technique so that appropriate decisions can be made. Further, sample data is generated for each case, so that changing representations are minimized as the set of original image values increases. Moreover, the data flow for each identified biometric pattern is increased, and partially decisive strategies are followed in the proposed approach. Furthermore, the complete interpretabilities present in captured images or biometric patterns are reduced, and the generated data is maximized for all end users. To verify the outcome of the proposed approach, four scenarios with comparative performance metrics are simulated; the comparative analysis shows that the proposed approach is less robust and less complex, at rates of 4% and 6%, respectively.
{"title":"Transparency and privacy measures of biometric patterns for data processing with synthetic data using explainable artificial intelligence","authors":"Achyut Shankar , Hariprasath Manoharan , Adil O. Khadidos , Alaa O. Khadidos , Shitharth Selvarajan , S.B. Goyal","doi":"10.1016/j.imavis.2025.105429","DOIUrl":"10.1016/j.imavis.2025.105429","url":null,"abstract":"<div><div>In this paper the need of biometric authentication with synthetic data is analyzed for increasing the security of data in each transmission systems. Since more biometric patterns are represented the complexity of recognition changes where low security features are enabled in transmission process. Hence the process of increasing security is carried out with image biometric patterns where synthetic data is created with explainable artificial intelligence technique thereby appropriate decisions are made. Further sample data is generated at each case thereby all changing representations are minimized with increase in original image set values. Moreover the data flows at each identified biometric patterns are increased where partial decisive strategies are followed in proposed approach. Further more complete interpretabilities that are present in captured images or biometric patterns are reduced thus generated data is maximized to all end users. To verify the outcome of proposed approach four scenarios with comparative performance metrics are simulated where from the comparative analysis it is found that the proposed approach is less robust and complex at a rate of 4% and 6% respectively.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105429"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143139146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FPDIoU Loss: A loss function for efficient bounding box regression of rotated object detection
Pub Date : 2025-02-01 DOI: 10.1016/j.imavis.2024.105381
Siliang Ma , Yong Xu
Bounding box regression is one of the important steps of object detection. However, rotation detectors often involve a more complicated loss based on SkewIoU, which is unfriendly to gradient-based training. Most existing loss functions for rotated object detection calculate the difference between two bounding boxes by focusing only on the deviation of the area or on point-wise distances (e.g., L_{Smooth-L1}, L_{RotatedIoU} and L_{PIoU}), and the calculation process of some loss functions is extremely complex (e.g., L_{KFIoU}). In order to improve the efficiency and accuracy of bounding box regression for rotated object detection, we propose a novel metric for arbitrary-shape comparison based on minimum points distance, which takes into account most of the factors from existing loss functions for rotated object detection, i.e., the overlapping or non-overlapping area, the central points distance, and the rotation angle. We also propose a loss function called L_{FPDIoU}, based on four-points distance, for accurate bounding box regression focusing on faster and higher-quality anchor boxes. In the experiments, FPDIoU loss has been applied to training state-of-the-art rotated object detection models (e.g., RTMDET, H2RBox) on three popular rotated object detection benchmarks (DOTA, DIOR, HRSC2016) and two arbitrary-orientation scene text detection benchmarks (ICDAR 2017 RRC-MLT and ICDAR 2019 RRC-MLT), achieving better performance than existing loss functions. The code is available at https://github.com/JacksonMa618/FPDIoU
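As a hedged illustration of the four-points-distance idea, the sketch below penalizes normalized distances between corresponding corners of predicted and target rotated boxes. The official L_{FPDIoU} formulation may differ (for instance, in how the overlap term and normalization enter), so treat this as a conceptual example only.

```python
# Hedged sketch of a corner-distance penalty for rotated box regression.
import torch

def corners(cx, cy, w, h, theta):
    """Return the 4 corners (N, 4, 2) of rotated boxes given centre, size, angle."""
    cos, sin = torch.cos(theta), torch.sin(theta)
    dx = torch.stack([ w / 2,  w / 2, -w / 2, -w / 2], dim=-1)
    dy = torch.stack([ h / 2, -h / 2, -h / 2,  h / 2], dim=-1)
    x = cx.unsqueeze(-1) + dx * cos.unsqueeze(-1) - dy * sin.unsqueeze(-1)
    y = cy.unsqueeze(-1) + dx * sin.unsqueeze(-1) + dy * cos.unsqueeze(-1)
    return torch.stack([x, y], dim=-1)

def four_point_distance_loss(pred, target, img_w, img_h):
    """pred, target: (N, 5) boxes as (cx, cy, w, h, angle in radians)."""
    p = corners(*pred.unbind(-1))
    t = corners(*target.unbind(-1))
    d2 = ((p - t) ** 2).sum(-1)          # squared corner distances, (N, 4)
    norm = img_w ** 2 + img_h ** 2       # normalise by the squared image diagonal
    return (d2 / norm).mean()

pred = torch.tensor([[50., 50., 30., 10., 0.2]])
target = torch.tensor([[52., 49., 28., 12., 0.1]])
print(four_point_distance_loss(pred, target, 640, 640))
```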
{"title":"FPDIoU Loss: A loss function for efficient bounding box regression of rotated object detection","authors":"Siliang Ma , Yong Xu","doi":"10.1016/j.imavis.2024.105381","DOIUrl":"10.1016/j.imavis.2024.105381","url":null,"abstract":"<div><div>Bounding box regression is one of the important steps of object detection. However, rotation detectors often involve a more complicated loss based on SkewIoU which is unfriendly to gradient-based training. Most of the existing loss functions for rotated object detection calculate the difference between two bounding boxes only focus on the deviation of area or each points distance (e.g., <span><math><msub><mrow><mi>L</mi></mrow><mrow><mi>S</mi><mi>m</mi><mi>o</mi><mi>o</mi><mi>t</mi><mi>h</mi><mo>−</mo><mi>L</mi><mn>1</mn></mrow></msub></math></span>, <span><math><msub><mrow><mi>L</mi></mrow><mrow><mi>R</mi><mi>o</mi><mi>t</mi><mi>a</mi><mi>t</mi><mi>e</mi><mi>d</mi><mi>I</mi><mi>o</mi><mi>U</mi></mrow></msub></math></span> and <span><math><msub><mrow><mi>L</mi></mrow><mrow><mi>P</mi><mi>I</mi><mi>o</mi><mi>U</mi></mrow></msub></math></span>). The calculation process of some loss functions is extremely complex (e.g. <span><math><msub><mrow><mi>L</mi></mrow><mrow><mi>K</mi><mi>F</mi><mi>I</mi><mi>o</mi><mi>U</mi></mrow></msub></math></span>). In order to improve the efficiency and accuracy of bounding box regression for rotated object detection, we proposed a novel metric for arbitrary shapes comparison based on minimum points distance, which takes most of the factors from existing loss functions for rotated object detection into account, i.e., the overlap or nonoverlapping area, the central points distance and the rotation angle. We also proposed a loss function called <span><math><msub><mrow><mi>L</mi></mrow><mrow><mi>F</mi><mi>P</mi><mi>D</mi><mi>I</mi><mi>o</mi><mi>U</mi></mrow></msub></math></span> based on four points distance for accurate bounding box regression focusing on faster and high quality anchor boxes. In the experiments, <span><math><mrow><mi>F</mi><mi>P</mi><mi>D</mi><mi>I</mi><mi>o</mi><mi>U</mi></mrow></math></span> loss has been applied to state-of-the-art rotated object detection (e.g., RTMDET, H2RBox) models training with three popular benchmarks of rotated object detection including DOTA, DIOR, HRSC2016 and two benchmarks of arbitrary orientation scene text detection including ICDAR 2017 RRC-MLT and ICDAR 2019 RRC-MLT, which achieves better performance than existing loss functions. The code is available at <span><span>https://github.com/JacksonMa618/FPDIoU</span><svg><path></path></svg></span></div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105381"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GPLM: Enhancing underwater images with Global Pyramid Linear Modulation
Pub Date : 2025-02-01 DOI: 10.1016/j.imavis.2024.105361
Jinxin Shao, Haosu Zhang, Jianming Miao
Underwater imagery often suffers from challenges such as color distortion, low contrast, blurring, and noise due to the absorption and scattering of light in water. These degradations complicate visual interpretation and hinder subsequent image processing. Existing methods struggle to effectively address the complex, spatially varying degradations without prior environmental knowledge or may produce unnatural enhancements. To overcome these limitations, we propose a novel method called Global Pyramid Linear Modulation that integrates physical degradation modeling with deep learning for underwater image enhancement. Our approach extends Feature-wise Linear Modulation to a four-dimensional structure, enabling fine-grained, spatially adaptive modulation of feature maps. Our method captures multi-scale contextual information by incorporating a feature pyramid architecture with self-attention and feature fusion mechanisms, effectively modeling complex degradations. We validate our method by integrating it into the MixDehazeNet model and conducting experiments on benchmark datasets. Our approach significantly improves the Peak Signal-to-Noise Ratio, increasing from 28.6 dB to 30.6 dB on the EUVP-515-test dataset. Compared to recent state-of-the-art methods, our method consistently outperforms them by over 3 dB in PSNR on datasets with ground truth. It improves the Underwater Image Quality Measure by more than one on datasets without ground truth. Furthermore, we demonstrate the practical applicability of our method on a real-world underwater dataset, achieving substantial improvements in image quality metrics and visually compelling results. These experiments confirm that our method effectively addresses the limitations of existing techniques by adaptively modeling complex underwater degradations, highlighting its potential for underwater image enhancement tasks.
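For reference, standard Feature-wise Linear Modulation (FiLM) predicts a per-channel scale and shift; a spatially adaptive variant, in the spirit of the four-dimensional extension described above, predicts per-pixel modulation maps. The sketch below is an assumption-laden illustration, not the GPLM module itself.

```python
# Hedged sketch of spatially adaptive feature-wise linear modulation.
import torch
import torch.nn as nn

class SpatialFiLM(nn.Module):
    def __init__(self, feat_dim, cond_dim):
        super().__init__()
        # predict gamma and beta maps from a conditioning feature map
        self.to_gamma_beta = nn.Conv2d(cond_dim, 2 * feat_dim, kernel_size=3, padding=1)

    def forward(self, x, cond):
        # x: (B, C, H, W) features; cond: (B, C_cond, H, W) conditioning features
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=1)
        return gamma * x + beta          # feature-wise linear modulation, per pixel

x = torch.randn(1, 64, 32, 32)
cond = torch.randn(1, 16, 32, 32)
print(SpatialFiLM(64, 16)(x, cond).shape)  # torch.Size([1, 64, 32, 32])
```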
{"title":"GPLM: Enhancing underwater images with Global Pyramid Linear Modulation","authors":"Jinxin Shao, Haosu Zhang, Jianming Miao","doi":"10.1016/j.imavis.2024.105361","DOIUrl":"10.1016/j.imavis.2024.105361","url":null,"abstract":"<div><div>Underwater imagery often suffers from challenges such as color distortion, low contrast, blurring, and noise due to the absorption and scattering of light in water. These degradations complicate visual interpretation and hinder subsequent image processing. Existing methods struggle to effectively address the complex, spatially varying degradations without prior environmental knowledge or may produce unnatural enhancements. To overcome these limitations, we propose a novel method called Global Pyramid Linear Modulation that integrates physical degradation modeling with deep learning for underwater image enhancement. Our approach extends Feature-wise Linear Modulation to a four-dimensional structure, enabling fine-grained, spatially adaptive modulation of feature maps. Our method captures multi-scale contextual information by incorporating a feature pyramid architecture with self-attention and feature fusion mechanisms, effectively modeling complex degradations. We validate our method by integrating it into the MixDehazeNet model and conducting experiments on benchmark datasets. Our approach significantly improves the Peak Signal-to-Noise Ratio, increasing from 28.6 dB to 30.6 dB on the EUVP-515-test dataset. Compared to recent state-of-the-art methods, our method consistently outperforms them by over 3 dB in PSNR on datasets with ground truth. It improves the Underwater Image Quality Measure by more than one on datasets without ground truth. Furthermore, we demonstrate the practical applicability of our method on a real-world underwater dataset, achieving substantial improvements in image quality metrics and visually compelling results. These experiments confirm that our method effectively addresses the limitations of existing techniques by adaptively modeling complex underwater degradations, highlighting its potential for underwater image enhancement tasks.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105361"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IRPE: Instance-level reconstruction-based 6D pose estimator
Pub Date : 2025-02-01 DOI: 10.1016/j.imavis.2024.105340
Le Jin , Guoshun Zhou , Zherong Liu , Yuanchao Yu , Teng Zhang , Minghui Yang , Jun Zhou
The estimation of an object’s 6D pose is a fundamental task in modern commercial and industrial applications. Vision-based pose estimation has gained popularity due to its cost-effectiveness and ease of setup in the field. However, this type of estimation tends to be less robust than other methods because of its sensitivity to the operating environment. For instance, in robot manipulation applications, heavy occlusion and clutter are common and pose significant challenges. For safety and robustness in industrial environments, depth information is often leveraged instead of relying solely on RGB images. Nevertheless, even with depth information, 6D pose estimation in such scenarios remains challenging. In this paper, we introduce a novel 6D pose estimation method that promotes the network’s learning of high-level object features through self-supervised learning and instance reconstruction. The feature representation of the reconstructed instance is subsequently utilized in direct 6D pose regression via a multi-task learning scheme. As a result, the proposed method can differentiate and retrieve each object instance from a scene that is heavily occluded and cluttered, thereby surpassing conventional pose estimators in such scenarios. Additionally, owing to the standardized prediction of the reconstructed image, our estimator exhibits robust performance against variations in lighting conditions and color drift. This is a significant improvement over traditional methods that depend on pixel-level sparse or dense features. We demonstrate that our method achieves state-of-the-art performance (e.g., 85.4% on LM-O) on the most commonly used benchmarks with respect to the ADD(-S) metric. Lastly, we present a CLIP dataset that emulates intense occlusion scenarios in industrial environments and conduct a real-world experiment for manipulation applications to verify the effectiveness and robustness of our proposed method.
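As a hedged sketch of the multi-task idea described above (a shared encoder feeding both an instance-reconstruction decoder and a direct pose-regression head), with all layer sizes, the pose parameterization, and the loss weighting being illustrative assumptions rather than the paper's architecture:

```python
# Hedged sketch of joint instance reconstruction and direct 6D pose regression.
import torch
import torch.nn as nn

class ReconstructionPoseNet(nn.Module):
    def __init__(self, in_ch=4, feat=64):            # assumed RGB-D input (4 channels)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(                 # instance reconstruction branch
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, 3, 4, stride=2, padding=1))
        self.pose_head = nn.Sequential(               # direct 6D pose regression branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat, 7))                       # quaternion (4) + translation (3)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.pose_head(z)

def multitask_loss(recon, target_img, pose, target_pose, w_recon=1.0, w_pose=1.0):
    return w_recon * nn.functional.l1_loss(recon, target_img) + \
           w_pose * nn.functional.smooth_l1_loss(pose, target_pose)

x = torch.randn(2, 4, 64, 64)
recon, pose = ReconstructionPoseNet()(x)
print(recon.shape, pose.shape)  # torch.Size([2, 3, 64, 64]) torch.Size([2, 7])
```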
{"title":"IRPE: Instance-level reconstruction-based 6D pose estimator","authors":"Le Jin , Guoshun Zhou , Zherong Liu , Yuanchao Yu , Teng Zhang , Minghui Yang , Jun Zhou","doi":"10.1016/j.imavis.2024.105340","DOIUrl":"10.1016/j.imavis.2024.105340","url":null,"abstract":"<div><div>The estimation of an object’s 6D pose is a fundamental task in modern commercial and industrial applications. Vision-based pose estimation has gained popularity due to its cost-effectiveness and ease of setup in the field. However, this type of estimation tends to be less robust compared to other methods due to its sensitivity to the operating environment. For instance, in robot manipulation applications, heavy occlusion and clutter are common, posing significant challenges. For safety and robustness in industrial environments, depth information is often leveraged instead of relying solely on RGB images. Nevertheless, even with depth information, 6D pose estimation in such scenarios still remains challenging. In this paper, we introduce a novel 6D pose estimation method that promotes the network’s learning of high-level object features through self-supervised learning and instance reconstruction. The feature representation of the reconstructed instance is subsequently utilized in direct 6D pose regression via a multi-task learning scheme. As a result, the proposed method can differentiate and retrieve each object instance from a scene that is heavily occluded and cluttered, thereby surpassing conventional pose estimators in such scenarios. Additionally, due to the standardized prediction of reconstructed image, our estimator exhibits robustness performance against variations in lighting conditions and color drift. This is a significant improvement over traditional methods that depend on pixel-level sparse or dense features. We demonstrate that our method achieves state-of-the-art performance (e.g., 85.4% on LM-O) on the most commonly used benchmarks with respect to the ADD(-S) metric. Lastly, we present a CLIP dataset that emulates intense occlusion scenarios of industrial environment and conduct a real-world experiment for manipulation applications to verify the effectiveness and robustness of our proposed method.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105340"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}