{"title":"GRiD: Guided Refinement for Detector-Free Multimodal Image Matching","authors":"Yuyan Liu;Wei He;Hongyan Zhang","doi":"10.1109/TIP.2024.3472491","DOIUrl":null,"url":null,"abstract":"Multimodal image matching is essential in image stitching, image fusion, change detection, and land cover mapping. However, the severe nonlinear radiometric distortion (NRD) and geometric distortions in multimodal images severely limit the accuracy of multimodal image matching, posing significant challenges to existing methods. Additionally, detector-based methods are prone to feature point offset issues in regions with substantial modal differences, which also hinder the subsequent fine registration and fusion of images. To address these challenges, we propose a guided refinement for detector-free multimodal image matching (GRiD) method, which weakens feature point offset issues by establishing pixel-level correspondences and utilizes reference points to guide and correct matches affected by NRD and geometric distortions. Specifically, we first introduce a detector-free framework to alleviate the feature point offset problem by directly finding corresponding pixels between images. Subsequently, to tackle NRD and geometric distortion in multimodal images, we design a guided correction module that establishes robust reference points (RPs) to guide the search for corresponding pixels in regions with significant modality differences. Moreover, to enhance RPs reliability, we incorporate a phase congruency module during the RPs confirmation stage to concentrate RPs around image edge structures. Finally, we perform finer localization on highly correlated corresponding pixels to obtain the optimized matches. We conduct extensive experiments on four multimodal image datasets to validate the effectiveness of the proposed approach. Experimental results demonstrate that our method can achieve sufficient and robust matches across various modality images and effectively suppress the feature point offset problem.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5892-5906"},"PeriodicalIF":0.0000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10715536/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Multimodal image matching is essential in image stitching, image fusion, change detection, and land cover mapping. However, severe nonlinear radiometric distortion (NRD) and geometric distortion in multimodal images substantially limit matching accuracy, posing significant challenges to existing methods. In addition, detector-based methods are prone to feature point offset in regions with substantial modality differences, which hinders the subsequent fine registration and fusion of images. To address these challenges, we propose a guided refinement method for detector-free multimodal image matching (GRiD), which mitigates feature point offset by establishing pixel-level correspondences and uses reference points to guide and correct matches affected by NRD and geometric distortion. Specifically, we first introduce a detector-free framework that alleviates the feature point offset problem by directly finding corresponding pixels between images. Subsequently, to handle NRD and geometric distortion in multimodal images, we design a guided correction module that establishes robust reference points (RPs) to guide the search for corresponding pixels in regions with significant modality differences. Moreover, to enhance the reliability of the RPs, we incorporate a phase congruency module during the RP confirmation stage to concentrate RPs around image edge structures. Finally, we perform finer localization on highly correlated corresponding pixels to obtain optimized matches. We conduct extensive experiments on four multimodal image datasets to validate the effectiveness of the proposed approach. The results demonstrate that our method achieves sufficient and robust matches across images of various modalities and effectively suppresses the feature point offset problem.
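The pipeline the abstract describes is coarse-to-fine: dense pixel-level correspondences replace detected keypoints, and surviving matches are then localized to sub-pixel accuracy. The sketch below illustrates these two generic ingredients in NumPy. It is an illustrative assumption, not the authors' GRiD implementation: the function names, feature shapes, mutual-nearest-neighbor criterion, and soft-argmax refinement are all stand-ins, and the paper's guided correction and phase congruency modules are omitted.

```python
# Illustrative sketch only: a generic coarse-to-fine, detector-free matcher.
# Names, shapes, and the soft-argmax step are assumptions for exposition;
# this is NOT the authors' GRiD code.
import numpy as np

def coarse_matches(feat_a, feat_b):
    """Mutual-nearest-neighbor matching on flattened coarse feature maps.

    feat_a, feat_b: (N, C) L2-normalized descriptors, one row per coarse pixel.
    Returns an (M, 2) array of index pairs (i, j) that are mutual nearest
    neighbors, i.e. pixel-level correspondences without a keypoint detector.
    """
    sim = feat_a @ feat_b.T            # (N, N) cosine similarity matrix
    nn_ab = sim.argmax(axis=1)         # best pixel in B for each pixel in A
    nn_ba = sim.argmax(axis=0)         # best pixel in A for each pixel in B
    idx = np.arange(sim.shape[0])
    mutual = nn_ba[nn_ab] == idx       # keep only mutual agreements
    return np.stack([idx[mutual], nn_ab[mutual]], axis=1)

def soft_argmax_refine(corr_patch):
    """Sub-pixel peak localization on a local correlation patch.

    corr_patch: (w, w) correlation scores around one coarse match.
    Returns the (dy, dx) expected offset from the patch center, one common
    way to realize finer localization of highly correlated pixels.
    """
    w = corr_patch.shape[0]
    prob = np.exp(corr_patch - corr_patch.max())   # numerically stable softmax
    prob /= prob.sum()
    ys, xs = np.mgrid[0:w, 0:w]
    dy = float((prob * ys).sum()) - (w - 1) / 2.0
    dx = float((prob * xs).sum()) - (w - 1) / 2.0
    return dy, dx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fa = rng.normal(size=(1024, 128))
    fb = fa + 0.1 * rng.normal(size=fa.shape)      # simulated modality gap
    fa /= np.linalg.norm(fa, axis=1, keepdims=True)
    fb /= np.linalg.norm(fb, axis=1, keepdims=True)
    print(coarse_matches(fa, fb).shape)            # -> (M, 2) match indices
    print(soft_argmax_refine(rng.normal(size=(5, 5))))
```

Mutual nearest neighbors is a standard acceptance criterion in detector-free matchers because it needs no repeatable keypoints, and soft-argmax gives a differentiable estimate of the correlation peak; both are generic devices, used here only to make the coarse-to-fine idea concrete.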