Zhiying Jiang; Zengxi Zhang; Jinyuan Liu; Xin Fan; Risheng Liu
IEEE Transactions on Image Processing, published 2024-07-24. DOI: 10.1109/TIP.2024.3430532
https://ieeexplore.ieee.org/document/10609325/
Multispectral Image Stitching via Global-Aware Quadrature Pyramid Regression
Image stitching is a critical task in panorama perception that involves combining images captured from different viewing positions to reconstruct a wider field-of-view (FOV) image. Existing visible image stitching methods suffer from performance drops under severe conditions, since environmental factors can easily impair visible images. In contrast, infrared images possess greater penetrating ability and are less affected by environmental factors. Therefore, we propose an infrared and visible image-based multispectral image stitching method to achieve all-weather, broad-FOV scene perception. Specifically, based on two pairs of infrared and visible images, we employ the salient structural information from the infrared images and the textural details from the visible images to infer the correspondences within different modality-specific features. For this purpose, a multiscale progressive mechanism coupled with quadrature correlation is exploited to improve regression in different modalities. Exploiting these complementary properties, an accurate and credible homography can be obtained by integrating the deformation parameters of the two modalities to compensate for the missing modality-specific information. A global-aware guided reconstruction module is established to generate an informative and broad scene, wherein the attentive features of different viewpoints are introduced to fuse the source images into a more seamless and comprehensive appearance. We construct a high-quality infrared and visible stitching dataset for evaluation, including real-world and synthetic sets. The qualitative and quantitative results demonstrate that the proposed method outperforms the intuitive cascaded fusion-stitching procedure, achieving more robust and credible panorama generation. Code and dataset are available at https://github.com/Jzy2017/MSGA.
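The abstract describes integrating the deformation parameters regressed from the infrared and visible branches into a single homography. As a rough illustration only (not the paper's learned integration), the sketch below assumes the common 4-point parameterization of a homography, where each branch predicts an (dx, dy) displacement for the four image corners, and blends the two modality-specific predictions with a simple weighted average. The function names and the equal-weight fusion are assumptions for illustration.

```python
# Hypothetical sketch of fusing modality-specific deformation parameters.
# Assumes the 4-point offset parameterization: each branch predicts a
# per-corner displacement (dx, dy) in TL, TR, BR, BL order.

def fuse_corner_offsets(offsets_ir, offsets_vis, w_ir=0.5):
    """Blend the four corner displacements predicted by the infrared and
    visible branches. Equal weighting (w_ir=0.5) is an assumption here;
    the paper integrates the two estimates through a learned mechanism."""
    return [(w_ir * dx1 + (1.0 - w_ir) * dx2,
             w_ir * dy1 + (1.0 - w_ir) * dy2)
            for (dx1, dy1), (dx2, dy2) in zip(offsets_ir, offsets_vis)]

def warped_corners(width, height, offsets):
    """Apply fused corner offsets to the source image corners; the four
    resulting points determine the homography (e.g. via a DLT solve)."""
    corners = [(0, 0), (width, 0), (width, height), (0, height)]
    return [(x + dx, y + dy) for (x, y), (dx, dy) in zip(corners, offsets)]

# Example: the IR branch predicts a uniform (2, 1) shift, the visible
# branch (4, 3); equal-weight fusion yields (3.0, 2.0) per corner.
fused = fuse_corner_offsets([(2, 1)] * 4, [(4, 3)] * 4)
print(warped_corners(100, 50, fused))
```

In practice the fused corner correspondences would be passed to a direct linear transform (or `cv2.getPerspectiveTransform`) to recover the 3x3 homography used for warping and stitching.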