High-resolution synthetic aperture imaging method and benchmark based on event-frame fusion

Siqi Li, Yipeng Li, Yu-Shen Liu, Shaoyi Du, Jun-Hai Yong, Yue Gao

Information Fusion, Volume 122, October 2025, Article 103211. DOI: 10.1016/j.inffus.2025.103211
Abstract
Existing event-based synthetic aperture imaging (SAI) methods can reconstruct unobstructed images of a background target scene hidden behind dense foreground occlusions from the visual information captured by an event camera. However, limited by the spatial resolution of current event cameras and by the computational paradigm of existing event-based SAI methods, the resolution of the reconstructed images is insufficient. In this paper, we propose a high-resolution (HR) SAI method based on event-frame fusion. Our method trades the high temporal resolution of the event camera for spatial resolution, fusing events and frames to reconstruct the HR unobstructed image. It leverages an event-guided occlusion segmentation mechanism to predict occlusion masks and extracts valid multi-view, multi-modal visual features while discarding the invalid parts. An adaptive fusion module then aligns and fuses the multi-view features, and the HR unobstructed scene image is reconstructed from the fused feature. In addition, we construct a multi-sensor vision acquisition system and collect the first Event-based High-Resolution Synthetic Aperture Imaging (THU^E-HRSAI) dataset, containing low-resolution occluded frames, event streams, and HR ground-truth unobstructed scene images, which will be released as the first benchmark. Experiments on our THU^E-HRSAI dataset demonstrate that our proposed method achieves state-of-the-art performance, and downstream application results further prove the necessity of our method. Code and dataset are available at: https://github.com/lisiqi19971013/E-HRSAI.
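The core idea in the abstract (predict which pixels are occluded in each view, discard the invalid parts, then align and fuse the remaining multi-view observations into one unobstructed image) can be illustrated with a simplified classical refocus-and-fuse sketch. This is a hand-written analogy under stated assumptions, not the authors' learned network: `fuse_views`, the integer per-view `shifts`, and the binary validity masks are all hypothetical illustration devices standing in for the paper's event-guided segmentation and adaptive fusion modules.

```python
import numpy as np

def fuse_views(views, masks, shifts):
    """Mask-weighted multi-view fusion (simplified SAI analogy).

    views:  list of HxW float arrays, one observation per camera position
    masks:  list of HxW {0,1} arrays, 1 = pixel sees the background (valid)
    shifts: list of integer horizontal disparities that refocus each view
            onto the background plane (0 means already aligned)

    Occluded (mask = 0) pixels contribute nothing; each output pixel is the
    average of the valid, refocused observations that cover it.
    """
    H, W = views[0].shape
    acc = np.zeros((H, W))    # sum of valid, aligned pixel values
    wsum = np.zeros((H, W))   # number of valid observations per pixel
    for img, m, s in zip(views, masks, shifts):
        img_s = np.roll(img, -s, axis=1)   # refocus view onto background
        m_s = np.roll(m, -s, axis=1)
        acc += img_s * m_s
        wsum += m_s
    # average where at least one view was valid, 0 elsewhere
    return np.where(wsum > 0, acc / np.maximum(wsum, 1), 0.0)
```

The design point this sketch makes explicit is the one stated in the abstract: invalid (occluded) observations are excluded before fusion rather than averaged in, so a pixel is recoverable as long as at least one view sees past the occluder.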
About the journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.