High-resolution synthetic aperture imaging method and benchmark based on event-frame fusion

Information Fusion · IF 15.5 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · Volume 122, Article 103211 · Pub Date: 2025-10-01 (Epub: 2025-04-22) · DOI: 10.1016/j.inffus.2025.103211
Siqi Li, Yipeng Li, Yu-Shen Liu, Shaoyi Du, Jun-Hai Yong, Yue Gao
Citations: 0

Abstract

Existing event-based synthetic aperture imaging (SAI) methods can reconstruct unobstructed images of a background target scene behind dense foreground occlusions from visual information captured by an event camera. However, limited by the spatial resolution of current event cameras and the computational paradigm of existing event-based SAI methods, the resolution of the reconstructed images is insufficient. In this paper, we propose a high-resolution (HR) SAI method based on event-frame fusion. Our method trades the high temporal resolution of the event camera for spatial resolution and fuses events and frames to reconstruct the HR unobstructed image. The proposed method leverages an event-guided occlusion segmentation mechanism to predict occlusion masks, and extracts valid multi-view, multi-modal visual features while discarding the invalid parts. An adaptive fusion module is then proposed to align and fuse the multi-view features, and the HR unobstructed scene image is reconstructed from the fused feature. In addition, we construct a multi-sensor vision acquisition system and collect the first Event-based High-Resolution Synthetic Aperture Imaging (THU^E-HRSAI) dataset, containing low-resolution occluded frames, event streams, and HR ground-truth unobstructed scene images, which will be released as the first benchmark. We conduct experiments on our THU^E-HRSAI dataset, and the results demonstrate that our proposed method achieves state-of-the-art performance. Downstream application results further prove the necessity of our method. Code and dataset are available at: https://github.com/lisiqi19971013/E-HRSAI.
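The mask-then-fuse idea described in the abstract can be illustrated with a classical, non-learned synthetic-aperture baseline: shift each multi-view observation toward a common focal plane, discard pixels flagged as occluded by a predicted mask, and average the remaining valid samples. The sketch below is for intuition only and is not the paper's method — the paper replaces hand-given masks with event-guided occlusion segmentation and replaces the averaging with a learned adaptive fusion module; the function name, mask source, and integer shifts here are all assumptions.

```python
import numpy as np

def refocus_and_fuse(frames, masks, shifts):
    """Hypothetical mask-guided synthetic-aperture baseline.

    frames: list of HxWx3 float arrays (multi-view observations)
    masks:  list of HxW bool arrays, True where the pixel sees the
            background target (i.e., is NOT occluded)
    shifts: list of (dy, dx) integer shifts that align each view to a
            common focal plane on the background
    Returns the per-pixel average of the valid, aligned samples.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)       # sum of valid samples
    weight = np.zeros(frames[0].shape[:2], dtype=np.float64)  # valid-sample count
    for frame, mask, (dy, dx) in zip(frames, masks, shifts):
        # Align this view to the focal plane (circular shift for simplicity).
        shifted = np.roll(frame, (dy, dx), axis=(0, 1))
        valid = np.roll(mask, (dy, dx), axis=(0, 1)).astype(np.float64)
        acc += shifted * valid[..., None]                  # keep only unoccluded pixels
        weight += valid
    weight = np.clip(weight, 1e-6, None)                   # avoid division by zero
    return acc / weight[..., None]
```

Because occluded pixels carry zero weight, a pixel blocked in one view is reconstructed purely from the views that see past the occluder — the same valid/invalid separation the abstract describes, before any learned alignment or fusion is applied.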
Source journal: Information Fusion (Engineering & Technology – Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems are welcome.