CrossZoom: Simultaneous Motion Deblurring and Event Super-Resolving

Chi Zhang, Xiang Zhang, Mingyuan Lin, Cheng Li, Chu He, Wen Yang, Gui-Song Xia, Lei Yu
{"title":"CrossZoom: Simultaneous Motion Deblurring and Event Super-Resolving","authors":"Chi Zhang;Xiang Zhang;Mingyuan Lin;Cheng Li;Chu He;Wen Yang;Gui-Song Xia;Lei Yu","doi":"10.1109/TPAMI.2024.3402972","DOIUrl":null,"url":null,"abstract":"Even though the collaboration between traditional and neuromorphic event cameras brings prosperity to frame-event based vision applications, the performance is still confined by the resolution gap crossing two modalities in both spatial and temporal domains. This paper is devoted to bridging the gap by increasing the temporal resolution for images, i.e., motion deblurring, and the spatial resolution for events, i.e., event super-resolving, respectively. To this end, we introduce \n<italic>C</i>\nross\n<italic>Z</i>\noom, a novel unified neural \n<italic>Net</i>\nwork (CZ-Net) to jointly recover sharp latent sequences within the exposure period of a blurry input and the corresponding High-Resolution (HR) events. Specifically, we present a multi-scale blur-event fusion architecture that leverages the scale-variant properties and effectively fuses cross-modal information to achieve cross-enhancement. Attention-based adaptive enhancement and cross-interaction prediction modules are devised to alleviate the distortions inherent in Low-Resolution (LR) events and enhance the final results through the prior blur-event complementary information. Furthermore, we propose a new dataset containing HR \n<italic>sharp-blurry</i>\n images and the corresponding \n<italic>HR-LR</i>\n event streams to facilitate future research. Extensive qualitative and quantitative experiments on synthetic and real-world datasets demonstrate the effectiveness and robustness of the proposed method.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"46 12","pages":"8209-8227"},"PeriodicalIF":18.6000,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10534844/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Even though the collaboration between traditional frame-based and neuromorphic event cameras has brought prosperity to frame-event based vision applications, performance is still confined by the resolution gap between the two modalities in both the spatial and temporal domains. This paper is devoted to bridging this gap by increasing the temporal resolution of images, i.e., motion deblurring, and the spatial resolution of events, i.e., event super-resolving. To this end, we introduce CrossZoom, a novel unified neural network (CZ-Net) that jointly recovers the sharp latent sequence within the exposure period of a blurry input and the corresponding High-Resolution (HR) events. Specifically, we present a multi-scale blur-event fusion architecture that leverages scale-variant properties and effectively fuses cross-modal information to achieve cross-enhancement. Attention-based adaptive enhancement and cross-interaction prediction modules are devised to alleviate the distortions inherent in Low-Resolution (LR) events and to enhance the final results through prior blur-event complementary information. Furthermore, we propose a new dataset containing HR sharp-blurry images and the corresponding HR-LR event streams to facilitate future research. Extensive qualitative and quantitative experiments on synthetic and real-world datasets demonstrate the effectiveness and robustness of the proposed method. The code and dataset are released at https://bestrivenzc.github.io/CZ-Net/.
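For context on why events make the deblurring half of this problem well posed: a blurry frame is the temporal average of the latent sharp frames over the exposure, while events record per-pixel log-intensity changes, so integrating event polarities ties every latent frame back to the blurry input. The sketch below illustrates this classic event-based double-integral relation in plain NumPy; the time binning, the function name `latent_from_blur`, and the contrast threshold `c` are illustrative assumptions for this sketch, not the paper's CZ-Net.

```python
# Minimal sketch of the classic event-based blur formation model,
# given only as background; not the CZ-Net implementation.
import numpy as np

def latent_from_blur(blur, event_bins, c=0.2, eps=1e-6):
    """Recover latent sharp frames from one blurry frame and events.

    blur       : (H, W) blurry image, modeled as the temporal average of
                 the latent frames over the exposure period.
    event_bins : (n_bins, H, W) per-pixel sums of signed event polarities
                 in each time bin of the exposure.
    c          : assumed event contrast threshold (sensor-dependent).
    """
    # E(t): cumulative polarity from the exposure start to each bin edge.
    E = np.cumsum(event_bins, axis=0)
    # Events encode log-intensity changes: L(t) = L(0) * exp(c * E(t)).
    ratio = np.exp(c * E)                      # (n_bins, H, W)
    # blur = L(0) * mean_t exp(c * E(t))  =>  L(0) = blur / mean ratio.
    L0 = blur / (ratio.mean(axis=0) + eps)
    # Every latent frame inside the exposure then follows from L(0).
    return L0[None] * ratio                    # (n_bins, H, W) sharp frames
```

The same reasoning is why the low spatial resolution of the event stream becomes the bottleneck: the recovered latent frames can be no sharper spatially than the events that drive them, which motivates jointly super-resolving the events.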