Event-based video reconstruction via attention-based recurrent network

Neurocomputing, Vol. 632, Article 129776 · IF 6.5 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Region 2 (Computer Science) · Published: 2025-06-01 (Epub: 2025-02-25) · DOI: 10.1016/j.neucom.2025.129776
Wenwen Ma, Shanxing Ma, Pieter Meiresone, Gianni Allebosch, Wilfried Philips, Jan Aelterman
Citations: 0

Abstract

Event cameras are novel sensors that capture brightness changes in the form of asynchronous events rather than intensity frames, offering unique advantages such as high dynamic range, high temporal resolution, and no motion blur. However, the sparse, asynchronous nature of event data poses significant challenges for visual perception, limiting compatibility with conventional computer vision algorithms that rely on dense, continuous frames. Event-based video reconstruction has emerged as a promising solution, though existing methods still face challenges in capturing fine-grained details and enhancing contrast. This paper presents a novel approach to video reconstruction from asynchronous event streams, leveraging the unique properties of event data to produce high-quality video. Our method integrates channel and pixel attention mechanisms to focus on essential features and incorporates deformable convolutions and adaptive mix-up operations to provide flexible receptive fields and dynamic fusion across down-sampling and up-sampling layers. Experimental results on multiple real-world event datasets demonstrate that our approach outperforms comparable methods trained on the same dataset, achieving superior video quality from pure event data. We also demonstrate the capability of our method for high dynamic range reconstruction and color video reconstruction using an event camera equipped with a Bayer filter.
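The abstract notes that event cameras emit sparse, asynchronous events (pixel location, timestamp, polarity) rather than dense frames, which is why reconstruction networks first convert the stream into a dense tensor. A common choice in event-based reconstruction pipelines is a voxel grid that distributes each event's polarity over the two nearest temporal bins. The sketch below illustrates that conversion; it is a generic preprocessing step, not necessarily the exact input encoding used in this paper, and the function name and bin count are illustrative assumptions.

```python
import numpy as np

def events_to_voxel_grid(xs, ys, ts, ps, num_bins, height, width):
    """Accumulate an asynchronous event stream into a dense voxel grid.

    xs, ys : int arrays of pixel coordinates
    ts     : float array of timestamps (any monotonic unit)
    ps     : float array of polarities (+1 brightness increase, -1 decrease)

    Returns an array of shape (num_bins, height, width). Each event's
    polarity is split between its two nearest temporal bins by linear
    interpolation, preserving sub-bin timing information.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    ts = np.asarray(ts, dtype=np.float64)
    # Normalise timestamps to the range [0, num_bins - 1].
    t0, t1 = ts.min(), ts.max()
    t_norm = (ts - t0) / max(t1 - t0, 1e-9) * (num_bins - 1)
    lower = np.floor(t_norm).astype(int)
    frac = t_norm - lower
    upper = np.clip(lower + 1, 0, num_bins - 1)
    # Scatter-add each event into its two neighbouring bins.
    np.add.at(voxel, (lower, ys, xs), ps * (1.0 - frac))
    np.add.at(voxel, (upper, ys, xs), ps * frac)
    return voxel

# Two events on a 2x2 sensor, accumulated into 2 temporal bins.
xs = np.array([0, 1])
ys = np.array([0, 0])
ts = np.array([0.0, 1.0])
ps = np.array([1.0, -1.0])
grid = events_to_voxel_grid(xs, ys, ts, ps, num_bins=2, height=2, width=2)
```

A stack of such grids forms the frame-like input that a recurrent encoder-decoder (here, one augmented with channel/pixel attention and deformable convolutions) can consume like ordinary video.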