Unified Video Reconstruction for Rolling Shutter and Global Shutter Cameras

Bin Fan, Zhexiong Wan, Boxin Shi, Chao Xu, Yuchao Dai
{"title":"Unified Video Reconstruction for Rolling Shutter and Global Shutter Cameras","authors":"Bin Fan;Zhexiong Wan;Boxin Shi;Chao Xu;Yuchao Dai","doi":"10.1109/TIP.2024.3504275","DOIUrl":null,"url":null,"abstract":"Currently, the general domain of video reconstruction (VR) is fragmented into different shutters spanning global shutter and rolling shutter cameras. Despite rapid progress in the state-of-the-art, existing methods overwhelmingly follow shutter-specific paradigms and cannot conceptually generalize to other shutter types, hindering the uniformity of VR models. In this paper, we propose UniVR, a versatile framework to handle various shutters through unified modeling and shared parameters. Specifically, UniVR encodes diverse shutter types into a unified space via a tractable shutter adapter, which is parameter-free and thus can be seamlessly delivered to current well-established VR architectures for cross-shutter transfer. To demonstrate its effectiveness, we conceptualize UniVR as three shutter-generic VR methods, namely Uni-SoftSplat, Uni-SuperSloMo, and Uni-RIFE. Extensive experimental results demonstrate that the pre-trained model without any fine-tuning can achieve reasonable performance even on novel shutters. After fine-tuning, new state-of-the-art performances are established that go beyond shutter-specific methods and enjoy strong generalization. The code is available at \n<uri>https://github.com/GitCVfb/UniVR</uri>\n.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6821-6835"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10770126/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Video reconstruction (VR) is currently fragmented by shutter type, with separate lines of work for global shutter and rolling shutter cameras. Despite rapid progress in the state of the art, existing methods overwhelmingly follow shutter-specific paradigms and cannot conceptually generalize to other shutter types, which hinders the uniformity of VR models. In this paper, we propose UniVR, a versatile framework that handles various shutters through unified modeling and shared parameters. Specifically, UniVR encodes diverse shutter types into a unified space via a tractable shutter adapter; because the adapter is parameter-free, it can be seamlessly attached to current well-established VR architectures for cross-shutter transfer. To demonstrate its effectiveness, we instantiate UniVR as three shutter-generic VR methods, namely Uni-SoftSplat, Uni-SuperSloMo, and Uni-RIFE. Extensive experimental results demonstrate that the pre-trained model, without any fine-tuning, achieves reasonable performance even on novel shutter types. After fine-tuning, UniVR establishes new state-of-the-art performance that surpasses shutter-specific methods while generalizing strongly. The code is available at https://github.com/GitCVfb/UniVR.
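
To make the core idea concrete, here is a minimal sketch (an illustrative assumption, not the authors' implementation; see the repository above for that) of how a parameter-free shutter adapter could encode both shutter types into one representation. The only physical difference between the two shutters is when each image row is exposed: a global shutter exposes every row at the same instant, while a rolling shutter exposes rows sequentially during readout. The sketch turns that difference into a per-row timestamp map that a shutter-agnostic reconstruction backbone could take as an extra input channel. The names shutter_time_map and readout_ratio are hypothetical.

import torch


def shutter_time_map(height: int, width: int, shutter: str,
                     readout_ratio: float = 1.0) -> torch.Tensor:
    """Return an (H, W) map of normalized exposure times in [0, 1].

    "global":  every row shares a single timestamp (0.5, i.e. mid-frame).
    "rolling": row y is exposed at y / (H - 1), scaled by readout_ratio,
               the assumed fraction of the frame interval spent on readout.
    The mapping has no learnable weights, i.e. it is parameter-free.
    """
    if shutter == "global":
        row_times = torch.full((height,), 0.5)
    elif shutter == "rolling":
        row_times = torch.linspace(0.0, 1.0, height) * readout_ratio
    else:
        raise ValueError(f"unknown shutter type: {shutter}")
    return row_times.view(height, 1).expand(height, width)


# Usage: append the time map as a fourth channel so one backbone (e.g. a frame
# interpolation network) sees rolling- and global-shutter inputs in one format.
frame = torch.randn(1, 3, 480, 640)                   # dummy RGB frame
tmap = shutter_time_map(480, 640, "rolling")          # per-row exposure times
unified = torch.cat([frame, tmap.view(1, 1, 480, 640)], dim=1)
print(unified.shape)                                  # torch.Size([1, 4, 480, 640])

Under this kind of encoding, switching shutters changes only the adapter's input specification, so a backbone trained on one shutter type could in principle be transferred to another without architectural changes, which matches the cross-shutter transfer described in the abstract.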