SeBIR: Semantic-guided burst image restoration

Impact Factor: 6.0 | CAS Tier 1 (Computer Science) | JCR Q1 (Computer Science, Artificial Intelligence) | Neural Networks | Pub Date: 2024-10-26 | DOI: 10.1016/j.neunet.2024.106834
Huan Liu, Mingwen Shao, Yecong Wan, Yuexian Liu, Kai Shang
{"title":"SeBIR:语义引导的突发图像修复","authors":"Huan Liu,&nbsp;Mingwen Shao,&nbsp;Yecong Wan,&nbsp;Yuexian Liu,&nbsp;Kai Shang","doi":"10.1016/j.neunet.2024.106834","DOIUrl":null,"url":null,"abstract":"<div><div>Burst image restoration methods offer the possibility of recovering faithful scene details from multiple low-quality snapshots captured by hand-held devices in adverse scenarios, thereby attracting increasing attention in recent years. However, individual frames in a burst typically suffer from inter-frame misalignments, leading to ghosting artifacts. Besides, existing methods indiscriminately handle all burst frames, struggling to seamlessly remove the corrupted information due to the neglect of multi-frame spatio-temporal varying degradation. To alleviate these limitations, we propose a general semantic-guided model named <strong>SeBIR</strong> for burst image restoration incorporating the semantic prior knowledge of Segment Anything Model (SAM) to enable adaptive recovery. Specifically, instead of relying solely on a single aligning scheme, we develop a joint implicit and explicit strategy that sufficiently leverages semantic knowledge as guidance to achieve inter-frame alignment. To further adaptively modulate and aggregate aligned features with spatio-temporal disparity, we elaborate a semantic-guided fusion module using the intermediate semantic features of SAM as an explicit guide to weaken the inherent degradation and strengthen the valuable complementary information across frames. Additionally, a semantic-guided local loss is designed to boost local consistency and image quality. Extensive experiments on synthetic and real-world datasets demonstrate the superiority of our method in both quantitative and qualitative evaluations for burst super-resolution, burst denoising, and burst low-light image enhancement tasks.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"181 ","pages":"Article 106834"},"PeriodicalIF":6.0000,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SeBIR: Semantic-guided burst image restoration\",\"authors\":\"Huan Liu,&nbsp;Mingwen Shao,&nbsp;Yecong Wan,&nbsp;Yuexian Liu,&nbsp;Kai Shang\",\"doi\":\"10.1016/j.neunet.2024.106834\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Burst image restoration methods offer the possibility of recovering faithful scene details from multiple low-quality snapshots captured by hand-held devices in adverse scenarios, thereby attracting increasing attention in recent years. However, individual frames in a burst typically suffer from inter-frame misalignments, leading to ghosting artifacts. Besides, existing methods indiscriminately handle all burst frames, struggling to seamlessly remove the corrupted information due to the neglect of multi-frame spatio-temporal varying degradation. To alleviate these limitations, we propose a general semantic-guided model named <strong>SeBIR</strong> for burst image restoration incorporating the semantic prior knowledge of Segment Anything Model (SAM) to enable adaptive recovery. Specifically, instead of relying solely on a single aligning scheme, we develop a joint implicit and explicit strategy that sufficiently leverages semantic knowledge as guidance to achieve inter-frame alignment. 
To further adaptively modulate and aggregate aligned features with spatio-temporal disparity, we elaborate a semantic-guided fusion module using the intermediate semantic features of SAM as an explicit guide to weaken the inherent degradation and strengthen the valuable complementary information across frames. Additionally, a semantic-guided local loss is designed to boost local consistency and image quality. Extensive experiments on synthetic and real-world datasets demonstrate the superiority of our method in both quantitative and qualitative evaluations for burst super-resolution, burst denoising, and burst low-light image enhancement tasks.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"181 \",\"pages\":\"Article 106834\"},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2024-10-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608024007585\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608024007585","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Burst image restoration methods offer the possibility of recovering faithful scene details from multiple low-quality snapshots captured by hand-held devices in adverse scenarios, and have therefore attracted increasing attention in recent years. However, individual frames in a burst typically suffer from inter-frame misalignment, leading to ghosting artifacts. Moreover, existing methods handle all burst frames indiscriminately and, by neglecting the spatio-temporally varying degradation across frames, struggle to seamlessly remove corrupted information. To alleviate these limitations, we propose a general semantic-guided model named SeBIR for burst image restoration that incorporates the semantic prior knowledge of the Segment Anything Model (SAM) to enable adaptive recovery. Specifically, instead of relying solely on a single alignment scheme, we develop a joint implicit and explicit strategy that fully leverages semantic knowledge as guidance to achieve inter-frame alignment. To further adaptively modulate and aggregate aligned features with spatio-temporal disparity, we design a semantic-guided fusion module that uses the intermediate semantic features of SAM as an explicit guide to weaken the inherent degradation and strengthen the valuable complementary information across frames. Additionally, a semantic-guided local loss is designed to boost local consistency and image quality. Extensive experiments on synthetic and real-world datasets demonstrate the superiority of our method in both quantitative and qualitative evaluations for burst super-resolution, burst denoising, and burst low-light image enhancement.
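To make the fusion idea described in the abstract more concrete, the sketch below illustrates one plausible way intermediate SAM features could gate and aggregate aligned burst features. It is a minimal, hypothetical PyTorch example: the module name SemanticGuidedFusion, the tensor shapes, and the gating design are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of semantic-guided burst fusion in the spirit of SeBIR's
# description: SAM intermediate features act as an explicit guide to re-weight
# aligned burst-frame features before aggregation. All names and shapes are
# assumptions, not the paper's released code.
import torch
import torch.nn as nn


class SemanticGuidedFusion(nn.Module):
    def __init__(self, feat_dim: int = 64, sem_dim: int = 256):
        super().__init__()
        # Project SAM semantic features to the burst-feature dimension.
        self.sem_proj = nn.Conv2d(sem_dim, feat_dim, kernel_size=1)
        # Predict a per-frame, per-pixel confidence map from frame + semantic features.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * feat_dim, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, 1, kernel_size=3, padding=1),
        )

    def forward(self, aligned: torch.Tensor, sem: torch.Tensor) -> torch.Tensor:
        # aligned: (B, N, C, H, W) aligned burst features; sem: (B, sem_dim, H, W).
        b, n, c, h, w = aligned.shape
        guide = self.sem_proj(sem)                                   # (B, C, H, W)
        guide = guide.unsqueeze(1).expand(b, n, c, h, w)             # broadcast to every frame
        gate_in = torch.cat([aligned, guide], dim=2).flatten(0, 1)   # (B*N, 2C, H, W)
        logits = self.gate(gate_in).view(b, n, 1, h, w)              # per-frame confidence
        weights = torch.softmax(logits, dim=1)                       # normalize across frames
        return (weights * aligned).sum(dim=1)                        # (B, C, H, W) fused feature


if __name__ == "__main__":
    fusion = SemanticGuidedFusion(feat_dim=64, sem_dim=256)
    burst = torch.randn(1, 8, 64, 32, 32)    # 8 aligned burst frames
    sam_feat = torch.randn(1, 256, 32, 32)   # SAM intermediate features, resized to match
    print(fusion(burst, sam_feat).shape)     # torch.Size([1, 64, 32, 32])
```

In this sketch, the softmax over the burst dimension lets each pixel draw more heavily on frames that the semantic guide marks as reliable, mirroring the stated goal of weakening degraded content while strengthening complementary information across frames.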
Source journal: Neural Networks (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles published per year: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.