Super-Resolution by Image Enhancement Using Texture Transfer

Jose Jaena Mari Ople, Daniel Stanley Tan, A. Azcarraga, Chao-Lung Yang, K. Hua
DOI: 10.1109/ICIP40778.2020.9190844
Published in: 2020 IEEE International Conference on Image Processing (ICIP)
Publication date: 2020-10-01
Citations: 4

Abstract

Recent deep learning approaches in single image super-resolution (SISR) can generate high-definition textures for super-resolved (SR) images. However, they tend to hallucinate fake textures and even produce artifacts. As an alternative to SISR, reference-based SR (RefSR) approaches use high-resolution (HR) reference (Ref) images to provide HR details that are missing in the low-resolution (LR) input image. We propose a novel framework that leverages existing SISR approaches and enhances them with RefSR. Specifically, we refine the output of SISR methods using neural texture transfer, where HR features are queried from the Ref images. The query is conducted by computing the similarity of textural and semantic features between the input image and the Ref images. The HR features most similar, patch-wise, to the LR image are used to augment the SR image through an augmentation network. When the Ref images are dissimilar to the LR input image, we prevent performance degradation by including the similarity scores in the input features of the network. Furthermore, we use random texture patches during training to condition our augmentation network not to always trust the queried texture features. Unlike past RefSR approaches, our method can use arbitrary Ref images, and its lower-bound performance is that of the SR image. We show that our method drastically improves the performance of the base SISR approach.
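The patch-wise texture query described in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function name `query_texture_patches`, the patch size, and the use of plain cosine similarity over unfolded feature patches are assumptions made for exposition; the paper computes similarity between textural and semantic features produced by a neural network, and feeds both the matched features and their similarity scores to the augmentation network.

```python
import numpy as np

def query_texture_patches(sr_feat, ref_feat, patch=3):
    """Sketch of a patch-wise texture query (illustrative, not the paper's code).

    For every patch in the SR feature map, find the most similar patch in
    the Ref feature map by cosine similarity, and return the matched Ref
    patches together with their similarity scores.
    sr_feat, ref_feat: (C, H, W) feature maps as NumPy arrays.
    """
    def unfold(f):
        # Extract all patch x patch windows as flat vectors.
        C, H, W = f.shape
        out = []
        for i in range(H - patch + 1):
            for j in range(W - patch + 1):
                out.append(f[:, i:i + patch, j:j + patch].ravel())
        return np.stack(out)  # (N, C * patch * patch)

    q = unfold(sr_feat)   # query patches from the SR image
    k = unfold(ref_feat)  # candidate patches from the Ref image
    qn = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    kn = k / (np.linalg.norm(k, axis=1, keepdims=True) + 1e-8)
    sim = qn @ kn.T                        # (Nq, Nk) cosine similarities
    best = sim.argmax(axis=1)              # best-matching Ref patch per query
    scores = sim[np.arange(len(q)), best]  # similarity score per query patch
    return k[best], scores
```

The returned `scores` correspond to the similarity values that, per the abstract, are included in the augmentation network's input so that dissimilar Ref images do not degrade performance below the base SR result.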