
Recent Advances in Image Restoration with Applications to Real World Problems: Latest Publications

Generative Adversarial Networks for Visible to Infrared Video Conversion
Pub Date : 2020-11-04 DOI: 10.5772/intechopen.93866
M. S. Uddin, Jiang Li
Deep learning models are data driven. For example, the convolutional neural network (CNN) models most widely used for image classification or object detection require large labeled databases for training to achieve competitive performance. This requirement is not difficult to satisfy in the visible domain, since many labeled video and image databases are available nowadays. However, given the lower popularity of infrared (IR) cameras, the availability of labeled infrared video or image databases is limited. Therefore, training deep learning models in the infrared domain remains challenging. In this chapter, we applied the pix2pix generative adversarial network (Pix2Pix GAN) and cycle-consistent GAN (Cycle GAN) models to convert visible videos to infrared videos. The Pix2Pix GAN model requires visible-infrared image pairs for training, while the Cycle GAN relaxes this constraint and requires only unpaired images from both domains. We applied the two models to an open-source database of visible and infrared videos provided by the signal multimedia and telecommunications laboratory at the Federal University of Rio de Janeiro. We evaluated conversion results with performance metrics including the Inception Score (IS), Fréchet Inception Distance (FID), and Kernel Inception Distance (KID). Our experiments suggest that the cycle-consistent GAN is more effective than the pix2pix GAN for generating IR images from optical images.
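Of the three metrics named in the abstract, the Kernel Inception Distance (KID) is the easiest to reproduce in isolation: it is an unbiased squared-MMD estimate between real and generated feature sets under a cubic polynomial kernel. Below is a minimal NumPy sketch on synthetic feature vectors; in practice the features would come from an Inception network, and all names here are illustrative rather than the authors' implementation:

```python
import numpy as np

def polynomial_kernel(x, y, degree=3):
    """Cubic polynomial kernel k(x, y) = (x.y / d + 1)^3 used by KID."""
    d = x.shape[1]
    return (x @ y.T / d + 1.0) ** degree

def kid(feats_real, feats_fake):
    """Unbiased MMD^2 estimate between two feature sets (KID)."""
    k_rr = polynomial_kernel(feats_real, feats_real)
    k_ff = polynomial_kernel(feats_fake, feats_fake)
    k_rf = polynomial_kernel(feats_real, feats_fake)
    m, n = len(feats_real), len(feats_fake)
    # Exclude diagonal self-similarities for the unbiased estimator.
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
    return term_rr + term_ff - 2.0 * k_rf.mean()

# Toy check: features from the same distribution score near zero,
# a shifted distribution scores clearly higher.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 16))
close = rng.normal(0.0, 1.0, size=(200, 16))
far = rng.normal(1.0, 1.0, size=(200, 16))
print(kid(real, close), kid(real, far))
```

Lower KID means the generated features are statistically closer to the real ones, which is how the chapter ranks the two GAN variants.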
Citations: 5
Style-Based Unsupervised Learning for Real-World Face Image Super-Resolution
Pub Date : 2020-11-04 DOI: 10.5772/intechopen.92320
A. C. Sidiya, Xin Li
Face image synthesis has advanced rapidly in recent years. However, similar success has not been witnessed in related areas such as face single-image super-resolution (SISR). The performance of SISR on real-world low-quality face images remains unsatisfactory. In this paper, we demonstrate how to advance the state of the art in face SISR by leveraging a style-based generator in unsupervised settings. For real-world low-resolution (LR) face images, we propose a novel unsupervised learning approach that combines a style-based generator with a relativistic discriminator. With a carefully designed training strategy, we demonstrate that our approach converges faster and suppresses artifacts better than Bulat's approach. When trained on an ensemble of high-quality datasets (CelebA, AFLW, LS3D-W, and VGGFace2), we report significant visual quality improvements over other competing methods, especially for real-world low-quality face images such as those in Widerface. Additionally, we have verified that both of our unsupervised approaches are capable of improving the matching performance of widely used face recognition systems such as OpenFace.
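The relativistic discriminator mentioned in the abstract scores how much more realistic a real sample looks than the average generated one, rather than classifying each sample in isolation. A minimal NumPy sketch of the relativistic average discriminator loss follows; the paper's exact formulation may differ, and all names are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_avg_d_loss(logits_real, logits_fake, eps=1e-8):
    """Relativistic average discriminator loss (RaGAN-style):
    real samples should score above the mean fake score, and
    fake samples below the mean real score."""
    d_real = sigmoid(logits_real - logits_fake.mean())
    d_fake = sigmoid(logits_fake - logits_real.mean())
    return -(np.log(d_real + eps).mean()
             + np.log(1.0 - d_fake + eps).mean())

# A critic that separates real from fake well incurs a low loss;
# an uninformative critic (all logits equal) incurs a high one.
good = relativistic_avg_d_loss(np.array([3.0, 4.0]), np.array([-3.0, -4.0]))
bad = relativistic_avg_d_loss(np.array([0.0, 0.0]), np.array([0.0, 0.0]))
print(good, bad)
```

Making the real/fake decision relative to the batch average is what gives the generator a useful gradient even when the discriminator is already confident, which is one reason relativistic losses are popular in GAN-based super-resolution.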
Citations: 3
Resolution Enhancement of Hyperspectral Data Exploiting Real Multi-Platform Data
Pub Date : 2020-11-04 DOI: 10.5772/intechopen.92795
R. Restaino, G. Vivone, P. Addesso, Daniele Picone, J. Chanussot
Multi-platform data introduce new possibilities in the context of data fusion, as they make it possible to exploit several remotely sensed images acquired by different combinations of sensors. This scenario is particularly interesting for the sharpening of hyperspectral (HS) images, due to the limited availability of high-resolution (HR) sensors mounted on the same platform as the HS device. However, differences in acquisition geometry and the non-simultaneity of such observations introduce further difficulties whose effects have to be taken into account in the design of data fusion algorithms. In this study, we present the most widespread HS image sharpening techniques and assess their performance by testing them on real acquisitions taken by the Earth Observing-1 (EO-1) and WorldView-3 (WV3) satellites. We also highlight the difficulties arising from the use of multi-platform data and, at the same time, the benefits achievable through this approach.
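Among the classical component-substitution sharpening techniques this chapter surveys, a Brovey-style ratio transform is one of the simplest: each upsampled band is rescaled by the ratio of the high-resolution image to the bands' mean intensity. A hedged NumPy sketch follows; this is a generic illustration of the idea, not necessarily one of the chapter's benchmarked methods:

```python
import numpy as np

def brovey_sharpen(hs_up, pan, eps=1e-8):
    """Brovey-style component substitution: inject high-resolution
    spatial detail by scaling every band of the upsampled
    hyperspectral cube with the ratio pan / intensity."""
    intensity = hs_up.mean(axis=-1, keepdims=True)  # per-pixel band mean
    return hs_up * (pan[..., None] / (intensity + eps))

# Toy example: a 4x4 scene with 6 bands and a high-resolution
# panchromatic image that carries the spatial detail.
rng = np.random.default_rng(1)
hs_up = rng.uniform(0.2, 0.8, size=(4, 4, 6))            # upsampled HS cube
pan = hs_up.mean(axis=-1) + rng.normal(0, 0.05, (4, 4))  # HR detail proxy
sharp = brovey_sharpen(hs_up, pan)
# By construction, the sharpened cube's mean intensity tracks pan.
print(np.allclose(sharp.mean(axis=-1), pan))
```

The ratio formulation preserves each pixel's spectral band ratios while replacing its overall intensity, which is exactly the trade-off (spatial detail vs. spectral distortion) that the chapter's evaluation of fusion methods measures.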
Citations: 3