Restoration of haze-free images using generative adversarial network

Weichao Yi, Ming Liu, Liquan Dong, Yuejin Zhao, Xiaohua Liu, Mei Hui
{"title":"Restoration of haze-free images using generative adversarial network","authors":"Weichao Yi, Ming Liu, Liquan Dong, Yuejin Zhao, Xiaohua Liu, Mei Hui","doi":"10.1117/12.2541893","DOIUrl":null,"url":null,"abstract":"Haze is the result of the interaction between specific climate and human activities. When observing objects in hazy conditions, optical system will produce degradation problems such as color attenuation, image detail loss and contrast reduction. Image haze removal is a challenging and ill-conditioned problem because of the ambiguities of unknown radiance and medium transmission. In order to get clean images, traditional machine vision methods usually use various constraints/prior conditions to obtain a reasonable haze removal solutions, the key to achieve haze removal is to estimate the medium transmission of the input hazy image in earlier studies. In this paper, however, we concentrated on recovering a clear image from a hazy input directly by using Generative Adversarial Network (GAN) without estimating the transmission matrix and atmospheric scattering model parameters, we present an end-to-end model that consists of an encoder and a decoder, the encoder is extracting the features of the hazy images, and represents these features in high dimensional space, while the decoder is employed to recover the corresponding images from high-level coding features. And based perceptual losses optimization could get high quality of textural information of haze recovery and reproduce more natural haze-removal images. Experimental results on hazy image datasets input shows better subjective visual quality than traditional methods. Furthermore, we test the haze removal images on a specialized object detection network- YOLO, the detection result shows that our method can improve the object detection performance on haze removal images, indicated that we can get clean haze-free images from hazy input through our GAN model.","PeriodicalId":384253,"journal":{"name":"International Symposium on Multispectral Image Processing and Pattern Recognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Symposium on Multispectral Image Processing and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2541893","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Haze results from the interaction between specific climatic conditions and human activities. When objects are observed under hazy conditions, the optical system suffers degradation such as color attenuation, loss of image detail, and reduced contrast. Image dehazing is a challenging, ill-posed problem because both the scene radiance and the medium transmission are unknown. To obtain clean images, traditional machine-vision methods typically impose various constraints or priors to reach a reasonable dehazing solution; in earlier studies, the key step is estimating the medium transmission of the input hazy image. In this paper, by contrast, we recover a clear image directly from a hazy input using a Generative Adversarial Network (GAN), without estimating the transmission matrix or the parameters of the atmospheric scattering model. We present an end-to-end model consisting of an encoder and a decoder: the encoder extracts features from the hazy image and represents them in a high-dimensional space, while the decoder recovers the corresponding image from these high-level coding features. Optimizing a perceptual loss preserves high-quality textural information and reproduces more natural haze-free images. Experimental results on hazy image datasets show better subjective visual quality than traditional methods. Furthermore, we evaluate the dehazed images with a dedicated object detection network, YOLO; the detection results show that our method improves object detection performance on the dehazed images, indicating that our GAN model produces clean haze-free images from hazy inputs.
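For reference, the traditional methods mentioned above estimate the medium transmission t(x) in the standard atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed hazy image, J the scene radiance, and A the atmospheric light; the approach described here bypasses this estimation entirely. The sketch below is a minimal illustration of the kind of encoder-decoder generator and perceptual loss the abstract describes. The paper does not give its exact architecture here, so the class names (EncoderDecoderGenerator, PerceptualLoss), layer widths, the choice of VGG-16 features, and the MSE feature criterion are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch, assuming PyTorch/torchvision and illustrative layer sizes.
# Not the authors' exact network; it only shows the encoder-decoder + perceptual-loss idea.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class EncoderDecoderGenerator(nn.Module):
    """Encoder maps a hazy image to high-dimensional features;
    decoder recovers the haze-free image from those features."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, stride=1, padding=1), nn.Sigmoid(),
        )

    def forward(self, hazy):
        return self.decoder(self.encoder(hazy))


class PerceptualLoss(nn.Module):
    """Compares VGG feature maps of the restored and clear images so that
    textures, not just per-pixel values, are matched."""
    def __init__(self):
        super().__init__()
        features = vgg16(weights="DEFAULT").features[:16].eval()
        for p in features.parameters():
            p.requires_grad = False
        self.features = features
        self.criterion = nn.MSELoss()

    def forward(self, restored, clear):
        return self.criterion(self.features(restored), self.features(clear))


if __name__ == "__main__":
    generator = EncoderDecoderGenerator()
    perceptual = PerceptualLoss()
    hazy = torch.rand(1, 3, 256, 256)    # stand-in for a hazy input image
    clear = torch.rand(1, 3, 256, 256)   # stand-in for the ground-truth clear image
    restored = generator(hazy)
    loss = perceptual(restored, clear)
    print(restored.shape, loss.item())
```

In the full GAN described in the abstract, this perceptual term would be combined with an adversarial loss from a discriminator that judges whether the restored image looks like a real haze-free photograph; the discriminator is omitted here to keep the sketch short.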