I-GANs for Synthetical Infrared Images Generation

Mohammad Mahdi Moradi, R. Ghaderi
{"title":"I-GANs for Synthetical Infrared Images Generation","authors":"Mohammad Mahdi Moradi, R. Ghaderi","doi":"10.1109/MVIP53647.2022.9738551","DOIUrl":null,"url":null,"abstract":"Due to the insensitivity of infrared images to changes in light intensity and weather conditions, these images are used in many surveillance systems and different fields. However, despite all the applications and benefits of these images, not enough data is available in many applications due to the high cost, time-consuming, and complicated data preparation. Two deep neural networks based on Conditional Generative Adversarial Networks are introduced to solve this problem and produce synthetical infrared images. One of these models is only for problems where the pair to pair visible and infrared images are available, and as a result, the mapping between these two domains will be learned. Given that in many of the problems we face unpaired data, another network is proposed in which the goal is to obtain a mapping from visible to infrared images so that the distribution of synthetical infrared images is indistinguishable from the real ones. Two publicly available datasets have been used to train and test the proposed models. Results properly demonstrate that the evaluation of the proposed system in regard to peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) has improved by 4.6199% and 3.9196%, respectively, compared to previous models.","PeriodicalId":184716,"journal":{"name":"2022 International Conference on Machine Vision and Image Processing (MVIP)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Machine Vision and Image Processing (MVIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MVIP53647.2022.9738551","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Because infrared images are insensitive to changes in light intensity and weather conditions, they are used in many surveillance systems and other fields. However, despite the many applications and benefits of these images, insufficient data are available in many settings because data preparation is costly, time-consuming, and complicated. To address this problem, two deep neural networks based on Conditional Generative Adversarial Networks are introduced to produce synthetic infrared images. The first model applies only to problems where paired visible and infrared images are available, so the mapping between these two domains is learned directly. Since many problems involve unpaired data, a second network is proposed whose goal is to learn a mapping from visible to infrared images such that the distribution of synthetic infrared images is indistinguishable from that of real ones. Two publicly available datasets were used to train and test the proposed models. Results demonstrate that the proposed system improves peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) by 4.6199% and 3.9196%, respectively, compared to previous models.
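The full paper is not reproduced on this page, so the following is only a minimal sketch of the two ideas the abstract describes: a paired, pix2pix-style conditional-GAN objective (adversarial term plus an L1 reconstruction term) for visible-to-infrared translation, and the PSNR metric used in the evaluation. The generator/discriminator interfaces, the lambda_l1 weight, and the input conventions are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (PyTorch), assuming a pix2pix-style paired objective:
# a generator maps a visible image to an infrared image, and a discriminator D
# scores the concatenated (visible, infrared) pair. The lambda_l1 weight and
# calling conventions are illustrative assumptions.

import math
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
lambda_l1 = 100.0  # assumed weight for the L1 reconstruction term

def generator_loss(D, visible, real_ir, fake_ir):
    # Adversarial term: D should judge the (visible, fake IR) pair as real.
    pred_fake = D(torch.cat([visible, fake_ir], dim=1))
    adv = bce(pred_fake, torch.ones_like(pred_fake))
    # L1 term keeps the synthetic IR image close to its paired ground truth.
    return adv + lambda_l1 * l1(fake_ir, real_ir)

def discriminator_loss(D, visible, real_ir, fake_ir):
    # Real pairs should score 1, generated pairs 0; the fake is detached
    # so only D is updated by this loss.
    pred_real = D(torch.cat([visible, real_ir], dim=1))
    pred_fake = D(torch.cat([visible, fake_ir.detach()], dim=1))
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real)) +
                  bce(pred_fake, torch.zeros_like(pred_fake)))

def psnr(real_ir, fake_ir, max_val=1.0):
    # Peak signal-to-noise ratio, one of the two reported metrics;
    # SSIM is typically computed with skimage.metrics.structural_similarity.
    mse = torch.mean((real_ir - fake_ir) ** 2).item()
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For the unpaired case, the abstract only states that the synthetic-IR distribution should be indistinguishable from the real one; a CycleGAN-style cycle-consistency loss between visible-to-IR-to-visible reconstructions is the usual way to realize that, though the paper's exact formulation is not given on this page.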