Object-Level Remote Sensing Image Augmentation Using U-Net-Based Generative Adversarial Networks

Jian Huang, Shan Liu, Yutian Tang, Xiushan Zhang
{"title":"Object-Level Remote Sensing Image Augmentation Using U-Net-Based Generative Adversarial Networks","authors":"Jian Huang, Shan Liu, Yutian Tang, Xiushan Zhang","doi":"10.1155/2021/1230279","DOIUrl":null,"url":null,"abstract":"With the continuous development of deep learning in computer vision, semantic segmentation technology is constantly employed for processing remote sensing images. For instance, it is a key technology to automatically mark important objects such as ships or port land from port area remote sensing images. However, the existing supervised semantic segmentation model based on deep learning requires a large number of training samples. Otherwise, it will not be able to correctly learn the characteristics of the target objects, which results in the poor performance or even failure of semantic segmentation task. Since the target objects such as ships may move from time to time, it is nontrivial to collect enough samples to achieve satisfactory segmentation performance. And this severely hinders the performance improvement of most of existing augmentation methods. To tackle this problem, in this paper, we propose an object-level remote sensing image augmentation approach based on leveraging the U-Net-based generative adversarial networks. Specifically, our proposed approach consists two components including the semantic tag image generator and the U-Net GAN-based translator. To evaluate the effectiveness of the proposed approach, comprehensive experiments are conducted on a public dataset HRSC2016. State-of-the-art generative models, DCGAN, WGAN, and CycleGAN, are selected as baselines. According to the experimental results, our proposed approach significantly outperforms the baselines in terms of not only drawing the outlines of target objects but also capturing their meaningful details.","PeriodicalId":23995,"journal":{"name":"Wirel. Commun. Mob. Comput.","volume":"21 1","pages":"1230279:1-1230279:12"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Wirel. Commun. Mob. Comput.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1155/2021/1230279","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

With the continuous development of deep learning in computer vision, semantic segmentation is increasingly employed for processing remote sensing images. For instance, it is a key technology for automatically marking important objects, such as ships or port land, in remote sensing images of port areas. However, existing supervised semantic segmentation models based on deep learning require large numbers of training samples; otherwise, they cannot correctly learn the characteristics of the target objects, which results in poor performance or even failure of the semantic segmentation task. Since target objects such as ships may move from time to time, collecting enough samples to achieve satisfactory segmentation performance is nontrivial, and this scarcity severely limits the gains of most existing augmentation methods. To tackle this problem, we propose an object-level remote sensing image augmentation approach that leverages U-Net-based generative adversarial networks. Specifically, the proposed approach consists of two components: a semantic tag image generator and a U-Net GAN-based translator. To evaluate its effectiveness, comprehensive experiments are conducted on the public HRSC2016 dataset, with the state-of-the-art generative models DCGAN, WGAN, and CycleGAN selected as baselines. According to the experimental results, the proposed approach significantly outperforms the baselines in not only drawing the outlines of target objects but also capturing their meaningful details.
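The abstract describes a two-stage pipeline: a generator that synthesizes semantic tag (label) images, and a U-Net-based GAN whose generator translates those tag images into realistic remote sensing images. The paper's own code is not reproduced here; the following is a minimal PyTorch sketch of the second stage only. The class name `UNetTranslator`, the channel counts, the network depth, and the two-class (ship/background) tag encoding are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (assumptions noted above): a U-Net-style generator that maps
# a semantic tag image to an RGB image, as in the translator stage described
# in the abstract. Not the authors' implementation.
import torch
import torch.nn as nn

class UNetTranslator(nn.Module):
    def __init__(self, tag_channels=2, out_channels=3, base=64):
        super().__init__()
        # Encoder: downsample the tag image while widening the feature maps.
        self.enc1 = self._block(tag_channels, base)
        self.enc2 = self._block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = self._block(base * 2, base * 4)
        # Decoder: upsample and concatenate the encoder's skip connections
        # (the "U" shape that preserves object outlines and fine details).
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = self._block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = self._block(base * 2, base)
        self.head = nn.Conv2d(base, out_channels, kernel_size=1)

    @staticmethod
    def _block(cin, cout):
        return nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        )

    def forward(self, tags):
        e1 = self.enc1(tags)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.tanh(self.head(d1))  # image values scaled to [-1, 1]

# Usage: translate a batch of 2-channel tag maps (ship / background) into
# fake 3-channel images. A real tag map would be one-hot rather than random.
tags = torch.randn(1, 2, 128, 128)
fake = UNetTranslator()(tags)
print(fake.shape)  # torch.Size([1, 3, 128, 128])
```

In adversarial training, a generator of this shape would typically be paired with a convolutional discriminator that scores (tag, image) pairs as real or fake; the discriminator design and losses are not specified in the abstract, so they are omitted from this sketch.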