Vegetation extraction from Landsat8 operational land imager remote sensing imagery based on Attention U-Net and vegetation spectral features

IF 1.4 | CAS Zone 4 (Earth Science) | JCR Q4 (Environmental Sciences) | Journal of Applied Remote Sensing | Pub Date: 2024-05-01 | DOI: 10.1117/1.jrs.18.032403
Jingfeng Zhang, Bin Zhou, Jin Lu, Ben Wang, Zhipeng Ding, Songyue He
Citations: 0

Abstract

The rapid, accurate, and intelligent extraction of vegetation areas is of great significance for conducting research on forest resource inventory, climate change, and the greenhouse effect. Currently, existing semantic segmentation models suffer from limitations such as insufficient extraction accuracy (ACC) and unbalanced positive and negative categories in datasets. Therefore, we propose the Attention U-Net model for vegetation extraction from Landsat8 operational land imager remote sensing images. By combining the convolutional block attention module, Visual Geometry Group 16 backbone network, and Dice loss, the model alleviates the phenomenon of omission and misclassification of the fragmented vegetation areas and the imbalance of positive and negative classes. In addition, to test the influence of remote sensing images with different band combinations on the ACC of vegetation extraction, we introduce near-infrared (NIR) and short-wave infrared (SWIR) spectral information to conduct band combination operations, thus forming three datasets, namely, the 432 dataset (R, G, B), 543 dataset (NIR, R, G), and 654 dataset (SWIR, NIR, R). In addition, to validate the effectiveness of the proposed model, it was compared with three classic semantic segmentation models, namely, PSP-Net, DeepLabv3+, and U-Net. Experimental results demonstrate that all models exhibit improved extraction performance on false color datasets compared with the true color dataset, particularly on the 654 dataset where vegetation extraction performance is optimal. Moreover, the proposed Attention U-Net achieves the highest overall ACC with mean intersection over union, mean pixel ACC, and ACC reaching 0.877, 0.940, and 0.946, respectively, providing substantial evidence for the effectiveness of the proposed model. Furthermore, the model demonstrates good generalizability and transferability when tested in other regions, indicating its potential for further application and promotion.
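The 432/543/654 band-combination step described in the abstract amounts to selecting three sensor bands per composite. A minimal sketch in NumPy, assuming a scene already stacked as an (H, W, 7) array whose channels follow Landsat 8 OLI band order 1–7 (band 2 = blue, 3 = green, 4 = red, 5 = NIR, 6 = SWIR-1); the array layout and function name are illustrative, not the authors' code:

```python
import numpy as np

# 0-based channel indices within a (H, W, 7) stack of Landsat 8 OLI bands 1-7.
BANDS = {"blue": 1, "green": 2, "red": 3, "nir": 4, "swir1": 5}

def make_composite(scene: np.ndarray, combo: str) -> np.ndarray:
    """Build a three-channel composite from a (H, W, 7) Landsat 8 scene.

    combo is "432" (true colour: R, G, B), "543" (NIR, R, G), or
    "654" (SWIR, NIR, R), matching the three datasets in the abstract.
    """
    lookup = {
        "432": ("red", "green", "blue"),
        "543": ("nir", "red", "green"),
        "654": ("swir1", "nir", "red"),
    }
    idx = [BANDS[name] for name in lookup[combo]]
    return scene[..., idx]  # advanced indexing copies the selected channels

# Example: reduce a random 7-band scene to the 654 false-colour composite.
scene = np.random.rand(64, 64, 7).astype(np.float32)
composite = make_composite(scene, "654")
print(composite.shape)  # (64, 64, 3)
```

In practice the bands would be read from per-band GeoTIFFs and radiometrically scaled before stacking; the indexing logic stays the same.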
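The Dice loss used to counter class imbalance, and the reported metrics (mean intersection over union, mean pixel ACC, overall ACC), follow standard definitions. A minimal NumPy sketch for a binary vegetation mask — an illustration of those definitions, not the authors' implementation:

```python
import numpy as np

def dice_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice loss for a binary mask; pred holds probabilities in [0, 1].

    Because it is a ratio of overlap to total mass, it is far less dominated
    by the majority (background) class than pixel-wise cross-entropy.
    """
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def binary_metrics(pred_mask: np.ndarray, target: np.ndarray):
    """Overall accuracy, mean pixel accuracy, and mean IoU over the two classes."""
    accs, ious = [], []
    for cls in (0, 1):
        p, t = pred_mask == cls, target == cls
        accs.append((p & t).sum() / max(t.sum(), 1))        # per-class recall
        ious.append((p & t).sum() / max((p | t).sum(), 1))  # per-class IoU
    acc = float((pred_mask == target).mean())
    return acc, float(np.mean(accs)), float(np.mean(ious))
```

A perfect prediction gives a Dice loss near zero and all three metrics equal to 1.0; in training, the same Dice expression would be computed on the network's soft outputs so it stays differentiable.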
Source journal: Journal of Applied Remote Sensing (Environmental Sciences; Imaging Science & Photographic Technology)
CiteScore: 3.40
Self-citation rate: 11.80%
Articles per year: 194
Review time: 3 months
Journal description: The Journal of Applied Remote Sensing is a peer-reviewed journal that optimizes the communication of concepts, information, and progress among the remote sensing community.