A lightweight CNN-Transformer network for pixel-based crop mapping using time-series Sentinel-2 imagery

IF 7.7 | CAS Zone 1 (Agricultural Sciences) | JCR Q1 (AGRICULTURE, MULTIDISCIPLINARY) | Computers and Electronics in Agriculture | Pub Date: 2024-08-28 | DOI: 10.1016/j.compag.2024.109370
{"title":"利用时间序列 Sentinel-2 图像绘制基于像素的作物地图的轻量级 CNN-Transformer 网络","authors":"","doi":"10.1016/j.compag.2024.109370","DOIUrl":null,"url":null,"abstract":"<div><p>Deep learning approaches have provided state-of-the-art performance in crop mapping. Recently, several studies have combined the strengths of two dominant deep learning architectures, Convolutional Neural Networks (CNNs) and Transformers, to classify crops using remote sensing images. Despite their success, many of these models utilize patch-based methods that require extensive data labeling, as each sample contains multiple pixels with corresponding labels. This leads to higher costs in data preparation and processing. Moreover, previous methods rarely considered the impact of missing values caused by clouds and no-observations in remote sensing data. Therefore, this study proposes a lightweight multi-stage CNN-Transformer network (MCTNet) for pixel-based crop mapping using time-series Sentinel-2 imagery. MCTNet consists of several successive modules, each containing a CNN sub-module and a Transformer sub-module to extract important features from the images, respectively. An attention-based learnable positional encoding (ALPE) module is designed in the Transformer sub-module to capture the complex temporal relations in the time-series data with different missing rates. Arkansas and California in the U.S. are selected to evaluate the model. Experimental results show that the MCTNet has a lightweight advantage with the fewest parameters and memory usage while achieving the superior performance compared to eight advanced models. Specifically, MCTNet obtained an overall accuracy (OA) of 0.968, a kappa coefficient (Kappa) of 0.951, and a macro-averaged F1 score (F1) of 0.933 in Arkansas, and an OA of 0.852, a Kappa of 0.806, and an F1 score of 0.829 in California. The results highlight the importance of each component of the model, particularly the ALPE module, which enhanced the Kappa of MCTNet by 4.2% in Arkansas and improved the model’s robustness to missing values in remote sensing data. Additionally, visualization results demonstrated that the features extracted from CNN and Transformer sub-modules are complementary, explaining the effectiveness of the MCTNet.</p></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":null,"pages":null},"PeriodicalIF":7.7000,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A lightweight CNN-Transformer network for pixel-based crop mapping using time-series Sentinel-2 imagery\",\"authors\":\"\",\"doi\":\"10.1016/j.compag.2024.109370\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Deep learning approaches have provided state-of-the-art performance in crop mapping. Recently, several studies have combined the strengths of two dominant deep learning architectures, Convolutional Neural Networks (CNNs) and Transformers, to classify crops using remote sensing images. Despite their success, many of these models utilize patch-based methods that require extensive data labeling, as each sample contains multiple pixels with corresponding labels. This leads to higher costs in data preparation and processing. Moreover, previous methods rarely considered the impact of missing values caused by clouds and no-observations in remote sensing data. 
Therefore, this study proposes a lightweight multi-stage CNN-Transformer network (MCTNet) for pixel-based crop mapping using time-series Sentinel-2 imagery. MCTNet consists of several successive modules, each containing a CNN sub-module and a Transformer sub-module to extract important features from the images, respectively. An attention-based learnable positional encoding (ALPE) module is designed in the Transformer sub-module to capture the complex temporal relations in the time-series data with different missing rates. Arkansas and California in the U.S. are selected to evaluate the model. Experimental results show that the MCTNet has a lightweight advantage with the fewest parameters and memory usage while achieving the superior performance compared to eight advanced models. Specifically, MCTNet obtained an overall accuracy (OA) of 0.968, a kappa coefficient (Kappa) of 0.951, and a macro-averaged F1 score (F1) of 0.933 in Arkansas, and an OA of 0.852, a Kappa of 0.806, and an F1 score of 0.829 in California. The results highlight the importance of each component of the model, particularly the ALPE module, which enhanced the Kappa of MCTNet by 4.2% in Arkansas and improved the model’s robustness to missing values in remote sensing data. Additionally, visualization results demonstrated that the features extracted from CNN and Transformer sub-modules are complementary, explaining the effectiveness of the MCTNet.</p></div>\",\"PeriodicalId\":50627,\"journal\":{\"name\":\"Computers and Electronics in Agriculture\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.7000,\"publicationDate\":\"2024-08-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers and Electronics in Agriculture\",\"FirstCategoryId\":\"97\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0168169924007610\",\"RegionNum\":1,\"RegionCategory\":\"农林科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AGRICULTURE, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers and Electronics in Agriculture","FirstCategoryId":"97","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0168169924007610","RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURE, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Deep learning approaches have provided state-of-the-art performance in crop mapping. Recently, several studies have combined the strengths of two dominant deep learning architectures, Convolutional Neural Networks (CNNs) and Transformers, to classify crops using remote sensing images. Despite their success, many of these models use patch-based methods that require extensive data labeling, as each sample contains multiple pixels with corresponding labels. This leads to higher costs in data preparation and processing. Moreover, previous methods rarely considered the impact of missing values caused by clouds and missing observations in remote sensing data. Therefore, this study proposes a lightweight multi-stage CNN-Transformer network (MCTNet) for pixel-based crop mapping using time-series Sentinel-2 imagery. MCTNet consists of several successive modules, each containing a CNN sub-module and a Transformer sub-module that extract important features from the imagery. An attention-based learnable positional encoding (ALPE) module is designed in the Transformer sub-module to capture the complex temporal relations in time-series data with different missing rates. Arkansas and California in the U.S. are selected to evaluate the model. Experimental results show that MCTNet has a lightweight advantage, with the fewest parameters and the lowest memory usage, while achieving superior performance compared to eight advanced models. Specifically, MCTNet obtained an overall accuracy (OA) of 0.968, a kappa coefficient (Kappa) of 0.951, and a macro-averaged F1 score (F1) of 0.933 in Arkansas, and an OA of 0.852, a Kappa of 0.806, and an F1 of 0.829 in California. The results highlight the importance of each component of the model, particularly the ALPE module, which enhanced the Kappa of MCTNet by 4.2% in Arkansas and improved the model's robustness to missing values in remote sensing data. Additionally, visualization results demonstrated that the features extracted by the CNN and Transformer sub-modules are complementary, explaining the effectiveness of MCTNet.
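The abstract describes the architecture only at a high level. The following Python (PyTorch) sketch is a minimal illustration of the idea: each stage pairs a 1-D CNN sub-module with a Transformer sub-module, and a learnable positional table is injected with data-driven attention weights, in the spirit of the described ALPE module. All names (ALPE, MCTStage, MCTNetSketch), layer sizes, the number of stages, and the residual fusion are assumptions made for this sketch, not the authors' implementation.

# Minimal, illustrative sketch of a CNN-Transformer stage with an attention-based
# learnable positional encoding. Module names, layer sizes, and the fusion strategy
# are assumptions for illustration; they are not taken from the published code.
import torch
import torch.nn as nn


class ALPE(nn.Module):
    """Attention-based learnable positional encoding (sketch): a learnable
    positional table is re-weighted by scores computed from the input features,
    so time steps dominated by missing values can be down-weighted."""

    def __init__(self, seq_len: int, dim: int):
        super().__init__()
        self.pos_table = nn.Parameter(torch.zeros(1, seq_len, dim))  # learnable positions
        self.score = nn.Linear(dim, 1)                               # per-step attention score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim) per-pixel time-series features
        gate = torch.sigmoid(self.score(x))          # (batch, seq_len, 1)
        return x + gate * self.pos_table             # positions injected with data-driven weights


class MCTStage(nn.Module):
    """One stage: a 1-D CNN sub-module plus a Transformer sub-module."""

    def __init__(self, dim: int = 64, seq_len: int = 30, heads: int = 4):
        super().__init__()
        self.cnn = nn.Sequential(                    # local temporal patterns
            nn.Conv1d(dim, dim, kernel_size=3, padding=1),
            nn.BatchNorm1d(dim),
            nn.ReLU(),
        )
        self.alpe = ALPE(seq_len, dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=2 * dim, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        local = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # Conv1d expects (batch, dim, seq_len)
        global_ = self.transformer(self.alpe(local))         # long-range temporal relations
        return x + global_                                   # residual fusion of the two sub-modules


class MCTNetSketch(nn.Module):
    """Pixel-based classifier: embed spectral bands, stack stages, pool, classify."""

    def __init__(self, n_bands: int = 10, seq_len: int = 30, dim: int = 64,
                 n_stages: int = 3, n_classes: int = 8):
        super().__init__()
        self.embed = nn.Linear(n_bands, dim)
        self.stages = nn.Sequential(*[MCTStage(dim, seq_len) for _ in range(n_stages)])
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_bands), one Sentinel-2 time series per pixel
        feats = self.stages(self.embed(x))
        return self.head(feats.mean(dim=1))                  # temporal average pooling


if __name__ == "__main__":
    model = MCTNetSketch()
    logits = model(torch.randn(4, 30, 10))                   # 4 pixels, 30 dates, 10 bands
    print(logits.shape)                                      # torch.Size([4, 8])

Because the input is a single pixel's Sentinel-2 time series rather than an image patch, only pixel-level labels are required, which is the labeling-cost advantage the abstract emphasizes.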
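The reported overall accuracy, kappa coefficient, and macro-averaged F1 are standard pixel-level classification metrics. As a brief, hedged example of how such figures are typically computed, the following scikit-learn snippet evaluates them on synthetic labels; the numbers it prints are illustrative and unrelated to the paper's results.

# Illustrative computation of pixel-level crop-mapping metrics on made-up labels.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 8, size=10_000)                 # 10,000 pixels, 8 crop classes
y_pred = np.where(rng.random(10_000) < 0.9, y_true,      # ~90% of pixels predicted correctly
                  rng.integers(0, 8, size=10_000))

oa = accuracy_score(y_true, y_pred)                      # overall accuracy (OA)
kappa = cohen_kappa_score(y_true, y_pred)                # agreement corrected for chance
macro_f1 = f1_score(y_true, y_pred, average="macro")     # unweighted mean of per-class F1

print(f"OA={oa:.3f}  Kappa={kappa:.3f}  macro-F1={macro_f1:.3f}")

Kappa corrects raw agreement for chance agreement, kappa = (p_o - p_e) / (1 - p_e), and macro-F1 averages per-class F1 scores without weighting by class frequency, so minority crop classes count as much as dominant ones.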

Source journal
Computers and Electronics in Agriculture
Category: Engineering & Technology - Computer Science: Interdisciplinary Applications
CiteScore: 15.30
Self-citation rate: 14.50%
Articles per year: 800
Review time: 62 days
About the journal: Computers and Electronics in Agriculture provides international coverage of advancements in computer hardware, software, electronic instrumentation, and control systems applied to agricultural challenges. Encompassing agronomy, horticulture, forestry, aquaculture, and animal farming, the journal publishes original papers, reviews, and application notes. It explores the use of computers and electronics in plant or animal agricultural production, covering topics like agricultural soils, water, pests, controlled environments, and waste. The scope extends to on-farm post-harvest operations and relevant technologies, including artificial intelligence, sensors, machine vision, robotics, networking, and simulation modeling. Its companion journal, Smart Agricultural Technology, continues the focus on smart applications in production agriculture.
Latest articles in this journal:
TrackPlant3D: 3D organ growth tracking framework for organ-level dynamic phenotyping
Camouflaged cotton bollworm instance segmentation based on PVT and Mask R-CNN
Path planning of manure-robot cleaners using grid-based reinforcement learning
Immersive human-machine teleoperation framework for precision agriculture: Integrating UAV-based digital mapping and virtual reality control
Improving soil moisture prediction with deep learning and machine learning models