CropSight: Towards a large-scale operational framework for object-based crop type ground truth retrieval using street view and PlanetScope satellite imagery

IF 10.6 | CAS Tier 1 (Earth Science) | JCR Q1 (Geography, Physical) | ISPRS Journal of Photogrammetry and Remote Sensing | Pub Date: 2024-08-01 | DOI: 10.1016/j.isprsjprs.2024.07.025
Citations: 0

Abstract

Crop type maps are essential in informing agricultural policy decisions by providing crucial data on the specific crops cultivated in given regions. Generating crop type maps usually involves collecting ground truth data for various crop species, which can be challenging at large scales. As an alternative to conventional field observations, street view images offer a valuable and extensive resource for gathering large-scale crop type ground truth by imaging the crops cultivated in roadside agricultural fields. Yet our ability to systematically retrieve crop type labels at large scales from street view images in an operational fashion is still limited: retrieval is usually performed at the pixel level, and uncertainty is seldom considered. In this study, we develop CropSight, a novel deep learning-based modeling framework that retrieves object-based crop type ground truth by synthesizing Google Street View (GSV) and PlanetScope satellite images. CropSight comprises three key components: (1) a large-scale operational cropland field-view imagery collection method, devised to systematically acquire representative geotagged field-view images of various crop types across regions; (2) UncertainFusionNet, a novel Bayesian convolutional neural network, developed to retrieve high-quality crop type labels from the collected field-view images with quantified uncertainty; and (3) a fine-tuned Segment Anything Model (SAM), employed to delineate the cropland boundary of each collected field-view image on PlanetScope satellite imagery, using the image's coordinate as the point prompt. With four agriculture-dominated regions in the US as study areas, CropSight consistently shows high accuracy both in retrieving crop type labels of multiple dominant crop species (overall accuracy around 97%) and in delineating the corresponding cropland boundaries (F1 score around 92%). UncertainFusionNet outperforms benchmark models (ResNet-50 and Vision Transformer) for crop type image classification, improving overall accuracy by 2–8%. The fine-tuned SAM surpasses Mask R-CNN and the base SAM in cropland boundary delineation, achieving a 4–12% increase in F1 score. A further comparison with a benchmark crop type product, the Cropland Data Layer (CDL), indicates that CropSight is a promising alternative to crop type mapping products for providing high-quality, object-based crop type ground truth of diverse crop species at large scales. CropSight holds considerable promise to extrapolate over space and time, operationalizing large-scale object-based crop type ground truth retrieval in a near-real-time manner.
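The abstract does not detail how the field-view imagery collection is implemented, but the Street View Static API is the standard programmatic route to geotagged roadside imagery. Below is a minimal sketch of fetching one field-view image; the API key placeholder, the sampled roadside coordinates, and the choice of heading (pointing the virtual camera perpendicular to the road, toward the adjacent field) are illustrative assumptions, not the paper's actual procedure.

```python
# Minimal sketch: geotagged field-view image collection via the
# Google Street View Static API. Roadside point sampling and the
# field-facing heading are assumptions for illustration only.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
META_URL = "https://maps.googleapis.com/maps/api/streetview/metadata"
IMG_URL = "https://maps.googleapis.com/maps/api/streetview"

def fetch_field_view(lat, lon, heading, out_path):
    """Download one GSV image at (lat, lon) looking toward `heading` degrees."""
    params = {"location": f"{lat},{lon}", "key": API_KEY}
    # Check that imagery exists at this location before requesting it.
    meta = requests.get(META_URL, params=params, timeout=30).json()
    if meta.get("status") != "OK":
        return None
    params.update({"size": "640x640", "heading": heading, "pitch": 0, "fov": 90})
    resp = requests.get(IMG_URL, params=params, timeout=30)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)
    # The panorama's actual coordinates serve as the geotag, which can
    # later be reused as the point prompt on the satellite image.
    return meta["location"]["lat"], meta["location"]["lng"]

# e.g., camera pointed perpendicular to the road, toward the field:
# fetch_field_view(41.123, -93.456, heading=90, out_path="field.jpg")
```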
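UncertainFusionNet's architecture is not specified in the abstract beyond being a Bayesian CNN that quantifies uncertainty. One common way to approximate Bayesian inference in a CNN classifier is Monte Carlo dropout; the sketch below illustrates that general idea only. The ResNet-50 backbone, dropout rate, and entropy-based uncertainty score are illustrative assumptions, not the paper's design.

```python
# Minimal Monte Carlo dropout sketch for crop type classification with
# a quantified per-image uncertainty score.
import torch
import torch.nn as nn
import torchvision.models as models

class MCDropoutClassifier(nn.Module):
    def __init__(self, n_classes, p=0.5):
        super().__init__()
        backbone = models.resnet50(weights=None)  # pretrained weights would normally be used
        backbone.fc = nn.Identity()               # keep the 2048-d feature vector
        self.backbone = backbone
        self.head = nn.Sequential(nn.Dropout(p), nn.Linear(2048, n_classes))

    def forward(self, x):
        return self.head(self.backbone(x))

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=30):
    model.eval()
    # Re-enable dropout layers so each forward pass is stochastic.
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                                 # (n_samples, batch, classes)
    mean = probs.mean(0)              # approximate predictive distribution
    # Predictive entropy as a simple per-image uncertainty score.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)
    return mean.argmax(-1), entropy

# model = MCDropoutClassifier(n_classes=5)
# labels, uncertainty = predict_with_uncertainty(model, images)
```

High-entropy predictions can then be flagged or discarded, which is one plausible way quantified uncertainty supports "high-quality" label retrieval.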
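For the third component, the segment-anything package exposes exactly the prompt interface the abstract describes: a single point prompt placed at the image's geotag. A minimal sketch follows, omitting the paper's fine-tuning and PlanetScope preprocessing; the checkpoint filename and chip coordinates are assumptions.

```python
# Minimal sketch: point-prompted cropland boundary delineation with the
# Segment Anything Model (segment-anything package).
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

def delineate_field(satellite_rgb, px, py):
    """satellite_rgb: HxWx3 uint8 chip; (px, py): geotag in pixel coordinates."""
    predictor.set_image(satellite_rgb)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[px, py]]),  # one foreground point prompt
        point_labels=np.array([1]),         # 1 = foreground
        multimask_output=True,
    )
    return masks[np.argmax(scores)]         # keep the highest-scoring mask

# chip = ...  # PlanetScope RGB chip centered on the GSV geotag
# field_mask = delineate_field(chip, px=128, py=128)
```

Keeping the highest-scoring of SAM's candidate masks is a common default when a single object (here, one field) is expected around the prompt.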

Source journal: ISPRS Journal of Photogrammetry and Remote Sensing (Engineering & Technology; Imaging Science & Photographic Technology)
CiteScore: 21.00
Self-citation rate: 6.30%
Annual publications: 273
Review time: 40 days
Journal introduction: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who are involved in various disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also acting as a comprehensive source of reference and archive. P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields. In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.
Latest articles from this journal:
- Integrating synthetic datasets with CLIP semantic insights for single image localization advancements
- Selective weighted least square and piecewise bilinear transformation for accurate satellite DSM generation
- Word2Scene: Efficient remote sensing image scene generation with only one word via hybrid intelligence and low-rank representation
- A_OPTRAM-ET: An automatic optical trapezoid model for evapotranspiration estimation and its global-scale assessments
- Atmospheric correction of geostationary ocean color imager data over turbid coastal waters under high solar zenith angles