
ISPRS Journal of Photogrammetry and Remote Sensing: Latest Publications

From satellite-based phenological metrics to crop planting dates: Deriving field-level planting dates for corn and soybean in the U.S. Midwest
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-08-16 | DOI: 10.1016/j.isprsjprs.2024.07.031
Qu Zhou, Kaiyu Guan, Sheng Wang, James Hipple, Zhangliang Chen

Information on planting dates is crucial for modeling crop development, analyzing crop yield, and evaluating the effectiveness of policy-driven planting windows. Despite their high importance, field-level planting date datasets are scarce. Satellite remote sensing provides accurate and cost-effective solutions for detecting crop phenology at moderate to high resolutions, but remote sensing-based detection of crop planting dates remains rare. Here, we aimed to generate field-level crop planting date maps from satellite remote sensing-derived phenological metrics and proposed a two-step framework that predicts crop planting dates from these metrics using required growing degree dates (RGDD) as a bridge. Specifically, we modeled the RGDD accumulated from the planting date to the spring inflection date (derived from phenological metrics) and then predicted crop planting dates based on phenological metrics, RGDD, and environmental variables. The ∼3-day, 30-m Harmonized Landsat and Sentinel-2 (HLS) products were used to derive crop phenological metrics for corn and soybean fields in the U.S. Midwest from 2016 to 2021, and ground-truth field-level planting dates from USDA Risk Management Agency (RMA) reports were used to develop and validate the proposed two-step framework. The results indicated that our framework accurately predicts field-level planting dates from HLS-derived phenological metrics, capturing 77 % of field-level variation for corn (mean absolute error, MAE=4.6 days) and 71 % for soybean (MAE=5.4 days). We also evaluated the predicted planting dates against USDA National Agricultural Statistics Service (NASS) state-level crop progress reports, achieving strong consistency with median planting dates for corn (R²=0.90, MAE=2.7 days) and soybean (R²=0.87, MAE=2.5 days).
The model’s performance degraded slightly when predicting planting dates for fields with irrigation (MAE=5.4 days for corn, MAE=6.1 days for soybean) or cover cropping (MAE=5.4 days for corn, MAE=5.6 days for soybean). The USDA RMA Common Crop Insurance Policy (CCIP) provides county- or sub-county-level crop planting windows, which drive producers’ decisions on when to plant; within these CCIP-driven windows, higher prediction accuracies were achieved (MAE of 4.5 days for corn and 5.2 days for soybean). Our two-step framework (phenological metrics → RGDD → planting dates) also outperformed the traditional one-step model (phenological metrics → planting dates). The proposed framework can support the derivation of planting dates from current and future phenological products and contribute to planting-date-related studies such as analyses of yield gaps, management practices, and government policies.
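The RGDD bridge can be illustrated with a minimal sketch (not the authors' implementation; the capped-average growing-degree-day formula and the 10 °C base temperature below are common agronomic conventions assumed here, not values from the abstract): given a predicted RGDD and daily temperatures, walk backward from the satellite-derived spring inflection date until the accumulated GDD reaches the RGDD.

```python
def daily_gdd(tmax, tmin, t_base=10.0, t_cap=30.0):
    """Growing degree days for one day (simple capped-average method)."""
    t_avg = (min(tmax, t_cap) + min(tmin, t_cap)) / 2.0
    return max(0.0, t_avg - t_base)

def estimate_planting_doy(temps, inflection_doy, rgdd):
    """Walk backward from the spring inflection day-of-year until the
    accumulated GDD first reaches the required amount (RGDD).
    temps: dict mapping day-of-year -> (tmax, tmin) in deg C."""
    acc = 0.0
    for doy in range(inflection_doy, 0, -1):
        tmax, tmin = temps[doy]
        acc += daily_gdd(tmax, tmin)
        if acc >= rgdd:
            return doy
    return None  # RGDD never reached within the year

# Constant 10 GDD/day, inflection at day 150, RGDD of 100 -> 10 days back
temps = {d: (25.0, 15.0) for d in range(1, 200)}
planting_doy = estimate_planting_doy(temps, 150, 100.0)  # -> 141
```

This back-tracking is only the second step of the two-step framework; the first step (predicting RGDD itself from phenological metrics and environmental variables) is a separate regression model in the paper.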

Citations: 0
Unveiling spatiotemporal tree cover patterns in China: The first 30 m annual tree cover mapping from 1985 to 2023
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-08-13 | DOI: 10.1016/j.isprsjprs.2024.08.001
Yaotong Cai, Xiaocong Xu, Peng Zhu, Sheng Nie, Cheng Wang, Yujiu Xiong, Xiaoping Liu

China leads the greening of the world, with its forest area having nearly doubled since the 1980s, as revealed by the National Forest Inventory (NFI). However, the absence of consistent and reliable remote sensing data that align with the NFI remains a significant challenge, hindering a comprehensive understanding of the spatiotemporal patterns of terrestrial ecosystem change driven by afforestation and reforestation efforts in China over recent decades. Moreover, conventional binary thematic maps and land use and land cover (LULC) maps struggle to provide a thorough assessment of canopy cover at the subpixel level and of trees extending beyond officially designated forest boundaries, leaving substantial gaps in our understanding of their contributions to ecosystem services. To confront these challenges, this study presents a systematic framework integrating time-series Landsat satellite imagery and random forest-based ensemble learning techniques. The framework generates China’s first annual tree cover dataset (CATCD), spanning 1985 to 2023 at a 30 m spatial resolution. Evaluation against multisource reference data showed high correlations ranging from 0.70 to 0.96 and reasonable RMSE values ranging from 5.6 % to 25.2 %, highlighting the reliability and precision of the approach across different years and data collection methodologies. Our analysis reveals that China’s forested area has doubled, expanding from 1.04 million km² in 1985 to 2.10 million km² in 2023. Notably, 33 % of this growth can be attributed to a shift from non-forest to forest land categories, primarily observed in the Three-North and southwest regions. The majority, however, contributing 67 %, results primarily from crown closure in central and southern China. This underscores the limitations of conventional binary thematic maps and LULC maps in accurately quantifying forest gain in China.
Furthermore, China’s tree population structure has undergone a transformative shift from 83 % forest trees and 17 % non-forest trees in 1985 to 92 % forest trees and 8 % non-forest trees in 2023, signifying a transition from afforestation to established forests. Our study not only enhances the understanding of tree cover variations in China but also provides valuable data for ecological investigations, land management strategies, and assessments related to climate change.
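The 33 %/67 % attribution implies a per-pixel decomposition of fractional cover gain into category conversion versus crown closure. A hypothetical sketch of such a decomposition (the 20 % forest-cover threshold and the decomposition rule are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def decompose_cover_gain(c0, c1, thresh=0.2):
    """Split total fractional tree-cover gain between two epochs into:
    - conversion: gain on pixels that crossed the forest threshold
      (non-forest at t0, forest at t1), and
    - closure: gain on pixels already forest at t0 (canopy densification).
    c0, c1: arrays of tree cover fractions (0-1) at the two epochs."""
    gain = np.clip(c1 - c0, 0.0, None)          # only positive changes
    was_forest = c0 >= thresh
    is_forest = c1 >= thresh
    conversion = gain[(~was_forest) & is_forest].sum()
    closure = gain[was_forest & is_forest].sum()
    return conversion, closure
```

Multiplying each summed fraction by the per-pixel area (30 m × 30 m here) would turn these into area-weighted contributions comparable to the percentages reported in the abstract.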

Citations: 0
Fraction-dependent variations in cooling efficiency of urban trees across global cities
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-08-09 | DOI: 10.1016/j.isprsjprs.2024.07.026
Wenfeng Zhan, Chunli Wang, Shasha Wang, Long Li, Yingying Ji, Huilin Du, Fan Huang, Sida Jiang, Zihan Liu, Huyan Fu

Investigating the relationship between cooling efficiency (CE) and tree cover percentage (TCP) is critical for planning green space within cities. However, the spatiotemporal complexity of the intra-city CE-TCP relationship across the world's distinct climates, as well as the differing cooling potential of a consistent increase in tree cover across urban regions, remains unclear. Here we used MODIS satellite observations to investigate the CE-TCP relationship across 440 global cities during summertime from 2018 to 2020. We further investigated how enhancing tree cover by a consistent amount in different urban locales reduces population heat exposure among specific age groups. Our results demonstrate a nonlinear CE-TCP relationship globally: CE exhibits an initial sharp decline followed by a gradual reduction as TCP rises, and this nonlinearity is more pronounced in tropical and arid climates than in other climate zones. We observe that 91.4 % of cities experience a greater reduction in population heat exposure when the same amount of TCP is introduced in areas with fewer trees rather than in those with denser canopies, and that heat exposure mitigation is more prominent for laborers than for vulnerable groups. These insights are critical for developing strategies to minimize urban heat-related health risks.
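The described saturating decline of CE with rising TCP can be mimicked with a toy model (the exponential LST-TCP curve and every parameter value below are illustrative assumptions, not fitted values from the study): if land surface temperature falls toward a saturation level as tree cover grows, the marginal cooling per extra 1 % of cover necessarily shrinks.

```python
import numpy as np

# Illustrative only: LST (deg C) declines with tree cover percent (TCP)
# following a saturating exponential.
def lst(tcp, lst0=40.0, max_cooling=8.0, k=0.05):
    """Toy land surface temperature as a function of tree cover percent."""
    return lst0 - max_cooling * (1.0 - np.exp(-k * tcp))

def cooling_efficiency(tcp, step=1.0):
    """CE = LST drop per additional 1 % tree cover (finite difference)."""
    return lst(tcp) - lst(tcp + step)

# CE is largest where canopy is sparse and shrinks as TCP rises,
# mirroring the nonlinear CE-TCP relationship reported in the abstract.
ce_sparse = cooling_efficiency(5.0)   # sparse canopy
ce_dense = cooling_efficiency(60.0)   # dense canopy
```

Under this toy curve, planting the same increment of tree cover in a sparsely treed neighborhood yields a larger temperature (and hence heat-exposure) reduction than in an already dense one, which is the planning implication the abstract draws.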

Citations: 0
Bark beetle pre-emergence detection using multi-temporal hyperspectral drone images: Green shoulder indices can indicate subtle tree vitality decline
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-08-08 | DOI: 10.1016/j.isprsjprs.2024.07.027
Langning Huo, Niko Koivumäki, Raquel A. Oliveira, Teemu Hakala, Lauri Markelin, Roope Näsi, Juha Suomalainen, Antti Polvivaara, Samuli Junttila, Eija Honkavaara

Forest stress monitoring and timely identification of forest disturbances are important for improving forest resilience to climate change. Fast-developing drone techniques and hyperspectral imagery provide tools for understanding the forest decline process under stress and support focused monitoring. This study explored and developed hyperspectral drone imagery for early detection of forest stress caused by the European spruce bark beetle Ips typographus (L.) before offspring emergence, which is crucial for controlling the spread but has proven challenging.

This study probes the limits of detectability of infested trees using a hyperspectral drone system that provided images with very high spectral, spatial, and temporal resolution in southern Finland. Images were acquired bi-weekly, four times (T1, T2, T3, T4), covering the 8 weeks from trees being attacked by the first filial generation (F1) to the beginning of second filial generation (F2) brood emergence. Very low separability was observed between the reflectance of healthy and attacked trees, but the first- and second-derivative reflectance captured vitality changes, with the green shoulder region (wavelengths 490–550 nm) exhibiting the highest separability of all wavelengths (400–1700 nm). We discovered that the peak and valley values of the first- and second-derivative curves in the green shoulder region shifted consistently with longer infestation time.

Based on this finding, we developed green shoulder indices. Detection rates were 0.24–0.31 at T3 and 0.76–0.83 at T4, higher than those of commonly used vegetation indices (VIs) such as the Photochemical Reflectance Index and the Red Edge Inflection Position, which achieved detection rates of 0.69 and 0.34 at T4, respectively. We also proposed simplified green shoulder indices using the reflectance of three bands, which can be applied with multispectral cameras and satellite images for large-area monitoring of forest health. We conclude that the detectability of infestations was very low during the first month after attack and then increased rapidly before brood emergence. The results highlight the great potential of green shoulder indices for quantifying the photochemical functioning of vegetation under stress. The methodology can potentially be applied for early identification of forests with declining vitality caused by various sources of forest stress and disturbance, such as infestations, diseases, and drought.
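The derivative-based cue behind the green shoulder indices can be sketched as follows (a simplified stand-in, not the published index formulas; the synthetic sigmoid spectrum in the usage example is purely illustrative): compute the first and second spectral derivatives and locate their extrema within the 490–550 nm green shoulder, whose shift over time is the vitality signal.

```python
import numpy as np

def green_shoulder_features(wavelengths, reflectance):
    """Locate the first-derivative peak and second-derivative valley within
    the green shoulder (490-550 nm). A temporal shift of these extrema is
    the cue the green shoulder indices build on.
    wavelengths: 1-D array (nm), reflectance: 1-D array, same length."""
    d1 = np.gradient(reflectance, wavelengths)   # first derivative dR/dλ
    d2 = np.gradient(d1, wavelengths)            # second derivative
    mask = (wavelengths >= 490) & (wavelengths <= 550)
    w = wavelengths[mask]
    peak_wl = w[np.argmax(d1[mask])]    # steepest rise in the shoulder
    valley_wl = w[np.argmin(d2[mask])]  # strongest curvature change
    return peak_wl, valley_wl

# Synthetic spectrum: a sigmoid green edge centered at 520 nm
wl = np.arange(400.0, 701.0, 5.0)
refl = 1.0 / (1.0 + np.exp(-(wl - 520.0) / 10.0))
peak, valley = green_shoulder_features(wl, refl)
```

For hyperspectral data this uses the full derivative curves; the simplified three-band variant mentioned in the abstract would approximate the same derivatives from just three reflectance values.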

Citations: 0
TSG-Seg: Temporal-selective guidance for semi-supervised semantic segmentation of 3D LiDAR point clouds
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-08-08 | DOI: 10.1016/j.isprsjprs.2024.07.020
Weihao Xuan, Heli Qi, Aoran Xiao

LiDAR-based semantic scene understanding plays a pivotal role in various applications, including remote sensing and autonomous driving. However, most LiDAR segmentation models rely on extensive, densely annotated training datasets, which are extremely laborious to annotate and hinder the widespread adoption of LiDAR systems. Semi-supervised learning (SSL) offers a promising solution by leveraging only a small amount of labeled data together with a larger set of unlabeled data, aiming to train robust models with accuracy comparable to that of fully supervised learning. A typical SSL pipeline first uses the labeled data to train a segmentation model, then uses the predictions generated on unlabeled data as pseudo-ground truths for model retraining. However, the scarcity of labeled data limits the capture of comprehensive representations, making these pseudo-ground truths unreliable. We observed that objects captured by LiDAR sensors from varying perspectives exhibit diverse data characteristics due to occlusion and distance variation, and LiDAR segmentation models trained with limited labels prove susceptible to these viewpoint disparities, resulting in inaccurately predicted pseudo-ground truths across viewpoints and the accumulation of retraining errors. To address this problem, we introduce the Temporal-Selective Guided Learning (TSG-Seg) framework. TSG-Seg exploits temporal cues inherent in LiDAR frames to bridge cross-viewpoint representations, fostering consistent and robust segmentation predictions across differing viewpoints. Specifically, we first establish point-wise correspondences across LiDAR frames with different time stamps through point registration.
Subsequently, reliable point predictions are selected and propagated from adjacent views to the current view, serving as strong, refined supervision signals for subsequent model retraining and better segmentation. We conducted extensive experiments on various SSL labeling setups across multiple public datasets, including SemanticKITTI and SemanticPOSS, to evaluate the effectiveness of TSG-Seg. The results demonstrate competitive performance and robustness in diverse scenarios, from data-limited to data-abundant settings. Notably, TSG-Seg achieves a mIoU of 48.6 % with only 5 % of the labeled data and 62.3 % with 40 % in the sequential split on SemanticKITTI, consistently outperforming state-of-the-art segmentation methods, including GPC and LaserMix. These findings underscore TSG-Seg’s capability and potential for real-world applications. The project can be found at https://tsgseg.github.io.
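The selection-and-propagation step can be sketched in simplified form (the 0.9 confidence threshold, the data layout, and the dictionary-based correspondence map are illustrative assumptions; the actual TSG-Seg selection criteria are in the paper): keep only confident class predictions from a registered adjacent frame and carry them to the corresponding points of the current frame as pseudo-labels.

```python
import numpy as np

def propagate_pseudo_labels(probs_adjacent, correspondence,
                            num_points_current, conf_thresh=0.9):
    """Temporal-selective guidance, sketched: transfer only confident
    predictions from an adjacent (registered) frame to the current frame.
    probs_adjacent: (N_adj, C) softmax scores for the adjacent frame;
    correspondence: dict mapping adjacent-frame point index -> current-frame
    point index (from point registration); returns per-point labels for the
    current frame, with -1 marking points left unlabeled."""
    labels = np.full(num_points_current, -1, dtype=np.int64)
    conf = probs_adjacent.max(axis=1)     # prediction confidence per point
    preds = probs_adjacent.argmax(axis=1)  # predicted class per point
    for adj_idx, cur_idx in correspondence.items():
        if conf[adj_idx] >= conf_thresh:
            labels[cur_idx] = preds[adj_idx]
    return labels
```

Points that receive a label this way would join the supervision set for retraining; points left at -1 stay unsupervised, which is the "selective" part of the guidance.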

引用次数: 0
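The SSL retraining loop described above — train on labels, predict on unlabeled data, keep confident predictions as pseudo-ground truth — can be sketched as follows. This is a generic illustration of confidence-based pseudo-label selection, not the authors' implementation; the threshold value is an assumed parameter.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep only high-confidence predictions as pseudo-labels.

    probs: (N, C) array of per-point class probabilities.
    Returns (indices, labels) for points whose top probability
    meets the threshold; the remaining points stay unlabeled.
    """
    conf = probs.max(axis=1)        # top probability per point
    labels = probs.argmax(axis=1)   # predicted class per point
    keep = conf >= threshold
    return np.where(keep)[0], labels[keep]

# Toy example: 4 points, 3 classes.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.35, 0.25],
                  [0.05, 0.91, 0.04],
                  [0.33, 0.33, 0.34]])
idx, lab = select_pseudo_labels(probs, threshold=0.9)
print(idx, lab)  # points 0 and 2 survive, labeled 0 and 1
```

TSG-Seg additionally filters these selections by temporal consistency across registered frames; the sketch above covers only the base confidence filter that the pseudo-labeling step relies on.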
Semi-supervised multi-class tree crown delineation using aerial multispectral imagery and lidar data 利用航空多光谱图像和激光雷达数据进行半监督多类树冠划分
IF 10.6 1区 地球科学 Q1 GEOGRAPHY, PHYSICAL Pub Date : 2024-08-08 DOI: 10.1016/j.isprsjprs.2024.07.032
S. Dersch , A. Schöttl , P. Krzystek , M. Heurich

The segmentation of individual trees based on deep learning is more accurate than conventional methods. However, a sufficient amount of training data is mandatory to leverage the accuracy potential of deep learning-based approaches. Semi-supervised learning techniques, by contrast, can help simplify the time-consuming labelling process. In this study, we introduce a new semi-supervised tree segmentation approach for the precise delineation and classification of individual trees that takes advantage of pre-clustered tree training labels. Specifically, the instance segmentation Mask R-CNN is combined with the normalized cut clustering method, which is applied to lidar point clouds. The study areas were located in the Bavarian Forest National Park, southeast Germany, where the tree composition includes coniferous, deciduous and mixed forest. Important tree species are European beech (Fagus sylvatica), Norway spruce (Picea abies) and silver fir (Abies alba). Multispectral image data with a ground sample distance of 10 cm and laser scanning data with a point density of approximately 55 points/m2 were acquired in June 2017. From the laser scanning data, three-channel images with a resolution of 10 cm were generated. The models were tested in seven reference plots in the national park, with a total of 516 trees measured on the ground. When the color infrared images were used, the experiments demonstrated that the Mask R-CNN models, trained with the tree labels generated through lidar-based clustering, yielded mean F1 scores of 79 %, up to 18 % higher than those of the normalized cut baseline method and thus significantly improved. Similarly, the mean overall accuracy of the classification results for the coniferous, deciduous, and standing deadwood tree groups was 96 %, up to 6 % higher than with the baseline classification approach. The experiments with lidar-based images yielded slightly worse (1–2 %) results both for segmentation and for classification. Our study demonstrates the utility of this simplified training data preparation procedure, which leads to models trained with significantly larger amounts of data than is feasible with manual labelling. The accuracy improvement of up to 18 % in terms of the F1 score is further evidence of its advantages.

Cited by: 0
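The normalized cut criterion used here to pre-cluster lidar points scores a bipartition of a similarity graph by the cut weight relative to each part's total association, so that cutting weak links between clusters is cheap while splitting a tight cluster is expensive. A minimal sketch on a toy similarity matrix (illustrative only; the paper applies the method to 3D point clouds):

```python
import numpy as np

def ncut_value(W, mask):
    """Normalized-cut cost of a bipartition of a similarity graph.

    W: (n, n) symmetric similarity matrix; mask: boolean array,
    True for nodes in part A. Lower cost = better separation.
    """
    A, B = mask, ~mask
    cut = W[np.ix_(A, B)].sum()          # weight crossing the partition
    assoc_A = W[A, :].sum()              # total association of part A
    assoc_B = W[B, :].sum()              # total association of part B
    return cut / assoc_A + cut / assoc_B

# Two tight clusters {0,1} and {2,3} joined by one weak edge.
W = np.array([[0.0, 5.0, 0.1, 0.0],
              [5.0, 0.0, 0.0, 0.0],
              [0.1, 0.0, 0.0, 5.0],
              [0.0, 0.0, 5.0, 0.0]])
good = ncut_value(W, np.array([True, True, False, False]))
bad = ncut_value(W, np.array([True, False, True, False]))
print(good < bad)  # True: cutting the weak link is cheaper
```

Minimizing this cost over all partitions is what the clustering step approximates when generating the tree training labels.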
The importance of spatial scale and vegetation complexity in woody species diversity and its relationship with remotely sensed variables 空间尺度和植被复杂性对木本物种多样性的重要性及其与遥感变量的关系
IF 10.6 1区 地球科学 Q1 GEOGRAPHY, PHYSICAL Pub Date : 2024-08-07 DOI: 10.1016/j.isprsjprs.2024.07.029
Wendy G. Canto-Sansores , Jorge Omar López-Martínez , Edgar J. González , Jorge A. Meave , José Luis Hernández-Stefanoni , Pedro A. Macario-Mendoza

Plant species diversity is key to ecosystem functioning, but in recent decades anthropogenic activities have prompted an alarming decline in this community trait. Thus, it is essential to develop strategies for understanding diversity dynamics based on affordable and efficient remote sensing monitoring, and to examine the relevance of spatial scale and vegetation structural complexity to these dynamics. Here, we used two mathematical approaches to assess the relationship between tropical woody species diversity and spectral diversity in a human-modified landscape in two vegetation types differing in their degree of complexity. Vegetation complexity was measured through the fraction of species that concentrate different proportions of the cumulative importance value index. Species diversity was assessed using Hill numbers at three spatial scales, and metrics of spectral heterogeneity, vegetation indices, as well as raw data from Landsat 9 and Sentinel-2 sensors were calculated and analysed through general linear models (GLM) and Random Forest. Vegetation complexity emerged as an important variable in modelling species from remote sensing metrics, indicating the need to model species diversity by vegetation type rather than region. Hill numbers showed different relationships with remotely sensed metrics, consistent with the scale-dependency of ecological processes on species diversity. Contrary to multiple previous reports, in our study, GLMs produced the best fits between Hill numbers of all orders and remotely sensed metrics. To meet the need for efficient and rapid woody species diversity monitoring globally, we propose modelling this diversity from remotely sensed variables as an attractive strategy, so long as the intrinsic properties of each vegetation type are acknowledged to avoid under- or overestimation biases.

Cited by: 0
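Hill numbers express diversity as an "effective number of species" parameterized by an order q that weights rare versus common species: q = 0 gives species richness, q = 1 the exponential of Shannon entropy, and q = 2 the inverse Simpson index, via qD = (Σ p_i^q)^(1/(1−q)). A minimal sketch:

```python
import math

def hill_number(abundances, q):
    """Hill number (effective number of species) of order q."""
    total = sum(abundances)
    p = [a / total for a in abundances if a > 0]
    if q == 1:  # limit case: exponential of Shannon entropy
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1 / (1 - q))

# A perfectly even community of 4 species: every order gives 4.
even = [10, 10, 10, 10]
print([round(hill_number(even, q), 6) for q in (0, 1, 2)])  # [4.0, 4.0, 4.0]
```

For uneven communities the orders diverge — dominance by one species pulls q = 2 far below q = 0 — which is why the study's relationships with spectral metrics can differ across orders.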
Training-free thick cloud removal for Sentinel-2 imagery using value propagation interpolation 利用值传播插值法为哨兵-2 号图像去除厚云,无需训练
IF 10.6 1区 地球科学 Q1 GEOGRAPHY, PHYSICAL Pub Date : 2024-08-07 DOI: 10.1016/j.isprsjprs.2024.07.030
Laurens Arp , Holger Hoos , Peter van Bodegom , Alistair Francis , James Wheeler , Dean van Laar , Mitra Baratchi

Remote sensing imagery has an ever-increasing impact on important downstream applications, such as vegetation monitoring and climate change modelling. Clouds obscuring parts of the images create a substantial bottleneck in most machine learning tasks that use remote sensing data, and being robust to this issue is an important technical challenge. In many cases, cloudy images cannot be used in a machine learning pipeline, leading to either the removal of the images altogether, or to using suboptimal solutions reliant on recent cloud-free imagery or the availability of pre-trained models for the exact use case. In this work, we propose VPint2, a cloud removal method built upon the VPint algorithm, an easy-to-apply data-driven spatial interpolation method requiring no prior training, to address the problem of cloud removal. This method leverages previously sensed cloud-free images to represent the spatial structure of a region, which is then used to propagate up-to-date information from non-cloudy pixels to cloudy ones. We also created a benchmark dataset called SEN2-MSI-T, composed of 20 scenes with 5 full-sized images each, belonging to five common land cover classes. We used this dataset to evaluate our method against three alternatives: mosaicking, an AutoML-based regression method, and the nearest similar pixel interpolator. Additionally, we compared against two previously published neural network-based methods on SEN2-MSI-T, and evaluated our method on a subset of the popular SEN12MS-CR-TS benchmark dataset. The methods are compared using several performance metrics, including the structural similarity index, mean absolute error, and error rates on a downstream NDVI derivation task. Our experimental results show that VPint2 performed significantly better than competing methods over 20 experimental conditions, improving performance by 2.4% to 34.3% depending on the condition. 
We also found that the performance of VPint2 only decreases marginally as the temporal distance of its reference image increases, and that, unlike typical interpolation methods, the performance of VPint2 remains strong for larger percentages of cloud cover. Our findings furthermore support a cloud removal evaluation approach founded on the transfer of cloud masks over the use of cloud-free previous acquisitions as ground truth.

Cited by: 0
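VPint itself propagates values iteratively through a spatial grid; as a much-simplified stand-in for the core idea — transferring spatial structure from a cloud-free reference image into the cloudy acquisition — the sketch below fills masked pixels by rescaling the reference so that both images agree on average over the jointly clear pixels. This is an illustrative assumption, not the published algorithm.

```python
import numpy as np

def fill_from_reference(target, reference, cloud_mask):
    """Fill cloud-masked pixels of `target` using the spatial
    structure of a cloud-free `reference` image, rescaled so the
    clear-sky pixels of both images agree on average."""
    clear = ~cloud_mask
    scale = target[clear].mean() / reference[clear].mean()
    filled = target.copy()
    filled[cloud_mask] = reference[cloud_mask] * scale
    return filled

# Toy 2x2 band: the bottom row is cloud-covered (NaN).
target = np.array([[2.0, 4.0], [np.nan, np.nan]])
reference = np.array([[1.0, 2.0], [3.0, 1.0]])
mask = np.isnan(target)
print(fill_from_reference(target, reference, mask))
# bottom row becomes [6., 2.]: reference structure, doubled to the target's level
```

Real reflectance differences between acquisition dates are spatially varying, which is why VPint2 propagates corrections per pixel rather than applying one global scale as this sketch does.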
Beyond clouds: Seamless flood mapping using Harmonized Landsat and Sentinel-2 time series imagery and water occurrence data 超越云层:利用协调大地遥感卫星和哨兵-2 时间序列图像以及水文发生数据进行无缝洪水测绘
IF 10.6 1区 地球科学 Q1 GEOGRAPHY, PHYSICAL Pub Date : 2024-08-07 DOI: 10.1016/j.isprsjprs.2024.07.022
Zhiwei Li , Shaofen Xu , Qihao Weng

Floods are among the most devastating natural disasters, posing significant risks to life, property, and infrastructure globally. Earth observation satellites provide data for continuous and extensive flood monitoring, yet limitations exist in the spatial completeness of monitoring using optical images due to cloud cover. Recent studies have developed gap-filling methods for reconstructing cloud-covered areas in water maps. However, these methods are not tailored for and validated in cloudy and rainy flooding scenarios with rapid water extent changes and limited clear-sky observations, leaving room for further improvements. This study investigated and developed a novel reconstruction method for time series flood extent mapping, supporting spatially seamless monitoring of flood extents. The proposed method first identified surface water from time series images using a fine-tuned large foundation model. Then, the cloud-covered areas in the water maps were reconstructed, adhering to the introduced submaximal stability assumption, on the basis of the prior water occurrence data in the Global Surface Water dataset. The reconstructed time series water maps were refined through spatiotemporal Markov random field modeling for the final delineation of flooding areas. The effectiveness of the proposed method was evaluated with Harmonized Landsat and Sentinel-2 datasets under varying cloud cover conditions, enabling seamless flood mapping at 2–3-day frequency and 30 m resolution. Experiments at four global sites confirmed the superiority of the proposed method. It achieved higher reconstruction accuracy with average F1-scores of 0.931 during floods and 0.903 before/after floods, outperforming the typical gap-filling method with average F1-scores of 0.871 and 0.772, respectively. 
Additionally, the maximum flood extent maps and flood duration maps, which were composed on the basis of the reconstructed water maps, were more accurate than those using the original cloud-contaminated water maps. The benefits of synthetic aperture radar images (e.g., Sentinel-1) for enhancing flood mapping under cloud cover conditions were also discussed. The method proposed in this paper provided an effective way for flood monitoring in cloudy and rainy scenarios, supporting emergency response and disaster management. The code and datasets used in this study have been made available online (https://github.com/dr-lizhiwei/SeamlessFloodMapper).

Cited by: 0
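The occurrence-guided reconstruction idea — pixels that are historically wet at least as often as the wettest-to-driest clear pixels currently flooded are likely flooded too — can be sketched as a simple threshold rule. This is a simplified reading of how prior water occurrence can fill cloud gaps, not the paper's exact submaximal stability formulation or its Markov random field refinement.

```python
import numpy as np

def reconstruct_water(water, cloud, occurrence):
    """Label cloud-covered pixels as water when their long-term
    water occurrence is at least the lowest occurrence observed
    among clear pixels detected as water on this date.

    water:      boolean water map (valid where ~cloud)
    cloud:      boolean cloud mask
    occurrence: historical water-occurrence fraction per pixel (0..1)
    """
    clear_water = water & ~cloud
    if not clear_water.any():
        return water.copy()
    threshold = occurrence[clear_water].min()
    filled = water.copy()
    filled[cloud] = occurrence[cloud] >= threshold
    return filled

occurrence = np.array([0.95, 0.60, 0.70, 0.05])
water = np.array([True, True, False, False])   # clear-sky detection
cloud = np.array([False, False, True, True])   # last two pixels cloudy
print(reconstruct_water(water, cloud, occurrence))
# pixel 2 (occurrence 0.70 >= threshold 0.60) is reconstructed as water
```

The threshold adapts to each date, so a high flood stage (clear water reaching rarely-wet pixels) automatically extends the reconstructed extent under the clouds.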
Temporal-spectral-semantic-aware convolutional transformer network for multi-class tidal wetland change detection in Greater Bay Area 用于粤港澳大湾区多类潮汐湿地变化检测的时空-光谱-语义感知卷积变换器网络
IF 10.6 1区 地球科学 Q1 GEOGRAPHY, PHYSICAL Pub Date : 2024-08-06 DOI: 10.1016/j.isprsjprs.2024.07.024
Siyu Qian , Zhaohui Xue , Mingming Jia , Yiping Chen , Hongjun Su

Coastal tidal wetlands are crucial for environmental and economic health, but they face threats from various environmental changes. Detecting changes in tidal wetlands is essential for promoting sustainable development in coastal areas. Despite extensive research on tidal wetland changes, persistent challenges still exist. Firstly, the high similarity among tidal wetland types hinders the effectiveness of existing common indices. Secondly, many current methods, relying on hand-crafted features, are time-consuming and subject to personal biases. Thirdly, few studies effectively integrate multi-temporal and semantic information, leading to misinterpretations from environmental noise and tidal variations. In view of the abovementioned issues, we proposed a novel temporal-spectral-semantic-aware convolutional transformer network (TSSA-CTNet) for multi-class tidal wetland change detection. Firstly, to address spectral similarity among different tidal wetlands, we proposed a sparse second order feature construction (SSFC) module to construct more separable spectral representations. Secondly, to get more separable features automatically, we constructed a temporal-spatial feature extractor (TSFE) and siamese semantic sharing (SiamSS) blocks to extract temporal-spatial-semantic features. Thirdly, to fully utilize semantic information, we proposed a center comparative label smoothing (CCLS) module to generate semantic-aware labels. Experiments in the Greater Bay Area, using Landsat data from 2000 to 2019, demonstrated that TSSA-CTNet achieved 89.20% overall accuracy, outperforming other methods by 3.75%–16.39%. The study revealed significant area losses in tidal flats, mangroves, and tidal marshes, which decreased by 3148 hectares, 35 hectares, and 240 hectares, respectively. Among the cities in GBA, Zhuhai shows the most significant area loss with a total of 1626 hectares. 
TSSA-CTNet proves effective for multi-class tidal wetland change detection, offering valuable insights for tidal wetland protection.

Citations: 0
Journal
ISPRS Journal of Photogrammetry and Remote Sensing