
ISPRS Journal of Photogrammetry and Remote Sensing: Latest Publications

Reconstructing NDVI time series in cloud-prone regions: A fusion-and-fit approach with deep learning residual constraint
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 (GEOGRAPHY, PHYSICAL) | Pub Date: 2024-09-16 | DOI: 10.1016/j.isprsjprs.2024.09.010

Time series of the Normalized Difference Vegetation Index (NDVI) are crucial for monitoring changes in terrestrial vegetation. Existing reconstruction methods encounter challenges in cloud-prone areas, primarily due to inadequate utilization of spatial, temporal, periodic, and multi-sensor information, as well as a lack of physical interpretation. This frequently results in limited model performance or the omission of spatial details when predicting scenarios involving land cover changes. In this study, we propose a novel approach named Residual (Re) Constraints (Co) fusion-and-fit (ReCoff), consisting of two steps: ReCoF fusion (F) and Savitzky-Golay (SG) fitting. This approach addresses the challenges of reconstructing 30 m Landsat NDVI time series data in cloudy regions. The fusion-fit process captures land cover changes and maps them from MODIS to Landsat using a deep learning model with residual constraints, while simultaneously integrating multi-dimensional, multi-sensor, and long time-series information. ReCoff offers three distinct advantages. First, the fusion results are more robust to land cover change scenarios and contain richer spatial details (RMSE of 0.091 for ReCoF vs. 0.101, 0.164, and 0.188 for STFGAN, FSDAF, and ESTARFM). Second, ReCoff improves the effectiveness of reconstructing dense time-series data (2016–2020, 16-day interval) in cloudy areas, whereas other methods are more susceptible to prolonged data gaps. ReCoff achieves a correlation coefficient of 0.84 with the MODIS reference series, outperforming SG (0.28), HANTS (0.32), and GF-SG (0.48). Third, with the help of the GEE platform, ReCoff can be applied over large areas (771 km × 634 km) and long time scales (bimonthly intervals from 2000 to 2020) in cloudy regions. ReCoff demonstrates potential for accurately reconstructing time-series data in cloudy areas.
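The SG-fitting step is classic Savitzky-Golay smoothing: each point is replaced by the value of a local least-squares polynomial fit. A minimal numpy sketch is below; the window size, polynomial order, and toy NDVI series are illustrative, not the paper's settings.

```python
import numpy as np

def savgol_coeffs(window: int, polyorder: int) -> np.ndarray:
    """Least-squares coefficients that evaluate a local polynomial fit
    at the centre of a symmetric window (classic Savitzky-Golay)."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, polyorder + 1, increasing=True)
    return np.linalg.pinv(A)[0]          # row giving the fitted value at x = 0

def savgol_smooth(y: np.ndarray, window: int = 5, polyorder: int = 2) -> np.ndarray:
    """Smooth a 1-D series; edges are handled by replicate padding."""
    c = savgol_coeffs(window, polyorder)
    half = window // 2
    ypad = np.pad(y, half, mode="edge")
    return np.convolve(ypad, c[::-1], mode="valid")

# Toy NDVI series with one outlier standing in for residual cloud contamination
t = np.linspace(0.0, 1.0, 23)            # 16-day composites over a year
ndvi = 0.3 + 0.4 * np.sin(np.pi * t) ** 2
noisy = ndvi.copy()
noisy[10] = 0.05                          # cloud-induced drop
smooth = savgol_smooth(noisy)
```

A useful property for checking an implementation: with `polyorder=2`, the filter reproduces any quadratic series exactly away from the padded edges.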

Citations: 0
MuSRFM: Multiple scale resolution fusion based precise and robust satellite derived bathymetry model for island nearshore shallow water regions using sentinel-2 multi-spectral imagery
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 (GEOGRAPHY, PHYSICAL) | Pub Date: 2024-09-14 | DOI: 10.1016/j.isprsjprs.2024.09.007

Multi-spectral-imagery-based Satellite-Derived Bathymetry (SDB) provides an efficient and cost-effective approach for acquiring bathymetry data of nearshore shallow water regions. Compared with conventional pixelwise inversion models, Deep Learning (DL) models have the theoretical capability to encompass a broader receptive field, automatically extracting comprehensive spatial features. However, enhancing spatial features by increasing the input size escalates computational complexity and model scale, challenging the hardware. To address this issue, we propose the Multiple Scale Resolution Fusion Model (MuSRFM), a novel DL-based SDB model, to integrate information of varying scales by utilizing temporally fused Sentinel-2 L2A multi-spectral imagery. The MuSRFM uses a Multi-scale Center-aligned Hierarchical Resampler (MCHR) to composite large-scale multi-spectral imagery into hierarchical scale resolution representations, since the receptive field gradually narrows its focus as the spatial resolution decreases. Through this strategy, the MuSRFM gains access to rich spatial information while maintaining efficiency by progressively aggregating features of different scales through the Cropped Aligned Fusion Module (CAFM). We select St. Croix (Virgin Islands) as the training/testing dataset source, and the Root Mean Square Error (RMSE) obtained by the MuSRFM on the testing dataset is 0.8131 m (with a bathymetric range of 0–25 m), surpassing the machine-learning-based models and traditional semi-empirical models used as baselines by over 35 % and 60 %, respectively. Additionally, multiple island areas worldwide, including Vieques, Oahu, Kauai, Saipan and Tinian, which exhibit distinct characteristics, are utilized to construct a real-world dataset for assessing the generalizability and transferability of the proposed MuSRFM. While the MuSRFM experiences a degradation in accuracy when applied to the diverse real-world dataset, it outperforms other baseline models considerably. Across various study areas in the real-world dataset, its RMSE lead over the second-ranked model ranges from 6.8 % to 38.1 %, indicating its accuracy and generalizability; in the Kauai area, where the performance is not ideal, a significant improvement in accuracy is achieved through fine-tuning on limited in-situ data. The code of MuSRFM is available at https://github.com/qxm1995716/musrfm.
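The centre-aligned idea behind the resampler, as described, is that each level covers a progressively larger footprint at a progressively coarser resolution, with all levels sharing the same centre. A minimal numpy sketch of that geometry (crop sizes, level count, and function name are illustrative, not the MCHR's actual configuration):

```python
import numpy as np

def center_aligned_pyramid(img: np.ndarray, levels: int = 3, base: int = 8) -> list:
    """Return `levels` arrays of identical shape (base, base).
    Level k covers a (base * 2**k)-pixel footprint around the image
    centre, block-averaged down to base x base, so coarser levels
    trade resolution for spatial context."""
    h, w = img.shape
    cy, cx = h // 2, w // 2
    out = []
    for k in range(levels):
        half = base * 2 ** k // 2
        crop = img[cy - half: cy + half, cx - half: cx + half]
        f = 2 ** k                                    # block-average factor
        crop = crop.reshape(base, f, base, f).mean(axis=(1, 3))
        out.append(crop)
    return out

pyr = center_aligned_pyramid(np.arange(64 * 64, dtype=float).reshape(64, 64))
```

Because every level is centred on the same neighbourhood, a network consuming the stack sees both fine local detail and coarse context for the same target pixel at a fixed memory cost.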

Citations: 0
Snow depth retrieval method for PolSAR data using multi-parameters snow backscattering model
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 (GEOGRAPHY, PHYSICAL) | Pub Date: 2024-09-13 | DOI: 10.1016/j.isprsjprs.2024.09.005

Snow depth (SD) is a crucial property of snow; its spatial and temporal variation is important for global change studies, snowmelt runoff simulation, disaster prediction, and freshwater storage estimation. Polarimetric Synthetic Aperture Radar (PolSAR) can precisely describe the backscattering of a target and has emerged as an effective tool for SD retrieval. The backscattering from dry snow is mainly composed of volume scattering from the snowpack and surface scattering from the snow-ground interface. However, existing methods for retrieving SD from PolSAR data over-rely on in-situ data and ignore surface scattering from the snow-ground interface. We propose a novel SD retrieval method for PolSAR data that fully considers the primary backscattering components of snow and solves the snow backscattering model through multi-parameter estimation. First, a snow backscattering model was formed by combining the small-permittivity volume scattering model and the Michigan semi-empirical surface scattering model to simulate the different scattering components of snow, and the corresponding backscattering coefficients were extracted using the Yamaguchi decomposition. Then, the snow permittivity was calculated through generalized volume parameters, and the extinction coefficient was further estimated through modeling. Finally, the snow backscattering model was solved with these parameters to retrieve SD. The proposed method was validated with Ku-band UAV SAR data acquired in Altay, Xinjiang, and its accuracy was evaluated against in-situ data. The correlation coefficient, root mean square error, and mean absolute error are 0.80, 4.49 cm, and 3.95 cm, respectively. Meanwhile, the uncertainties arising from different SD values, model parameter estimation, the solution method, and the underlying surface are analyzed to enhance the generality of the proposed method.
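The three reported accuracy measures (correlation coefficient, RMSE, MAE) are standard and easy to compute; a small helper, with synthetic retrieved/in-situ values purely for illustration:

```python
import numpy as np

def sd_accuracy(pred: np.ndarray, obs: np.ndarray) -> dict:
    """Correlation coefficient, RMSE and MAE between retrieved and
    in-situ snow depth (same units, e.g. cm)."""
    d = pred - obs
    return {
        "r": float(np.corrcoef(pred, obs)[0, 1]),
        "rmse": float(np.sqrt(np.mean(d ** 2))),
        "mae": float(np.mean(np.abs(d))),
    }

rng = np.random.default_rng(0)
obs = rng.uniform(10, 60, 200)            # synthetic in-situ SD, cm
pred = obs + rng.normal(0, 4.5, 200)      # retrieval with ~4.5 cm noise
m = sd_accuracy(pred, obs)
```

Note that RMSE is always at least as large as MAE; a large gap between the two indicates a few large outliers rather than uniformly distributed error.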

Citations: 0
Sequential polarimetric phase optimization algorithm for dynamic deformation monitoring of landslides
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 (GEOGRAPHY, PHYSICAL) | Pub Date: 2024-09-12 | DOI: 10.1016/j.isprsjprs.2024.08.013

In the era of big SAR data, it is urgent to develop dynamic time series DInSAR processing procedures for near-real-time monitoring of landslides. However, the dense vegetation coverage in mountainous areas causes severe decorrelation, which demands high precision and efficiency from phase optimization processing. Common phase optimization using single-polarization SAR data cannot produce satisfactory results due to the limited statistical samples in some natural scenarios. Novel polarimetric phase optimization algorithms, however, have low computational efficiency, limiting their applications in large-scale scenarios and long data sequences. In addition, temporal changes in the scattering properties of ground features and the continuous increase of SAR data require dynamic phase optimization processing. To achieve efficient phase optimization for dynamic DInSAR time series analysis, we combine the Sequential Estimator (SE) with the Total Power (TP) polarization stacking method and solve it using the eigendecomposition-based Maximum Likelihood Estimator (EMI), named SETP-EMI. The simulation and real data experiments demonstrate the significant improvements of the SETP-EMI method in precision and efficiency compared to the EMI and TP-EMI methods. The SETP-EMI exhibits an increase of more than 50% and 20% in highly coherent points for the real data compared to the EMI and TP-EMI, respectively. It is, meanwhile, approximately six and two times more efficient than the EMI and TP-EMI methods, respectively, on the real data. These results highlight the effectiveness of the SETP-EMI method in promptly capturing and analyzing evolving landslide deformations, providing valuable insights for real-time monitoring and decision-making.
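The eigendecomposition step at the core of EMI-style phase linking estimates the phase series as the eigenvector associated with the minimum eigenvalue of the Hadamard product of the inverse coherence-magnitude matrix with the sample coherence matrix. A minimal single-polarization numpy sketch of that step follows; it is a toy illustration, not the authors' SETP-EMI, which adds sequential estimation and total-power polarimetric stacking on top:

```python
import numpy as np

def emi_phase_linking(C: np.ndarray) -> np.ndarray:
    """Estimate a consistent interferometric phase series from an
    N x N complex sample coherence matrix C (EMI-style).
    Returns phases referenced to the first acquisition."""
    A = np.abs(C)                        # coherence magnitudes
    W = np.linalg.inv(A) * C             # Hadamard product |C|^-1 o C (Hermitian)
    vals, vecs = np.linalg.eigh(W)
    v = vecs[:, 0]                       # eigenvector of the minimum eigenvalue
    return np.angle(v * np.conj(v[0]))

# Noise-free check: build C from known phases and a decorrelation model
n = 8
true_phase = np.linspace(0.0, 1.4, n)
z = np.exp(1j * true_phase)
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
gamma = np.exp(-np.abs(i - j) / 5.0)     # temporal-decorrelation magnitudes
C = gamma * np.outer(z, np.conj(z))
est = emi_phase_linking(C)
```

In the noise-free case the minimum eigenvalue is exactly 1 and its eigenvector recovers the true phase series up to the reference; with real sample coherence matrices the estimate degrades gracefully with decorrelation.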

Citations: 0
A general albedo recovery approach for aerial photogrammetric images through inverse rendering
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 (GEOGRAPHY, PHYSICAL) | Pub Date: 2024-09-12 | DOI: 10.1016/j.isprsjprs.2024.09.001

Modeling outdoor scenes for a synthetic 3D environment requires the recovery of reflectance/albedo information from raw images, which is an ill-posed problem due to the complicated unmodeled physics in this process (e.g., indirect lighting, volume scattering, specular reflection). The problem remains unsolved in a practical context. The recovered albedo can facilitate model relighting and shading, which can further enhance the realism of rendered models and the applications of digital twins. Typically, photogrammetric 3D models simply take the source images as texture materials, which inherently embeds unwanted lighting artifacts (at the time of capture) into the texture. Therefore, these “polluted” textures are suboptimal for a synthetic environment to enable realistic rendering. In addition, this embedded environmental lighting further challenges photo-consistency across different images, causing image-matching uncertainties. This paper presents a general image formation model for albedo recovery from typical aerial photogrammetric images under natural illumination and derives the inverse model to resolve the albedo information through inverse-rendering intrinsic image decomposition. Our approach builds on the fact that both the sun illumination and the scene geometry are estimable in aerial photogrammetry, so they can provide direct inputs for this ill-posed problem. This physics-based approach does not require additional input other than data acquired through a typical drone-based photogrammetric collection and was shown to favorably outperform existing approaches. We also demonstrate that the recovered albedo image can in turn improve typical image processing tasks in photogrammetry such as feature and dense matching, edge, and line extraction. [This work extends our prior work “A Novel Intrinsic Image Decomposition Method to Recover Albedo for Aerial Images in Photogrammetry Processing” in ISPRS Congress 2022].
The code will be made available at github.com/GDAOSU/albedo_aerial_photogrammetry
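Under a purely Lambertian reading, the inversion reduces to dividing the observed intensity by the shading predicted from the known sun direction and the mesh normals. The sketch below is a deliberately simplified illustration of that idea, ignoring indirect light, cast shadows, and atmosphere, which the paper's full formation model accounts for; all names and the synthetic data are ours, not the authors' code:

```python
import numpy as np

def recover_albedo(image, normals, sun_dir, sun_irradiance=1.0, eps=1e-6):
    """image: (H, W) observed intensity; normals: (H, W, 3) unit surface
    normals from the photogrammetric mesh; sun_dir: unit vector to the sun.
    Returns per-pixel Lambertian albedo = I / (E * max(0, n . l))."""
    shading = np.clip(normals @ sun_dir, 0.0, None) * sun_irradiance
    # Pixels facing away from the sun are unrecoverable; return 0 there
    return np.where(shading > eps, image / np.maximum(shading, eps), 0.0)

# Round trip on synthetic geometry: render with known albedo, then invert
rng = np.random.default_rng(1)
albedo = rng.uniform(0.1, 0.9, (4, 4))
n = rng.normal(size=(4, 4, 3))
n[..., 2] = np.abs(n[..., 2]) + 1.0          # upward-facing normals
n /= np.linalg.norm(n, axis=-1, keepdims=True)
sun = np.array([0.0, 0.0, 1.0])
img = albedo * np.clip(n @ sun, 0.0, None)   # forward Lambertian render
rec = recover_albedo(img, n, sun)
```

The round trip is exact here precisely because the toy forward model matches the inverse; on real imagery the unmodeled terms (indirect lighting, specularities) are what make the problem ill-posed.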

Citations: 0
Estimating AVHRR snow cover fraction by coupling physical constraints into a deep learning framework
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 (GEOGRAPHY, PHYSICAL) | Pub Date: 2024-09-12 | DOI: 10.1016/j.isprsjprs.2024.08.015

Accurate snow cover information is crucial for studying global climate and hydrology. Although deep learning has innovated snow cover fraction (SCF) retrieval, its effectiveness in practical application remains limited. This limitation stems from its reliance on appropriate training data and the need for better interpretability. To overcome these challenges, a novel deep learning framework coupling the asymptotic radiative transfer (ART) model was developed to retrieve Northern Hemisphere SCF from advanced very high resolution radiometer (AVHRR) surface reflectance data, named the ART-DL SCF model. Using Landsat 5 snow cover images as the reference SCF, the new model incorporates snow surface albedo retrieved from the ART model as a physical constraint on relevant snow identification parameters. Comprehensive validation against Landsat reference SCF shows an RMSE of 0.2228, an NMAD of 0.1227, and a bias of −0.0013. Moreover, the binary validation reveals an overall accuracy of 90.20%, with omission and commission errors both below 10%. Significantly, introducing physical constraints both improves the accuracy and stability of the model and mitigates underestimation issues. Compared to the model without physical constraints, the ART-DL SCF model shows a marked reduction of 4.79 percentage points in RMSE and 5.35 percentage points in MAE. These accuracies were significantly higher than those of the currently available SnowCCI AVHRR products from the European Space Agency (ESA). Additionally, the model exhibits strong temporal and spatial generalizability and performs well in forest areas. This study presents a physical model coupled with deep learning for SCF retrieval that can better serve global climatic, hydrological, and other related studies.
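NMAD, the less common of the three reported error measures, is the normalized median absolute deviation: 1.4826 times the median absolute deviation of the errors from their median, a robust spread estimate that equals the standard deviation for Gaussian errors. A small helper with synthetic values (illustrative only):

```python
import numpy as np

def scf_metrics(pred, ref):
    """RMSE, NMAD and bias for snow cover fraction estimates in [0, 1]."""
    d = np.asarray(pred) - np.asarray(ref)
    nmad = 1.4826 * np.median(np.abs(d - np.median(d)))   # robust spread
    return {"rmse": float(np.sqrt(np.mean(d ** 2))),
            "nmad": float(nmad),
            "bias": float(np.mean(d))}

rng = np.random.default_rng(2)
ref = rng.uniform(0.0, 1.0, 500)
pred = np.clip(ref + rng.normal(0, 0.1, 500), 0.0, 1.0)
m = scf_metrics(pred, ref)
```

Because the median is insensitive to outliers, NMAD below RMSE (as in the reported 0.1227 vs. 0.2228) suggests a heavy-tailed error distribution: most pixels are estimated well, with a minority of large errors inflating the RMSE.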

Citations: 0
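The validation metrics reported above — RMSE, NMAD, and bias for the continuous SCF, plus overall accuracy with omission and commission errors for the binary snow/no-snow check — all have standard definitions and can be sketched in a few lines of NumPy. The function names and the toy arrays below are illustrative, not from the paper.

```python
import numpy as np

def scf_metrics(pred, ref):
    """Continuous SCF validation: RMSE, bias, and NMAD
    (1.4826 * median absolute deviation of the errors)."""
    err = pred - ref
    rmse = float(np.sqrt(np.mean(err ** 2)))
    bias = float(np.mean(err))
    nmad = float(1.4826 * np.median(np.abs(err - np.median(err))))
    return rmse, bias, nmad

def binary_metrics(pred, ref, threshold=0.5):
    """Binary snow/no-snow validation: overall accuracy,
    omission error (snow missed) and commission error (false snow)."""
    p, r = pred >= threshold, ref >= threshold
    oa = float(np.mean(p == r))
    omission = float(np.sum(~p & r)) / max(int(np.sum(r)), 1)
    commission = float(np.sum(p & ~r)) / max(int(np.sum(p)), 1)
    return oa, omission, commission

ref = np.array([0.0, 0.2, 0.6, 0.9, 1.0])   # toy reference SCF values
pred = np.array([0.1, 0.2, 0.5, 0.8, 1.0])  # toy retrieved SCF values
print(scf_metrics(pred, ref))
print(binary_metrics(pred, ref))
```

With the toy arrays, the continuous metrics come out near zero bias and the binary check is perfect, mirroring the structure (not the values) of the paper's validation.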
Effective variance attention-enhanced diffusion model for crop field aerial image super resolution 用于作物田航空图像超级分辨率的有效方差注意力增强扩散模型
IF 10.6 1区 地球科学 Q1 GEOGRAPHY, PHYSICAL Pub Date : 2024-09-11 DOI: 10.1016/j.isprsjprs.2024.08.017

Image super-resolution (SR) can significantly improve the resolution and quality of aerial imagery. Emerging diffusion models (DM) have shown superior image generation capabilities through multistep refinement. To explore their effectiveness on high-resolution cropland aerial imagery SR, we first built the CropSR dataset, which includes 321,992 samples for self-supervised SR training and two real-matched SR datasets from high-low altitude orthomosaics and fixed-point photography (CropSR-OR/FP) for testing. Inspired by the observed trend of decreasing image variance with higher flight altitude, we developed the Variance-Average-Spatial Attention (VASA). The VASA demonstrated effectiveness across various types of SR models, and we further developed the Efficient VASA-enhanced Diffusion Model (EVADM). To comprehensively and consistently evaluate the quality of SR models, we introduced the Super-resolution Relative Fidelity Index (SRFI), which considers both structural and perceptual similarity. On the × 2 and × 4 real SR datasets, EVADM reduced Fréchet-Inception-Distance (FID) by 14.6 and 8.0, respectively, along with SRFI gains of 27 % and 6 % compared to the baselines. The superior generalization ability of EVADM was further validated using the open Agriculture-Vision dataset. Extensive downstream case studies have demonstrated the high practicality of our SR method, indicating a promising avenue for realistic aerial imagery enhancement and effective downstream applications. The code and dataset for testing are available at https://github.com/HobbitArmy/EVADM.

Citations: 0
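The FID score used above to rate the SR outputs compares Gaussian fits of feature activations from the two image sets: FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). A pure-NumPy sketch follows, using the identity Tr((Σ₁Σ₂)^½) = Tr((Σ₁^½ Σ₂ Σ₁^½)^½) so that only symmetric matrices need a square root; the random feature vectors stand in for the Inception activations used in practice.

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)          # clip tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.T

def fid(feats_real, feats_gen):
    """Frechet Inception Distance between two feature sets (rows = samples).
    Normally the features come from an Inception network; any 2-D arrays work."""
    mu1, mu2 = feats_real.mean(0), feats_gen.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    # Tr((s1 s2)^1/2) computed on a symmetric matrix for numerical stability
    s1_half = _sqrtm_psd(s1)
    covmean = _sqrtm_psd(s1_half @ s2 @ s1_half)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1) + np.trace(s2)
                 - 2.0 * np.trace(covmean))

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 8))
b = rng.normal(size=(500, 8))           # same distribution -> FID near 0
c = rng.normal(loc=2.0, size=(500, 8))  # shifted distribution -> large FID
print(fid(a, b), fid(a, c))
```

Lower is better, which is why the reported reductions of 14.6 and 8.0 FID points indicate outputs statistically closer to the real high-resolution imagery.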
High-resolution mapping of grassland canopy cover in China through the integration of extensive drone imagery and satellite data 通过整合大量无人机图像和卫星数据,高分辨率绘制中国草地冠层覆盖图
IF 10.6 1区 地球科学 Q1 GEOGRAPHY, PHYSICAL Pub Date : 2024-09-11 DOI: 10.1016/j.isprsjprs.2024.09.004

Canopy cover is a crucial indicator for assessing grassland health and ecosystem services. However, achieving accurate high-resolution estimates of grassland canopy cover at a large spatial scale remains challenging due to the limited spatial coverage of field measurements and the scale mismatch between field measurements and satellite imagery. In this study, we addressed these challenges by proposing a regression-based approach to estimate large-scale grassland canopy cover, leveraging the integration of drone imagery and multisource remote sensing data. Specifically, over 90,000 10 × 10 m drone image tiles were collected at 1,255 sites across China. All drone image tiles were classified into grass and non-grass pixels to generate ground-truth canopy cover estimates. These estimates were then temporally aligned with satellite imagery-derived features to build a random forest regression model to map the grassland canopy cover distribution of China. Our results revealed that a single classification model can effectively distinguish between grass and non-grass pixels in drone images collected across diverse grassland types and large spatial scales, with multilayer perceptron demonstrating superior classification accuracy compared to Canopeo, support vector machine, random forest, and pyramid scene parsing network. The integration of extensive drone imagery successfully addressed the scale-mismatch issue between traditional ground measurements and satellite imagery, contributing significantly to enhancing mapping accuracy. The national canopy cover map of China generated for the year 2021 exhibited a spatial pattern of increasing canopy cover from northwest to southeast, with an average value of 56 % and a standard deviation of 26 %. Moreover, it demonstrated high accuracy, with a coefficient of determination of 0.89 and a root-mean-squared error of 12.38 %. 
The resulting high-resolution canopy cover map of China holds great potential in advancing our comprehension of grassland ecosystem processes and advocating for the sustainable management of grassland resources.

Citations: 0
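Two steps of the workflow above are simple enough to sketch: the ground-truth canopy cover of a drone tile is just the grass-pixel fraction of its classified mask, and the map is scored with a coefficient of determination and RMSE. The array names and toy values below are illustrative; the actual regression (a random forest on satellite-derived features) is not reproduced here.

```python
import numpy as np

def canopy_cover(mask):
    """Canopy cover of one drone tile: fraction of pixels classified as grass
    (mask is a boolean grass/non-grass array from the classification model)."""
    return float(mask.mean())

def r2_rmse(pred, obs):
    """Coefficient of determination and RMSE, as used to score the final map."""
    ss_res = float(np.sum((obs - pred) ** 2))
    ss_tot = float(np.sum((obs - obs.mean()) ** 2))
    rmse = float(np.sqrt(np.mean((obs - pred) ** 2)))
    return 1.0 - ss_res / ss_tot, rmse

tile = np.array([[1, 1, 0, 1],
                 [0, 1, 1, 1],
                 [1, 0, 1, 0]], dtype=bool)   # toy 3x4 classified drone tile
print(canopy_cover(tile))                      # 8 of 12 pixels are grass

obs = np.array([0.2, 0.4, 0.5, 0.7, 0.9])      # toy site-level cover values
pred = np.array([0.25, 0.35, 0.55, 0.65, 0.85])
print(r2_rmse(pred, obs))
```

In the study the same two quantities, computed over held-out sites, give the reported R² of 0.89 and RMSE of 12.38 %.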
Review of synthetic aperture radar with deep learning in agricultural applications 深度学习合成孔径雷达在农业应用中的研究综述
IF 10.6 1区 地球科学 Q1 GEOGRAPHY, PHYSICAL Pub Date : 2024-09-10 DOI: 10.1016/j.isprsjprs.2024.08.018

Synthetic Aperture Radar (SAR) observations, valued for their consistent acquisition schedule and their insensitivity to cloud cover and day–night variation, have become widely used in a range of agricultural applications. The advent of deep learning allows salient features to be captured from SAR observations by discerning both spatial and temporal relationships within the data. This study reviews the current state of the art in the use of SAR with deep learning for crop classification/mapping, monitoring, and yield estimation, and the potential of leveraging both for the detection of agricultural management practices.

This review introduces the principles of SAR and its applications in agriculture, highlighting current limitations and challenges. It explores deep learning techniques as a solution to mitigate these issues and enhance the capability of SAR for agricultural applications. The review covers various aspects of SAR observables, methodologies for the fusion of optical and SAR data, common and emerging deep learning architectures, data augmentation techniques, validation and testing methods, and open-source reference datasets, all aimed at enhancing the precision and utility of SAR with deep learning for agricultural applications.

Citations: 0
Harmony in diversity: Content cleansing change detection framework for very-high-resolution remote-sensing images 多样性中的和谐:超高分辨率遥感图像的内容清理变化检测框架
IF 10.6 1区 地球科学 Q1 GEOGRAPHY, PHYSICAL Pub Date : 2024-09-10 DOI: 10.1016/j.isprsjprs.2024.09.002

Change detection, as a crucial task in the field of Earth observation, aims to identify changed pixels between multi-temporal remote-sensing images captured at the same geographical area. However, in practical applications, there are challenges of pseudo changes arising from diverse imaging conditions and different remote-sensing platforms. Existing methods either overlook the different imaging styles between bi-temporal images, or transfer the bi-temporal styles via domain adaptation that may lose ground details. To address these problems, we introduce the disentangled representation learning that mitigates differences of imaging styles while preserving content details to develop a change detection framework, named Content Cleansing Network (CCNet). Specifically, CCNet embeds each input image into two distinct subspaces: a shared content space and a private style space. The separation of style space aims to mitigate the discrepant style due to different imaging condition, while the extracted content space reflects semantic features that is essential for change detection. Then, a multi-resolution parallel structure constructs the content space encoder, facilitating robust feature extraction of semantic information and spatial details. The cleansed content features enable accurate detection of changes in the land surface. Additionally, a lightweight decoder for image restoration enhances the independence and interpretability of the disentangled spaces. To verify the proposed method, CCNet is applied to five public datasets and a multi-temporal dataset collected in this study. Comparative experiments against eleven advanced methods demonstrate the effectiveness and superiority of CCNet. The experimental results show that our method robustly addresses the issues related to both temporal and platform variations, making it a promising method for change detection in complex conditions and supporting downstream applications.

Citations: 0
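CCNet's separation of a shared content space from private style spaces is learned by encoders, but the underlying idea can be illustrated with a much simpler stand-in (this toy is not the authors' architecture): treat each image's global mean and standard deviation as its "style", in the spirit of instance normalization, and detect change only on the normalized "content", so that a global acquisition-style shift between the two dates does not trigger pseudo changes.

```python
import numpy as np

def split_style_content(img):
    """Toy disentanglement: per-image mean/std as 'style', the normalized
    residual as 'content' (the real CCNet learns these subspaces)."""
    style = (float(img.mean()), float(img.std()))
    content = (img - style[0]) / style[1]
    return style, content

def change_map(img1, img2, thresh=1.0):
    """Compare content only, so a global style change is ignored."""
    _, c1 = split_style_content(img1)
    _, c2 = split_style_content(img2)
    return np.abs(c1 - c2) > thresh

rng = np.random.default_rng(1)
scene = rng.normal(size=(32, 32))
t1 = scene.copy()
t2 = 1.5 * scene + 3.0                 # same scene, different imaging "style"
t2[8:12, 8:12] += 6.0                  # one genuine 4x4 change

cm = change_map(t1, t2)
print(cm.sum())                         # pixels flagged: roughly the 4x4 patch
```

A naive pixel difference between t1 and t2 would flag the whole image; the content-only comparison isolates the genuine change, which is the pseudo-change problem the paper addresses at the level of learned representations.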