
Latest Articles from ISPRS Journal of Photogrammetry and Remote Sensing

Sequential polarimetric phase optimization algorithm for dynamic deformation monitoring of landslides
IF 10.6 | CAS Zone 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-09-12 | DOI: 10.1016/j.isprsjprs.2024.08.013
Yian Wang , Jiayin Luo , Jie Dong , Jordi J. Mallorqui , Mingsheng Liao , Lu Zhang , Jianya Gong

In the era of big SAR data, there is an urgent need for dynamic time-series DInSAR processing procedures for near-real-time monitoring of landslides. However, dense vegetation coverage in mountainous areas causes severe decorrelation, which demands high precision and efficiency from phase optimization processing. Conventional phase optimization using single-polarization SAR data cannot produce satisfactory results in some natural scenarios due to the limited number of statistical samples. Novel polarimetric phase optimization algorithms, on the other hand, have low computational efficiency, limiting their application to large-scale scenarios and long data sequences. In addition, temporal changes in the scattering properties of ground features and the continuous influx of SAR data call for dynamic phase optimization processing. To achieve efficient phase optimization for dynamic DInSAR time-series analysis, we combine the Sequential Estimator (SE) with Total Power (TP) polarization stacking and solve the result with the eigendecomposition-based Maximum Likelihood Estimator (EMI), naming the method SETP-EMI. Simulation and real-data experiments demonstrate significant improvements of the SETP-EMI method in precision and efficiency over the EMI and TP-EMI methods. On the real data, SETP-EMI yields more than 50% and 20% more highly coherent points than EMI and TP-EMI, respectively, while running approximately six and two times faster. These results highlight the effectiveness of SETP-EMI in promptly capturing and analyzing evolving landslide deformations, providing valuable insights for real-time monitoring and decision-making.
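The eigendecomposition step that EMI-style estimators build on can be sketched as follows. This is a minimal, single-pixel illustration of the EMI core only (the sequential SE update and the TP polarimetric stacking are omitted), assuming a noiseless simulated coherence matrix:

```python
import numpy as np

def emi_phase_estimate(C):
    """Estimate a consistent interferometric phase series from a complex
    sample coherence matrix C (N x N, Hermitian) with the
    eigendecomposition-based maximum likelihood estimator (EMI):
    take the eigenvector of the smallest eigenvalue of the Hadamard
    product of the inverse coherence-magnitude matrix with C, then
    reference all phases to the first acquisition."""
    M = np.linalg.inv(np.abs(C)) * C      # Hadamard product, stays Hermitian
    w, v = np.linalg.eigh(M)              # eigenvalues in ascending order
    vec = v[:, 0]                         # minimum-eigenvalue eigenvector
    return np.angle(vec * np.conj(vec[0]))
```

On a noiseless coherence matrix built from known phases, the minimum-eigenvalue eigenvector recovers those phases exactly up to the first-image reference; real data would require estimating C from neighboring statistical samples first.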

ISPRS Journal of Photogrammetry and Remote Sensing, Volume 218, Pages 84-100.
Citations: 0
A general albedo recovery approach for aerial photogrammetric images through inverse rendering
IF 10.6 | CAS Zone 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-09-12 | DOI: 10.1016/j.isprsjprs.2024.09.001
Shuang Song , Rongjun Qin

Modeling outdoor scenes for a synthetic 3D environment requires recovering reflectance/albedo information from raw images, an ill-posed problem due to the complicated unmodeled physics involved (e.g., indirect lighting, volume scattering, specular reflection), and one that remains unsolved in practical contexts. The recovered albedo can facilitate model relighting and shading, further enhancing the realism of rendered models and the applications of digital twins. Typically, photogrammetric 3D models simply take the source images as texture materials, which inherently embeds unwanted lighting artifacts (present at the time of capture) into the texture. These "polluted" textures are therefore suboptimal for realistic rendering in a synthetic environment. In addition, the embedded environmental lighting challenges photo-consistency across different images, causing image-matching uncertainties. This paper presents a general image formation model for albedo recovery from typical aerial photogrammetric images under natural illumination and derives the inverse model to resolve the albedo through inverse-rendering intrinsic image decomposition. Our approach builds on the fact that both the sun illumination and the scene geometry are estimable in aerial photogrammetry, so they can provide direct inputs to this ill-posed problem. This physics-based approach requires no input beyond the data acquired through a typical drone-based photogrammetric collection and was shown to favorably outperform existing approaches. We also demonstrate that the recovered albedo image can in turn improve typical photogrammetric image processing tasks such as feature and dense matching and edge and line extraction. [This work extends our prior work "A Novel Intrinsic Image Decomposition Method to Recover Albedo for Aerial Images in Photogrammetry Processing" in ISPRS Congress 2022]. The code will be made available at github.com/GDAOSU/albedo_aerial_photogrammetry
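The idea of inverting a known illumination and geometry to isolate albedo can be sketched with a deliberately simplified Lambertian image-formation model. This is an illustrative assumption, not the paper's actual formation model, which handles natural illumination more generally:

```python
import numpy as np

def recover_albedo(image, normals, sun_dir, e_sun=1.0, e_sky=0.2):
    """Invert a toy Lambertian formation model I = albedo * shading, with
    shading = E_sun * max(0, n . l) + E_sky.  `normals` (H, W, 3) would come
    from the photogrammetric mesh and `sun_dir` (unit vector toward the sun)
    from capture time and geolocation; e_sun/e_sky are assumed irradiances."""
    ndotl = np.clip(np.einsum('hwc,c->hw', normals, sun_dir), 0.0, None)
    shading = e_sun * ndotl + e_sky
    return image / shading[..., None]   # per-pixel, per-band albedo
```

Because sun direction and scene geometry are both estimable in aerial photogrammetry, the shading term is computable per pixel, which is exactly why the inverse problem becomes tractable in this setting.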

ISPRS Journal of Photogrammetry and Remote Sensing, Volume 218, Pages 101-119.
Citations: 0
Estimating AVHRR snow cover fraction by coupling physical constraints into a deep learning framework
IF 10.6 | CAS Zone 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-09-12 | DOI: 10.1016/j.isprsjprs.2024.08.015
Qin Zhao , Xiaohua Hao , Tao Che , Donghang Shao , Wenzheng Ji , Siqiong Luo , Guanghui Huang , Tianwen Feng , Leilei Dong , Xingliang Sun , Hongyi Li , Jian Wang

Accurate snow cover information is crucial for studying global climate and hydrology. Although deep learning has advanced snow cover fraction (SCF) retrieval, its effectiveness in practical applications remains limited by its reliance on appropriate training data and the need for greater interpretability. To overcome these challenges, a novel deep learning framework coupled with the asymptotic radiative transfer (ART) model, named the ART-DL SCF model, was developed to retrieve Northern Hemisphere SCF from Advanced Very High Resolution Radiometer (AVHRR) surface reflectance data. Using Landsat 5 snow cover images as the reference SCF, the new model incorporates snow surface albedo retrieved from the ART model as a physical constraint on the relevant snow identification parameters. Comprehensive validation against the Landsat reference SCF shows an RMSE of 0.2228, an NMAD of 0.1227, and a bias of −0.0013. Binary validation reveals an overall accuracy of 90.20%, with omission and commission errors both below 10%. Significantly, introducing physical constraints both improves the accuracy and stability of the model and mitigates underestimation. Compared with the model without physical constraints, the ART-DL SCF model reduces the RMSE by 4.79 percentage points and the MAE by 5.35 percentage points. These accuracies are significantly higher than those of the SnowCCI AVHRR products currently available from the European Space Agency (ESA). Additionally, the model exhibits strong temporal and spatial generalizability and performs well in forest areas. This study presents a physical model coupled with deep learning for SCF retrieval that can better serve global climatic, hydrological, and other related studies.

ISPRS Journal of Photogrammetry and Remote Sensing, Volume 218, Pages 120-135.
Citations: 0
Effective variance attention-enhanced diffusion model for crop field aerial image super resolution
IF 10.6 | CAS Zone 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-09-11 | DOI: 10.1016/j.isprsjprs.2024.08.017
Xiangyu Lu , Jianlin Zhang , Rui Yang , Qina Yang , Mengyuan Chen , Hongxing Xu , Pinjun Wan , Jiawen Guo , Fei Liu

Image super-resolution (SR) can significantly improve the resolution and quality of aerial imagery. Emerging diffusion models (DM) have shown superior image generation capabilities through multistep refinement. To explore their effectiveness on high-resolution cropland aerial imagery SR, we first built the CropSR dataset, which includes 321,992 samples for self-supervised SR training and two real-matched SR test sets drawn from high-low-altitude orthomosaics and fixed-point photography (CropSR-OR/FP). Inspired by the observed trend of decreasing image variance with higher flight altitude, we developed Variance-Average-Spatial Attention (VASA). VASA proved effective across various types of SR models, and we further developed the Efficient VASA-enhanced Diffusion Model (EVADM). To evaluate the quality of SR models comprehensively and consistently, we introduced the Super-resolution Relative Fidelity Index (SRFI), which accounts for both structural and perceptual similarity. On the ×2 and ×4 real SR datasets, EVADM reduced the Fréchet Inception Distance (FID) by 14.6 and 8.0, respectively, with SRFI gains of 27% and 6% over the baselines. The superior generalization ability of EVADM was further validated on the open Agriculture-Vision dataset. Extensive downstream case studies demonstrate the high practicality of our SR method, indicating a promising avenue for realistic aerial imagery enhancement and effective downstream applications. The code and dataset for testing are available at https://github.com/HobbitArmy/EVADM.
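The variance-driven attention idea can be illustrated with a small spatial-gating sketch. The paper's actual VASA design is not specified in the abstract, so the gate below (per-pixel channel variance plus mean, squashed by a sigmoid and used to reweight the feature map) is only a plausible reading of the name:

```python
import numpy as np

def vasa_gate(x):
    """Hypothetical Variance-Average-Spatial-Attention gate.
    x: (C, H, W) feature map.  The spatial gate combines the per-pixel
    channel variance (informative where flight altitude preserves detail)
    with the per-pixel channel mean, then reweights every channel."""
    var_map = x.var(axis=0)                               # (H, W)
    avg_map = x.mean(axis=0)                              # (H, W)
    gate = 1.0 / (1.0 + np.exp(-(var_map + avg_map)))     # sigmoid in (0, 1)
    return x * gate[None, :, :]
```

The motivating observation (image variance decreases with flight altitude) suggests why a variance statistic is a useful attention signal for aerial SR: it concentrates the model's capacity on texture-rich regions.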

ISPRS Journal of Photogrammetry and Remote Sensing, Volume 218, Pages 50-68.
Citations: 0
High-resolution mapping of grassland canopy cover in China through the integration of extensive drone imagery and satellite data
IF 10.6 | CAS Zone 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-09-11 | DOI: 10.1016/j.isprsjprs.2024.09.004
Tianyu Hu , Mengqi Cao , Xiaoxia Zhao , Xiaoqiang Liu , Zhonghua Liu , Liangyun Liu , Zhenying Huang , Shengli Tao , Zhiyao Tang , Yanpei Guo , Chengjun Ji , Chengyang Zheng , Guoyan Wang , Xiaokang Hu , Luhong Zhou , Yunxiang Cheng , Wenhong Ma , Yonghui Wang , Pujin Zhang , Yuejun Fan , Yanjun Su

Canopy cover is a crucial indicator for assessing grassland health and ecosystem services. However, accurate high-resolution estimates of grassland canopy cover at large spatial scales remain challenging due to the limited spatial coverage of field measurements and the scale mismatch between field measurements and satellite imagery. In this study, we addressed these challenges with a regression-based approach to estimating large-scale grassland canopy cover that integrates drone imagery with multisource remote sensing data. Specifically, over 90,000 drone image tiles of 10 × 10 m were collected at 1,255 sites across China. All tiles were classified into grass and non-grass pixels to generate ground-truth canopy cover estimates. These estimates were then temporally aligned with satellite imagery-derived features to build a random forest regression model mapping the grassland canopy cover distribution of China. Our results revealed that a single classification model can effectively distinguish grass from non-grass pixels in drone images collected across diverse grassland types and large spatial scales, with a multilayer perceptron demonstrating superior classification accuracy compared to Canopeo, support vector machine, random forest, and pyramid scene parsing network approaches. The integration of extensive drone imagery successfully addressed the scale mismatch between traditional ground measurements and satellite imagery, contributing significantly to mapping accuracy. The national canopy cover map of China generated for 2021 exhibits a spatial pattern of increasing canopy cover from northwest to southeast, with an average value of 56% and a standard deviation of 26%, and demonstrates high accuracy, with a coefficient of determination of 0.89 and a root-mean-squared error of 12.38%. The resulting high-resolution canopy cover map of China holds great potential for advancing our comprehension of grassland ecosystem processes and advocating the sustainable management of grassland resources.
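The two-step pipeline (per-tile ground truth from classified drone pixels, then a random forest regression against satellite features) can be sketched as below. The function name and single-feature construction are illustrative; the paper's actual satellite feature set and model settings are not given in the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_canopy_cover_model(satellite_features, drone_masks):
    """Sketch of the regression setup: ground-truth cover for each 10 m tile
    is the fraction of drone pixels classified as grass (the classification
    itself, done by a multilayer perceptron in the paper, is assumed done);
    a random forest then regresses cover from co-located satellite features."""
    cover = np.array([mask.mean() for mask in drone_masks])  # grass fraction per tile
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(satellite_features, cover)
    return model
```

Because a random forest predicts averages of training targets, predicted cover stays within the [0, 1] range of the drone-derived fractions, a convenient property for a fraction-valued target.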

ISPRS Journal of Photogrammetry and Remote Sensing, Volume 218, Pages 69-83.
Citations: 0
Review of synthetic aperture radar with deep learning in agricultural applications
IF 10.6 | CAS Zone 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2024-09-10 | DOI: 10.1016/j.isprsjprs.2024.08.018
Mahya G.Z. Hashemi , Ehsan Jalilvand , Hamed Alemohammad , Pang-Ning Tan , Narendra N. Das

Synthetic Aperture Radar (SAR) observations, valued for their consistent acquisition schedule and their insensitivity to cloud cover and day-night variation, have become extensively utilized in a range of agricultural applications. The advent of deep learning allows salient features to be captured from SAR observations by discerning both spatial and temporal relationships within the data. This study reviews the current state of the art in using SAR with deep learning for crop classification/mapping, monitoring, and yield estimation, as well as the potential of leveraging both for detecting agricultural management practices.

This review introduces the principles of SAR and its applications in agriculture, highlighting current limitations and challenges. It explores deep learning techniques as a solution to mitigate these issues and enhance the capability of SAR for agricultural applications. The review covers various aspects of SAR observables, methodologies for the fusion of optical and SAR data, common and emerging deep learning architectures, data augmentation techniques, validation and testing methods, and open-source reference datasets, all aimed at enhancing the precision and utility of SAR with deep learning for agricultural applications.
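Among the fusion methodologies such a review covers, the simplest is input-level fusion: rescaling optical reflectance and SAR backscatter to comparable ranges and stacking them as channels of one network input. The band choices and the dB normalization range below are illustrative assumptions:

```python
import numpy as np

def fuse_optical_sar(optical, sar_db):
    """Input-level fusion sketch: clip Sentinel-2-style reflectance to [0, 1],
    map Sentinel-1-style backscatter from roughly [-30, 0] dB into [0, 1],
    and concatenate along the channel axis for a single CNN input."""
    opt = np.clip(optical, 0.0, 1.0)
    sar = np.clip((sar_db + 30.0) / 30.0, 0.0, 1.0)
    return np.concatenate([opt, sar], axis=-1)
```

More elaborate alternatives discussed in the fusion literature include feature-level fusion (separate encoders whose features are merged) and decision-level fusion (merging per-sensor predictions); input-level stacking is merely the lowest-effort baseline.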

ISPRS Journal of Photogrammetry and Remote Sensing, Volume 218, Pages 20-49.
Citations: 0
Harmony in diversity: Content cleansing change detection framework for very-high-resolution remote-sensing images
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 GEOGRAPHY, PHYSICAL · Pub Date: 2024-09-10 · DOI: 10.1016/j.isprsjprs.2024.09.002
Mofan Cheng , Wei He , Zhuohong Li , Guangyi Yang , Hongyan Zhang

Change detection, as a crucial task in the field of Earth observation, aims to identify changed pixels between multi-temporal remote-sensing images captured over the same geographical area. In practical applications, however, pseudo changes arise from diverse imaging conditions and different remote-sensing platforms. Existing methods either overlook the differing imaging styles of bi-temporal images, or transfer the bi-temporal styles via domain adaptation, which may lose ground details. To address these problems, we introduce disentangled representation learning, which mitigates differences in imaging style while preserving content details, to develop a change detection framework named the Content Cleansing Network (CCNet). Specifically, CCNet embeds each input image into two distinct subspaces: a shared content space and a private style space. Separating out the style space mitigates style discrepancies caused by differing imaging conditions, while the extracted content space captures the semantic features essential for change detection. A multi-resolution parallel structure then forms the content-space encoder, facilitating robust extraction of semantic information and spatial details. The cleansed content features enable accurate detection of changes on the land surface. Additionally, a lightweight decoder for image restoration enhances the independence and interpretability of the disentangled spaces. To verify the proposed method, CCNet is applied to five public datasets and a multi-temporal dataset collected in this study. Comparative experiments against eleven advanced methods demonstrate the effectiveness and superiority of CCNet. The experimental results show that our method robustly addresses issues related to both temporal and platform variations, making it a promising approach for change detection in complex conditions and for supporting downstream applications.
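The content/style separation idea can be illustrated with a deliberately crude stand-in: treat a per-image global offset as the private "style" code and the offset-removed residual as the shared content. This toy sketch is not CCNet's learned encoder; it only shows why comparing content features suppresses pseudo changes caused by a global illumination shift. All arrays, offsets, and thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy bi-temporal pair: identical ground content, one genuine change patch,
# plus a global "style" offset (e.g. illumination) on the second date.
content = rng.random((32, 32))
img_t1 = content.copy()
img_t2 = content.copy()
img_t2[8:12, 8:12] += 0.8     # genuine land-surface change (4x4 patch)
img_t2 += 0.3                 # pseudo change: global style/illumination shift

def split_content_style(img):
    """Crude stand-in for CCNet's subspaces: the per-image mean plays the
    private 'style' code; the mean-removed residual is the shared content."""
    style = img.mean()
    return img - style, style

c1, _ = split_content_style(img_t1)
c2, _ = split_content_style(img_t2)

raw_diff = np.abs(img_t2 - img_t1)   # contaminated by the style shift
content_diff = np.abs(c2 - c1)       # style-invariant comparison
change_mask = content_diff > 0.4     # recovers only the genuine change patch
```

Direct differencing flags every pixel because of the 0.3 offset, while differencing in the "content" space isolates the 4x4 changed patch; CCNet pursues the same effect with learned, spatially varying subspaces instead of a scalar mean.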

{"title":"Harmony in diversity: Content cleansing change detection framework for very-high-resolution remote-sensing images","authors":"Mofan Cheng ,&nbsp;Wei He ,&nbsp;Zhuohong Li ,&nbsp;Guangyi Yang ,&nbsp;Hongyan Zhang","doi":"10.1016/j.isprsjprs.2024.09.002","DOIUrl":"10.1016/j.isprsjprs.2024.09.002","url":null,"abstract":"<div><p>Change detection, as a crucial task in the field of Earth observation, aims to identify changed pixels between multi-temporal remote-sensing images captured at the same geographical area. However, in practical applications, there are challenges of pseudo changes arising from diverse imaging conditions and different remote-sensing platforms. Existing methods either overlook the different imaging styles between bi-temporal images, or transfer the bi-temporal styles via domain adaptation that may lose ground details. To address these problems, we introduce the disentangled representation learning that mitigates differences of imaging styles while preserving content details to develop a change detection framework, named Content Cleansing Network (CCNet). Specifically, CCNet embeds each input image into two distinct subspaces: a shared content space and a private style space. The separation of style space aims to mitigate the discrepant style due to different imaging condition, while the extracted content space reflects semantic features that is essential for change detection. Then, a multi-resolution parallel structure constructs the content space encoder, facilitating robust feature extraction of semantic information and spatial details. The cleansed content features enable accurate detection of changes in the land surface. Additionally, a lightweight decoder for image restoration enhances the independence and interpretability of the disentangled spaces. To verify the proposed method, CCNet is applied to five public datasets and a multi-temporal dataset collected in this study. 
Comparative experiments against eleven advanced methods demonstrate the effectiveness and superiority of CCNet. The experimental results show that our method robustly addresses the issues related to both temporal and platform variations, making it a promising method for change detection in complex conditions and supporting downstream applications.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 1-19"},"PeriodicalIF":10.6,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S092427162400340X/pdfft?md5=05257e0a48272b7c28a6809497111281&pid=1-s2.0-S092427162400340X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142164822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards SDG 11: Large-scale geographic and demographic characterisation of informal settlements fusing remote sensing, POI, and open geo-data
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 GEOGRAPHY, PHYSICAL · Pub Date: 2024-08-31 · DOI: 10.1016/j.isprsjprs.2024.08.014
Wei Tu , Dongsheng Chen , Rui Cao , Jizhe Xia , Yatao Zhang , Qingquan Li

Geographic and demographic mapping of informal settlements is essential for evaluating human-centric sustainable development in cities, thus fostering the road to Sustainable Development Goal 11. However, fine-grained geographic and demographic information on informal settlements is largely unavailable. To fill this gap, this study proposes an effective framework for fine-grained geographic and demographic characterisation of informal settlements that integrates openly available remote sensing imagery, points-of-interest (POI), and demographic data. Pixel-level informal settlements are first mapped by a hierarchical recognition method using satellite imagery and POI. The patch-scale and city-scale geographic patterns of informal settlements are further analysed with landscape metrics. Spatial-demographic profiles are then depicted by linking with the open WorldPop dataset to reveal demographic patterns. Taking the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) in China as the study area, the experiment demonstrates the effectiveness of informal settlement mapping, with an overall accuracy of 91.82%. The aggregated data and code are released (https://github.com/DongshengChen9/IF4SDG11). The demographic patterns of the informal settlements reveal that Guangzhou and Shenzhen, the two core cities in the GBA, have higher concentrations of young people living in informal settlements, while the rapidly developing Shenzhen shows a more pronounced gender imbalance in its informal settlements. These findings provide valuable insights into monitoring informal settlements in the urban agglomeration and human-centric urban sustainable development, as well as SDG indicator 11.1.1.
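A minimal sketch of the fusion idea, flagging built-up land that is anomalously under-served by POIs, is shown below. The grids, densities, and threshold are wholly assumed, and the paper's hierarchical recognition method is far richer than this single rule; the point is only how an imagery-derived mask and open geo-data can be combined on a common grid.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy inputs on a common grid (all values illustrative): a built-up mask
# derived from satellite imagery and POI counts per cell from open geo-data.
built_up = np.zeros((20, 20), dtype=bool)
built_up[5:15, 5:15] = True                  # urbanised block
poi_count = rng.poisson(6.0, size=(20, 20))  # density of formal services
poi_count[10:14, 10:14] = 0                  # an under-serviced pocket

# Rule of thumb (a crude stand-in for the paper's hierarchical recognition):
# built-up land with anomalously few POIs becomes an informal-settlement
# candidate for finer-grained inspection.
threshold = 2
informal = built_up & (poi_count <= threshold)
informal_share = informal.sum() / built_up.sum()
```

In the paper the POI evidence feeds a hierarchical classifier rather than a fixed threshold, and the resulting mask is then cross-linked with WorldPop to obtain the demographic profiles.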

{"title":"Towards SDG 11: Large-scale geographic and demographic characterisation of informal settlements fusing remote sensing, POI, and open geo-data","authors":"Wei Tu ,&nbsp;Dongsheng Chen ,&nbsp;Rui Cao ,&nbsp;Jizhe Xia ,&nbsp;Yatao Zhang ,&nbsp;Qingquan Li","doi":"10.1016/j.isprsjprs.2024.08.014","DOIUrl":"10.1016/j.isprsjprs.2024.08.014","url":null,"abstract":"<div><p>Informal settlements’ geographic and demographic mapping is essential for evaluating human-centric sustainable development in cities, thus fostering the road to Sustainable Development Goal 11. However, fine-grained informal settlements’ geographic and demographic information is not well available. To fill the gap, this study proposes an effective framework for both fine-grained geographic and demographic characterisation of informal settlements by integrating openly available remote sensing imagery, points-of-interest (POI), and demographic data. Pixel-level informal settlement is firstly mapped by a hierarchical recognition method with satellite imagery and POI. The patch-scale and city-scale geographic patterns of informal settlements are further analysed with landscape metrics. Spatial-demographic profiles are depicted by linking with the open WorldPop dataset to reveal the demographic pattern. Taking the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) in China as the study area, the experiment demonstrates the effectiveness of informal settlement mapping, with an overall accuracy of 91.82%. The aggregated data and code are released (<span><span>https://github.com/DongshengChen9/IF4SDG11</span><svg><path></path></svg></span>). The demographic patterns of the informal settlements reveal that Guangzhou and Shenzhen, the two core cities in the GBA, concentrate more on young people living in the informal settlements. While the rapid-developing city Shenzhen shows a more significant trend of gender imbalance in the informal settlements. 
These findings provide valuable insights into monitoring informal settlements in the urban agglomeration and human-centric urban sustainable development, as well as SDG 11.1.1.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"217 ","pages":"Pages 199-215"},"PeriodicalIF":10.6,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0924271624003253/pdfft?md5=ea26a3272c1484993048b4db670eff37&pid=1-s2.0-S0924271624003253-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142098347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A spatiotemporal shape model fitting method for within-season crop phenology detection
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 GEOGRAPHY, PHYSICAL · Pub Date: 2024-08-30 · DOI: 10.1016/j.isprsjprs.2024.08.009
Ruyin Cao , Luchun Li , Licong Liu , Hongyi Liang , Xiaolin Zhu , Miaogen Shen , Ji Zhou , Yuechen Li , Jin Chen

Crop phenological information must be reliably acquired early in the growing season to benefit agricultural management. Although the popular shape model fitting (SMF) method and its various improved versions (e.g., SMF by the Separate phenological stage, SMF-S) have been successfully applied to after-season crop phenology detection, these existing methods cannot be applied to within-season detection. The discrepancy arises because, in the within-season scenario, phenological stages can extend beyond the defined cut-off time; consequently, enhancing the alignment of the vegetation index (VI) curve segments prior to the cut-off time does not necessarily guarantee accurate within-season phenological detection. To resolve this issue, a new method named spatiotemporal shape model fitting (STSMF) was developed. STSMF does not seek to optimize the local curve matching between the target pixel and the shape model; instead, it identifies similar local VI trajectories among neighboring pixels in previous years. The within-season phenology of the target pixel is then estimated from the corresponding phenological stage of the matched local VI trajectories. When compared with ground phenology observations, STSMF outperformed the existing SMF and SMF-S modified for the within-season scenario (SMFws and SMFSws), with the smallest mean absolute differences (MAE) between observed phenological stages and the corresponding model estimates. The MAE values averaged over all phenological stages for STSMF, SMFSws, and SMFws were 9.8, 12.4, and 27.1 days at winter wheat stations; 8.4, 14.9, and 55.3 days at corn stations; and 7.9, 12.4, and 64.6 days at soybean stations, respectively. Intercomparisons between after-season and within-season regional phenology maps also demonstrated the superior performance of STSMF (e.g., correlation coefficients for STSMF and SMFSws are 0.89 and 0.80 at the maturity stage of winter wheat). Furthermore, the performance of STSMF was less affected by the detection time and by the choice of shape models. In conclusion, the straightforward, effective, and stable nature of STSMF makes it suitable for within-season detection of agronomic phenological stages.
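The core STSMF idea, matching the observed partial VI trajectory against local trajectories from previous years and reading the phenological stage off the best match, can be sketched as follows. The double-logistic curves, dates, and sum-of-squares matching criterion here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy annual NDVI trajectories on an 8-day compositing grid, built from a
# double-logistic curve; all parameters are illustrative assumptions.
doy = np.arange(1, 366, 8)

def ndvi_curve(sos, eos, base=0.15, amp=0.6, rate=0.08):
    """Double-logistic NDVI: green-up around `sos`, senescence around `eos`."""
    return base + amp * (1.0 / (1.0 + np.exp(-rate * (doy - sos)))
                         - 1.0 / (1.0 + np.exp(-rate * (doy - eos))))

# "Neighbouring pixels in previous years" with known start-of-season dates.
history_sos = [120, 130, 140, 150]
history = {s: ndvi_curve(s, s + 90) for s in history_sos}

# Within-season scenario: only composites up to the cut-off are observed.
true_sos = 132
cutoff = 20
partial = ndvi_curve(true_sos, true_sos + 90)[:cutoff]

def match_sos(partial_vi):
    """Pick the historical trajectory whose first len(partial_vi) composites
    best match the observed partial curve, and return its known SOS."""
    errors = {s: float(np.sum((traj[:len(partial_vi)] - partial_vi) ** 2))
              for s, traj in history.items()}
    return min(errors, key=errors.get)

estimated_sos = match_sos(partial)
```

Because the phenological stage is read from a completed historical trajectory rather than fitted to the truncated current-year curve, the estimate does not require the stage to fall before the cut-off time, which is what makes the approach usable within the season.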

{"title":"A spatiotemporal shape model fitting method for within-season crop phenology detection","authors":"Ruyin Cao ,&nbsp;Luchun Li ,&nbsp;Licong Liu ,&nbsp;Hongyi Liang ,&nbsp;Xiaolin Zhu ,&nbsp;Miaogen Shen ,&nbsp;Ji Zhou ,&nbsp;Yuechen Li ,&nbsp;Jin Chen","doi":"10.1016/j.isprsjprs.2024.08.009","DOIUrl":"10.1016/j.isprsjprs.2024.08.009","url":null,"abstract":"<div><p>Crop phenological information must be reliably acquired earlier in the growing season to benefit agricultural management. Although the popular shape model fitting (SMF) method and its various improved versions (e.g., SMF by the Separate phenological stage, SMF-S) have been successfully applied to after-season crop phenology detection, these existing methods cannot be applied to within-season crop phenology detection. This discrepancy arises due to the fact that, in the within-season scenario, phenological stages can beyond the defined cut-off time. Consequently, enhancing the alignment of the vegetation index (VI) curve segments prior to the cut-off time does not necessarily guarantee accurate within-season phenological detection. To resolve this issue, a new method named <u>s</u>patio<u>t</u>emporal <u>s</u>hape <u>m</u>odel <u>f</u>itting (STSMF) was developed. STSMF does not seek to optimize the local curve matching between the target pixel and the shape model; instead, it determines similar local VI trajectories in the neighboring pixels of previous years. The within-season phenology of the target pixel was thus estimated from the corresponding phenological stage of the determined local VI trajectories. 
When compared with ground phenology observations, STSMF outperformed the existing SMF and SMF-S which were modified for the within-season scenario (<span><math><mrow><msub><mrow><mi>SMF</mi></mrow><mrow><mi>ws</mi></mrow></msub></mrow></math></span> and <span><math><mrow><msub><mrow><mi>SMFS</mi></mrow><mrow><mi>ws</mi></mrow></msub></mrow></math></span>) with the smallest mean absolute differences (MAE) between observed phenological stages and their corresponding model estimates. The MAE values averaged over all phenological stages for STSMF, <span><math><mrow><msub><mrow><mi>SMFS</mi></mrow><mrow><mi>ws</mi></mrow></msub></mrow></math></span>, and <span><math><mrow><msub><mrow><mi>SMF</mi></mrow><mrow><mi>ws</mi></mrow></msub></mrow></math></span> were 9.8, 12.4, and 27.1 days at winter wheat stations; 8.4, 14.9, and 55.3 days at corn stations; and 7.9, 12.4, and 64.6 days at soybean stations, respectively. Intercomparisons between after-season and within-season regional phenology maps also demonstrated the superior performance of STSMF (e.g., correlation coefficients for STSMF and <span><math><mrow><msub><mrow><mi>SMFS</mi></mrow><mrow><mi>ws</mi></mrow></msub></mrow></math></span> are 0.89 and 0.80 at the maturity stage of winter wheat). Furthermore, the performance of STSMF was less affected by the detection time and the determination of shape models. 
In conclusion, the straightforward, effective, and stable nature of STSMF makes it suitable for within-season detection of agronomic phenological stages.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"217 ","pages":"Pages 179-198"},"PeriodicalIF":10.6,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142097712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Satellite remote sensing of vegetation phenology: Progress, challenges, and opportunities
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 GEOGRAPHY, PHYSICAL · Pub Date: 2024-08-29 · DOI: 10.1016/j.isprsjprs.2024.08.011
Zheng Gong , Wenyan Ge , Jiaqi Guo , Jincheng Liu

Vegetation phenology serves as a crucial indicator of ecosystem dynamics and their response to environmental cues. Against the backdrop of global climate warming, it plays a pivotal role in investigating global climate change and terrestrial ecosystem dynamics, and in guiding agricultural production. Ground-based field observations of vegetation phenology are increasingly challenged by rapid global ecological changes. Since the 1970s, the development and application of remote sensing technology have offered a novel approach to address these challenges. Satellite retrieval of phenological parameters has been widely applied in monitoring vegetation phenology, significantly advancing phenological research. This paper describes the vegetation indices, smoothing methods, and extraction techniques commonly used in monitoring vegetation phenology with satellite remote sensing. It systematically summarizes recent applications and progress of vegetation phenology remote sensing at the global scale and analyzes its challenges: the need for higher spatiotemporal resolution data to capture vegetation changes, the necessity of comparing remote sensing monitoring methods with direct field observations, the requirement to compare different remote sensing techniques to ensure accuracy, and the importance of incorporating seasonal variations and differences into phenology extraction models. It then delves into the key issues facing current vegetation phenology remote sensing, including the limitations of existing vegetation indices, the impact of spatiotemporal scale effects on phenology parameter extraction, uncertainties in phenology algorithms and machine learning, and the relationship between vegetation phenology and global climate change. Based on these discussions, it proposes several opportunities and future prospects: improving the temporal and spatial resolution of data sources, using multiple datasets to monitor vegetation phenology dynamics, quantifying uncertainties in the algorithms and machine learning processes used for phenology parameter extraction, clarifying the adaptive mechanisms of vegetation phenology under environmental change, focusing on the impact of extreme weather, and establishing an integrated “sky-space-ground” vegetation phenology monitoring network. These developments aim to enhance the accuracy of phenology extraction, to explore and understand the mechanisms of surface phenology change, and to impart greater biophysical significance to vegetation phenology parameters.
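One common extraction recipe in this literature, smoothing the VI series and then dating the crossing of a fixed fraction of the seasonal amplitude, can be sketched as follows. The moving-average smoother, the 50% threshold, and the synthetic series are illustrative choices, not a method endorsed by the review.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic noisy NDVI series (8-day composites, green-up only); curve shape,
# noise level, and threshold are illustrative assumptions.
doy = np.arange(1, 366, 8)
clean = 0.2 + 0.5 / (1.0 + np.exp(-0.08 * (doy - 140)))
noisy = clean + rng.normal(0.0, 0.02, clean.size)

def moving_average(x, window=5):
    """Simple smoother; Savitzky-Golay or logistic function fitting are the
    more usual choices in the phenology literature."""
    kernel = np.ones(window) / window
    padded = np.pad(x, window // 2, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

smooth = moving_average(noisy)

# Start of season (SOS): first composite where the smoothed VI crosses 50%
# of the seasonal amplitude.
amplitude = smooth.max() - smooth.min()
threshold = smooth.min() + 0.5 * amplitude
sos = int(doy[np.argmax(smooth >= threshold)])
```

The choice of smoother and of threshold fraction is exactly where the uncertainties discussed in the review enter: different combinations can shift the extracted dates by days to weeks.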

{"title":"Satellite remote sensing of vegetation phenology: Progress, challenges, and opportunities","authors":"Zheng Gong ,&nbsp;Wenyan Ge ,&nbsp;Jiaqi Guo ,&nbsp;Jincheng Liu","doi":"10.1016/j.isprsjprs.2024.08.011","DOIUrl":"10.1016/j.isprsjprs.2024.08.011","url":null,"abstract":"<div><p>Vegetation phenology serves as a crucial indicator of ecosystem dynamics and its response to environmental cues. Against the backdrop of global climate warming, it plays a pivotal role in delving into global climate change, terrestrial ecosystem dynamics, and guiding agricultural production. Ground-based field observations of vegetation phenology are increasingly challenged by rapid global ecological changes. Since the 1970 s, the development and application of remote sensing technology have offered a novel approach to address these challenges. Utilizing satellite remote sensing to acquire phenological parameters has been widely applied in monitoring vegetation phenology, significantly advancing phenological research. This paper describes commonly used vegetation indices, smoothing methods, and extraction techniques in monitoring vegetation phenology using satellite remote sensing. It systematically summarizes the applications and progress of vegetation phenology remote sensing at a global scale in recent years and analyzes the challenges of vegetation phenology remote sensing: These challenges include the need for higher spatiotemporal resolution data to capture vegetation changes, the necessity to compare remote sensing monitoring methods with direct field observations, the requirement to compare different remote sensing techniques to ensure accuracy, and the importance of incorporating seasonal variations and differences into phenology extraction models. 
It delves into the key issues and challenges existing in current vegetation phenology remote sensing, including the limitations of existing vegetation indices, the impact of spatiotemporal scale effects on phenology parameter extraction, uncertainties in phenology algorithms and machine learning, and the relationship between vegetation phenology and global climate change. Based on these discussions, the it proposes several opportunities and future prospects, containing improving the temporal and spatial resolution of data sources, using multiple datasets to monitor vegetation phenology dynamics, quantifying uncertainties in the algorithm and machine learning processes for phenology parameter extraction, clarifying the adaptive mechanisms of vegetation phenology to environmental changes, focusing on the impact of extreme weather, and establishing an integrated “sky-space-ground” vegetation phenology monitoring network. These developments aim to enhance the accuracy of phenology extraction, explore and understand the mechanisms of surface phenology changes, and impart more biophysical significance to vegetation phenology parameters.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"217 ","pages":"Pages 149-164"},"PeriodicalIF":10.6,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142090450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0