Sequential polarimetric phase optimization algorithm for dynamic deformation monitoring of landslides
Pub Date: 2024-09-12 | DOI: 10.1016/j.isprsjprs.2024.08.013
Yian Wang, Jiayin Luo, Jie Dong, Jordi J. Mallorqui, Mingsheng Liao, Lu Zhang, Jianya Gong
In the era of big SAR data, it is urgent to develop dynamic time series DInSAR processing procedures for near-real-time monitoring of landslides. However, the dense vegetation coverage in mountainous areas causes severe decorrelation, which demands high precision and efficiency from phase optimization processing. Common phase optimization using single-polarization SAR data cannot produce satisfactory results due to the limited statistical samples in some natural scenarios. Novel polarimetric phase optimization algorithms, however, have low computational efficiency, limiting their application to large-scale scenarios and long data sequences. In addition, temporal changes in the scattering properties of ground features and the continuous increase of SAR data require dynamic phase optimization processing. To achieve efficient phase optimization for dynamic DInSAR time series analysis, we combine the Sequential Estimator (SE) with the Total Power (TP) polarization stacking method and solve it using the eigendecomposition-based Maximum Likelihood Estimator (EMI); the combined method is named SETP-EMI. Simulation and real data experiments demonstrate the significant improvements of the SETP-EMI method in precision and efficiency over the EMI and TP-EMI methods. On the real data, SETP-EMI yields more than 50% and 20% more highly coherent points than EMI and TP-EMI, respectively, while running approximately six and two times faster. These results highlight the effectiveness of the SETP-EMI method in promptly capturing and analyzing evolving landslide deformations, providing valuable insights for real-time monitoring and decision-making.
{"title":"Sequential polarimetric phase optimization algorithm for dynamic deformation monitoring of landslides","authors":"Yian Wang , Jiayin Luo , Jie Dong , Jordi J. Mallorqui , Mingsheng Liao , Lu Zhang , Jianya Gong","doi":"10.1016/j.isprsjprs.2024.08.013","DOIUrl":"10.1016/j.isprsjprs.2024.08.013","url":null,"abstract":"<div><p>In the era of big SAR data, it is urgent to develop dynamic time series DInSAR processing procedures for near-real-time monitoring of landslides. However, the dense vegetation coverage in mountainous areas causes severe decorrelations, which demands high precision and efficiency of phase optimization processing. The common phase optimization using single-polarization SAR data cannot produce satisfactory results due to the limited statistical samples in some natural scenarios. The novel polarimetric phase optimization algorithms, however, have low computational efficiency, limiting their applications in large-scale scenarios and long data sequences. In addition, temporal changes in the scattering properties of ground features and the continuous increase of SAR data require dynamic phase optimization processing. To achieve efficient phase optimization for dynamic DInSAR time series analysis, we combine the Sequential Estimator (SE) with the Total Power (TP) polarization stacking method and solve it using eigen decomposition-based Maximum Likelihood Estimator (EMI), named SETP-EMI. The simulation and real data experiments demonstrate the significant improvements of the SETP-EMI method in precision and efficiency compared to the EMI and TP-EMI methods. The SETP-EMI exhibits an increase of more than 50% and 20% in highly coherent points for the real data compared to the EMI and TP-EMI, respectively. It, meanwhile, achieves approximately six and two times more efficient than the EMI and TP-EMI methods in the real data case. These results highlight the effectiveness of the SETP-EMI method in promptly capturing and analyzing evolving landslide deformations, providing valuable insights for real-time monitoring and decision-making.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 84-100"},"PeriodicalIF":10.6,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142167172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A general albedo recovery approach for aerial photogrammetric images through inverse rendering
Pub Date: 2024-09-12 | DOI: 10.1016/j.isprsjprs.2024.09.001
Shuang Song, Rongjun Qin
Modeling outdoor scenes for a synthetic 3D environment requires recovering reflectance/albedo information from raw images, an ill-posed problem due to the complicated unmodeled physics involved (e.g., indirect lighting, volume scattering, specular reflection), and one that remains unsolved in practical contexts. The recovered albedo can facilitate model relighting and shading, which can further enhance the realism of rendered models and the applications of digital twins. Typically, photogrammetric 3D models simply take the source images as texture materials, which inherently embeds unwanted lighting artifacts (at the time of capture) into the texture. These “polluted” textures are therefore suboptimal for realistic rendering in a synthetic environment. In addition, the embedded environmental lighting challenges photo-consistency across different images, causing image-matching uncertainties. This paper presents a general image formation model for albedo recovery from typical aerial photogrammetric images under natural illumination and derives the inverse model to resolve the albedo through inverse rendering and intrinsic image decomposition. Our approach builds on the fact that both the sun illumination and the scene geometry are estimable in aerial photogrammetry, so they can provide direct inputs to this ill-posed problem. This physics-based approach requires no input beyond the data acquired in a typical drone-based photogrammetric collection and was shown to favorably outperform existing approaches. We also demonstrate that the recovered albedo image can in turn improve typical image processing tasks in photogrammetry, such as feature and dense matching and edge and line extraction. [This work extends our prior work “A Novel Intrinsic Image Decomposition Method to Recover Albedo for Aerial Images in Photogrammetry Processing” in ISPRS Congress 2022]. The code will be made available at github.com/GDAOSU/albedo_aerial_photogrammetry
{"title":"A general albedo recovery approach for aerial photogrammetric images through inverse rendering","authors":"Shuang Song , Rongjun Qin","doi":"10.1016/j.isprsjprs.2024.09.001","DOIUrl":"10.1016/j.isprsjprs.2024.09.001","url":null,"abstract":"<div><p>Modeling outdoor scenes for the synthetic 3D environment requires the recovery of reflectance/albedo information from raw images, which is an ill-posed problem due to the complicated unmodeled physics in this process (e.g., indirect lighting, volume scattering, specular reflection). The problem remains unsolved in a practical context. The recovered albedo can facilitate model relighting and shading, which can further enhance the realism of rendered models and the applications of digital twins. Typically, photogrammetric 3D models simply take the source images as texture materials, which inherently embed unwanted lighting artifacts (at the time of capture) into the texture. Therefore, these “polluted” textures are suboptimal for a synthetic environment to enable realistic rendering. In addition, these embedded environmental lightings further bring challenges to photo-consistencies across different images that cause image-matching uncertainties. This paper presents a general image formation model for albedo recovery from typical aerial photogrammetric images under natural illuminations and derives the inverse model to resolve the albedo information through inverse rendering intrinsic image decomposition. Our approach builds on the fact that both the sun illumination and scene geometry are estimable in aerial photogrammetry, thus they can provide direct inputs for this ill-posed problem. This physics-based approach does not require additional input other than data acquired through the typical drone-based photogrammetric collection and was shown to favorably outperform existing approaches. We also demonstrate that the recovered albedo image can in turn improve typical image processing tasks in photogrammetry such as feature and dense matching, edge, and line extraction. [This work extends our prior work “A Novel Intrinsic Image Decomposition Method to Recover Albedo for Aerial Images in Photogrammetry Processing” in ISPRS Congress 2022]. The code will be made available at <span><span>github.com/GDAOSU/albedo_aerial_photogrammetry</span><svg><path></path></svg></span></p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 101-119"},"PeriodicalIF":10.6,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142171549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating AVHRR snow cover fraction by coupling physical constraints into a deep learning framework
Pub Date: 2024-09-12 | DOI: 10.1016/j.isprsjprs.2024.08.015
Qin Zhao, Xiaohua Hao, Tao Che, Donghang Shao, Wenzheng Ji, Siqiong Luo, Guanghui Huang, Tianwen Feng, Leilei Dong, Xingliang Sun, Hongyi Li, Jian Wang
Accurate snow cover information is crucial for studying global climate and hydrology. Although deep learning has innovated snow cover fraction (SCF) retrieval, its effectiveness in practical applications remains limited by its reliance on appropriate training data and by the need for better interpretability. To overcome these challenges, a novel deep learning framework coupled with the asymptotic radiative transfer (ART) model, named the ART-DL SCF model, was developed to retrieve Northern Hemisphere SCF from Advanced Very High Resolution Radiometer (AVHRR) surface reflectance data. Using Landsat 5 snow cover images as the reference SCF, the new model incorporates snow surface albedo retrieved from the ART model as a physical constraint on the relevant snow identification parameters. Comprehensive validation against the Landsat reference SCF shows an RMSE of 0.2228, an NMAD of 0.1227, and a bias of −0.0013. Moreover, binary validation reveals an overall accuracy of 90.20%, with omission and commission errors both below 10%. Significantly, introducing physical constraints both improves the accuracy and stability of the model and mitigates underestimation issues. Compared with the model without physical constraints, the ART-DL SCF model reduces the RMSE by 4.79 percentage points and the MAE by 5.35 percentage points. These accuracies are significantly higher than those of the currently available SnowCCI AVHRR products from the European Space Agency (ESA). Additionally, the model exhibits strong temporal and spatial generalizability and performs well in forest areas. This study presents a physical model coupled with deep learning for SCF retrieval that can better serve global climatic, hydrological, and other related studies.
{"title":"Estimating AVHRR snow cover fraction by coupling physical constraints into a deep learning framework","authors":"Qin Zhao , Xiaohua Hao , Tao Che , Donghang Shao , Wenzheng Ji , Siqiong Luo , Guanghui Huang , Tianwen Feng , Leilei Dong , Xingliang Sun , Hongyi Li , Jian Wang","doi":"10.1016/j.isprsjprs.2024.08.015","DOIUrl":"10.1016/j.isprsjprs.2024.08.015","url":null,"abstract":"<div><p>Accurate snow cover information is crucial for studying global climate and hydrology. Although deep learning has innovated snow cover fraction (SCF) retrieval, its effectiveness in practical application remains limited. This limitation stems from its reliance on appropriate training data and the necessity for more advanced interpretability. To overcome these challenges, a novel deep learning framework model by coupling the asymptotic radiative transfer (ART) model was developed to retrieve the Northern Hemisphere SCF based on advanced very high-resolution radiometer (AVHRR) surface reflectance data, named the ART-DL SCF model. Using Landsat 5 snow cover images as the reference SCF, the new model incorporates snow surface albedo retrieval from the ART model as a physical constraint into relevant snow identification parameters. Comprehensive validation results with Landsat reference SCF show an RMSE of 0.2228, an NMAD of 0.1227, and a bias of −0.0013. Moreover, the binary validation reveals an overall accuracy of 90.20%, with omission and commission errors both below 10%. Significantly, introducing physical constraints both improves the accuracy and stability of the model and mitigates underestimation issues. Compared to the model without physical constraints, the ART-DL SCF model shows a marked reduction of 4.79 percentage points in the RMSE and 5.35 percentage points in MAE. These accuracies were significantly higher than the currently available SnowCCI AVHRR products from the European Space Agency (ESA). Additionally, the model exhibits strong temporal and spatial generalizability and performs well in forest areas. This study presents a physical model coupled with deep learning for SCF retrieval that can better serve global climatic, hydrological, and other related studies.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 120-135"},"PeriodicalIF":10.6,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142171550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effective variance attention-enhanced diffusion model for crop field aerial image super resolution
Pub Date: 2024-09-11 | DOI: 10.1016/j.isprsjprs.2024.08.017
Xiangyu Lu, Jianlin Zhang, Rui Yang, Qina Yang, Mengyuan Chen, Hongxing Xu, Pinjun Wan, Jiawen Guo, Fei Liu
Image super-resolution (SR) can significantly improve the resolution and quality of aerial imagery. Emerging diffusion models (DM) have shown superior image generation capabilities through multistep refinement. To explore their effectiveness on high-resolution cropland aerial imagery SR, we first built the CropSR dataset, which includes 321,992 samples for self-supervised SR training and two real-matched SR datasets for testing, derived from high-low altitude orthomosaics and fixed-point photography (CropSR-OR/FP). Inspired by the observed trend of decreasing image variance with higher flight altitude, we developed Variance-Average-Spatial Attention (VASA). VASA demonstrated effectiveness across various types of SR models, and we further developed the Efficient VASA-enhanced Diffusion Model (EVADM). To evaluate the quality of SR models comprehensively and consistently, we introduced the Super-resolution Relative Fidelity Index (SRFI), which considers both structural and perceptual similarity. On the ×2 and ×4 real SR datasets, EVADM reduced the Fréchet Inception Distance (FID) by 14.6 and 8.0, respectively, along with SRFI gains of 27 % and 6 % compared to the baselines. The superior generalization ability of EVADM was further validated using the open Agriculture-Vision dataset. Extensive downstream case studies demonstrate the high practicality of our SR method, indicating a promising avenue for realistic aerial imagery enhancement and effective downstream applications. The code and dataset for testing are available at https://github.com/HobbitArmy/EVADM.
{"title":"Effective variance attention-enhanced diffusion model for crop field aerial image super resolution","authors":"Xiangyu Lu , Jianlin Zhang , Rui Yang , Qina Yang , Mengyuan Chen , Hongxing Xu , Pinjun Wan , Jiawen Guo , Fei Liu","doi":"10.1016/j.isprsjprs.2024.08.017","DOIUrl":"10.1016/j.isprsjprs.2024.08.017","url":null,"abstract":"<div><p>Image super-resolution (SR) can significantly improve the resolution and quality of aerial imagery. Emerging diffusion models (DM) have shown superior image generation capabilities through multistep refinement. To explore their effectiveness on high-resolution cropland aerial imagery SR, we first built the CropSR dataset, which includes 321,992 samples for self-supervised SR training and two real-matched SR datasets from high-low altitude orthomosaics and fixed-point photography (CropSR-OR/FP) for testing. Inspired by the observed trend of decreasing image variance with higher flight altitude, we developed the Variance-Average-Spatial Attention (VASA). The VASA demonstrated effectiveness across various types of SR models, and we further developed the Efficient VASA-enhanced Diffusion Model (EVADM). To comprehensively and consistently evaluate the quality of SR models, we introduced the Super-resolution Relative Fidelity Index (SRFI), which considers both structural and perceptual similarity. On the × 2 and × 4 real SR datasets, EVADM reduced Fréchet-Inception-Distance (FID) by 14.6 and 8.0, respectively, along with SRFI gains of 27 % and 6 % compared to the baselines. The superior generalization ability of EVADM was further validated using the open Agriculture-Vision dataset. Extensive downstream case studies have demonstrated the high practicality of our SR method, indicating a promising avenue for realistic aerial imagery enhancement and effective downstream applications. The code and dataset for testing are available at <span><span>https://github.com/HobbitArmy/EVADM</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 50-68"},"PeriodicalIF":10.6,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142167591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-resolution mapping of grassland canopy cover in China through the integration of extensive drone imagery and satellite data
Pub Date: 2024-09-11 | DOI: 10.1016/j.isprsjprs.2024.09.004
Tianyu Hu, Mengqi Cao, Xiaoxia Zhao, Xiaoqiang Liu, Zhonghua Liu, Liangyun Liu, Zhenying Huang, Shengli Tao, Zhiyao Tang, Yanpei Guo, Chengjun Ji, Chengyang Zheng, Guoyan Wang, Xiaokang Hu, Luhong Zhou, Yunxiang Cheng, Wenhong Ma, Yonghui Wang, Pujin Zhang, Yuejun Fan, Yanjun Su
Canopy cover is a crucial indicator for assessing grassland health and ecosystem services. However, achieving accurate high-resolution estimates of grassland canopy cover at a large spatial scale remains challenging due to the limited spatial coverage of field measurements and the scale mismatch between field measurements and satellite imagery. In this study, we addressed these challenges by proposing a regression-based approach to estimate large-scale grassland canopy cover, leveraging the integration of drone imagery and multisource remote sensing data. Specifically, over 90,000 10 × 10 m drone image tiles were collected at 1,255 sites across China. All drone image tiles were classified into grass and non-grass pixels to generate ground-truth canopy cover estimates. These estimates were then temporally aligned with satellite imagery-derived features to build a random forest regression model to map the grassland canopy cover distribution of China. Our results revealed that a single classification model can effectively distinguish between grass and non-grass pixels in drone images collected across diverse grassland types and large spatial scales, with multilayer perceptron demonstrating superior classification accuracy compared to Canopeo, support vector machine, random forest, and pyramid scene parsing network. The integration of extensive drone imagery successfully addressed the scale-mismatch issue between traditional ground measurements and satellite imagery, contributing significantly to enhancing mapping accuracy. The national canopy cover map of China generated for the year 2021 exhibited a spatial pattern of increasing canopy cover from northwest to southeast, with an average value of 56 % and a standard deviation of 26 %. Moreover, it demonstrated high accuracy, with a coefficient of determination of 0.89 and a root-mean-squared error of 12.38 %. The resulting high-resolution canopy cover map of China holds great potential in advancing our comprehension of grassland ecosystem processes and advocating for the sustainable management of grassland resources.
{"title":"High-resolution mapping of grassland canopy cover in China through the integration of extensive drone imagery and satellite data","authors":"Tianyu Hu , Mengqi Cao , Xiaoxia Zhao , Xiaoqiang Liu , Zhonghua Liu , Liangyun Liu , Zhenying Huang , Shengli Tao , Zhiyao Tang , Yanpei Guo , Chengjun Ji , Chengyang Zheng , Guoyan Wang , Xiaokang Hu , Luhong Zhou , Yunxiang Cheng , Wenhong Ma , Yonghui Wang , Pujin Zhang , Yuejun Fan , Yanjun Su","doi":"10.1016/j.isprsjprs.2024.09.004","DOIUrl":"10.1016/j.isprsjprs.2024.09.004","url":null,"abstract":"<div><p>Canopy cover is a crucial indicator for assessing grassland health and ecosystem services. However, achieving accurate high-resolution estimates of grassland canopy cover at a large spatial scale remains challenging due to the limited spatial coverage of field measurements and the scale mismatch between field measurements and satellite imagery. In this study, we addressed these challenges by proposing a regression-based approach to estimate large-scale grassland canopy cover, leveraging the integration of drone imagery and multisource remote sensing data. Specifically, over 90,000 10 × 10 m drone image tiles were collected at 1,255 sites across China. All drone image tiles were classified into grass and non-grass pixels to generate ground-truth canopy cover estimates. These estimates were then temporally aligned with satellite imagery-derived features to build a random forest regression model to map the grassland canopy cover distribution of China. Our results revealed that a single classification model can effectively distinguish between grass and non-grass pixels in drone images collected across diverse grassland types and large spatial scales, with multilayer perceptron demonstrating superior classification accuracy compared to Canopeo, support vector machine, random forest, and pyramid scene parsing network. The integration of extensive drone imagery successfully addressed the scale-mismatch issue between traditional ground measurements and satellite imagery, contributing significantly to enhancing mapping accuracy. The national canopy cover map of China generated for the year 2021 exhibited a spatial pattern of increasing canopy cover from northwest to southeast, with an average value of 56 % and a standard deviation of 26 %. Moreover, it demonstrated high accuracy, with a coefficient of determination of 0.89 and a root-mean-squared error of 12.38 %. The resulting high-resolution canopy cover map of China holds great potential in advancing our comprehension of grassland ecosystem processes and advocating for the sustainable management of grassland resources.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 69-83"},"PeriodicalIF":10.6,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142167593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Review of synthetic aperture radar with deep learning in agricultural applications
Pub Date: 2024-09-10 | DOI: 10.1016/j.isprsjprs.2024.08.018
Mahya G.Z. Hashemi, Ehsan Jalilvand, Hamed Alemohammad, Pang-Ning Tan, Narendra N. Das
Synthetic Aperture Radar (SAR) observations, valued for their consistent acquisition schedule and their insensitivity to cloud cover and day-night variations, have become extensively utilized in a range of agricultural applications. The advent of deep learning allows salient features to be captured from SAR observations by discerning both spatial and temporal relationships within the data. This study reviews the current state of the art in the use of SAR with deep learning for crop classification/mapping, monitoring, and yield estimation, and the potential of leveraging both for the detection of agricultural management practices.
This review introduces the principles of SAR and its applications in agriculture, highlighting current limitations and challenges. It explores deep learning techniques as a solution to mitigate these issues and enhance the capability of SAR for agricultural applications. The review covers various aspects of SAR observables, methodologies for the fusion of optical and SAR data, common and emerging deep learning architectures, data augmentation techniques, validation and testing methods, and open-source reference datasets, all aimed at enhancing the precision and utility of SAR with deep learning for agricultural applications.
{"title":"Review of synthetic aperture radar with deep learning in agricultural applications","authors":"Mahya G.Z. Hashemi , Ehsan Jalilvand , Hamed Alemohammad , Pang-Ning Tan , Narendra N. Das","doi":"10.1016/j.isprsjprs.2024.08.018","DOIUrl":"10.1016/j.isprsjprs.2024.08.018","url":null,"abstract":"<div><p>Synthetic Aperture Radar (SAR) observations, valued for their consistent acquisition schedule and not being affected by cloud cover and variations between day and night, have become extensively utilized in a range of agricultural applications. The advent of deep learning allows for the capture of salient features from SAR observations. This is accomplished through discerning both spatial and temporal relationships within SAR data. This study reviews the current state of the art in the use of SAR with deep learning for crop classification/mapping, monitoring and yield estimation applications and the potential of leveraging both for the detection of agricultural management practices.</p><p>This review introduces the principles of SAR and its applications in agriculture, highlighting current limitations and challenges. It explores deep learning techniques as a solution to mitigate these issues and enhance the capability of SAR for agricultural applications. The review covers various aspects of SAR observables, methodologies for the fusion of optical and SAR data, common and emerging deep learning architectures, data augmentation techniques, validation and testing methods, and open-source reference datasets, all aimed at enhancing the precision and utility of SAR with deep learning for agricultural applications.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 20-49"},"PeriodicalIF":10.6,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142164823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Harmony in diversity: Content cleansing change detection framework for very-high-resolution remote-sensing images
Pub Date: 2024-09-10 | DOI: 10.1016/j.isprsjprs.2024.09.002
Mofan Cheng, Wei He, Zhuohong Li, Guangyi Yang, Hongyan Zhang
Change detection, a crucial task in the field of Earth observation, aims to identify changed pixels between multi-temporal remote-sensing images captured over the same geographical area. In practical applications, however, pseudo-changes arising from diverse imaging conditions and different remote-sensing platforms pose a challenge. Existing methods either overlook the different imaging styles between bi-temporal images or transfer the bi-temporal styles via domain adaptation, which may lose ground details. To address these problems, we introduce disentangled representation learning, which mitigates differences in imaging style while preserving content details, to develop a change detection framework named the Content Cleansing Network (CCNet). Specifically, CCNet embeds each input image into two distinct subspaces: a shared content space and a private style space. Separating out the style space mitigates the style discrepancies caused by differing imaging conditions, while the extracted content space reflects the semantic features essential for change detection. The content-space encoder is built as a multi-resolution parallel structure, facilitating robust extraction of semantic information and spatial details. The cleansed content features enable accurate detection of changes in the land surface. Additionally, a lightweight decoder for image restoration enhances the independence and interpretability of the disentangled spaces. To verify the proposed method, CCNet is applied to five public datasets and a multi-temporal dataset collected in this study. Comparative experiments against eleven advanced methods demonstrate the effectiveness and superiority of CCNet. The experimental results show that our method robustly addresses both temporal and platform variations, making it a promising method for change detection in complex conditions and for supporting downstream applications.
{"title":"Harmony in diversity: Content cleansing change detection framework for very-high-resolution remote-sensing images","authors":"Mofan Cheng , Wei He , Zhuohong Li , Guangyi Yang , Hongyan Zhang","doi":"10.1016/j.isprsjprs.2024.09.002","DOIUrl":"10.1016/j.isprsjprs.2024.09.002","url":null,"abstract":"<div><p>Change detection, as a crucial task in the field of Earth observation, aims to identify changed pixels between multi-temporal remote-sensing images captured at the same geographical area. However, in practical applications, there are challenges of pseudo changes arising from diverse imaging conditions and different remote-sensing platforms. Existing methods either overlook the different imaging styles between bi-temporal images, or transfer the bi-temporal styles via domain adaptation that may lose ground details. To address these problems, we introduce the disentangled representation learning that mitigates differences of imaging styles while preserving content details to develop a change detection framework, named Content Cleansing Network (CCNet). Specifically, CCNet embeds each input image into two distinct subspaces: a shared content space and a private style space. The separation of style space aims to mitigate the discrepant style due to different imaging condition, while the extracted content space reflects semantic features that is essential for change detection. Then, a multi-resolution parallel structure constructs the content space encoder, facilitating robust feature extraction of semantic information and spatial details. The cleansed content features enable accurate detection of changes in the land surface. Additionally, a lightweight decoder for image restoration enhances the independence and interpretability of the disentangled spaces. To verify the proposed method, CCNet is applied to five public datasets and a multi-temporal dataset collected in this study. Comparative experiments against eleven advanced methods demonstrate the effectiveness and superiority of CCNet. The experimental results show that our method robustly addresses the issues related to both temporal and platform variations, making it a promising method for change detection in complex conditions and supporting downstream applications.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 1-19"},"PeriodicalIF":10.6,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S092427162400340X/pdfft?md5=05257e0a48272b7c28a6809497111281&pid=1-s2.0-S092427162400340X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142164822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards SDG 11: Large-scale geographic and demographic characterisation of informal settlements fusing remote sensing, POI, and open geo-data
Pub Date: 2024-08-31 | DOI: 10.1016/j.isprsjprs.2024.08.014
Wei Tu, Dongsheng Chen, Rui Cao, Jizhe Xia, Yatao Zhang, Qingquan Li
Geographic and demographic mapping of informal settlements is essential for evaluating human-centric sustainable development in cities, thus fostering the road to Sustainable Development Goal 11. However, fine-grained geographic and demographic information on informal settlements is not readily available. To fill this gap, this study proposes an effective framework for fine-grained geographic and demographic characterisation of informal settlements that integrates openly available remote sensing imagery, points-of-interest (POI), and demographic data. Informal settlements are first mapped at the pixel level by a hierarchical recognition method using satellite imagery and POI. The patch-scale and city-scale geographic patterns of informal settlements are then analysed with landscape metrics. Spatial-demographic profiles are depicted by linking with the open WorldPop dataset to reveal the demographic pattern. Taking the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) in China as the study area, the experiment demonstrates the effectiveness of informal settlement mapping, with an overall accuracy of 91.82%. The aggregated data and code are released (https://github.com/DongshengChen9/IF4SDG11). The demographic patterns reveal that Guangzhou and Shenzhen, the two core cities in the GBA, concentrate more young people in their informal settlements, while the rapidly developing city of Shenzhen shows a more pronounced gender imbalance in its informal settlements. These findings provide valuable insights into monitoring informal settlements in the urban agglomeration and human-centric urban sustainable development, as well as SDG indicator 11.1.1.
{"title":"Towards SDG 11: Large-scale geographic and demographic characterisation of informal settlements fusing remote sensing, POI, and open geo-data","authors":"Wei Tu , Dongsheng Chen , Rui Cao , Jizhe Xia , Yatao Zhang , Qingquan Li","doi":"10.1016/j.isprsjprs.2024.08.014","DOIUrl":"10.1016/j.isprsjprs.2024.08.014","url":null,"abstract":"<div><p>Informal settlements’ geographic and demographic mapping is essential for evaluating human-centric sustainable development in cities, thus fostering the road to Sustainable Development Goal 11. However, fine-grained informal settlements’ geographic and demographic information is not well available. To fill the gap, this study proposes an effective framework for both fine-grained geographic and demographic characterisation of informal settlements by integrating openly available remote sensing imagery, points-of-interest (POI), and demographic data. Pixel-level informal settlement is firstly mapped by a hierarchical recognition method with satellite imagery and POI. The patch-scale and city-scale geographic patterns of informal settlements are further analysed with landscape metrics. Spatial-demographic profiles are depicted by linking with the open WorldPop dataset to reveal the demographic pattern. Taking the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) in China as the study area, the experiment demonstrates the effectiveness of informal settlement mapping, with an overall accuracy of 91.82%. The aggregated data and code are released (<span><span>https://github.com/DongshengChen9/IF4SDG11</span><svg><path></path></svg></span>). The demographic patterns of the informal settlements reveal that Guangzhou and Shenzhen, the two core cities in the GBA, concentrate more on young people living in the informal settlements. While the rapid-developing city Shenzhen shows a more significant trend of gender imbalance in the informal settlements. These findings provide valuable insights into monitoring informal settlements in the urban agglomeration and human-centric urban sustainable development, as well as SDG 11.1.1.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"217 ","pages":"Pages 199-215"},"PeriodicalIF":10.6,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0924271624003253/pdfft?md5=ea26a3272c1484993048b4db670eff37&pid=1-s2.0-S0924271624003253-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142098347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A spatiotemporal shape model fitting method for within-season crop phenology detection
Pub Date: 2024-08-30 | DOI: 10.1016/j.isprsjprs.2024.08.009
Ruyin Cao, Luchun Li, Licong Liu, Hongyi Liang, Xiaolin Zhu, Miaogen Shen, Ji Zhou, Yuechen Li, Jin Chen
Crop phenological information must be reliably acquired early in the growing season to benefit agricultural management. Although the popular shape model fitting (SMF) method and its various improved versions (e.g., SMF by Separate phenological stage, SMF-S) have been successfully applied to after-season crop phenology detection, these existing methods cannot be applied to within-season detection because, in the within-season scenario, phenological stages can fall beyond the defined cut-off time. Consequently, enhancing the alignment of the vegetation index (VI) curve segments prior to the cut-off time does not guarantee accurate within-season phenological detection. To resolve this issue, a new method named spatiotemporal shape model fitting (STSMF) was developed. STSMF does not seek to optimize the local curve matching between the target pixel and the shape model; instead, it finds similar local VI trajectories among the neighboring pixels of previous years, and the within-season phenology of the target pixel is estimated from the corresponding phenological stage of those trajectories. When compared with ground phenology observations, STSMF outperformed the existing SMF and SMF-S modified for the within-season scenario (SMFws and SMFSws), with the smallest mean absolute differences (MAE) between observed phenological stages and the corresponding model estimates. The MAE values averaged over all phenological stages for STSMF, SMFSws, and SMFws were 9.8, 12.4, and 27.1 days at winter wheat stations; 8.4, 14.9, and 55.3 days at corn stations; and 7.9, 12.4, and 64.6 days at soybean stations, respectively. Intercomparisons between after-season and within-season regional phenology maps also demonstrated the superior performance of STSMF (e.g., the correlation coefficients for STSMF and SMFSws are 0.89 and 0.80 at the maturity stage of winter wheat). Furthermore, the performance of STSMF was less affected by the detection time and the determination of shape models. In conclusion, the straightforward, effective, and stable nature of STSMF makes it suitable for within-season detection of agronomic phenological stages.
{"title":"A spatiotemporal shape model fitting method for within-season crop phenology detection","authors":"Ruyin Cao , Luchun Li , Licong Liu , Hongyi Liang , Xiaolin Zhu , Miaogen Shen , Ji Zhou , Yuechen Li , Jin Chen","doi":"10.1016/j.isprsjprs.2024.08.009","DOIUrl":"10.1016/j.isprsjprs.2024.08.009","url":null,"abstract":"<div><p>Crop phenological information must be reliably acquired earlier in the growing season to benefit agricultural management. Although the popular shape model fitting (SMF) method and its various improved versions (e.g., SMF by the Separate phenological stage, SMF-S) have been successfully applied to after-season crop phenology detection, these existing methods cannot be applied to within-season crop phenology detection. This discrepancy arises due to the fact that, in the within-season scenario, phenological stages can beyond the defined cut-off time. Consequently, enhancing the alignment of the vegetation index (VI) curve segments prior to the cut-off time does not necessarily guarantee accurate within-season phenological detection. To resolve this issue, a new method named <u>s</u>patio<u>t</u>emporal <u>s</u>hape <u>m</u>odel <u>f</u>itting (STSMF) was developed. STSMF does not seek to optimize the local curve matching between the target pixel and the shape model; instead, it determines similar local VI trajectories in the neighboring pixels of previous years. The within-season phenology of the target pixel was thus estimated from the corresponding phenological stage of the determined local VI trajectories. When compared with ground phenology observations, STSMF outperformed the existing SMF and SMF-S which were modified for the within-season scenario (<span><math><mrow><msub><mrow><mi>SMF</mi></mrow><mrow><mi>ws</mi></mrow></msub></mrow></math></span> and <span><math><mrow><msub><mrow><mi>SMFS</mi></mrow><mrow><mi>ws</mi></mrow></msub></mrow></math></span>) with the smallest mean absolute differences (MAE) between observed phenological stages and their corresponding model estimates. The MAE values averaged over all phenological stages for STSMF, <span><math><mrow><msub><mrow><mi>SMFS</mi></mrow><mrow><mi>ws</mi></mrow></msub></mrow></math></span>, and <span><math><mrow><msub><mrow><mi>SMF</mi></mrow><mrow><mi>ws</mi></mrow></msub></mrow></math></span> were 9.8, 12.4, and 27.1 days at winter wheat stations; 8.4, 14.9, and 55.3 days at corn stations; and 7.9, 12.4, and 64.6 days at soybean stations, respectively. Intercomparisons between after-season and within-season regional phenology maps also demonstrated the superior performance of STSMF (e.g., correlation coefficients for STSMF and <span><math><mrow><msub><mrow><mi>SMFS</mi></mrow><mrow><mi>ws</mi></mrow></msub></mrow></math></span> are 0.89 and 0.80 at the maturity stage of winter wheat). Furthermore, the performance of STSMF was less affected by the detection time and the determination of shape models. 
In conclusion, the straightforward, effective, and stable nature of STSMF makes it suitable for within-season detection of agronomic phenological stages.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"217 ","pages":"Pages 179-198"},"PeriodicalIF":10.6,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142097712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
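A minimal sketch of the STSMF idea follows: match the target pixel's partial within-season VI curve against full historical VI curves from neighbouring pixels, then borrow the best match's phenology. Names, shapes, and the RMSE similarity measure are assumptions for illustration.

```python
import numpy as np

def stsmf_sketch(target_vi, target_doys, neighbor_curves, neighbor_phenology):
    """Borrow phenology from the most similar historical local VI trajectory.

    target_vi: (n,) VI values observed so far this season, at day-of-year
    positions target_doys. neighbor_curves: (m, 365) daily-interpolated
    historical VI curves from neighbouring pixels of previous years.
    neighbor_phenology: (m,) known phenological dates (e.g., maturity DOY)
    of those historical curves.
    """
    segments = neighbor_curves[:, np.asarray(target_doys) - 1]  # same DOYs
    rmse = np.sqrt(np.mean((segments - target_vi) ** 2, axis=1))
    best = int(np.argmin(rmse))          # most similar local VI trajectory
    return neighbor_phenology[best]
```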
Satellite remote sensing of vegetation phenology: Progress, challenges, and opportunities
Pub Date: 2024-08-29 | DOI: 10.1016/j.isprsjprs.2024.08.011
Zheng Gong, Wenyan Ge, Jiaqi Guo, Jincheng Liu
Vegetation phenology serves as a crucial indicator of ecosystem dynamics and their response to environmental cues. Against the backdrop of global climate warming, it plays a pivotal role in studying global climate change and terrestrial ecosystem dynamics and in guiding agricultural production. Ground-based field observations of vegetation phenology are increasingly challenged by rapid global ecological changes. Since the 1970s, the development and application of remote sensing technology have offered a novel approach to address these challenges. Using satellite remote sensing to acquire phenological parameters has been widely applied in monitoring vegetation phenology, significantly advancing phenological research. This paper describes the vegetation indices, smoothing methods, and extraction techniques commonly used in monitoring vegetation phenology with satellite remote sensing. It systematically summarizes the applications and progress of vegetation phenology remote sensing at the global scale in recent years and analyzes its challenges: the need for higher spatiotemporal resolution data to capture vegetation changes, the necessity of comparing remote sensing monitoring methods with direct field observations, the requirement to compare different remote sensing techniques to ensure accuracy, and the importance of incorporating seasonal variations and differences into phenology extraction models. It delves into the key issues in current vegetation phenology remote sensing, including the limitations of existing vegetation indices, the impact of spatiotemporal scale effects on phenology parameter extraction, uncertainties in phenology algorithms and machine learning, and the relationship between vegetation phenology and global climate change. Based on these discussions, it proposes several opportunities and future prospects: improving the temporal and spatial resolution of data sources, using multiple datasets to monitor vegetation phenology dynamics, quantifying uncertainties in the algorithmic and machine learning processes for phenology parameter extraction, clarifying the adaptive mechanisms of vegetation phenology to environmental changes, focusing on the impact of extreme weather, and establishing an integrated “sky-space-ground” vegetation phenology monitoring network. These developments aim to enhance the accuracy of phenology extraction, advance understanding of the mechanisms of surface phenology change, and impart more biophysical significance to vegetation phenology parameters.
{"title":"Satellite remote sensing of vegetation phenology: Progress, challenges, and opportunities","authors":"Zheng Gong , Wenyan Ge , Jiaqi Guo , Jincheng Liu","doi":"10.1016/j.isprsjprs.2024.08.011","DOIUrl":"10.1016/j.isprsjprs.2024.08.011","url":null,"abstract":"<div><p>Vegetation phenology serves as a crucial indicator of ecosystem dynamics and its response to environmental cues. Against the backdrop of global climate warming, it plays a pivotal role in delving into global climate change, terrestrial ecosystem dynamics, and guiding agricultural production. Ground-based field observations of vegetation phenology are increasingly challenged by rapid global ecological changes. Since the 1970 s, the development and application of remote sensing technology have offered a novel approach to address these challenges. Utilizing satellite remote sensing to acquire phenological parameters has been widely applied in monitoring vegetation phenology, significantly advancing phenological research. This paper describes commonly used vegetation indices, smoothing methods, and extraction techniques in monitoring vegetation phenology using satellite remote sensing. It systematically summarizes the applications and progress of vegetation phenology remote sensing at a global scale in recent years and analyzes the challenges of vegetation phenology remote sensing: These challenges include the need for higher spatiotemporal resolution data to capture vegetation changes, the necessity to compare remote sensing monitoring methods with direct field observations, the requirement to compare different remote sensing techniques to ensure accuracy, and the importance of incorporating seasonal variations and differences into phenology extraction models. It delves into the key issues and challenges existing in current vegetation phenology remote sensing, including the limitations of existing vegetation indices, the impact of spatiotemporal scale effects on phenology parameter extraction, uncertainties in phenology algorithms and machine learning, and the relationship between vegetation phenology and global climate change. Based on these discussions, the it proposes several opportunities and future prospects, containing improving the temporal and spatial resolution of data sources, using multiple datasets to monitor vegetation phenology dynamics, quantifying uncertainties in the algorithm and machine learning processes for phenology parameter extraction, clarifying the adaptive mechanisms of vegetation phenology to environmental changes, focusing on the impact of extreme weather, and establishing an integrated “sky-space-ground” vegetation phenology monitoring network. These developments aim to enhance the accuracy of phenology extraction, explore and understand the mechanisms of surface phenology changes, and impart more biophysical significance to vegetation phenology parameters.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"217 ","pages":"Pages 149-164"},"PeriodicalIF":10.6,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142090450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}