
ISPRS Journal of Photogrammetry and Remote Sensing: Latest Publications

A novel deep learning algorithm for broad scale seagrass extent mapping in shallow coastal environments
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2025-02-01 | DOI: 10.1016/j.isprsjprs.2024.12.008
Jianghai Peng, Jiwei Li, Thomas C. Ingalls, Steven R. Schill, Hannah R. Kerner, Gregory P. Asner
Recently, the importance of seagrasses in the functioning of coastal ecosystems and their ability to mitigate climate change have gained increased recognition. However, there has been a rapid global deterioration of seagrass ecosystems due to climate change and human-mediated disturbances. Accurate broad-scale mapping of seagrass extent is necessary for seagrass conservation and management actions. Traditionally, these mapping methods have primarily relied on spectral information, along with additional data such as manually designed spatial/texture features (e.g., from the Gray Level Co-Occurrence Matrix) and satellite-derived bathymetry. Despite the widely reported success of prior methods in mapping seagrass across small geographic areas, two challenges remain in broad-scale seagrass extent mapping: 1) spectral overlap between seagrass and other benthic habitats results in the misclassification of coral/macroalgae as seagrass; 2) seagrass ecosystems exhibit spatial and temporal variability, so most current models trained on data from specific locations or time periods have difficulty generalizing to diverse locations or time periods with varying seagrass characteristics, such as density and species. In this study, we developed a novel deep learning model (i.e., Seagrass DenseNet: SGDenseNet) based on the DenseNet architecture to overcome these difficulties. The model was trained and validated using surface reflectance from Sentinel-2 MSI and 9,369 field data samples from four diverse regional shallow coastal water areas. Our model achieves an overall accuracy of 90% for seagrass extent mapping. Furthermore, we evaluated our deep learning model using 1,067 seagrass field data samples worldwide, achieving a producer's accuracy of 81%. Our new deep learning model could be applied to map seagrass extent at a very broad scale with high accuracy.
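The abstract names a DenseNet backbone operating on Sentinel-2 MSI surface reflectance but gives no architectural details, so the following is only a minimal sketch of a DenseNet-style patch classifier for seagrass presence; the band count, patch size, layer widths, and two-class output are illustrative assumptions, not the authors' SGDenseNet.

```python
# Minimal sketch of a DenseNet-style patch classifier for seagrass presence,
# assuming 10 Sentinel-2 MSI surface-reflectance bands per pixel patch.
# All hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth, layers=4):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Sequential(nn.BatchNorm2d(in_ch + i * growth), nn.ReLU(),
                          nn.Conv2d(in_ch + i * growth, growth, 3, padding=1))
            for i in range(layers)
        ])

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))  # dense connectivity
        return torch.cat(feats, dim=1)

class SeagrassNet(nn.Module):
    def __init__(self, bands=10, n_classes=2):
        super().__init__()
        self.stem = nn.Conv2d(bands, 32, 3, padding=1)
        self.dense = DenseBlock(32, growth=16, layers=4)   # 32 + 4*16 = 96 channels
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(96, n_classes))

    def forward(self, x):          # x: (batch, bands, H, W) reflectance patches
        return self.head(self.dense(self.stem(x)))

logits = SeagrassNet()(torch.rand(4, 10, 16, 16))  # 4 patches -> (4, 2) class logits
```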
Citations: 0
Developing a spatiotemporal fusion framework for generating daily UAV images in agricultural areas using publicly available satellite data
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2025-02-01 | DOI: 10.1016/j.isprsjprs.2024.12.024
Hamid Ebrahimy, Tong Yu, Zhou Zhang
Monitoring agricultural areas, given their rapid transformation and small-scale spatial changes, necessitates obtaining dense time series of high-resolution remote sensing data. In this regard, the unmanned aerial vehicle (UAV), which can provide high-resolution images, is indispensable for monitoring and assessing agricultural areas, especially for rapidly changing crops like alfalfa. Considering the practical limitations of acquiring daily UAV images, the utilization of spatiotemporal fusion (STF) approaches to integrate publicly available satellite images with high temporal resolution and UAV images with high spatial resolution can be considered an effective alternative. This study proposed an effective STF algorithm that utilizes the Generalized Linear Model (GLM) as the mapping function and is called GLM-STF. The algorithm is designed to use coarse difference images to map fine difference images via the GLM algorithm. It then combines these fine difference images with the original fine images to synthesize daily UAV images at the prediction time. In this study, we deployed a two-step STF process: (1) MODIS MCD43A4 and Harmonized Landsat and Sentinel-2 (HLS) data were fused to produce daily HLS images; and (2) daily HLS data and UAV images were fused to produce daily UAV images. We evaluated the reliability of the deployed framework at three distinct experimental sites that were covered by alfalfa crops. The performance of the GLM-STF algorithm was compared with five benchmark STF algorithms: STARFM, ESTARFM, Fit-FC, FSDAF, and VSDF, using three quantitative accuracy evaluation metrics: root mean squared error (RMSE), correlation coefficient (CC), and structural similarity index (SSIM). The proposed STF algorithm yielded the most accurate synthesized UAV images, followed by VSDF, which proved to be the most accurate benchmark algorithm. Specifically, GLM-STF achieved an average RMSE of 0.029 (compared to VSDF's 0.043), an average CC of 0.725 (compared to VSDF's 0.669), and an average SSIM of 0.840 (compared to VSDF's 0.811). The superiority of GLM-STF was also evident in the visual comparisons. Additionally, GLM-STF was less sensitive to the increase in the acquisition time difference between the reference image pairs and prediction date, indicating its suitability for STF tasks with limited input reference pairs. The developed framework in this study is thus expected to provide high-quality UAV images with high spatial resolution and frequent observations for various applications.
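As a rough illustration of the difference-mapping idea described above (a GLM maps coarse-resolution temporal differences to fine-resolution ones, and the predicted fine difference is added to the reference fine image), here is a minimal single-band sketch; the ordinary least-squares Gaussian GLM, the per-pixel predictor, and the collapsing of the two-step MODIS-to-HLS-to-UAV chain into one step are all simplifying assumptions.

```python
# Single-band sketch of the difference-mapping idea behind GLM-STF: learn a GLM
# from coarse-image temporal differences to fine-image temporal differences on
# dates where both are observed, then apply it at the prediction date.
import numpy as np
from sklearn.linear_model import LinearRegression  # Gaussian GLM, identity link

def glm_stf_band(fine_t1, fine_t2, coarse_t1, coarse_t2, coarse_tp):
    """All inputs are 2-D single-band arrays resampled to the fine (UAV) grid.
    t1 and t2 are reference dates with fine+coarse data; tp is the prediction date."""
    X_train = (coarse_t2 - coarse_t1).reshape(-1, 1)   # coarse change, one sample per pixel
    y_train = (fine_t2 - fine_t1).ravel()              # fine change, one sample per pixel
    glm = LinearRegression().fit(X_train, y_train)
    X_pred = (coarse_tp - coarse_t2).reshape(-1, 1)    # coarse change up to the target date
    d_fine = glm.predict(X_pred).reshape(fine_t2.shape)
    return fine_t2 + d_fine                            # synthesized fine image at tp

rng = np.random.default_rng(0)
fake = lambda: rng.random((64, 64))
uav_tp = glm_stf_band(fake(), fake(), fake(), fake(), fake())
print(uav_tp.shape)  # (64, 64)
```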
Citations: 0
3LATNet: Attention based deep learning model for global Chlorophyll-a retrieval from GCOM-C satellite
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2025-02-01 | DOI: 10.1016/j.isprsjprs.2024.12.019
Muhammad Salah, Salem Ibrahim Salem, Nobuyuki Utsumi, Hiroto Higa, Joji Ishizaka, Kazuo Oki
Chlorophyll-a (Chla) retrieval from satellite observations is crucial for assessing water quality and the health of aquatic ecosystems. Utilizing satellite data, while invaluable, poses challenges including inherent satellite biases, the necessity for precise atmospheric correction (AC), and the complexity of water bodies, all of which complicate establishing a reliable relationship between remote sensing reflectance (Rrs) and Chla concentrations. Furthermore, the Global Change Observation Mission - Climate (GCOM-C) satellite operated by the Japan Aerospace Exploration Agency (JAXA) has brought a significant leap forward in ocean color monitoring, featuring a 250 m spatial resolution and integrating the 380 nm band, enhancing the detection capabilities for aquatic environments. JAXA's standard Chla product grounded in empirical algorithms, coupled with the limited research on the impact of atmospheric correction (AC) on Rrs products, underscores the need for further analysis of these factors. This study introduces the three bidirectional Long Short-Term Memory and ATtention mechanism Network (3LATNet) model, which was trained on a large dataset incorporating 5610 in-situ Rrs measurements and their corresponding Chla concentrations collected from global locations to cover a broad range of trophic states. The Rrs spectra have been resampled to the Second-Generation Global Imager (SGLI) aboard GCOM-C. The model was also trained using satellite matchup data, aiming to achieve a generalized deep-learning model. 3LATNet was evaluated against conventional Chla algorithms and machine learning (ML) algorithms, including JAXA's standard Chla product. Our findings reveal a remarkable reduction in Chla estimation error, marked by a 42.5% (from 17 to 9.77 mg/m³) reduction in mean absolute error (MAE) and a 57.3% (from 43.12 to 18.43 mg/m³) reduction in root mean square error (RMSE) compared to JAXA's standard Chla algorithm using in-situ data, and nearly a twofold improvement in absolute errors when evaluating using matchup SGLI Rrs. Furthermore, we conduct an in-depth assessment of the impact of AC on the models' performance. SeaDAS predominantly exhibited invalid reflectance values at the 412 nm band, while OC-SMART displayed more significant variability in percentage errors. In comparison, JAXA's AC proved more precise in retrieving Rrs. We comprehensively evaluated the spatial consistency of Chla models under clear and harmful algal bloom events. 3LATNet effectively captured Chla patterns across various ranges. Conversely, the RF algorithm frequently overestimates Chla concentrations in the low to mid-range. JAXA's Chla algorithm, on the other hand, consistently tends to underestimate Chla concentrations, a trend that is particularly pronounced in high-range Chla areas and during harmful algal bloom events. These outcomes underscore the potential of our innovative approach for enhancing global-scale water quality monitoring.
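The abstract specifies three bidirectional LSTM layers with an attention mechanism over the SGLI Rrs spectrum but not the exact architecture, so the following PyTorch sketch is purely illustrative; the band count, hidden size, and attention form are assumptions rather than the published 3LATNet design.

```python
# Illustrative sketch: three stacked bidirectional LSTMs with attention pooling
# over an R_rs spectral sequence, regressing Chla concentration.
import torch
import torch.nn as nn

class ThreeBiLSTMAttention(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=3,
                            bidirectional=True, batch_first=True)
        self.att = nn.Linear(2 * hidden, 1)      # scalar attention score per band
        self.out = nn.Linear(2 * hidden, 1)      # Chla regression head

    def forward(self, rrs):                      # rrs: (batch, n_bands)
        h, _ = self.lstm(rrs.unsqueeze(-1))      # (batch, n_bands, 2*hidden)
        w = torch.softmax(self.att(h), dim=1)    # attention weights over bands
        ctx = (w * h).sum(dim=1)                 # weighted spectral context vector
        return self.out(ctx).squeeze(-1)         # predicted Chla (mg/m3)

chla = ThreeBiLSTMAttention()(torch.rand(8, 11))  # 8 spectra, 11 assumed bands
print(chla.shape)  # torch.Size([8])
```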
Citations: 0
PolSAR2PolSAR: A semi-supervised despeckling algorithm for polarimetric SAR images
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2025-02-01 | DOI: 10.1016/j.isprsjprs.2025.01.008
Cristiano Ulondu Mendes, Emanuele Dalsasso, Yi Zhang, Loïc Denis, Florence Tupin
Polarimetric Synthetic Aperture Radar (PolSAR) imagery is a valuable tool for Earth observation. This imaging technique finds wide application in various fields, including agriculture, forestry, geology, and disaster monitoring. However, due to the inherent presence of speckle noise, filtering is often necessary to improve the interpretability and reliability of PolSAR data. The effectiveness of a speckle filter is measured by its ability to attenuate fluctuations without introducing artifacts or degrading spatial and polarimetric information. Recent advancements in this domain leverage the power of deep learning. These approaches adopt a supervised learning strategy, which requires a large number of speckle-free images that are costly to produce. In contrast, this paper presents PolSAR2PolSAR, a semi-supervised learning strategy that only requires, from the sensor under consideration, pairs of noisy images of the same location acquired in the same configuration (same incidence angle and mode as during the revisit of the satellite on its orbit). Our approach applies to a wide range of sensors. Experiments on RADARSAT-2 and RADARSAT Constellation Mission (RCM) data demonstrate the capacity of the proposed method to effectively reduce speckle noise and retrieve fine details. The code of the trained models is made freely available at https://gitlab.telecom-paris.fr/ring/polsar2polsar. The repository additionally contains a model fine-tuned on SLC PolSAR images from NASA's UAVSAR sensor.
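A minimal sketch of the pairing idea (training a despeckler on two noisy, co-registered revisit acquisitions of the same scene, so that no speckle-free reference is needed) is given below; the toy CNN, the MSE loss, and the nine-channel covariance-style representation are placeholders, not the paper's actual network or speckle-aware loss.

```python
# Sketch of noise2noise-style training from two revisit acquisitions:
# the network predicts one speckle realization from the other.
import torch
import torch.nn as nn

net = nn.Sequential(                      # toy despeckling CNN (placeholder)
    nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 9, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def train_step(acq_a, acq_b):
    """acq_a, acq_b: (batch, 9, H, W) real-valued PolSAR channels (e.g. covariance
    matrix elements) from two revisits acquired in the same configuration."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(acq_a), acq_b)
    loss.backward()
    opt.step()
    return loss.item()

print(train_step(torch.rand(2, 9, 64, 64), torch.rand(2, 9, 64, 64)))
```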
Citations: 0
Scattering mechanism-guided zero-shot PolSAR target recognition
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2025-02-01 | DOI: 10.1016/j.isprsjprs.2024.12.022
Feng Li, Xiaojing Yang, Liang Zhang, Yanhua Wang, Yuqi Han, Xin Zhang, Yang Li
In response to the challenges posed by the difficulty in obtaining polarimetric synthetic aperture radar (PolSAR) data for certain specific categories of targets, we present a zero-shot target recognition method for PolSAR images. Based on a generative model, the method leverages the unique characteristics of polarimetric SAR images and incorporates two key modules: the scattering characteristics-guided semantic embedding generation module (SE) and the polarization characteristics-guided distributional correction module (DC). The former ensures the stability of synthetic features for unseen classes by controlling scattering characteristics. At the same time, the latter enhances the quality of synthetic features by utilizing polarimetric features, thereby improving the accuracy of zero-shot recognition. The proposed method is evaluated on the GOTCHA dataset to assess its performance in recognizing unseen classes. The experiment results demonstrate that the proposed method achieves SOTA performance in zero-shot PolSAR target recognition (e.g., improving the recognition accuracy of unseen categories by nearly 20%). Our codes are available at https://github.com/chuyihuan/Zero-shot-PolSAR-target-recognition.
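Since the abstract describes a generative zero-shot pipeline without architectural detail, the sketch below only illustrates the generic mechanism of synthesizing features for unseen classes from a class-level semantic embedding plus noise; the scattering-guided SE module and polarization-guided DC module are not modeled, and all dimensions are assumptions.

```python
# Generic zero-shot feature synthesis: a conditional generator maps a class-level
# semantic embedding plus noise to synthetic features for an unseen class, on
# which a downstream classifier could then be trained.
import torch
import torch.nn as nn

emb_dim, feat_dim, noise_dim = 16, 64, 8
gen = nn.Sequential(nn.Linear(emb_dim + noise_dim, 128), nn.ReLU(),
                    nn.Linear(128, feat_dim))

def synthesize(class_embedding, n=100):
    """class_embedding: (emb_dim,) semantic vector of an unseen target class."""
    z = torch.randn(n, noise_dim)                  # per-sample noise
    c = class_embedding.expand(n, -1)              # repeat the class embedding
    return gen(torch.cat([c, z], dim=1))           # (n, feat_dim) synthetic features

fake_features = synthesize(torch.rand(emb_dim))
print(fake_features.shape)  # torch.Size([100, 64])
```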
Citations: 0
Application of SAR-Optical fusion to extract shoreline position from Cloud-Contaminated satellite images
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2025-02-01 | DOI: 10.1016/j.isprsjprs.2025.01.013
Yongjing Mao, Kristen D. Splinter
Shorelines derived from optical satellite images are increasingly being used for regional to global scale analysis of sandy coastline dynamics. The optical satellite record, however, is contaminated by cloud cover, which can substantially reduce the temporal resolution of available images for shoreline analysis. Meanwhile, with the development of deep learning methods, optical images are increasingly fused with Synthetic Aperture Radar (SAR) images that are unaffected by clouds to reconstruct the cloud-contaminated pixels. Such SAR-Optical fusion methods have been shown successful for different land surface applications, but the unique characteristics of coastal areas make the applicability of this method unknown in these dynamic zones.
Herein we apply a deep internal learning (DIL) method to reconstruct cloud-contaminated optical images and explore its applicability to retrieve shorelines obscured by clouds. Our approach uses a mixed sequence of SAR and Gaussian noise images as the prior and the cloudy Modified Normalized Difference Water Index (MNDWI) as the target. The DIL encodes the target with priors and synthesizes plausible pixels under cloud cover. A unique aspect of our workflow is the inclusion of Gaussian noise in the prior sequence for MNDWI images when SAR images collected within a 1-day temporal lag are not available. A novel loss function for the DIL model is also introduced to optimize the image reconstruction near the shoreline. These new developments contribute significantly to the model accuracy.
The DIL method is tested at four different sites with varying tide, wave, and shoreline dynamics. Shorelines derived from the reconstructed and true MNDWI images are compared to quantify the internal accuracy of shoreline reconstruction. For microtidal environments with a mean spring tidal range of less than 2 m, the mean absolute error (MAE) of shoreline reconstruction is less than 7.5 m with a coefficient of determination (R²) of more than 0.78, regardless of shoreline and wave dynamics. The method is less skilful in macro- and mesotidal environments due to the larger water level difference in the paired optical and SAR images, resulting in an MAE of 12.59 m and an R² of 0.43. The proposed SAR-Optical fusion method demonstrates substantially better accuracy in retrieving cloud-obscured shoreline positions compared to interpolation methods relying solely on optical images. Results from our work highlight the great potential of SAR-Optical fusion to derive shorelines even under the cloudiest conditions, thus increasing the temporal resolution of shoreline datasets.
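One plausible reading of the shoreline-oriented loss mentioned above is a cloud-masked reconstruction error with extra weight near the shoreline; the sketch below follows that reading, with the buffer width and weighting scheme entirely assumed rather than taken from the paper.

```python
# Cloud-masked, shoreline-weighted reconstruction loss (illustrative only).
import torch

def shoreline_weighted_loss(pred, mndwi, cloud_free, shoreline_dist,
                            buffer_px=10, w_shore=5.0):
    """pred, mndwi: (H, W); cloud_free: 1 where the target pixel is usable;
    shoreline_dist: per-pixel distance (pixels) to a reference shoreline."""
    weight = torch.where(shoreline_dist <= buffer_px,
                         torch.tensor(w_shore), torch.tensor(1.0))
    err = (pred - mndwi) ** 2 * cloud_free * weight
    return err.sum() / (cloud_free * weight).sum().clamp(min=1.0)

h = w = 32
loss = shoreline_weighted_loss(torch.rand(h, w), torch.rand(h, w),
                               (torch.rand(h, w) > 0.3).float(),
                               torch.randint(0, 50, (h, w)).float())
print(float(loss))
```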
Citations: 0
Refined change detection in heterogeneous low-resolution remote sensing images for disaster emergency response
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2025-02-01 | DOI: 10.1016/j.isprsjprs.2024.12.010
Di Wang, Guorui Ma, Haiming Zhang, Xiao Wang, Yongxian Zhang
Heterogeneous Remote Sensing Image Change Detection (HRSICD) is a significant challenge in remote sensing image processing, with substantial application value in rapid natural disaster response. However, significant differences in imaging modalities often result in poor comparability of their features, affecting recognition accuracy. To address this issue, we propose a novel HRSICD method based on image structure relationships and semantic information. First, we employ a Multi-scale Pyramid Convolution Encoder to efficiently extract multi-scale and detailed features. Next, the Cross-domain Feature Alignment Module aligns the structural relationships and semantic features of the heterogeneous images, enhancing the comparability between heterogeneous image features. Finally, the Multi-level Decoder fuses the structural and semantic features, achieving refined identification of change areas. We validated the proposed method on five publicly available HRSICD datasets. Additionally, zero-shot generalization experiments and real-world applications were conducted to assess its generalization capability. Our method achieved favorable results in all experiments, demonstrating its effectiveness. The code of the proposed method will be made available at https://github.com/Lucky-DW/HRSICD.
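As a generic illustration of the encoder-alignment-decoder pattern described above, the sketch below uses two modality-specific encoders, an alignment penalty restricted to regions assumed unchanged, and a decoder over feature differences; the paper's pyramid encoder, structure-relationship alignment, and multi-level decoder are replaced by minimal placeholders.

```python
# Minimal two-branch heterogeneous change detection sketch with a feature
# alignment term computed only where no change is expected.
import torch
import torch.nn as nn

enc_opt = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())   # optical branch
enc_sar = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())   # SAR branch
dec = nn.Conv2d(16, 1, 1)                                            # change logits

def forward(optical, sar, unchanged_mask):
    """unchanged_mask: (batch, 1, H, W), 1 where no change is expected (training
    only); aligning features only there avoids suppressing genuine changes."""
    f1, f2 = enc_opt(optical), enc_sar(sar)
    cos = nn.functional.cosine_similarity(f1, f2, dim=1).unsqueeze(1)
    align = ((1 - cos) * unchanged_mask).sum() / unchanged_mask.sum().clamp(min=1)
    change_logits = dec(torch.abs(f1 - f2))
    return change_logits, align

logits, align_loss = forward(torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64),
                             (torch.rand(2, 1, 64, 64) > 0.5).float())
print(logits.shape, float(align_loss))
```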
Citations: 0
PylonModeler: A hybrid-driven 3D reconstruction method for power transmission pylons from LiDAR point clouds
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2025-02-01 | DOI: 10.1016/j.isprsjprs.2024.12.003
Shaolong Wu, Chi Chen, Bisheng Yang, Zhengfei Yan, Zhiye Wang, Shangzhe Sun, Qin Zou, Jing Fu
As the power grid is an indispensable foundation of modern society, creating a digital twin of the grid is of great importance. Pylons serve as components in the transmission corridor, and their precise 3D reconstruction is essential for the safe operation of power grids. However, 3D pylon reconstruction from LiDAR point clouds presents numerous challenges due to data quality and the diversity and complexity of pylon structures. To address these challenges, we introduce PylonModeler: a hybrid-driven method for 3D pylon reconstruction using airborne LiDAR point clouds, thereby enabling accurate, robust, and efficient real-time pylon reconstruction. Different strategies are employed to achieve independent reconstructions and assemblies for various structures. We propose Pylon Former, a lightweight transformer network for real-time pylon recognition and decomposition. Subsequently, we apply a data-driven approach for the pylon body reconstruction. Considering structural characteristics, fitting and clustering algorithms are used to reconstruct both external and internal structures. The pylon head is reconstructed using a hybrid approach. A pre-built pylon head parameter model library defines different pylons by a series of parameters. The coherent point drift (CPD) algorithm is adopted to establish the topological relationships between pylon head structures and set initial model parameters, which are refined through optimization for accurate pylon head reconstruction. Finally, the pylon body and head models are combined to complete the reconstruction. We collected an airborne LiDAR dataset, which includes a total of 3398 pylon data across eight types. The dataset consists of transmission lines of various voltage levels, such as 110 kV, 220 kV, and 500 kV. PylonModeler is validated on this dataset. The average reconstruction time of a pylon is 1.10 s, with an average reconstruction accuracy of 0.216 m. In addition, we evaluate the performance of PylonModeler on public airborne LiDAR data from Luxembourg. Compared to previous state-of-the-art methods, reconstruction accuracy improved by approximately 26.28 %. With superior performance, PylonModeler is tens of times faster than the current model-driven methods, enabling real-time pylon reconstruction.
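The head-reconstruction step pairs a parametric model library with optimization-based refinement; the sketch below shows only that refinement idea, fitting a deliberately tiny two-parameter template (crossarm half-width and head height, both hypothetical) to LiDAR points by minimizing nearest-point distances, whereas real pylon-head models carry many more parameters.

```python
# Refine hypothetical pylon-head template parameters against LiDAR points.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree

def template_points(params, n=200):
    half_width, height = params
    t = np.linspace(-1, 1, n)
    # toy template: a horizontal crossarm of the given half-width at the given height
    return np.column_stack([half_width * t, np.zeros(n), np.full(n, height)])

def residuals(params, lidar_pts):
    tree = cKDTree(template_points(params))
    dists, _ = tree.query(lidar_pts)      # distance to the nearest template point
    return dists

rng = np.random.default_rng(0)
lidar = np.column_stack([rng.uniform(-3, 3, 500),
                         rng.normal(0.0, 0.05, 500),
                         rng.normal(30.0, 0.05, 500)])
fit = least_squares(residuals, x0=[1.0, 25.0], args=(lidar,))
print(fit.x)   # refined [half_width, height], roughly [3, 30] for this toy cloud
```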
Citations: 0
Unwrapping error and fading signal correction on multi-looked InSAR data
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2025-02-01 | DOI: 10.1016/j.isprsjprs.2024.12.006
Zhangfeng Ma, Nanxin Wang, Yingbao Yang, Yosuke Aoki, Shengji Wei
Multi-looking, aimed at reducing data size and improving the signal-to-noise ratio, is indispensable for large-scale InSAR data processing. However, the resulting “Fading Signal” caused by multi-looking breaks the phase consistency among triplet interferograms and introduces bias into the estimated displacements. This inconsistency challenges the assumption that only unwrapping errors are involved in triplet phase closure. Therefore, untangling phase unwrapping errors and fading signals from triplet phase closure is critical to achieving more precise InSAR measurements. To address this challenge, we propose a new method that mitigates phase unwrapping errors and fading signals. This new method consists of two key steps. The first step is triplet phase closure-based stacking, which allows for the direct estimation of fading signals in each interferogram. The second step is Basis Pursuit Denoising-based unwrapping error correction, which transforms unwrapping error correction into sparse signal recovery. Through these two procedures, the new method can be seamlessly integrated into the traditional InSAR workflow. Additionally, the estimated fading signal can be directly used to derive soil moisture as a by-product of our method. Experimental results on the San Francisco Bay area demonstrate that the new method reduces velocity estimation errors by approximately 9 %–19 %, effectively addressing phase unwrapping errors and fading signals. This performance outperforms both ILP and Lasso methods, which only account for unwrapping errors in the triplet closure. Additionally, the derived by-product, soil moisture, shows strong consistency with most external soil moisture products.
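For the unwrapping-error step, the closure of a triplet of multi-looked interferograms, phi_12 + phi_23 - phi_13, collects the fading signal plus integer multiples of 2*pi introduced by unwrapping errors; the toy example below shows only the sparse-recovery part in a Lasso (basis-pursuit-denoising style) form, with a random incidence matrix and error positions chosen purely for illustration.

```python
# Toy sparse recovery of 2*pi unwrapping errors from triplet closures.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_ifg, n_triplet = 12, 30
C = rng.choice([-1.0, 0.0, 1.0], size=(n_triplet, n_ifg))    # toy triplet incidence matrix
x_true = np.zeros(n_ifg)
x_true[[2, 7]] = 2 * np.pi * np.array([1, -1])                # two unwrapping errors
b = C @ x_true + rng.normal(0, 0.05, n_triplet)               # closures + small residual

lasso = Lasso(alpha=0.05, fit_intercept=False, max_iter=10000).fit(C, b)
cycles = np.round(lasso.coef_ / (2 * np.pi))                  # recovered integer cycles
print(cycles)   # typically 1 at index 2, -1 at index 7, zeros elsewhere
```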
Citations: 0
PSO-based fine polarimetric decomposition for ship scattering characterization
IF 10.6 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2025-02-01 | DOI: 10.1016/j.isprsjprs.2024.11.015
Junpeng Wang, Sinong Quan, Shiqi Xing, Yongzhen Li, Hao Wu, Weize Meng
Due to the inappropriate estimation and inadequate awareness of scattering from complex substructures within ships, a reasonable, reliable, and complete interpretation tool to characterize ship scattering for polarimetric synthetic aperture radar (PolSAR) is still lacking. In this paper, a fine polarimetric decomposition with explicit physical meaning is proposed to reveal and characterize the local-structure-related scattering behaviors on ships. To this end, a nine-component decomposition scheme is first established by incorporating the rotated dihedral and planar resonator scattering models, which makes full use of polarimetric information and comprehensively considers the complex structure scattering of ships. In order to reasonably estimate the scattering components, three practical scattering dominance principles as well as an explicit objective function are proposed, and a particle swarm optimization (PSO)-based model inversion strategy is subsequently presented. This not only overcomes the underdetermined problem, but also reduces the scattering mechanism ambiguity by circumventing the constrained estimation order. Finally, a ship indicator obtained by linearly combining the output scattering contributions is further derived, which together with the proposed decomposition constitutes a complete ship scattering interpretation approach. Experiments carried out with real PolSAR datasets demonstrate that the proposed method adequately and objectively describes the scatterers on ships, which provides an effective way toward ship scattering characterization. Moreover, the quantitative analysis of scattering components also verifies the feasibility of fine polarimetric decomposition in further applications.
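To make the PSO-based inversion concrete, the sketch below implements a generic particle swarm optimizer over non-negative component powers with a quadratic placeholder objective; the paper's actual objective (the misfit between the measured coherency matrix and the nine reconstructed scattering contributions) and its scattering-dominance principles are not reproduced.

```python
# Generic particle swarm optimizer used here to minimize a placeholder objective
# over nine non-negative scattering-component powers.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, lo=0.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))           # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                        # keep powers within bounds
        val = np.apply_along_axis(objective, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

target = np.array([0.5, 0.1, 0.3, 0.05, 0.02, 0.01, 0.01, 0.005, 0.005])
best = pso(lambda p: np.sum((p - target) ** 2), dim=9)
print(np.round(best, 3))   # converges near `target` for this placeholder objective
```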
Citations: 0