
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing: Latest Articles

Enhancing Remote Sensing Semantic Segmentation Accuracy and Efficiency Through Transformer and Knowledge Distillation
IF 4.7 Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-01-13 DOI: 10.1109/JSTARS.2025.3525634
Kang Zheng;Yu Chen;Jingrong Wang;Zhifei Liu;Shuai Bao;Jiao Zhan;Nan Shen
In semantic segmentation tasks, the transition from convolutional neural networks (CNNs) to transformers is driven by the latter's superior ability to capture global semantic information in remote sensing images. However, most transformer methods face challenges such as slow inference speed and limitations in capturing local features. To address these issues, this study designs a hybrid approach that integrates knowledge distillation with a combination of CNN and transformer to enhance semantic segmentation in remote sensing images. First, this article proposes the dual-path convolutional transformer network (DP-CTNet), whose dual-path structure leverages the strengths of both CNNs and transformers. It incorporates a feature refinement module to optimize the transformer's feature learning and a feature fusion module to effectively merge CNN and transformer features, preventing insufficient learning of local features by the transformer. Then, DP-CTNet serves as the teacher model, and pruning and knowledge distillation are employed to create efficient DP-CTNet (EDP-CTNet), with superior segmentation speed and accuracy. Angle knowledge distillation (AKD) is proposed to enhance feature migration learning from DP-CTNet during knowledge distillation, leading to improved EDP-CTNet performance. Experimental results demonstrate that DP-CTNet thoroughly combines the respective advantages of CNNs and transformers, maintaining local detail features while learning extensive sequential semantic information. EDP-CTNet not only delivers impressive segmentation speed but also exhibits excellent segmentation accuracy following AKD training. Compared with other models, the two models proposed in this article distinguish themselves notably in accuracy and result visualization.
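The abstract does not give the exact formulation of AKD; assuming it resembles the generic angle-wise relational distillation idea (matching the angular structure among teacher features with that among student features), a minimal numpy sketch might look like the following. The function names and the mean-squared penalty are illustrative, not the authors' implementation.

```python
import numpy as np

def pairwise_angles(feats):
    # Cosine of the angle at each anchor i formed by vectors (x_j - x_i) and (x_k - x_i).
    n = feats.shape[0]
    cos = np.zeros((n, n, n))
    for i in range(n):
        d = feats - feats[i]                       # vectors from anchor i to all points
        norms = np.linalg.norm(d, axis=1, keepdims=True)
        u = np.divide(d, norms, out=np.zeros_like(d), where=norms > 0)
        cos[i] = u @ u.T                           # all pairwise cosines at anchor i
    return cos

def akd_loss(teacher_feats, student_feats):
    # Mean squared difference between teacher and student angle structures.
    return float(np.mean((pairwise_angles(teacher_feats)
                          - pairwise_angles(student_feats)) ** 2))
```

The loss is zero when the student reproduces the teacher's angular geometry exactly, and grows as the feature constellations diverge.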
Vol. 18, pp. 4074-4092. Citations: 0
Geodetic Evidence of the Interannual Fluctuations and Long-Term Trends Over the Antarctic Ice Sheet Mass Change
IF 4.7 Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-01-13 DOI: 10.1109/JSTARS.2025.3528516
Yuanjin Pan;Xiaohong Zhang;Jiashuang Jiao;Hao Ding;C. K. Shum
The spatiotemporal characteristics of the Antarctic ice sheet (AIS), as constrained by geodetic observations, provide a deeper understanding of the current evolution of ice mass balance. However, the interannual fluctuations and long-term trends of ice mass change throughout the AIS still require further in-depth study. In this study, these two aspects were quantitatively analyzed using global positioning system (GPS) and gravity recovery and climate experiment/follow-on (GRACE/GFO) observations over the past two decades. The nonlinear variation of GPS-inferred vertical land motion (VLM) and the influence of surface elastic load are of particular concern. The principal component analysis method is utilized to extract common-mode signals from GPS time series while correcting for various surface loads. The first principal components (PCs) accounted for 57.67%, 35.87%, 36.28%, and 36.03% of the total variance in the vertical components for GPS raw, atmospheric + nontidal oceanic (AO)-removed, AO + hydrographic model (AOH)-removed, and AO + GRACE/GFO-based load (AOG)-removed series, respectively. Furthermore, the GPS vertical velocity, excluding the common-mode component + AOG, yielded a median value of 0.13 mm/yr, which indicates that the retreat of ice mass has made a significant contribution to the GPS-observed VLM. In addition, the glacial isostatic adjustment (GIA) effect is found to play a key role in the large-scale VLM uplift of the West AIS. After evaluating five different GIA models against GPS vertical velocity, we suggest that the ICE-6G_D model can more effectively correct GIA signals in GPS observations over Antarctica.
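The common-mode extraction step can be illustrated with a generic PCA/SVD decomposition of a stacked station time series: the first principal component captures the signal shared across stations, and its explained-variance fraction corresponds to the percentages quoted above. This is a standard sketch, not the authors' processing chain, and `common_mode_filter` is a hypothetical name.

```python
import numpy as np

def common_mode_filter(series):
    # series: (epochs, stations) array of vertical displacements.
    X = series - series.mean(axis=0)               # remove per-station mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var1 = float(S[0] ** 2 / np.sum(S ** 2))       # variance fraction of PC1
    pc1 = np.outer(U[:, 0] * S[0], Vt[0])          # rank-1 common-mode component
    return X - pc1, var1                           # demeaned, common-mode-filtered residual
```

With a strong shared signal, PC1 explains most of the variance and subtracting it leaves mainly station-specific noise.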
Vol. 18, pp. 4525-4535. Citations: 0
C3DGS: Compressing 3D Gaussian Model for Surface Reconstruction of Large-Scale Scenes Based on Multiview UAV Images
IF 4.7 Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-01-13 DOI: 10.1109/JSTARS.2025.3529261
Jiating Qian;Yiming Yan;Fengjiao Gao;Baoyu Ge;Maosheng Wei;Boyi Shangguan;Guangjun He
Methods based on 3D Gaussian Splatting (3DGS) for surface reconstruction face challenges when applied to large-scale scenes captured by UAVs, because the number of 3D Gaussians increases dramatically, leading to significant computational requirements and limiting the fineness of surface reconstruction. To address this challenge, we propose C3DGS, which compresses the 3D Gaussian model and preserves the quality of surface reconstruction of large-scale scenes despite heavy computational costs. Our method quantifies the contribution of 3D Gaussians to the surface reconstruction and prunes redundant 3D Gaussians to reduce the computational requirements of the model. Pruning 3D Gaussians inevitably incurs loss, so to preserve as many details as possible in the surface reconstruction of a complex scene, we use a ray-tracing volume rendering method that can better evaluate the opacity of 3D Gaussians. Furthermore, we introduce two regularization terms to enhance the geometric consistency of multiple views, thus improving the realism of surface reconstruction. Experiments show that our method outperforms other 3DGS-based surface reconstruction methods on large-scale scenes.
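The abstract does not specify the contribution metric used for pruning; a hedged sketch of contribution-based pruning, scoring each Gaussian by a hypothetical opacity-times-accumulated-blending-weight product and keeping only the top fraction:

```python
import numpy as np

def prune_gaussians(opacity, blend_weight, keep_ratio=0.5):
    # score: assumed contribution proxy = opacity * accumulated blending weight
    score = opacity * blend_weight
    k = max(1, int(len(score) * keep_ratio))
    keep = np.argsort(score)[::-1][:k]     # indices of the top-k Gaussians by score
    return np.sort(keep)
```

Low-scoring Gaussians (nearly transparent or rarely hit by rays) are dropped, shrinking the model before fine surface extraction.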
Vol. 18, pp. 4396-4409. Citations: 0
SSL-MBC: Self-Supervised Learning With Multibranch Consistency for Few-Shot PolSAR Image Classification
IF 4.7 Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-01-13 DOI: 10.1109/JSTARS.2025.3528529
Wenmei Li;Hao Xia;Bin Xi;Yu Wang;Jing Lu;Yuhong He
Deep learning methods have recently made substantial advances in polarimetric synthetic aperture radar (PolSAR) image classification. However, supervised training relying on massive labeled samples is one of its major limitations, especially for PolSAR images, which are hard to annotate manually. Self-supervised learning (SSL) is an effective remedy for insufficient labeled samples because it mines supervisory information from the data itself. Nevertheless, fully utilizing SSL in PolSAR classification tasks remains a great challenge due to the complexity of the data. To address these issues, we propose an SSL model with multibranch consistency (SSL-MBC) for few-shot PolSAR image classification. Specifically, the data augmentation technique used in the pretext task combines various spatial transformations with channel transformations achieved through scattering feature extraction. In addition, the distinct scattering features of PolSAR data are treated as its unique multimodal representations. We observe that the different modal representations of the same instance are similar in the encoding space, and the shared hidden features become more prominent as more modalities are included. Therefore, a multibranch contrastive SSL framework without negative samples is employed to efficiently achieve representation learning. The resulting abstract features are then fine-tuned to ensure generalization in downstream tasks, thereby enabling few-shot classification. Experimental results on selected PolSAR datasets convincingly indicate that our method outperforms other existing methodologies. An exhaustive ablation study shows that model performance degrades when either the data augmentation or any branch is masked, and that the classification result does not depend on the amount of labels.
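A negative-free multibranch consistency objective can be sketched as the mean (1 - cosine similarity) over all pairs of branch embeddings; this is a generic illustration of the idea, not the SSL-MBC loss itself, and the function name is invented.

```python
import numpy as np

def multibranch_consistency_loss(branches):
    # branches: list of (batch, dim) embeddings from differently augmented views.
    zs = [b / np.linalg.norm(b, axis=1, keepdims=True) for b in branches]
    loss, pairs = 0.0, 0
    for i in range(len(zs)):
        for j in range(i + 1, len(zs)):
            # 1 - cosine similarity, averaged over the batch; no negative pairs used
            loss += np.mean(1.0 - np.sum(zs[i] * zs[j], axis=1))
            pairs += 1
    return loss / pairs
```

Identical branch outputs give zero loss; orthogonal embeddings give a loss of one, so minimizing it pulls all branches toward a shared representation.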
Vol. 18, pp. 4696-4710. Citations: 0
Sensitivity Analysis of Copolar Complex Coherence for Crop Monitoring At L-Band
IF 4.7 Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-01-10 DOI: 10.1109/JSTARS.2025.3528100
Jiayin Luo;Juan M. Lopez-Sanchez;Irena Hajnsek
Time series of polarimetric synthetic aperture radar (SAR) images are usually employed for agricultural crop monitoring. Most methods exploit backscattering coefficients, radar vegetation indices (RVI), and outputs from target decompositions. This article investigates the sensitivity of the copolar complex coherence to the growth of three crop types (barley, corn, and canola) using L-band data from two airborne campaigns. The copolar complex coherence is represented on the complex plane (unit circle) for interpretation. The experimental results from this article reveal that the changes in the position of the copolar complex coherence and the shape of complex coherence region are highly sensitive to specific growth stages of the examined crops. For a given crop type, changes in coherence can reflect variations in crop biophysical parameters, and inhomogeneities within the field. Furthermore, the trends in coherence variation throughout the given growth stages differ among crop types.
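The copolar complex coherence has a standard estimator over a spatial window, gamma = <S_HH * conj(S_VV)> / sqrt(<|S_HH|^2> <|S_VV|^2>); a minimal numpy version (function name illustrative):

```python
import numpy as np

def copolar_coherence(shh, svv):
    # Complex coherence between the HH and VV channels over a sample window.
    num = np.mean(shh * np.conj(svv))
    den = np.sqrt(np.mean(np.abs(shh) ** 2) * np.mean(np.abs(svv) ** 2))
    return num / den
```

Its magnitude lies in [0, 1] and its phase is the HH-VV phase difference; both the position on the unit circle and its spread over a field are what the article tracks across growth stages.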
Vol. 18, pp. 4850-4866. Citations: 0
Hyperspectral Image Classification Using Spectral-Spatial Dual Random Fields With Gaussian and Markov Processes
IF 4.7 Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-01-10 DOI: 10.1109/JSTARS.2025.3528115
Yaqiu Zhang;Lizhi Liu;Xinnian Yang
This article presents a novel hyperspectral image (HSI) classification approach that integrates the sparse inducing variational Gaussian process (SIVGP) with a spatially adaptive Markov random field (SAMRF), termed G-MDRF. Variational inference is employed to obtain a sparse approximation of the posterior distribution, modeling the spectral field within the latent function space. Subsequently, SAMRF is utilized to model the spatial prior within the function space, while the alternating direction method of multipliers (ADMM) is employed to enhance computational efficiency. Experimental results on three datasets with varying complexity show that the proposed algorithm improves computational efficiency by approximately 152 times and accuracy by about 7%–26% compared to the current popular Gaussian process methods. Compared to classical random field methods, G-MDRF rapidly achieves a convergent solution with only one ten-thousandth to one hundred-thousandth of the iterations, improving accuracy by about 5%–18%. Particularly, when the number of classes in the dataset increases and the scene becomes more complex, the proposed method demonstrates a greater advantage in both computational efficiency and classification accuracy compared to existing methods.
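The exact SAMRF/ADMM formulation is beyond the abstract, but the effect of a spatial prior on per-pixel class probabilities can be illustrated with a simple neighborhood-blending sketch (hypothetical, not the G-MDRF algorithm): each pixel's probability vector is pulled toward the average of its 4-neighbors, which smooths isolated misclassifications.

```python
import numpy as np

def spatial_prior_smooth(prob, iters=3, alpha=0.5):
    # prob: (H, W, C) per-pixel class probabilities; blend with the 4-neighbor mean.
    for _ in range(iters):
        pad = np.pad(prob, ((1, 1), (1, 1), (0, 0)), mode='edge')
        neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1]
                 + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4
        prob = (1 - alpha) * prob + alpha * neigh
        prob /= prob.sum(axis=2, keepdims=True)    # renormalize to a distribution
    return prob
```

A lone pixel disagreeing with a homogeneous neighborhood gains probability mass for the surrounding class after a few iterations.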
Vol. 18, pp. 4199-4212. Citations: 0
Land Subsidence in the Yangtze River Delta, China Explored Using InSAR Technique From 2019 to 2021
IF 4.7 Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-01-10 DOI: 10.1109/JSTARS.2025.3527748
Hongbo Jiang;Guangcai Feng;Yuexin Wang;Zhiqiang Xiong;Hesheng Chen;Ning Li;Zeng Lin
The combined effects of global warming and human activities have intensified land subsidence (LS), limiting sustainable economic development in delta regions. Despite the potential of interferometric synthetic aperture radar (InSAR) for monitoring LS, its application across vast delta regions may be hindered by complex data processing, high computational demands, and the need for standardized results. To overcome these challenges, we adopted the multitemporal InSAR technique, integrating a frame-data parallel processing strategy and an overall adjustment correction method, to obtain the temporal deformation sequences of the entire Yangtze River Delta (YRD) region in China from January 2019 to December 2021. We calculated the annual average deformation rate and identified deformation areas, 73.5% of which are concentrated along the Yangtze River, along the coastline, and within the northern Anhui mining area. A significant correlation was observed between LS and anthropogenic activities, such as economic development and land reclamation. Further analysis reveals that the increase in GDP growth rate may contribute to LS. Approximately 38% of the reclaimed area in the YRD is at risk of LS. Land reclamation activities present a dichotomy, with Hangzhou Bay as the dividing line. This study provides a new perspective and scientific basis for understanding and analyzing LS in deltaic environments, contributing to sustainable development and advancing wide-area InSAR deformation monitoring.
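The overall adjustment correction presumably reconciles deformation estimates from adjacent SAR frames; a minimal sketch, assuming a constant datum offset between frames estimated from their overlap region (the function and its signature are illustrative, not the authors' adjustment model):

```python
import numpy as np

def adjust_frame(moving, overlap_ref, overlap_mov):
    # Estimate the mean offset in the overlap (ignoring gaps/NaNs)
    # and shift the moving frame onto the reference datum.
    offset = np.nanmean(overlap_ref - overlap_mov)
    return moving + offset
```

Chaining such corrections frame by frame yields a mosaic with a consistent reference, which is what makes wide-area rate maps comparable.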
Vol. 18, pp. 4174-4187. Citations: 0
An Automatic Decision-Level Fusion Rice Mapping Method of Optical and SAR Images Based on Cloud Coverage
IF 4.7 Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-01-10 DOI: 10.1109/JSTARS.2025.3528124
Xueqin Jiang;Song Gao;Huaqiang Du;Shenghui Fang;Yan Gong;Ning Han;Yirong Wang
Timely and accurate mapping of paddy rice cultivation is crucial for estimating rice production and optimizing land utilization. Optical images are an essential data source for paddy rice mapping, but they are susceptible to cloud contamination. Existing methods struggle to effectively utilize clear-sky pixel information in optical images containing clouds, which impacts the accuracy of paddy rice mapping under cloudy conditions. To address these problems, we propose an automatic decision-level fusion rice mapping method of optical and synthetic aperture radar (SAR) images based on cloud coverage (the Auto-OSDF method). The method effectively utilizes clear-sky pixels in images containing clouds and leverages the advantages of SAR features in heavily clouded regions. We tested and validated the Auto-OSDF method in Xiangyin County, Hunan Province, and analyzed the impact of different cloud coverage levels (10%–50%) on its rice mapping accuracy. The results indicate that, as cloud coverage increases, the rice mapping accuracy of the Auto-OSDF method is not significantly affected, with overall accuracy and Kappa coefficients above 93% and 0.90, respectively. To demonstrate the value of the proposed method in large-scale applications, we further mapped paddy rice across the entire Hunan Province, obtaining an overall accuracy of 92.47% and a Kappa coefficient of 0.87. The areas obtained by the Auto-OSDF method show an average R2 of 0.926 against municipal-level statistical planting areas. This study demonstrates that the Auto-OSDF method achieves stable, high-precision rice mapping under cloud contamination.
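Decision-level fusion driven by a cloud mask can be sketched per pixel: keep the optical classification where the sky is clear, and fall back to the SAR classification under cloud. This is a schematic reading of the abstract, not the actual Auto-OSDF decision rules.

```python
import numpy as np

def fuse_decisions(optical_map, sar_map, cloud_mask):
    # cloud_mask: True where the optical pixel is cloud-contaminated,
    # in which case the SAR-based label is used instead.
    return np.where(cloud_mask, sar_map, optical_map)
```

Because only cloudy pixels are replaced, every clear-sky optical observation still contributes to the final rice map.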
An Automatic Decision-Level Fusion Rice Mapping Method of Optical and SAR Images Based on Cloud Coverage
Xueqin Jiang;Song Gao;Huaqiang Du;Shenghui Fang;Yan Gong;Ning Han;Yirong Wang
Pub Date : 2025-01-10 DOI: 10.1109/JSTARS.2025.3528124 Pages: 5018-5032
Citations: 0
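The rice-mapping abstract above reports overall accuracy and the Kappa coefficient. For reference, both metrics can be computed from a confusion matrix; this is a generic sketch of the standard definitions, not code from the paper.

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """cm: square confusion matrix, cm[i, j] = pixels of true class i
    predicted as class j. Returns (overall accuracy, Cohen's kappa)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement = OA
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Toy binary rice / non-rice confusion matrix.
oa, kappa = overall_accuracy_and_kappa([[90, 10], [5, 95]])
```

Kappa discounts the agreement expected by chance, which is why a 93% overall accuracy can correspond to a Kappa near 0.90 rather than 0.93.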
ViT-ISRGAN: A High-Quality Super-Resolution Reconstruction Method for Multispectral Remote Sensing Images
IF 4.7 Tier 2 Earth Science Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-01-10 DOI: 10.1109/JSTARS.2025.3527226
Yifeng Yang;Hengqian Zhao;Xiadan Huangfu;Zihan Li;Pan Wang
The reflective characteristics captured in remote sensing images depend on the scale of the observed area, with high-resolution images providing more detailed feature information. Fine-grained industry monitoring and regional information extraction increasingly require higher-resolution remote sensing images. Super-resolution reconstruction of multispectral remote sensing images not only enhances their spatial resolution but also preserves and improves the spectral information of the multispectral data, thereby providing richer ground-object information and more accurate environmental monitoring data. To improve the effectiveness of feature extraction in the generator network while maintaining model efficiency, this article proposes the vision transformer improved super-resolution generative adversarial network (ViT-ISRGAN) model. The model improves on the original SRGAN super-resolution reconstruction method by incorporating lightweight network modules, channel attention modules, spatial-spectral residual attention, and the vision transformer structure. ViT-ISRGAN focuses on reconstructing four types of typical ground objects from Sentinel-2 images: urban, water, farmland, and forest. Results indicate that the model excels at capturing texture details and restoring color, effectively extracting spectral and texture information from multispectral remote sensing images across various scenes. Compared to other super-resolution (SR) models, this approach demonstrates superior effectiveness and performance in SR tasks on remote sensing multispectral images.
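The abstract mentions channel attention modules in the generator. A common form of channel attention is the squeeze-and-excitation pattern: globally pool each channel, pass the pooled vector through a small bottleneck, and rescale the channels by the resulting sigmoid gate. The sketch below shows that generic pattern in NumPy; it is an assumption for illustration, not the paper's exact module, and the weight shapes (reduction ratio r = 4) are arbitrary.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention (generic sketch).
    feat: (C, H, W) feature map; w1: (C//r, C) reduction weights;
    w2: (C, C//r) expansion weights."""
    squeeze = feat.mean(axis=(1, 2))                 # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)           # FC + ReLU -> (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # FC + sigmoid -> (C,), in (0, 1)
    return feat * gate[:, None, None]                # rescale each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
out = channel_attention(feat, rng.standard_normal((2, 8)), rng.standard_normal((8, 2)))
```

Because the gate lies in (0, 1), the module can only attenuate channels, letting the network emphasize informative spectral bands relative to the rest.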
Pages: 3973-3988
Citations: 0
HFIFNet: Hierarchical Feature Interaction Network With Multiscale Fusion for Change Detection
IF 4.7 Tier 2 Earth Science Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-01-10 DOI: 10.1109/JSTARS.2025.3528053
Mingzhi Han;Tao Xu;Qingjie Liu;Xiaohui Yang;Jing Wang;Jiaqi Kong
Change detection (CD) from remote sensing images has been widely used in land management and urban planning. Benefiting from deep learning, numerous methods have achieved significant results in the CD of clearly changed targets. However, significant challenges remain in the CD of weak targets, such as small targets, targets with blurred boundaries, and targets with low distinguishability from the background. Feature extraction from these targets can lose critical spatial features, potentially degrading CD performance. Motivated by the benefit of multiscale features for CD of weak targets, we propose a hierarchical feature interaction network with multiscale fusion. First, a hierarchical feature interactive fusion module is proposed, which optimizes multichannel feature interaction and enhances the distinguishability between weak targets and the background. The module also performs cross-scale feature fusion, compensating for the spatial features of changed targets lost at a single scale during feature extraction. Second, the VMamba block is utilized to obtain global features, and a spatial feature localization module is proposed to enhance the saliency of spatial features such as edges and textures, further distinguishing weak targets from irrelevant spatial features. Our method has been experimentally evaluated on three public datasets, outperforming state-of-the-art approaches by 1.06%, 1.41%, and 2.63% in F1 score on the LEVIR-CD, S2Looking, and NALand datasets, respectively. These results affirm the effectiveness of our method for weak targets in CD tasks.
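At its simplest, bitemporal change detection compares features from the two dates and thresholds the difference into a binary change map, which is then scored with the F1 metric the abstract reports. The sketch below illustrates that baseline pipeline; the difference-and-threshold step is a simplification for illustration, not HFIFNet's learned fusion, and the threshold value is arbitrary.

```python
import numpy as np

def change_map(feat_t1, feat_t2, threshold=0.5):
    """Baseline change sketch: mean absolute feature difference per pixel,
    thresholded into a binary change map. feat_*: (C, H, W) arrays."""
    diff = np.abs(feat_t1 - feat_t2).mean(axis=0)
    return diff > threshold

def f1_score(pred, gt):
    """F1 of the 'changed' class from boolean prediction and ground-truth maps."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

# Toy example: only pixel (0, 0) changes between the two dates.
t1 = np.zeros((3, 2, 2))
t2 = np.zeros((3, 2, 2))
t2[:, 0, 0] = 1.0
cmap = change_map(t1, t2)

pred = np.array([[True, False], [True, True]])
gt   = np.array([[True, False], [False, True]])
score = f1_score(pred, gt)
```

F1 balances precision and recall on the changed class, which is why it is the standard metric for datasets like LEVIR-CD where changed pixels are rare.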
Pages: 4318-4330
Citations: 0