Estimating Long-Term Fractional Vegetation Cover Using an Improved Dimidiate Pixel Method With UAV-Assisted Satellite Data: A Case Study in a Mining Region
Pub Date: 2025-01-17 · DOI: 10.1109/JSTARS.2025.3531439
Shuang Wu;Lei Deng;Qinghua Qiao
Accurate long-term estimation of fractional vegetation cover (FVC) is crucial for monitoring vegetation dynamics. Satellite-based methods, such as the dimidiate pixel method (DPM), struggle with spatial heterogeneity due to coarse resolution. Existing methods using unmanned aerial vehicles (UAVs) combined with satellite data (UCS) inadequately leverage the high spatial resolution of UAV imagery to address spatial heterogeneity and are seldom applied to long-term FVC monitoring. To overcome spatial challenges, an improved dimidiate pixel method (IDPM) is proposed here, utilizing 2021 Landsat imagery to generate FVC_DPM via DPM and upscaled UAV imagery for FVC_UAV as ground references. The IDPM uses the pruned exact linear time (PELT) method to segment the normalized difference vegetation index (NDVI) into intervals, within which DPM performance is evaluated for potential improvements. Specifically, if the difference (D) between FVC_DPM and FVC_UAV is nonzero, NDVI-derived texture features are incorporated into FVC_DPM through multiple linear regression to enhance accuracy. To address temporal challenges and ensure consistency across years, the 2021 NDVI serves as a reference for inter-year NDVI calibration, employing least squares regression (LSR) and histogram matching (HM) to identify the most effective method for extending the IDPM to other years. Results demonstrate that 1) the IDPM, by developing distinct DPM improvement models for different NDVI intervals, considerably improves UAV and satellite data integration, with a 48.51% increase in R² and a 56.47% reduction in root mean square error (RMSE) compared to the DPM and UCS, and 2) HM is more suitable for mining areas, increasing R² by 25.00% and reducing RMSE by 54.05% compared to LSR. This method provides an efficient, rapid solution for mitigating spatial heterogeneity and advancing long-term FVC estimation.
{"title":"Estimating Long-Term Fractional Vegetation Cover Using an Improved Dimidiate Pixel Method With UAV-Assisted Satellite Data: A Case Study in a Mining Region","authors":"Shuang Wu;Lei Deng;Qinghua Qiao","doi":"10.1109/JSTARS.2025.3531439","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3531439","url":null,"abstract":"Accurate long-term estimation of fractional vegetation cover (FVC) is crucial for monitoring vegetation dynamics. Satellite-based methods, such as the dimidiate pixel method (DPM), struggle with spatial heterogeneity due to coarse resolution. Existing methods using unmanned aerial vehicles (UAVs) combined with satellite data (UCS) inadequately leverage the high spatial resolution of UAV imagery to address spatial heterogeneity and are seldom applied to long-term FVC monitoring. To overcome spatial challenges, an improved dimidiate pixel method (IDPM) is proposed here, utilizing 2021 Landsat imagery to generate FVC<sub>DPM</sub> via DPM and upscaled UAV imagery for FVC<sub>UAV</sub> as ground references. The IDPM uses the pruned exact linear time method to segment the normalized difference vegetation index (NDVI) into intervals, within which DPM performance is evaluated for potential improvements. Specifically, if the difference (D) between FVC<sub>DPM</sub> and FVC<sub>UAV</sub> is nonzero, NDVI-derived texture features are incorporated into FVC<sub>DPM</sub> through multiple linear regression to enhance accuracy. To address temporal challenges and ensure consistency across years, the 2021 NDVI serves as a reference for inter-year NDVI calibration, employing least squares regression (LSR) and histogram matching (HM) to identify the most effective method for extending the IDPM to other years. Results demonstrate that 1) the IDPM, by developing distinct DPM improvement models for different NDVI intervals, considerably improves UAV and satellite data integration, with a 48.51% increase in <italic>R</i><sup>2</sup> and a 56.47% reduction in root mean square error (RMSE) compared to the DPM and UCS and 2) HM is found to be more suitable for mining areas, increasing <italic>R</i><sup>2</sup> by 25.00% and reducing RMSE by 54.05% compared to LSR. This method provides an efficient, rapid solution for mitigating spatial heterogeneity and advancing long-term FVC estimation.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4162-4173"},"PeriodicalIF":4.7,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10845181","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning and Wavelet Transform Combined With Multichannel Satellite Images for Tropical Cyclone Intensity Estimation
Pub Date: 2025-01-17 · DOI: 10.1109/JSTARS.2025.3531448
Chang-Jiang Zhang;Mei-Shu Chen;Lei-Ming Ma;Xiao-Qin Lu
Tropical cyclones (TCs) are highly catastrophic weather events, and accurate estimation of their intensity is of great significance. Current TC intensity estimation models are typically trained on satellite images from only one or two channels and therefore cannot fully capture features related to TC intensity, resulting in low accuracy. To this end, we propose a double-layer encoder–decoder model for estimating TC intensity that is trained on images from three channels: infrared, water vapor, and passive microwave. The model consists of three main modules: a wavelet transform enhancement module, a multichannel satellite image fusion module, and a TC intensity estimation module, which extract high-frequency information from the source images, generate a three-channel fused image, and estimate TC intensity, respectively. To validate the performance of our model, we conducted extensive experiments on the TCIR dataset. The results show that the proposed model achieves a mean absolute error (MAE) of 3.76 m/s and a root-mean-square error (RMSE) of 4.62 m/s for TC intensity estimation, which are 15.70% and 20.07% lower, respectively, than those of the advanced Dvorak technique. The model proposed in this article therefore has great potential for accurately estimating TC intensity.
{"title":"Deep Learning and Wavelet Transform Combined With Multichannel Satellite Images for Tropical Cyclone Intensity Estimation","authors":"Chang-Jiang Zhang;Mei-Shu Chen;Lei-Ming Ma;Xiao-Qin Lu","doi":"10.1109/JSTARS.2025.3531448","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3531448","url":null,"abstract":"Tropical cyclone (TC) is a highly catastrophic weather event, and accurate estimation of intensity is of great significance. The current proposed TC intensity estimation model focuses on training using satellite images from single or two channels, and the model cannot fully capture features related to TC intensity, resulting in low accuracy. To this end, we propose a double-layer encoder–decoder model for estimating the intensity of TC, which is trained using images from three channels: infrared, water vapor, and passive microwave. The model mainly consists of three modules: wavelet transform enhancement module, multichannel satellite image fusion module, and TC intensity estimation module, which are used to extract high-frequency information from the source image, generate a three-channel fused image, and perform TC intensity estimation. To validate the performance of our model, we conducted extensive experiments on the TCIR dataset. The experimental results show that the proposed model has MAE and RMSE of 3.76 m/s and 4.62 m/s for TC intensity estimation, which are 15.70% and 20.07% lower than advanced Dvorak technology, respectively. Therefore, the model proposed in this article has great potential in accurately estimating TC intensity.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4711-4735"},"PeriodicalIF":4.7,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10845190","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143361110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Invertible Attention-Guided Adaptive Convolution and Dual-Domain Transformer for Pansharpening
Pub Date: 2025-01-17 · DOI: 10.1109/JSTARS.2025.3531353
Qun Song;Hangyuan Lu;Chang Xu;Rixian Liu;Weiguo Wan;Wei Tu
Pansharpening is the process of fusing a multispectral (MS) image with a panchromatic image to produce a high-resolution MS (HRMS) image. However, existing techniques face challenges in integrating long-range dependencies to correct locally misaligned features, which results in spatial-spectral distortions; moreover, these methods tend to be computationally expensive. To address these challenges, we propose a novel detail injection algorithm and develop the invertible attention-guided adaptive convolution and dual-domain Transformer (IACDT) network. In IACDT, we design an invertible attention mechanism embedded with spectral-spatial attention to efficiently and losslessly extract locally spatial-spectral-aware detail information. In addition, we present a frequency-spatial dual-domain attention mechanism that combines a frequency-enhanced Transformer and a spatial window Transformer for long-range contextual detail feature correction. This architecture effectively integrates local detail features with long-range dependencies, enabling the model to correct both local misalignments and global inconsistencies. The final HRMS image is obtained through a reconstruction block built on residual multireceptive-field attention. Extensive experiments demonstrate that IACDT achieves superior fusion performance, computational efficiency, and outstanding results in downstream tasks compared to state-of-the-art methods.
{"title":"Invertible Attention-Guided Adaptive Convolution and Dual-Domain Transformer for Pansharpening","authors":"Qun Song;Hangyuan Lu;Chang Xu;Rixian Liu;Weiguo Wan;Wei Tu","doi":"10.1109/JSTARS.2025.3531353","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3531353","url":null,"abstract":"Pansharpening is the process of fusing a multispectral (MS) image with a panchromatic image to produce a high-resolution MS (HRMS) image. However, existing techniques face challenges in integrating long-range dependencies to correct locally misaligned features, which results in spatial-spectral distortions. Moreover, these methods tend to be computationally expensive. To address these challenges, we propose a novel detail injection algorithm and develop the invertible attention-guided adaptive convolution and dual-domain Transformer (IACDT) network. In IACDT, we designed an invertible attention mechanism embedded with spectral-spatial attention to efficiently and losslessly extract locally spatial-spectral-aware detail information. In addition, we presented a frequency-spatial dual-domain attention mechanism that combines a frequency-enhanced Transformer and a spatial window Transformer for long-range contextual detail feature correction. This architecture effectively integrates local detail features with long-range dependencies, enabling the model to correct both local misalignments and global inconsistencies. The final HRMS image is obtained through a reconstruction block that consists of residual multireceptive field attention. Extensive experiments demonstrate that IACDT achieves superior fusion performance, computational efficiency, and outstanding results in downstream tasks compared to state-of-the-art methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5217-5231"},"PeriodicalIF":4.7,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10845120","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143422910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Multiscale Discrete Feature Enhancement Network With Augmented Reversible Transformation for SAR Automatic Target Recognition
Pub Date: 2025-01-17 · DOI: 10.1109/JSTARS.2025.3530926
Tianxiang Wang;Zhangfan Zeng;ShiHe Zhou;Qiao Xu
Automatic target recognition based on synthetic aperture radar (SAR) has extensive applications in dynamic surveillance, modern airport management, and military decision-making. However, the natural mechanisms of SAR imaging introduce challenges such as target feature discretization, clutter interference, and significant scale variation, which hinder the performance of existing recognition networks in practical scenarios. This article therefore presents a novel network architecture: the multiscale discrete feature enhancement network with augmented reversible transformation. The proposed network consists of three core components: an augmented feature extraction (AFE) backbone, a discrete feature enhancement module (DFEM), and a Spider feature pyramid network (Spider FPN). The AFE backbone effectively preserves target information and suppresses clutter by integrating augmented reversible transformations with an intermediate supervision module and double subnetworks. The DFEM enhances both local and global discrete feature awareness through its two submodules: a local discrete feature enhancement module and a global semantic information awareness module. The Spider FPN overcomes target scale variation, especially for small-scale targets, through a fusion-diffusion mechanism and a purpose-built feature perception fusion module. The proposed method is evaluated on three public datasets of various polarizations and environmental conditions: SARDet-100K, MSAR-1.0, and SAR-AIRcraft-1.0. Experimental results demonstrate that the proposed network outperforms current state-of-the-art methods, reaching average precision levels of 63.3%, 72.3%, and 67.4%, respectively.
{"title":"A Multiscale Discrete Feature Enhancement Network With Augmented Reversible Transformation for SAR Automatic Target Recognition","authors":"Tianxiang Wang;Zhangfan Zeng;ShiHe Zhou;Qiao Xu","doi":"10.1109/JSTARS.2025.3530926","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3530926","url":null,"abstract":"Automatic target recognition based on synthetic aperture radar (SAR) has extensive applications in dynamic surveillance, modern airport management, and military decision-making. However, the natural mechanisms of SAR imaging introduce challenges such as target feature discretization, clutter interference, and significant scale variation, which hinder the performance of existing recognition networks in practical scenarios. As such, this article presents a novel network architecture: the multiscale discrete feature enhancement network with augmented reversible transformation. The proposed network consists of three core components: an augmented feature extraction (AFE) backbone, a discrete feature enhancement module (DFEM), and a Spider feature pyramid network (Spider FPN). The AFE backbone has the capability of effective target information preservation and clutter suppression with the aid of integration of augmented reversible transformations with intermediate supervision module and double subnetworks. The DFEM enhances both local and global discrete feature awareness through its two submodules: local discrete feature enhancement module and global semantic information awareness module. The Spider FPN overcomes target scale variation challenges, especially for small-scale targets, through a fusion-diffusion mechanism and the designed feature perception fusion module. The functionality of the proposed method is evaluated on three public datasets: SARDet-100 K, MSAR-1.0, and SAR-AIRcraft-1.0 of various polarizations and environmental conditions. Experimental results demonstrate that the proposed network outperforms current state-of-the-art methods in terms of average precision by the levels of 63.3%, 72.3%, and 67.4%, respectively.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5135-5156"},"PeriodicalIF":4.7,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10844330","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143422911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unsupervised Image Super-Resolution for High-Resolution Satellite Imagery via Omnidirectional Real-to-Synthetic Domain Translation
Pub Date: 2025-01-17 · DOI: 10.1109/JSTARS.2025.3530959
Minkyung Chung;Yongil Kim
Image super-resolution (SR) aims to enhance the spatial resolution of images and overcome the hardware limitations of imaging systems. While deep-learning networks have significantly improved SR performance, obtaining paired low-resolution (LR) and high-resolution (HR) images for supervised learning remains challenging in real-world scenarios. In this article, we propose a novel unsupervised image super-resolution model for real-world remote sensing images, specifically focusing on HR satellite imagery. Our model, the bicubic-downsampled LR image-guided generative adversarial network for unsupervised learning (BLG-GAN-U), divides the SR process into two stages: LR image domain translation and image super-resolution. To implement this division, the model integrates omnidirectional real-to-synthetic domain translation with training strategies such as frequency separation and guided filtering. The model was evaluated through comparative analyses and ablation studies using real-world LR–HR datasets from WorldView-3 HR satellite imagery. The experimental results demonstrate that BLG-GAN-U effectively generates high-quality SR images with excellent perceptual quality and reasonable image fidelity, even with relatively small network capacity.
{"title":"Unsupervised Image Super-Resolution for High-Resolution Satellite Imagery via Omnidirectional Real-to-Synthetic Domain Translation","authors":"Minkyung Chung;Yongil Kim","doi":"10.1109/JSTARS.2025.3530959","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3530959","url":null,"abstract":"Image super-resolution (SR) aims to enhance the spatial resolution of images and overcome the hardware limitations of imaging systems. While deep-learning networks have significantly improved SR performance, obtaining paired low-resolution (LR) and high-resolution (HR) images for supervised learning remains challenging in real-world scenarios. In this article, we propose a novel unsupervised image super-resolution model for real-world remote sensing images, specifically focusing on HR satellite imagery. Our model, the bicubic-downsampled LR image-guided generative adversarial network for unsupervised learning (BLG-GAN-U), divides the SR process into two stages: LR image domain translation and image super-resolution. To implement this division, the model integrates omnidirectional real-to-synthetic domain translation with training strategies such as frequency separation and guided filtering. The model was evaluated through comparative analyses and ablation studies using real-world LR–HR datasets from WorldView-3 HR satellite imagery. The experimental results demonstrate that BLG-GAN-U effectively generates high-quality SR images with excellent perceptual quality and reasonable image fidelity, even with a relatively smaller network capacity.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4427-4445"},"PeriodicalIF":4.7,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10844307","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Water Extraction Method for Multiple Terrains Area Based on Multisource Fused Images: A Case Study of the Yangtze River Basin
Pub Date: 2025-01-17 · DOI: 10.1109/JSTARS.2025.3531505
Huang Ruolong;Shen Qian;Fu Bolin;Yao Yue;Zhang Yuting;Du Qianyu
In recent years, flooding and droughts in the Yangtze River basin have become increasingly unpredictable. Remote sensing is an effective tool for monitoring water distribution. However, cloudy weather and mountainous terrain directly affect water extraction from remote sensing images. A single data source cannot resolve this issue and often encounters the challenge of "different features having the same spectrum." To address these problems, we constructed a dataset using both active and passive remote sensing data and designed a partitioning scheme with corresponding water body extraction rules for areas with multiple terrain types. This partitioning method and its associated rules significantly reduce the false positive rate of water extraction in mountainous areas. Our approach successfully extracts water bodies from cloudy optical imagery without being hindered by cloud cover, thereby enhancing the usability of optical remote sensing images. The accuracy of our method reaches 91.73%, with a Kappa value of 0.90. In areas with multiple terrain types, our method's Kappa coefficient is 0.39 higher than that of the synthetic aperture radar and optical imagery water index and 0.06 higher than that of Res-U-Net. The method shows superior performance and greater stability in mountainous and cloudy regions. In conclusion, it facilitates consistent water extraction on large datasets.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 18, pp. 4964–4978.
A Two-Stage Method for Screening Pulse RFI in SAR Raw Data Alternating the Use of Time and Frequency Domains
Pub Date: 2025-01-17 · DOI: 10.1109/JSTARS.2025.3530989
Tingting Wei;Xingwang Hu;Zhengwei Guo;Gaofeng Shu;Yabo Huang;Ning Li
In the increasingly complex electromagnetic environment, the spectrum is becoming more and more crowded, and synthetic aperture radar (SAR) is increasingly susceptible to radio frequency interference (RFI) in the same frequency band when receiving echo signals. Pulse RFI (PRFI), a common form of RFI, often has time-varying characteristics that degrade SAR image quality and hinder image interpretation. To suppress PRFI effectively, the serial numbers of the pulses containing PRFI in SAR raw data must be screened out with high precision. This article proposes a two-stage method for screening PRFI in SAR raw data that alternates between the time and frequency domains. First, range-cell-level difference screening is performed in the time domain and the frequency domain, respectively, to preliminarily screen the PRFI. Then, the preliminary screening results are accumulated along the range direction, and the accumulated results are classified with a clustering algorithm to perform pulse-level screening, yielding the serial numbers of the pulses containing PRFI. Compared with traditional PRFI screening methods, the proposed approach avoids missed detections and false alarms when screening weak-energy PRFI; it possesses high sensitivity and accuracy, offering a fresh perspective on the PRFI screening challenge. The effectiveness and superiority of the proposed method are verified by experiments on simulated and measured data.
{"title":"A Two-Stage Method for Screening Pulse RFI in SAR Raw Data Alternating the Use of Time and Frequency Domains","authors":"Tingting Wei;Xingwang Hu;Zhengwei Guo;Gaofeng Shu;Yabo Huang;Ning Li","doi":"10.1109/JSTARS.2025.3530989","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3530989","url":null,"abstract":"In the increasingly complex electromagnetic environment, the spectrum is becoming more and more crowded. Synthetic aperture radar (SAR) is more susceptible to be affected by the radio frequency interference (RFI) in the same frequency band when receiving echo signal. Pulse RFI (PRFI) is a common form of RFI and often has time-varying characteristics, which will deteriorate the SAR images quality and hinder image interpretation. To effectively suppress the PRFI, the serial number of the pulses in SAR raw data containing PRFI need to be screened out with high precision. A two-stage method for screening PRFI in SAR raw data alternating the use of time and frequency domains was proposed in this article. First, range-cell level difference screening is performed in the time domain and frequency domain, respectively, to initially screen the PRFI. Then, the preliminary screening results are accumulated along the range direction, and the accumulated results are classified using a clustering algorithm to perform pulse-level screening to obtain the serial number of the pulses containing PRFI. Compared with the traditional PRFI screening methods, the proposed approach boasts a remarkable ability to circumvent missed screening and false alarm when screening weak-energy PRFIs. It possesses exceptional sensitivity and accuracy, offering fresh perspectives and innovative solutions to the PRFI screening challenge. The effectiveness and superiority of the proposed method are verified by the simulation data and measured data experiments.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4331-4346"},"PeriodicalIF":4.7,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10844320","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Time-Constrained and Spatially Explicit AI Model for Soil Moisture Inversion Using CYGNSS Data
Pub Date: 2025-01-17 · DOI: 10.1109/JSTARS.2025.3530152
Changzhi Yang;Kebiao Mao;Jiancheng Shi;Zhonghua Guo;Sayed M. Bateni
Current research often improves the accuracy of global navigation satellite system-reflectometry soil moisture (SM) inversion by incorporating auxiliary data, which somewhat limits its potential for practical application. To reduce the reliance on auxiliary data, this article presents a cyclone global navigation satellite system (CYGNSS) SM inversion method based on the time-constrained and spatially explicit artificial intelligence (TCSE-AI) model. The method first segments the data into multiple subsets through time constraints, limiting irrelevant factors to a relatively stable state and endowing the data with temporal attributes. It then incorporates raster-data spatial information, integrating the potential spatiotemporal distribution characteristics of the data into the SM inversion model. Finally, it constructs SM inversion models using machine learning methods. The experimental results indicate that TCSE-AI SM inversion models based on the XGBoost and random forest architectures achieved favorable results: their monthly SM inversion results for 2022, compared with soil moisture active passive (SMAP) products, yielded Pearson's correlation coefficients (R) greater than 0.91 and root-mean-square errors (RMSEs) less than 0.05 cm³/cm³. Using the XGBoost model as an example, this study then validated against in situ data and conducted an interannual SM cross-inversion experiment. From January to June 2022, the R between SM inversion results in the study area and in situ SM was 0.788, with an RMSE of 0.063 cm³/cm³. The interannual cross-inversion results, except for cases with multiple days of missing data, indicate that the TCSE-AI model generally achieved accurate SM estimates: compared with SMAP SM, R values were all greater than 0.8, with a maximum RMSE of 0.072 cm³/cm³, and the estimates showed satisfactory consistency with the in situ data.
{"title":"A Time-Constrained and Spatially Explicit AI Model for Soil Moisture Inversion Using CYGNSS Data","authors":"Changzhi Yang;Kebiao Mao;Jiancheng Shi;Zhonghua Guo;Sayed M. Bateni","doi":"10.1109/JSTARS.2025.3530152","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3530152","url":null,"abstract":"Current research often improves the accuracy of global navigation satellite system-reflectometry soil moisture (SM) inversion by incorporating auxiliary data, which somewhat limits its potential for practical application. To reduce the reliance on auxiliary data, this article presents a cyclone global navigation satellite system SM inversion method based on the time-constrained and spatially explicit artificial intelligence (TCSE-AI) model. The method initially segments data into multiple subsets through time constraints, thus limiting irrelevant factors to a relatively stable state and endowing the data with temporal attributes. Then, it incorporates raster data spatial information, integrating the potential spatiotemporal distribution characteristics of the data into the SM inversion model. Finally, it constructs SM inversion models using machine learning methods. The experimental results indicate that the TCSE-AI SM inversion model based on the XGBoost and random forest model architectures achieved favorable results. Their monthly SM inversion results for 2022 were compared with the soil moisture active passive (SMAP) products, with Pearson's correlation coefficients (<italic>R</i>) all greater than 0.91 and root-mean-square errors (RMSEs) less than 0.05 cm<sup>3</sup>/cm<sup>3</sup>. Subsequently, this study used the XGBoost method as an example for validation with in situ data and conducted an interannual SM cross-inversion experiment. From January to June 2022, the <italic>R</i> between SM inversion results in the study area and in situ SM was 0.788, with an RMSE of 0.063 cm<sup>3</sup>/cm<sup>3</sup>. The interannual cross-inversion experimental results, except for cases of missing data over multiple days, indicate that the TCSE-AI model generally achieved the accurate estimates of SM. Compared with SMAP SM, the <italic>R</i> was all greater than 0.8, with a maximum RMSE of 0.072 cm<sup>3</sup>/cm<sup>3</sup>, and they showed satisfactory consistency with the in situ data.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5100-5119"},"PeriodicalIF":4.7,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10845082","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143422805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Super-Resolution AI-Based Approach for Extracting Agricultural Cadastral Maps: Form and Content Validation
Pub Date: 2025-01-16 · DOI: 10.1109/JSTARS.2025.3530714
Alireza Vafaeinejad;Nima Alimohammadi;Alireza Sharifi;Mohammad Mahdi Safari
Updating and digitizing cadastral maps remains a major challenge in land administration, demanding significant financial and human resources. This study presents a fully automated AI-based system to address this issue, focusing on the extraction and digitization of agricultural cadastral maps from photogrammetric images. The proposed method leverages the Segment Anything Model (SAM) for high-accuracy segmentation, achieving a notable intersection-over-union score of 92% and significantly outperforming traditional approaches. In addition, the system reduces processing time by 40% and eliminates the need for manual intervention, enabling scalable, efficient digitization. These improvements are critical for better land-use planning, resource allocation, and sustainable land management practices. The model, implemented using open-source Python libraries, integrates three stages: image preprocessing, AI-based segmentation, and postprocessing. By automating these processes, the system not only accelerates map production but also reduces the environmental impacts associated with traditional mapping techniques. The approach also enhances the accuracy of agricultural boundary delineation, offering benefits for land dispute resolution and optimized agricultural practices. This research contributes to the modernization of land administration systems by providing an accessible, scalable solution for surveyors and policymakers. It bridges the gap between cutting-edge artificial intelligence advancements and practical applications, addressing technical and operational challenges in geospatial data management. The findings underscore the importance of automating cadastral mapping for both economic efficiency and environmental sustainability.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 18, pp. 5204–5216.
Impacts and Spatiotemporal Differentiation of Built Environments on the Urban Heat Island Effect in Cold-Climate Cities Based on Local Climate Zones
Pub Date: 2025-01-16 · DOI: 10.1109/JSTARS.2025.3530525
Zhe Zhang;Yukuan Dong;Chunlin Li;Chengrun Wu;Qiushi Wang;Xiao Liu
Urbanization has intensified the surface urban heat island (SUHI) effect. This study uses local climate zones (LCZs) and urban built environment characteristics (UBECs) to explore the factors influencing land surface temperature (LST) and SUHI across various UBECs in Shenyang, China. Google Earth Engine was used to calculate LST, and an LCZ map of Shenyang was created to analyze seasonal differences in the SUHI. A correlation model was used to screen the UBECs, and a geographically and temporally weighted regression (GTWR) model was used to explain the spatial variations in the urban heat environment caused by built environments in different seasons. Compared to traditional methods, the GTWR model exhibits better goodness of fit and is more effective at capturing the spatiotemporal heterogeneity of variables. Compact and high-rise areas had stronger SUHI effects than other LCZs, whereas land-cover LCZs had a cool-island effect. The GTWR model helps planners identify the climatic impacts of each factor at different spatial locations within the study area, as well as variations across seasons. Vegetation-related factors had less impact in densely built areas, whereas the proportion of blue (water) areas was more effective in alleviating extreme climates in high-density zones. The impact of building density on the heat island effect exhibited substantial spatiotemporal variation, particularly in compact, high-rise LCZs during both seasons. To address extreme winter–summer weather in cold regions, this study examined seasonal SUHIs and their interaction with UBECs, offering strategies and guidance for heat mitigation in urban design.
{"title":"Impacts and Spatiotemporal Differentiation of Built Environments on the Urban Heat Island Effect in Cold-Climate Cities Based on Local Climate Zones","authors":"Zhe Zhang;Yukuan Dong;Chunlin Li;Chengrun Wu;Qiushi Wang;Xiao Liu","doi":"10.1109/JSTARS.2025.3530525","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3530525","url":null,"abstract":"Urbanization has increased the surface urban heat island (SUHI) effect. This study uses local climate zones (LCZ) and urban built environment characteristics (UBECs) to explore the factors influencing land surface temperature (LST) and SUHI in various UBECs in Shenyang, China. Google Earth Engine was used to calculate LST. An LCZ map of Shenyang was created to analyze seasonal differences in the SUHI. A correlation model was used to screen the UBEC, and a geographically and temporally weighted regression (GTWR) model was used to explain the spatial variations in the urban heat environment caused by built environments in different seasons. Compared to traditional methods, the GTWR model exhibits better goodness of fit and is more effective in capturing the spatiotemporal heterogeneity of variables. Compact and high-rise areas had higher SUHI effects compared to other LCZs, whereas land-cover LCZs had a cool-island effect. The GTWR model helps planners identify the climatic impacts of each factor in different spatial locations within the study area, as well as variations across seasons. Vegetation-related factors had less impact in densely-built areas, whereas the proportion of blue areas was more effective in alleviating extreme climates in high-density zones. The impact of building density on the heat island effect exhibited substantial spatiotemporal variation, particularly in compact, high-rise LCZs during both seasons. To address extreme winter–summer weather in cold regions, this study examined seasonal SUHIs and their interaction with UBECs, offering strategies and guidance for heat mitigation in urban design.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5406-5422"},"PeriodicalIF":4.7,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10843833","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143446294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}