
Latest publications from IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

Full-Wave Simulations of Forest at L-Band With Fast Hybrid Multiple Scattering Theory Method and Comparison With GNSS Signals
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-23 · DOI: 10.1109/JSTARS.2025.3533313
Jongwoo Jeong;Leung Tsang;Mehmet Kurum;Abesh Ghosh;Andreas Colliander;Simon Yueh;Kyle McDonald;Nicholas Steiner;Michael H. Cosh
Full-wave simulations at L-band using the fast hybrid multiple scattering theory method (FHMSTM) have been applied to the Harvard Forest in Massachusetts using the Soil Moisture Active Passive Validation Experiment 2022 (SMAPVEX22) dataset. Given the limitations of commercial full-wave electromagnetic solvers, we adopt the FHMSTM for its efficient and fast solutions. During SMAPVEX22, scientists collected a dataset of tree sizes, tree positions (derived from light detection and ranging measurements), and microwave signals using the Global Navigation Satellite System transmissometry approach. Processing this dataset yields a 3-D geometric forest model of 300 trees with heights up to 19 m. We import the forest model into the FHMSTM and analyze microwave propagation at MA401. The FHMSTM analysis shows that the transmissivity ranges from 0.627 to 0.674 for a vertically polarized incident wave source and from 0.593 to 0.665 for a horizontally polarized incident wave source. To validate the FHMSTM, a comparison is made with the GNSS signals. The two sets of microwave results are in good agreement, reproducing physical effects such as shadowing under the trees and electric field amplitudes at some points in the forest that exceed those in the open area. We also analyze the effects of tapered trees in this study.
{"title":"Full-Wave Simulations of Forest at L-Band With Fast Hybrid Multiple Scattering Theory Method and Comparison With GNSS Signals","authors":"Jongwoo Jeong;Leung Tsang;Mehmet Kurum;Abesh Ghosh;Andreas Colliander;Simon Yueh;Kyle McDonald;Nicholas Steiner;Michael H. Cosh","doi":"10.1109/JSTARS.2025.3533313","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3533313","url":null,"abstract":"Full-wave simulations at L-band using the fast hybrid multiple scattering theory method (FHMSTM) have been applied to the Harvard Forest in Massachusetts using the Soil Moisture Active Passive Validation Experiment 2022 (SMAPVEX22) dataset. Due to the limitations of commercial full-wave electromagnetic solvers, the FHMSTM is our choice considering its efficient and fast solutions. During SMAPVEX22, scientists collected a dataset of tree sizes, tree positions (derived from light detection and ranging measurement), and microwave signals utilizing the Global Navigation Satellite System Transmissometry approach. The 3-D geometric forest model provides 300 trees with heights up to 19 m by processing the dataset. We import the forest model into the FHMSTM and analyze microwave propagation at MA401. The FHMSTM analysis shows that the transmissivity ranges from 0.627 to 0.674 for the vertically polarized incident wave source and from 0.593 to 0.665 for the horizontally polarized incident wave source. To validate the FHMSTM, a comparison is made with the GNSS signals. The comparison results of microwaves are in good agreement, demonstrating the physical results such as shadowing effects under the trees and higher electric amplitudes at some points in forests compared to that of the open area. We also analyze the effects of tapered trees in this study.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5395-5405"},"PeriodicalIF":4.7,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10850750","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143446365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CR-DEQ-SAR: A Deep Equilibrium Sparse SAR Imaging Method for Compound Regularization
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-23 · DOI: 10.1109/JSTARS.2025.3533082
Guoru Zhou;Yixin Zuo;Zhe Zhang;Bingchen Zhang;Yirong Wu
Synthetic aperture radar (SAR) is a microwave remote sensing technology offering all-weather, high-resolution imaging. The rising demand for high precision and real-time processing under complex conditions in resource-constrained environments has spurred interest in deep network-based SAR imaging, which combines traditional sparse SAR imaging methods with deep learning to optimize parameters and scene features while retaining physical model interpretability and enabling fast inference. However, a single regularization term cannot entirely capture the features of complex observation scenes, and network architectures based on iterative unfolding often face memory and numerical precision constraints during training. In this article, we propose a deep equilibrium sparse SAR imaging method for compound regularization, integrating sparse and implicit regularizations to better capture complex scene features. The deep equilibrium model (DEQ) serves as a novel deep network framework that computes fixed points directly using analytical methods, theoretically allowing for infinite forward iterations while maintaining constant memory requirements. This is particularly advantageous in memory-intensive SAR imaging applications. Finally, we validate the effectiveness and superiority of the proposed method through experiments on real SAR scenes. The experimental results show that the proposed method outperforms existing deep learning-based SAR imaging methods in reconstruction performance and memory usage.
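The fixed-point forward pass is the mechanism the abstract relies on for constant-memory behavior. Below is a minimal, hypothetical PyTorch sketch of that idea; the `EquilibriumLayer` update, its dimensions, and the plain fixed-point solver are illustrative stand-ins, not the CR-DEQ-SAR architecture or its compound regularizers.

```python
# Minimal deep-equilibrium (DEQ) sketch: find z* = f(z*, x) by fixed-point
# iteration instead of stacking many unrolled layers. Illustrative only.
import torch
import torch.nn as nn

class EquilibriumLayer(nn.Module):
    """One shared update z_{k+1} = tanh(W z_k + U x): a stand-in for the
    regularized update that a DEQ-based imaging network would apply."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)
        self.U = nn.Linear(dim, dim)

    def forward(self, z, x):
        return torch.tanh(self.W(z) + self.U(x))

def fixed_point(f, x, max_iter=50, tol=1e-4):
    """Iterate z <- f(z, x) until the update stalls (the DEQ forward pass)."""
    z = torch.zeros_like(x)
    for _ in range(max_iter):
        z_next = f(z, x)
        if torch.norm(z_next - z) / (torch.norm(z) + 1e-8) < tol:
            return z_next
        z = z_next
    return z

layer = EquilibriumLayer(dim=64)
x = torch.randn(8, 64)           # stand-in for measured SAR echoes/features
with torch.no_grad():
    z_star = fixed_point(layer, x)
print(z_star.shape)              # torch.Size([8, 64])
```

In a full DEQ, gradients are obtained by implicit differentiation at the equilibrium point rather than by backpropagating through the iterations, which is what keeps the memory footprint constant regardless of the effective depth.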
{"title":"CR-DEQ-SAR: A Deep Equilibrium Sparse SAR Imaging Method for Compound Regularization","authors":"Guoru Zhou;Yixin Zuo;Zhe Zhang;Bingchen Zhang;Yirong Wu","doi":"10.1109/JSTARS.2025.3533082","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3533082","url":null,"abstract":"Synthetic aperture radar (SAR) is a microwave remote sensing technology offering all-weather, high-resolution imaging. The rising demand for high precision and real-time processing under complex conditions in resource-constrained environments has spurred interest in deep network-based SAR imaging, which combines traditional sparse SAR imaging methods with deep learning to optimize parameters and scene features while retaining physical model interpretability and enabling fast inference. However, the single regularization cannot entirely capture the features of complex observation scenes, and network architectures based on iterative unfolding often face memory and numerical precision constraints during training. In this article, we propose a deep equilibrium sparse SAR Imaging method for compound regularization, integrating sparse and implicit regularizations to better capture complex scene features. The deep equilibrium model (DEQ) serves as a novel deep network framework that directly computes fixed points using analytical methods, theoretically allowing for infinite forward iterations while maintaining constant memory requirements. This is particularly advantageous in memory-intensive SAR imaging applications. Finally, we validate the effectiveness and superiority of the proposed method through experiments on real SAR scenes. The experimental results show that the proposed method outperforms existing deep learning-based SAR imaging methods regarding reconstruction performance and memory usage.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4680-4695"},"PeriodicalIF":4.7,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10851410","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Multihierarchy Flow Field Prediction Network for Multimodal Remote Sensing Image Registration
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-23 · DOI: 10.1109/JSTARS.2025.3532939
Wenqing Wang;Kunpeng Mu;Han Liu
Multimodal remote sensing image registration aims to align image pairs from different modalities. This enhances downstream multisource data fusion, object detection, and recognition, and supports geospatial analysis and applications. Most existing approaches for multimodal remote sensing image registration target rigid transformations accompanied by large-scale deformations. Regrettably, they overlook local disparities between modalities and cannot effectively handle scenes with nonrigid distortions. Therefore, this article proposes a multimodal remote sensing image registration method that accumulates multihierarchy flow field predictions across scales. The method consists of a multiscale feature pyramid, a dense feature matching module, a Swin-Transformer flow field prediction module, and a spatial transformation module. The model makes full use of image features at different scales and levels, gradually refines the flow field prediction to align locally distorted nonrigid areas, and adopts a registration strategy that combines a bidirectional similarity loss and a hierarchy feature registration loss for features at different levels and from different modalities. A photometric error loss is also introduced to optimize the entire network at both the feature and original-image levels. Experimental results show that our network model achieves good registration performance on a variety of cross-modal remote sensing images with nonrigid distortion.
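A minimal sketch of the coarse-to-fine flow accumulation idea described above, assuming a generic per-level flow predictor; the `warp` and `register` helpers, the placeholder `dummy_head`, and the pyramid scales are hypothetical stand-ins for the paper's feature pyramid and Swin-Transformer flow prediction module.

```python
# Coarse-to-fine flow accumulation: predict a residual flow at each pyramid
# level, warp the moving image with the accumulated flow, and refine.
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Warp img (N,C,H,W) with a dense flow field (N,2,H,W) given in pixels."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(img.device)    # (2,H,W), (x, y) order
    coords = base.unsqueeze(0) + flow                             # target sampling positions
    # normalize to [-1, 1] as required by grid_sample, which expects (N,H,W,2)
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

def register(fixed, moving, predict_flow, scales=(4, 2, 1)):
    """Accumulate residual flows from coarse to fine levels of an image pyramid."""
    n, _, h, w = fixed.shape
    flow = torch.zeros(n, 2, h, w, device=fixed.device)
    for s in scales:
        f_s = F.interpolate(fixed, scale_factor=1.0 / s, mode="bilinear", align_corners=True)
        m_s = F.interpolate(moving, scale_factor=1.0 / s, mode="bilinear", align_corners=True)
        # bring the accumulated flow to this level (displacements scale with resolution)
        flow_s = F.interpolate(flow, size=f_s.shape[-2:], mode="bilinear", align_corners=True) / s
        residual = predict_flow(f_s, warp(m_s, flow_s))            # (N,2,H/s,W/s)
        flow = F.interpolate((flow_s + residual) * s, size=(h, w),
                             mode="bilinear", align_corners=True)
    return flow, warp(moving, flow)

# shape check with a do-nothing flow head
fixed, moving = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
dummy_head = lambda a, b: torch.zeros(a.shape[0], 2, *a.shape[-2:])
flow, warped = register(fixed, moving, dummy_head)
print(flow.shape, warped.shape)    # torch.Size([1, 2, 64, 64]) torch.Size([1, 1, 64, 64])
```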
{"title":"A Multihierarchy Flow Field Prediction Network for Multimodal Remote Sensing Image Registration","authors":"Wenqing Wang;Kunpeng Mu;Han Liu","doi":"10.1109/JSTARS.2025.3532939","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3532939","url":null,"abstract":"Multimodal remote sensing image registration aims to achieve alignment between different modal image pairs. This effectively enhances the subsequent effects of multisource data fusion, object detection and recognition, and provides support for geographic spatial analysis and applications. Most existing approaches for multimodal remote sensing image registration are targeted at registering rigid transformations accompanied by large-scale deformations. Regrettably, they overlook the local disparities between different modalities and are incapable of effectively handling scenes with nonrigid distortions. Therefore, this article proposes a multimodal remote sensing image registration method that uses multihierarchy flow field cumulative prediction at different scales. The method consists of a multiscale feature pyramid, a dense feature matching module, a swin-transformer flow field prediction, and a spatial transformation module. The model makes full use of the features of different scales and levels of the image, gradually refines the flow field prediction to align the local nonrigid distortion area, and adopts a registration strategy that combines bidirectional similarity loss and hierarchy feature registration loss for different levels of features of different modalities. At the same time, the photometric error loss is introduced to optimize the entire network from both the feature and original image levels. Experimental results show that our network model shows good registration performance for a variety of cross-modal remote sensing images with nonrigid distortion.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5232-5243"},"PeriodicalIF":4.7,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10850759","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143422882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HSRoadNet: Hard-Swish Activation Function and Improved Squeeze–Excitation Module Network for Road Extraction Using Satellite Remote Sensing Imagery
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-23 · DOI: 10.1109/JSTARS.2025.3533196
Xunqiang Gong;Yingjie Ma;Ailong Ma;Zhaoyang Hou;Meng Zhang;Yanfei Zhong
Road information plays an essential role in many fields. To prevent the failed extraction of heterogeneous regions and the fragmentation of extracted roads caused by vehicles and trees in very-high-resolution remote sensing images, this article proposes a road extraction method based on Hard-Swish Squeeze–Excitation RoadNet. First, the road extraction task is divided into three correlated subtasks to reduce the impact of vehicles and trees. Second, a normalization layer is adopted to prevent vanishing and exploding gradients and to avoid fractures in the extracted roads. The Hard-Swish activation function is then adopted to improve extraction accuracy, and finally the improved squeeze–excitation module enables the trained network to make full use of the characteristic information of roads without adding excessive capacity. Comparison experiments indicate that the proposed method performs well across indicators, improving on the suboptimal method by 16.8%, 2.2%, 1.5%, and 8.5% in F-score, global accuracy, class average accuracy, and recall, respectively. Its mean intersection over union (MIoU) was suboptimal, 0.2% below the best. Ablation experiments show that the proposed method performs best in most indices, with global accuracy, MIoU, class average accuracy, and recall improved by 0.5%, 0.1%, 0.5%, and 0.2%, respectively, over the suboptimal method; its F-score is suboptimal, 0.3% below the best.
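The two components named in the title are standard, easily illustrated building blocks. Below is a minimal PyTorch sketch of the Hard-Swish activation and a squeeze–excitation channel-attention block; the layer sizes and reduction ratio are illustrative assumptions, not the HSRoadNet configuration (which further modifies the SE module).

```python
# Hard-Swish activation and a basic squeeze-excitation (SE) block.
import torch
import torch.nn as nn

def hard_swish(x):
    # hard_swish(x) = x * ReLU6(x + 3) / 6  -- a cheap, piecewise-linear Swish
    return x * torch.clamp(x + 3.0, 0.0, 6.0) / 6.0

class SqueezeExcitation(nn.Module):
    """Reweight channels: global-average 'squeeze', two FC layers 'excite'."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                      # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                 # squeeze -> (N, C)
        w = hard_swish(self.fc1(w))            # excite
        w = torch.sigmoid(self.fc2(w))         # per-channel weights in (0, 1)
        return x * w[:, :, None, None]         # rescale feature maps

feat = torch.randn(2, 64, 32, 32)
print(SqueezeExcitation(64)(feat).shape)       # torch.Size([2, 64, 32, 32])
```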
{"title":"HSRoadNet: Hard-Swish Activation Function and Improved Squeeze–Excitation Module Network for Road Extraction Using Satellite Remote Sensing Imagery","authors":"Xunqiang Gong;Yingjie Ma;Ailong Ma;Zhaoyang Hou;Meng Zhang;Yanfei Zhong","doi":"10.1109/JSTARS.2025.3533196","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3533196","url":null,"abstract":"Road information plays an essential role in many fields. To prevent failed extraction of heterogeneous regions, fracture of extracted roads and others resulted from vehicles and trees when using very high resolution remote sensing images; a remote sensing image road extraction method based on Hard-Swish Squeeze–Excitation RoadNet is proposed in this article. First, road extraction task is divided into three correlated subtasks to reduce the impact of vehicles and trees in road extracting. Second, a normalization layer is adopted to prevent gradient levels from vanishing and exploring and avoid fracture of the extracted road. Then, adopting Hard-Swish activation function to improve the accuracy of road extracting, and then finally, using the improved squeeze–excitation module to make the trained net a full use of the characteristic information of the road that do not increase excessive capacity. Comparison experimental results indicate that, in various indicators, the proposed method performs serviceably, it, respectively, increased by 16.8%, 2.2%, 1.5%, and 8.5% over the suboptimal in <italic>F</i>-score, global accuracy, class average accuracy, and recall ratio. The mean intersection over union (MIoU) value of the proposed method was the suboptimum with a disparity of 0.2% from the optimal. Ablation experiments show that the proposed method performs best in various indices, and the global accuracy, MIoU, class average accuracy, and recall rate are improved by 0.5%, 0.1%, 0.5%, and 0.2%, respectively, compared with the suboptimal method. The <italic>F</i>-score is suboptimal, with a 0.3% difference from the best.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4907-4920"},"PeriodicalIF":4.7,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10850767","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Estimation of Forest Water Potential From Ground-Based L-Band Radiometry
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-23 · DOI: 10.1109/JSTARS.2025.3533567
Thomas Jagdhuber;Anne-Sophie Schmidt;Anke Fluhrer;David Chaparro;François Jonard;María Piles;Natan Holtzman;Alexandra G. Konings;Andrew F. Feldman;Martin J. Baur;Susan Steele-Dunne;Konstantin Schellenberg;Harald Kunstmann
Monitoring the water status of forests is paramount for assessing vegetation health, particularly in the context of increasing duration and intensity of droughts. In this study, a methodology was developed for estimating forest water potential at the canopy scale from ground-based L-band radiometry. The study uses radiometer data from a tower-based experiment of the SMAPVEX 19-21 campaign from April to October 2019 at Harvard Forest, MA, USA. The gravimetric and the relative water content of the forest stand was retrieved from radiometer-based vegetation optical depth. A model-based methodology was adapted and assessed to transform the relative water content estimates into values of forest water potential. A comparison and validation of the retrieved forest water potential was conducted with in situ measurements of leaf and xylem water potential to understand the limitations and potentials of the proposed approach for diurnal, weekly and monthly time scales. The radiometer-based water potential estimates of the forest stand were found to be consistent in time with Pearson correlations ($r_{\text{Pearson}}$) up to 0.6 and similar in value, down to RMSE = 0.14 [MPa], compared to their in situ measurements from individual trees in the radiometer footprint, showing encouraging retrieval capabilities. However, a major challenge was the bias between the radiometer-based estimates and the in situ measurements over longer times (weeks and months). Here, an approach using either air temperature or soil moisture to update the minimum water potential of the forest stand ($\text{FWP}_{\text{min}}$) was developed to adjust the mismatch. These results showcase the potential of microwave radiometry for continuous monitoring of plant water status at different spatial and temporal scales, which has long been awaited by forest ecologists and tree physiologists.
{"title":"Estimation of Forest Water Potential From Ground-Based L-Band Radiometry","authors":"Thomas Jagdhuber;Anne-Sophie Schmidt;Anke Fluhrer;David Chaparro;François Jonard;María Piles;Natan Holtzman;Alexandra G. Konings;Andrew F. Feldman;Martin J. Baur;Susan Steele-Dunne;Konstantin Schellenberg;Harald Kunstmann","doi":"10.1109/JSTARS.2025.3533567","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3533567","url":null,"abstract":"Monitoring the water status of forests is paramount for assessing vegetation health, particularly in the context of increasing duration and intensity of droughts. In this study, a methodology was developed for estimating forest water potential at the canopy scale from ground-based L-band radiometry. The study uses radiometer data from a tower-based experiment of the SMAPVEX 19-21 campaign from April to October 2019 at Harvard Forest, MA, USA. The gravimetric and the relative water content of the forest stand was retrieved from radiometer-based vegetation optical depth. A model-based methodology was adapted and assessed to transform the relative water content estimates into values of forest water potential. A comparison and validation of the retrieved forest water potential was conducted with in situ measurements of leaf and xylem water potential to understand the limitations and potentials of the proposed approach for diurnal, weekly and monthly time scales. The radiometer-based water potential estimates of the forest stand were found to be consistent in time with r<sub>Pearson</sub> correlations up to 0.6 and similar in value, down to RMSE = 0.14 [MPa], compared to their in situ measurements from individual trees in the radiometer footprint, showing encouraging retrieval capabilities. However, a major challenge was the bias between the radiometer-based estimates and the in situ measurements over longer times (weeks & months). Here, an approach using either air temperature or soil moisture to update the minimum water potential of the forest stand (<inline-formula><tex-math>$text{FW}{{mathrm{P}}_{text{min}}}$</tex-math></inline-formula>) was developed to adjust the mismatch. These results showcase the potential of microwave radiometry for continuous monitoring of plant water status at different spatial and temporal scales, which has long been awaited by forest ecologists and tree physiologists.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5509-5522"},"PeriodicalIF":4.7,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10852024","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143489139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IAE-CDNet: A Remote Sensing Change Detection Network for Buildings With Interactive Attention-Enhanced
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-22 · DOI: 10.1109/JSTARS.2025.3532783
Zhaoyang Han;Linlin Zhang;Qingyan Meng;Chongchang Wang;Wenxu Shi;Maofan Zhao
The development of deep learning has had a positive impact on remote sensing image change detection, but many current methods still struggle to process global and local features effectively, especially for building change detection in high-resolution images of complex scenes. Extracting target-related features is typically difficult, and changes in scene conditions further increase the difficulty of identifying real changes. To address these challenges, we propose the interactive attention-enhanced change detection network (IAE-CDNet). We design a local–global interaction attention module that effectively establishes the interactive relationship between local and global features and realizes information interaction between branches, enhancing the ability to capture building detail features. Additionally, our change perception attention enhancement module strengthens the perception of features in truly changed areas through the joint action of an internal comprehensive feature extractor and a fusion attention mechanism. We conduct extensive experiments on three datasets. Results indicate that the evaluation metrics and performance of IAE-CDNet surpass those of other state-of-the-art methods.
{"title":"IAE-CDNet: A Remote Sensing Change Detection Network for Buildings With Interactive Attention-Enhanced","authors":"Zhaoyang Han;Linlin Zhang;Qingyan Meng;Chongchang Wang;Wenxu Shi;Maofan Zhao","doi":"10.1109/JSTARS.2025.3532783","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3532783","url":null,"abstract":"Currently, the development of deep learning has had a positive impact on remote sensing image change detection tasks, but many current methods still face challenges in effectively processing global and local features, especially in the task of building change detection in high-resolution images containing complex scenes. The extraction of target-related features is typically difficult, and changes in scene conditions further increase the difficulty of identifying real changes. To address these challenges, we propose the interactive attention-enhanced change detection network (IAE-CDNet). We design the local–global interaction attention module, which effectively establishes the interactive relationship between local and global features and realizes information interaction between branches, enhancing the ability to obtain architectural detail features. Additionally, our change perception attention enhancement module enhances the feature perception ability of the real change area through the joint action of the internal comprehensive feature extractor and the fusion attention mechanism. We conduct extensive experiments on three datasets. Results indicate that the evaluation indicators and performance of our IAE-CDNet are better than those of other state-of-the-art methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5063-5081"},"PeriodicalIF":4.7,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10849815","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Retrieval of Land Surface Temperature From Passive Microwave Observations Using CatBoost-Based Adaptive Feature Selection
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-22 · DOI: 10.1109/JSTARS.2025.3532605
Yang Dai;Yingbao Yang;Xin Pan;Penghua Hu;Xiangjin Meng;Fanggang Li;Zhenwei Wang
Passive microwave (PMW) remote sensing is increasingly employed for generating seamless all-weather land surface temperature (LST) data due to its ability to penetrate cloud cover and capture the actual surface conditions underneath. Existing PMW retrieval methods often utilize large amounts of remote sensing data, overlooking the fact that redundant data can increase computational and time costs, reduce model interpretability, and may negatively impact accuracy. In this article, we propose a PMW-LST retrieval method that integrates CatBoost-based adaptive feature selection. First, we categorized the data into six groups based on the underlying surface types and data view time. Next, for each group, we ranked the feature sets according to their importance and employed the recursive feature elimination (RFE) method for feature selection. Finally, the optimized feature sets were used in the CatBoost algorithm to construct the PMW-LST retrieval model. We compared the accuracy of the proposed method with the Holmes, multichannel, and Random Forest algorithms. Results showed that the proposed method had the lowest RMSE, with values of 3.28 K (1.95 K), 2.69 K (1.65 K), and 3.71 K (2.22 K) on grassland, cropland, and barren land at daytime (nighttime), respectively. Verification at sites in the Heihe river basin shows that the ubRMSE ranges from 1.73 to 4.48 K at daytime and 2.71 to 3.19 K at nighttime under clear-sky conditions, and from 1.83 to 5.23 K at daytime and 2.77 to 3.93 K at nighttime under cloudy-sky conditions. These results indicate that the proposed method achieves higher accuracy in generating seamless all-weather LST data.
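The ranking-plus-recursive-elimination recipe described above can be illustrated with a short, hypothetical sketch: fit CatBoost, drop the least important feature, and repeat. The synthetic features (`ch0`…`ch9`), hyperparameters, and elimination schedule are assumptions for illustration, not the paper's setup.

```python
# Recursive feature elimination driven by CatBoost feature importances.
import numpy as np
from catboost import CatBoostRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # stand-in for PMW channels and auxiliary inputs
y = X[:, 0] * 3.0 + X[:, 1] - X[:, 2] + rng.normal(scale=0.1, size=500)
features = [f"ch{i}" for i in range(10)]

def rfe_catboost(X, y, features, keep=4):
    """Drop the least-important feature each round until `keep` remain."""
    idx = list(range(len(features)))
    while len(idx) > keep:
        model = CatBoostRegressor(iterations=200, depth=4, verbose=False)
        model.fit(X[:, idx], y)
        worst = int(np.argmin(model.feature_importances_))
        idx.pop(worst)
    return [features[i] for i in idx], idx

selected, cols = rfe_catboost(X, y, features)
final_model = CatBoostRegressor(iterations=300, depth=6, verbose=False).fit(X[:, cols], y)
print(selected)                            # the retained feature subset
```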
{"title":"Retrieval of Land Surface Temperature From Passive Microwave Observations Using CatBoost-Based Adaptive Feature Selection","authors":"Yang Dai;Yingbao Yang;Xin Pan;Penghua Hu;Xiangjin Meng;Fanggang Li;Zhenwei Wang","doi":"10.1109/JSTARS.2025.3532605","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3532605","url":null,"abstract":"Passive microwave (PMW) remote sensing is increasingly employed for generating seamless all-weather land surface temperature (LST) data due to its ability to penetrate cloud cover and capture the actual surface conditions underneath. Existing PMW retrieval methods often utilize large amounts of remote sensing data, overlooking the fact that redundant data can increase computational and time costs, reduce model interpretability, and may negatively impact accuracy. In this article, we proposed a PMW-LST retrieval method that integrates CatBoost-Based adaptive feature selection. First, we categorized the data into six groups based on the underlying surface types and data view time. Next, for each group, we ranked the feature sets according to their importance and employed the recursive feature elimination (RFE) method for feature selection. Finally, the optimized feature sets were used in the CatBoost algorithm to construct the PMW-LST retrieval model. We compared the accuracy of the proposed method with the Holmes, multichannel, and Random Forest algorithms. Results showed that the proposed method had lowest RMSE, with the value of 3.28 K (1.95 K), 2.69 K (1.65 K), and 3.71 K (2.22 K) on grassland, cropland, and barren land at daytime (nighttime), respectively. The verification at sites in Heihe river basin shows that the ubRMSE ranges from 1.73 to 4.48 K at daytime and 2.71 to 3.19 K at nighttime under clear-sky conditions, and from 1.83 to 5.23 K at daytime and 2.77 to 3.93 K at nighttime under cloudy-sky conditions. These results indicate the proposed method achieves higher accuracy in generating seamless all-weather LST data.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4949-4963"},"PeriodicalIF":4.7,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10849807","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ClassWise-SAM-Adapter: Parameter-Efficient Fine-Tuning Adapts Segment Anything to SAR Domain for Semantic Segmentation
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-22 · DOI: 10.1109/JSTARS.2025.3532690
Xinyang Pu;Hecheng Jia;Linghao Zheng;Feng Wang;Feng Xu
In the realm of artificial intelligence, the emergence of foundation models, backed by high computing capabilities and extensive data, has been revolutionary. The segment anything model (SAM), built on the vision transformer (ViT) architecture with millions of parameters and trained on the corresponding large-scale dataset SA-1B, excels in various segmentation scenarios thanks to its rich semantic information and generalization ability. Such achievements of visual foundation models stimulate continued research on specific downstream tasks in computer vision. The classwise-SAM-adapter (CWSAM) is designed to adapt the high-performing SAM for landcover classification on space-borne synthetic aperture radar (SAR) images. The proposed CWSAM freezes most of SAM's parameters and incorporates lightweight adapters for parameter-efficient fine-tuning, and a classwise mask decoder is designed to perform the semantic segmentation task. This adapter-tuning method allows for efficient landcover classification of SAR images, balancing accuracy with computational demand. In addition, a task-specific input module injects low-frequency information of SAR images through MLP-based layers to improve model performance. Compared with conventional state-of-the-art semantic segmentation algorithms in extensive experiments, CWSAM showcases enhanced performance with fewer computing resources, highlighting the potential of leveraging foundation models such as SAM for specific downstream tasks in the SAR domain.
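A minimal sketch of the parameter-efficient fine-tuning pattern described above: freeze a pretrained backbone and train only small bottleneck adapters and a task head. The toy backbone, adapter placement, and class count are assumptions; CWSAM applies this idea to SAM's ViT encoder together with a classwise mask decoder.

```python
# Freeze a pretrained backbone; train only lightweight adapters and a head.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, dim, bottleneck=32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

backbone = nn.Sequential(                      # stand-in for a frozen pretrained encoder
    nn.Linear(256, 256), nn.GELU(), nn.Linear(256, 256))
for p in backbone.parameters():
    p.requires_grad = False                    # freeze all pretrained weights

adapter = Adapter(256)
head = nn.Linear(256, 5)                       # e.g., 5 land-cover classes

trainable = list(adapter.parameters()) + list(head.parameters())
print(sum(p.numel() for p in trainable), "trainable vs",
      sum(p.numel() for p in backbone.parameters()), "frozen parameters")

x = torch.randn(4, 256)
logits = head(adapter(backbone(x)))            # only adapter + head receive gradients
```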
{"title":"ClassWise-SAM-Adapter: Parameter-Efficient Fine-Tuning Adapts Segment Anything to SAR Domain for Semantic Segmentation","authors":"Xinyang Pu;Hecheng Jia;Linghao Zheng;Feng Wang;Feng Xu","doi":"10.1109/JSTARS.2025.3532690","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3532690","url":null,"abstract":"In the realm of artificial intelligence, the emergence of foundation models, backed by high computing capabilities and extensive data, has been revolutionary. A segment anything model (SAM), built on the vision transformer (ViT) model with millions of parameters and trained on its corresponding large-scale dataset SA-1B, excels in various segmentation scenarios relying on its significance of semantic information and generalization ability. Such achievement of visual foundation model stimulates continuous researches on specific downstream tasks in computer vision. The classwise-SAM-adapter (CWSAM) is designed to adapt the high-performing SAM for landcover classification on space-borne synthetic aperture radar (SAR) images. The proposed CWSAM freezes most of SAM's parameters and incorporates lightweight adapters for parameter-efficient fine-tuning, and a classwise mask decoder is designed to achieve semantic segmentation task. This adapt-tuning method allows for efficient landcover classification of SAR images, balancing the accuracy with computational demand. In addition, the task-specific input module injects low-frequency information of SAR images by MLP-based layers to improve the model performance. Compared to conventional state-of-the-art semantic segmentation algorithms by extensive experiments, CWSAM showcases enhanced performance with fewer computing resources, highlighting the potential of leveraging foundational models such as SAM for specific downstream tasks in the SAR domain.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4791-4804"},"PeriodicalIF":4.7,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10849617","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
What Helps to Detect What? Explainable AI and Multisensor Fusion for Semantic Segmentation of Simultaneous Crop and Land Cover Land Use Delineation
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-22 · DOI: 10.1109/JSTARS.2025.3532829
Saman Ebrahimi;Saurav Kumar
This study introduces two novel explainable AI frameworks, Interclass-Grad-CAM and Spectral-Grad-CAM, designed to enhance the interpretability of semantic segmentation models for Crop and Land Cover Land Use (CLCLU) mapping. Interclass-Grad-CAM provides insights into interactions between land cover classes, revealing complex spatial arrangements, while Spectral-Grad-CAM quantifies the contributions of individual spectral bands to model predictions, optimizing spectral data use. These XAI methods significantly advance understanding of model behavior, particularly in heterogeneous landscapes, and ensure enhanced transparency in CLCLU mapping. To demonstrate the effectiveness of these innovations, we developed a framework that addresses data asymmetry between the United States and Mexico in the transboundary Middle Rio Grande region. Our approach integrates pixel-level multisensor fusion, combining dual-month moderate-resolution optical imagery (July and December 2023), synthetic aperture radar (SAR), and digital elevation model (DEM) data, processed using a Multi-Attention Network with a modified Mix Vision Transformer encoder to process multiple spectral inputs. Results indicate a uniform improvement in class-specific Intersection over Union by approximately 1% with multisensor integration compared to optical imagery alone. Optical bands proved most effective for crop classification, while SAR and DEM data enhanced predictions for nonagricultural types. This framework not only improves CLCLU mapping accuracy, but also offers a robust tool for broader environmental monitoring and resource management applications.
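A minimal sketch of band-wise attribution in the spirit of Spectral-Grad-CAM: back-propagate a class score to the multispectral input and aggregate gradient magnitude per band. The toy network, band count, and the simple mean-absolute-gradient weighting are assumptions; the paper's Grad-CAM-based formulation may differ.

```python
# Per-band attribution: gradient of a class score with respect to the input bands.
import torch
import torch.nn as nn

bands, classes = 6, 4
model = nn.Sequential(                         # stand-in segmentation network
    nn.Conv2d(bands, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, classes, 1))

x = torch.randn(1, bands, 64, 64, requires_grad=True)
logits = model(x)                              # (1, classes, 64, 64)
target_class = 2
score = logits[:, target_class].sum()          # aggregate score for the class of interest
score.backward()

band_importance = x.grad.abs().mean(dim=(0, 2, 3))   # one value per spectral band
for b, v in enumerate(band_importance):
    print(f"band {b}: {v.item():.4f}")
```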
{"title":"What Helps to Detect What? Explainable AI and Multisensor Fusion for Semantic Segmentation of Simultaneous Crop and Land Cover Land Use Delineation","authors":"Saman Ebrahimi;Saurav Kumar","doi":"10.1109/JSTARS.2025.3532829","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3532829","url":null,"abstract":"This study introduces two novel explainable AI frameworks, Interclass-Grad-CAM and Spectral-Grad-CAM, designed to enhance the interpretability of semantic segmentation models for Crop and Land Cover Land Use (CLCLU) mapping. Interclass-Grad-CAM provides insights into interactions between land cover classes, revealing complex spatial arrangements, while Spectral-Grad-CAM quantifies the contributions of individual spectral bands to model predictions, optimizing spectral data use. These XAI methods significantly advance understanding of model behavior, particularly in heterogeneous landscapes, and ensure enhanced transparency in CLCLU mapping. To demonstrate the effectiveness of these innovations, we developed a framework that addresses data asymmetry between the United States and Mexico in the transboundary Middle Rio Grande region. Our approach integrates pixel-level multisensor fusion, combining dual-month moderate-resolution optical imagery (July and December 2023), synthetic aperture radar (SAR), and digital elevation model (DEM) data, processed using a Multi-Attention Network with a modified Mix Vision Transformer encoder to process multiple spectral inputs. Results indicate a uniform improvement in class-specific Intersection over Union by approximately 1% with multisensor integration compared to optical imagery alone. Optical bands proved most effective for crop classification, while SAR and DEM data enhanced predictions for nonagricultural types. This framework not only improves CLCLU mapping accuracy, but also offers a robust tool for broader environmental monitoring and resource management applications.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5423-5444"},"PeriodicalIF":4.7,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10849589","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SAR Image Simulation for Crater Terrain Using Formation Theory-Based Modeling and Hybrid Ray-Tracing
IF 4.7 · CAS Tier 2 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-01-22 · DOI: 10.1109/JSTARS.2025.3532748
Ya-Ting Zhou;Yongsheng Zhou;Qiang Yin;Fei Ma;Fan Zhang
Simulating synthetic aperture radar (SAR) images of crater terrain is a crucial technique for expanding SAR sample databases and facilitating the development of quantitative information extraction models for craters. However, existing simulation methods often overlook crucial factors, including the explosive depth effect in crater morphology modeling and the double-bounce scattering effect in electromagnetic scattering calculations. To overcome these limitations, this article introduces a novel approach to simulating SAR images of crater terrain. The approach incorporates crater formation theory to describe the relationship between various explosion parameters and craters. Moreover, it employs a hybrid ray-tracing approach that considers both surface and double-bounce scattering effects. Initially, crater morphology models are established for surface, shallow burial, and deep burial explosions. This involves incorporating the explosive depth parameter into crater morphology modeling through crater formation theory and quantitatively assessing soil movement influenced by the explosion. Subsequently, the ray-tracing algorithm and the advanced integral equation model are combined to accurately calculate electromagnetic scattering characteristics. Finally, simulated SAR images of the crater terrain are generated using the SAR echo fast time-frequency domain simulation algorithm and the chirp scaling imaging algorithm. The results obtained by simulating SAR images under different explosion parameters offer valuable insights into the effects of various explosion parameters on crater morphology. This research could contribute to the creation of comprehensive crater terrain datasets and support the application of SAR technology for damage assessment purposes.
{"title":"SAR Image Simulation for Crater Terrain Using Formation Theory-Based Modeling and Hybrid Ray-Tracing","authors":"Ya-Ting Zhou;Yongsheng Zhou;Qiang Yin;Fei Ma;Fan Zhang","doi":"10.1109/JSTARS.2025.3532748","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3532748","url":null,"abstract":"Simulating synthetic aperture radar (SAR) images of crater terrain is a crucial technique for expanding SAR sample databases and facilitating the development of quantitative information extraction models for craters. However, existing simulation methods often overlook crucial factors, including the explosive depth effect in crater morphology modeling and the double-bounce scattering effect in electromagnetic scattering calculations. To overcome these limitations, this article introduces a novel approach to simulating SAR images of crater terrain. The approach incorporates crater formation theory to describe the relationship between various explosion parameters and craters. Moreover, it employs a hybrid ray-tracing approach that considers both surface and double-bounce scattering effects. Initially, crater morphology models are established for surface, shallow burial, and deep burial explosions. This involves incorporating the explosive depth parameter into crater morphology modeling through crater formation theory and quantitatively assessing soil movement influenced by the explosion. Subsequently, the ray-tracing algorithm and the advanced integral equation model are combined to accurately calculate electromagnetic scattering characteristics. Finally, simulated SAR images of the crater terrain are generated using the SAR echo fast time-frequency domain simulation algorithm and the chirp scaling imaging algorithm. The results obtained by simulating SAR images under different explosion parameters offer valuable insights into the effects of various explosion parameters on crater morphology. This research could contribute to the creation of comprehensive crater terrain datasets and support the application of SAR technology for damage assessment purposes.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5005-5017"},"PeriodicalIF":4.7,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10849666","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0