SAR RFI Suppression for Extended Scene Using Interferometric Data via Joint Low-Rank and Sparse Optimization
Pub Date: 2021-11-01 | DOI: 10.1109/lgrs.2020.3011547
Huizhang Yang, Chengzhi Chen, Shengyao Chen, Feng Xi, Zhong Liu
Radio frequency interference (RFI) can significantly pollute synthetic aperture radar (SAR) data and images, and it also degrades SAR interferometry (InSAR) retrievals of elevation information. To address this issue, a class of advanced RFI suppression methods has been proposed in recent years, based on the narrowband properties of RFI and on sparsity assumptions about the radar echoes or target reflectivity. However, these sparsity assumptions usually do not hold for SAR echoes and the associated scene reflectivity when the imaged scene is spatially extended. In view of this problem, this study proposes an InSAR-based RFI suppression method for extended scenes. We combine the RFI-polluted SAR data with RFI-free interferometric data to form an interferometric SAR data pair and show that such a pair embeds an interferogram whose amplitude is the image amplitude multiplied by a complex exponential carrying the interferometric phase. Treating the interferogram as a kind of natural image, we use the discrete cosine transform (DCT) for its sparse representation. Combining DCT-domain sparsity with low-rank modeling of the RFI, we then retrieve the interferogram and reconstruct the SAR image via joint low-rank and sparse optimization. Numerical simulations show that the proposed method can effectively recover SAR images and interferometric phases from RFI-polluted SAR data.
{"title":"SAR RFI Suppression for Extended Scene Using Interferometric Data via Joint Low-Rank and Sparse Optimization","authors":"Huizhang Yang, Chengzhi Chen, Shengyao Chen, Feng Xi, Zhong Liu","doi":"10.1109/lgrs.2020.3011547","DOIUrl":"https://doi.org/10.1109/lgrs.2020.3011547","url":null,"abstract":"Radio frequency interference (RFI) can significantly pollute synthetic aperture radar (SAR) data and images, which is also harmful to SAR interferometry (InSAR) for retrieving elevational information. To address this issue, in recent years, a class of advanced RFI suppression methods has been proposed based on narrowband properties of RFI and sparsity assumptions of radar echoes or target reflectivity. However, for SAR echoes and the associated scene reflectivity, these assumptions are usually not feasible when the imaged scene is spatially extended. In view of these problems, this study proposes an InSAR-based RFI suppression method for the case of extended scenes. For this task, we combine the RFI-polluted SAR data with RFI-free interferometric data to form an interferometric SAR data pair. We show that such an InSAR data pair embeds an interferogram having the image amplitude multiplying by a complex exponential interferometric phase. We treat the interferogram as a kind of natural image and use discrete Fourier cosine transform (DCT) for its sparse representation. Then combining the DCT-domain sparsity with low-rank modeling of RFI, we retrieve the interferogram and reconstruct the SAR image via joint low-rank and sparse optimization. Numerical simulations show that the proposed method can effectively recover SAR images and interferometric phases from RFI-polluted SAR data.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"1976-1980"},"PeriodicalIF":4.8,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41751863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Study on Stability of Surface Soil Moisture and Other Meteorological Variables Within Time Intervals of SMOS and SMAP
Pub Date: 2021-11-01 | DOI: 10.1109/lgrs.2020.3009411
Na Yang, Yanjie Tang, Yongqiang Chen, Feng Xiang
The different orbit designs and launch conditions of Soil Moisture and Ocean Salinity (SMOS, ESA) and Soil Moisture Active Passive (SMAP, NASA) result in different passing times over any point on the ground. The time lag between the two satellites is thought to be one source of uncertainty in soil moisture data comparison and validation. This letter first calculates the temporal difference between SMOS and SMAP; the mismatch is found to concentrate mainly within a period of 30–90 min. Over such time lags, the change in surface soil moisture (5 cm) and other meteorological variables is analyzed on the basis of U.S. Climate Reference Network (USCRN) high-frequency (5-min) field observations and Murrumbidgee Soil Moisture Monitoring Network (MSMMN) in situ measurements (20-min). In most cases, air temperature, wind, and relative humidity present a moderate change of about 10%–20%, while solar radiation varies strongly, from tens to hundreds of percent. Soil moisture and soil temperature are consistently stable: the soil moisture values at the two times when SMOS and SMAP pass overhead are almost the same, and the averaged minimum and maximum fluctuations of soil moisture are only 0.004/0.003 and 0.007/0.01 $\text{m}^{3}/\text{m}^{3}$, respectively, far less than the nominal accuracy of the satellites (0.04 $\text{m}^{3}/\text{m}^{3}$) and probably unrecognizable. Soil moisture experiences only a natural fading of very small magnitude during the interval between the satellites, so the temporal mismatch should not introduce external uncertainties into soil moisture data comparison and validation, and it is safe to conclude that the impact is negligible.
{"title":"Study on Stability of Surface Soil Moisture and Other Meteorological Variables Within Time Intervals of SMOS and SMAP","authors":"Na Yang, Yanjie Tang, Yongqiang Chen, Feng Xiang","doi":"10.1109/lgrs.2020.3009411","DOIUrl":"https://doi.org/10.1109/lgrs.2020.3009411","url":null,"abstract":"The different orbit design and launching conditions of Soil Moisture and Ocean Salinity (SMOS, ESA) and Soil Moisture Active Passive (SMAP, NASA) result in different passing time over any point on the ground. The time lag between the two satellites is thought to be one of the reasons to induce uncertainties in soil moisture data comparison and validation. This letter calculates the temporal difference between SMOS and SMAP at first; it is found that their mismatch mainly concentrates within a period of 30–90 min. During such time lag, the change in surface soil moisture (5 cm) and other meteorological variables is analyzed on the basis of the U.S. Climate Reference Network (USCRN) high-frequency (5-min) field observations and Murrumbidgee Soil Moisture Monitoring Network (MSMMN) in situ measurements (20-min). This letter found that in most cases, air temperature, wind, and relative humidity present a moderate change of about 10%–20%, while solar radiation shows very strong variation from tens to hundreds (%). Soil moisture and soil temperature are always stable, the value of soil moisture at the two time points when SMOS and SMAP pass overhead are almost the same, and the averaged minimum and maximum fluctuations of soil moisture are only 0.004/0.003 and 0.007/0.01 $text{m}^{3}/text{m}^{3}$ , respectively, which are far less than the nominal accuracy of satellites (0.04 $text{m}^{3}/text{m}^{3})$ and probably unrecognizable. Soil moisture experiences a natural fading of very small magnitude during the time intervals of satellites, the temporal mismatch may not induce external uncertainties in soil moisture data comparison and validation, and it is safe to conclude that the impact is negligible.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"1911-1915"},"PeriodicalIF":4.8,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/lgrs.2020.3009411","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46755728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pointwise Mutual Information-Based Graph Laplacian Regularized Sparse Unmixing
Pub Date: 2021-10-21 | DOI: 10.36227/techrxiv.16831330.v1
Sefa Kucuk, S. E. Yuksel
Sparse unmixing (SU) aims to express the observed image signatures as a linear combination of pure spectra known a priori and has become a very popular technique with promising results in analyzing hyperspectral images (HSIs) over the past ten years. In SU, utilizing the spatial–contextual information allows for more realistic abundance estimation. To make full use of the spatial–spectral information, in this letter, we propose a pointwise mutual information (PMI)-based graph Laplacian (GL) regularization for SU. Specifically, we construct the affinity matrices via PMI by modeling the association between neighboring image features through a statistical framework, and then we use them in the GL regularizer. We also adopt a double reweighted $\ell_{1}$ norm minimization scheme to promote the sparsity of fractional abundances. Experimental results on simulated and real datasets prove the effectiveness of the proposed method and its superiority over competing algorithms in the literature.
{"title":"Pointwise Mutual Information-Based Graph Laplacian Regularized Sparse Unmixing","authors":"Sefa Kucuk, S. E. Yuksel","doi":"10.36227/techrxiv.16831330.v1","DOIUrl":"https://doi.org/10.36227/techrxiv.16831330.v1","url":null,"abstract":"Sparse unmixing (SU) aims to express the observed image signatures as a linear combination of pure spectra known a priori and has become a very popular technique with promising results in analyzing hyperspectral images (HSIs) over the past ten years. In SU, utilizing the spatial–contextual information allows for more realistic abundance estimation. To make full use of the spatial–spectral information, in this letter, we propose a pointwise mutual information (PMI)-based graph Laplacian (GL) regularization for SU. Specifically, we construct the affinity matrices via PMI by modeling the association between neighboring image features through a statistical framework and then we use them in the GL regularizer. We also adopt a double reweighted $ell _{1}$ norm minimization scheme to promote the sparsity of fractional abundances. Experimental results on simulated and real datasets prove the effectiveness of the proposed method and its superiority over competing algorithms in the literature.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":" ","pages":"1-5"},"PeriodicalIF":4.8,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48381364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When CNNs Meet Vision Transformer: A Joint Framework for Remote Sensing Scene Classification
Pub Date: 2021-09-09 | DOI: 10.1109/lgrs.2021.3109061
Peifang Deng, Kejie Xu, Hong Huang
Scene classification is an indispensable part of remote sensing image interpretation, and various convolutional neural network (CNN)-based methods have been explored to improve classification accuracy. Although they show good classification performance on high-resolution remote sensing (HRRS) images, the discriminative ability of the extracted features is still limited. In this letter, a high-performance joint framework combining CNNs and a vision transformer (ViT), termed CTNet, is proposed to further boost the discriminative ability of features for HRRS scene classification. CTNet contains two modules: a ViT stream (T-stream) and a CNN stream (C-stream). In the T-stream, flattened image patches are fed into a pretrained ViT model to mine semantic features in HRRS images. To complement the T-stream, a pretrained CNN is transferred to extract local structural features in the C-stream. The semantic and structural features are then concatenated to predict the labels of unknown samples. Finally, a joint loss function is developed to optimize the joint model and increase intraclass aggregation. The highest accuracies obtained by CTNet on the aerial image dataset (AID) and the Northwestern Polytechnical University (NWPU)-RESISC45 dataset are 97.70% and 95.49%, respectively. The classification results reveal that the proposed method achieves high classification performance compared with other state-of-the-art (SOTA) methods.
{"title":"When CNNs Meet Vision Transformer: A Joint Framework for Remote Sensing Scene Classification","authors":"Peifang Deng, Kejie Xu, Hong Huang","doi":"10.1109/lgrs.2021.3109061","DOIUrl":"https://doi.org/10.1109/lgrs.2021.3109061","url":null,"abstract":"Scene classification is an indispensable part of remote sensing image interpretation, and various convolutional neural network (CNN)-based methods have been explored to improve classification accuracy. Although they have shown good classification performance on high-resolution remote sensing (HRRS) images, discriminative ability of extracted features is still limited. In this letter, a high-performance joint framework combined CNNs and vision transformer (ViT) (CTNet) is proposed to further boost the discriminative ability of features for HRRS scene classification. The CTNet method contains two modules, including the stream of ViT (T-stream) and the stream of CNNs (C-stream). For the T-stream, flattened image patches are sent into pretrained ViT model to mine semantic features in HRRS images. To complement with T-stream, pretrained CNN is transferred to extract local structural features in the C-stream. Then, semantic features and structural features are concatenated to predict labels of unknown samples. Finally, a joint loss function is developed to optimize the joint model and increase the intraclass aggregation. The highest accuracies on the aerial image dataset (AID) and Northwestern Polytechnical University (NWPU)-RESISC45 datasets obtained by the CTNet method are 97.70% and 95.49%, respectively. The classification results reveal that the proposed method achieves high classification performance compared with other state-of-the-art (SOTA) methods.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"19 1","pages":"1-5"},"PeriodicalIF":4.8,"publicationDate":"2021-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62481570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Deep Learning Framework for the Detection of Tropical Cyclones From Satellite Images
Pub Date: 2021-09-01 | DOI: 10.36227/techrxiv.16432641
A. Nair, K. S. Srujan, Sayali Kulkarni, Kshitij Alwadhi, Navya Jain, H. Kodamana, S. Sukumaran, V. John
Tropical cyclones (TCs) are the most destructive weather systems that form over the tropical oceans, with about 90 storms forming globally every year. Timely detection and tracking of TCs are important for advance warning to the affected regions. Because these storms form over open oceans far from the continents, remote sensing plays a crucial role in detecting them. Here we present automated TC detection from satellite images based on a novel deep learning technique. We propose a multistage deep learning framework for the detection of TCs comprising 1) a detector, a Mask region-based convolutional neural network (Mask R-CNN); 2) a wind speed filter; and 3) a classifier, a convolutional neural network (CNN). The hyperparameters of the entire pipeline are optimized using Bayesian optimization. Results indicate that the proposed approach yields high precision (97.10%), specificity (97.59%), and accuracy (86.55%) on test images.
{"title":"A Deep Learning Framework for the Detection of Tropical Cyclones From Satellite Images","authors":"A. Nair, K. S. Srujan, Sayali Kulkarni, Kshitij Alwadhi, Navya Jain, H. Kodamana, S. Sukumaran, V. John","doi":"10.36227/techrxiv.16432641","DOIUrl":"https://doi.org/10.36227/techrxiv.16432641","url":null,"abstract":"Tropical cyclones (TCs) are the most destructive weather systems that form over the tropical oceans, with about 90 storms forming globally every year. The timely detection and tracking of TCs are important for advanced warning to the affected regions. As these storms form over the open oceans far from the continents, remote sensing plays a crucial role in detecting them. Here we present an automated TC detection from satellite images based on a novel deep learning technique. In this study, we propose a multistaged deep learning framework for the detection of TCs, including, 1) a detector—Mask region-convolutional neural network (R-CNN); 2) a wind speed filter; and 3) a classifier—convolutional neural network (CNN). The hyperparameters of the entire pipeline are optimized to showcase the best performance using Bayesian optimization. Results indicate that the proposed approach yields high precision (97.10%), specificity (97.59%), and accuracy (86.55%) for test images.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":" ","pages":"1-5"},"PeriodicalIF":4.8,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43343042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimation of Flood Inundation and Depth During Hurricane Florence Using Sentinel-1 and UAVSAR Data
Pub Date: 2021-08-26 | DOI: 10.1002/essoar.10507902.1
S. Kundu, V. Lakshmi, R. Torres
We studied the temporal and spatial changes in flood water elevation and the variation in surface extent due to flooding from Hurricane Florence (September 2018), using L-band observations from the unmanned aerial vehicle synthetic aperture radar (UAVSAR) and the C-band synthetic aperture radar (SAR) sensor on Sentinel-1. The novelty of this study lies in estimating the changes in flood depth during the hurricane and investigating which estimation method performs best. Overall, flood depths from SAR were well correlated with the spatially distributed ground-based observations ($R^{2} = 0.79$–0.96). The corresponding change in water level ($\partial h/\partial t$) also compared well between the remote sensing approach and the ground observations ($R^{2} = 0.90$). This study highlights the potential of SAR remote sensing for inundated landscapes (and locations with scarce ground observations), and it emphasizes the need for more frequent SAR observations during flood inundation to provide spatially distributed, high-temporal-repeat observations of inundation for characterizing flood dynamics.
{"title":"Estimation of Flood Inundation and Depth During Hurricane Florence Using Sentinel-1 and UAVSAR Data","authors":"S. Kundu, V. Lakshmi, R. Torres","doi":"10.1002/essoar.10507902.1","DOIUrl":"https://doi.org/10.1002/essoar.10507902.1","url":null,"abstract":"We studied the temporal and spatial changes in flood water elevation and variation in the surface extent due to flooding resulting from Hurricane Florence (September 2018) using the L-band observation from an unmanned aerial vehicle synthetic aperture radar (UAVSAR) and C-band synthetic aperture radar (SAR) sensors on Sentinel-1. The novelty of this study lies in the estimation of the changes in the flood depth during the hurricane and investigating the best method. Overall, flood depths from SAR were observed to be well-correlated with the spatially distributed ground-based observations (<inline-formula> <tex-math notation=\"LaTeX\">$R^{2} = 0.79$ </tex-math></inline-formula>–0.96). The corresponding change in water level (<inline-formula> <tex-math notation=\"LaTeX\">$partial text{h}/partial text{t}$ </tex-math></inline-formula>) also compared well between the remote sensing approach and the ground observations (<inline-formula> <tex-math notation=\"LaTeX\">$R^{2} = 0.90$ </tex-math></inline-formula>). This study highlights the potential use of SAR remote sensing for inundated landscapes (and locations with scarce ground observations), and it emphasizes the need for more frequent SAR observations during flood inundation to provide spatially distributed and high temporal repeat observations of inundation to characterize flood dynamics.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":" ","pages":"1-5"},"PeriodicalIF":4.8,"publicationDate":"2021-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/essoar.10507902.1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45282019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Contrasting YOLOv5, Transformer, and EfficientDet Detectors for Crop Circle Detection in Desert
Pub Date: 2021-06-14 | DOI: 10.1109/LGRS.2021.3085139
M. L. Mekhalfi, Carlo Nicolò, Y. Bazi, Mohamad Mahmoud Al Rahhal, Norah A. Alsharif, E. Maghayreh
Ongoing discoveries of water reserves have fostered an increasing adoption of crop circles in the desert in several countries. Automatically quantifying and surveying the layout of crop circles in remote areas can be of great use to stakeholders in managing the expansion of farming land. This letter compares the latest deep learning models for crop circle detection and counting, namely the Detection Transformer (DETR), EfficientDet, and YOLOv5. To this end, we build two datasets via Google Earth Pro, corresponding to two large crop circle hot spots in Egypt and Saudi Arabia. The images were captured at an altitude of 20 km above the targets. The models are assessed in within-domain and cross-domain scenarios, and yield plausible detection performance and inference response.
{"title":"Contrasting YOLOv5, Transformer, and EfficientDet Detectors for Crop Circle Detection in Desert","authors":"M. L. Mekhalfi, Carlo Nicolò, Y. Bazi, Mohamad Mahmoud Al Rahhal, Norah A. Alsharif, E. Maghayreh","doi":"10.1109/LGRS.2021.3085139","DOIUrl":"https://doi.org/10.1109/LGRS.2021.3085139","url":null,"abstract":"Ongoing discoveries of water reserves have fostered an increasing adoption of crop circles in the desert in several countries. Automatically quantifying and surveying the layout of crop circles in remote areas can be of great use for stakeholders in managing the expansion of the farming land. This letter compares latest deep learning models for crop circle detection and counting, namely Detection Transformers, EfficientDet and YOLOv5 are evaluated. To this end, we build two datasets, via Google Earth Pro, corresponding to two large crop circle hot spots in Egypt and Saudi Arabia. The images were drawn at an altitude of 20 km above the targets. The models are assessed in within-domain and cross-domain scenarios, and yielded plausible detection potential and inference response.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"19 1","pages":"1-5"},"PeriodicalIF":4.8,"publicationDate":"2021-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2021.3085139","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62479064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Infrared Small Target Tracking via Gaussian Curvature-Based Compressive Convolution Feature Extraction
Pub Date: 2021-02-12 | DOI: 10.1109/LGRS.2021.3051183
Minjie Wan, Xiaobo Ye, Xiaojie Zhang, Yunkai Xu, G. Gu, Qian Chen
The precision of infrared (IR) small target tracking is severely limited by the lack of texture information and the interference of background clutter. The key to robust tracking is to exploit generic feature representations of IR small targets under different types of background. In this letter, we present a new IR small target tracking method via compressive convolution feature (CCF) extraction. First, a Gaussian curvature-based feature map is calculated to suppress clutter so that the contrast between target and background is markedly improved. Then, a three-layer compressive convolutional network, consisting of a simple layer, a compressive layer, and a complex layer, is designed to represent each candidate target by a CCF vector. Based on the proposed feature extraction mechanism, a support vector machine (SVM) classifier with continuous probabilistic output is trained to compute the likelihood probability of each candidate. Finally, long-term tracking of IR small targets is implemented in the framework of an inverse sparse representation-based particle filter. Both qualitative and quantitative experiments on real IR sequences verify that our method achieves better precision and robustness than other typical visual trackers.
{"title":"Infrared Small Target Tracking via Gaussian Curvature-Based Compressive Convolution Feature Extraction","authors":"Minjie Wan, Xiaobo Ye, Xiaojie Zhang, Yunkai Xu, G. Gu, Qian Chen","doi":"10.1109/LGRS.2021.3051183","DOIUrl":"https://doi.org/10.1109/LGRS.2021.3051183","url":null,"abstract":"The precision of infrared (IR) small target tracking is seriously limited due to lack of texture information and interference of background clutter. The key issue of robust tracking is to exploit generic feature representations of IR small targets under different types of background. In this letter, we present a new IR small target tracking method via compressive convolution feature (CCF) extraction. First, a Gaussian curvature-based feature map is calculated to suppress clutters so that the contrast between target and background can be obviously improved. Then, a three-layer compressive convolutional network, which consists of a simple layer, a compressive layer, and a complex layer, is designed to represent each candidate target by a CCF vector. Based on the proposed mechanism of feature extraction, a support vector machine (SVM) classifier with continuous probabilistic output is trained to compute the likelihood probability of each candidate. Finally, the long-term tracking for IR small target is implemented under the framework of the inverse sparse representation-based particle filter. Both qualitative and quantitative experiments based on real IR sequences verify that our method can achieve more satisfactory performances in terms of precision and robustness compared with other typical visual trackers.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"26 1","pages":"1-5"},"PeriodicalIF":4.8,"publicationDate":"2021-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2021.3051183","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62476742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of the Initial Sea Surface Temperature From the HY-2B Scanning Microwave Radiometer
Pub Date: 2021-01-01 | DOI: 10.1109/LGRS.2020.2968635
Lei Zhang, Hong Yu, Zhenzhan Wang, X. Yin, Liang Yang, Hua-dong Du, Bin Li, Y. Wang, Wu Zhou
Haiyang-2B (HY-2B) is the second marine dynamic environment satellite of China. Sea surface temperature (SST) products from the scanning microwave radiometer (SMR) onboard the HY-2B satellite are evaluated against in situ measurements. Approximately ten months of data, from January 15, 2019 to November 15, 2019, are used for the initial evaluation. The temporal and spatial windows for collocation are 30 min and 25 km, respectively, which produce 450 416 matchup pairs between HY-2B/SMR and in situ SSTs. The statistical comparison of the entire data set shows a mean bias of −0.13 °C (SMR minus buoy) and a corresponding root-mean-square error (RMSE) of 1.06 °C. The time series of collocated SST differences shows good agreement between HY-2B/SMR and in situ SSTs after June 15, with a mean bias and an RMSE of only 0.09 °C and 0.72 °C, respectively. A three-way error analysis is conducted between the SMR, Global Precipitation Measurement Microwave Imager (GMI), and in situ SSTs. The individual standard deviations are found to be 0.41 °C for the GMI SST, 0.15 °C for the in situ SST, and 1.03 °C for the SMR SST. The results indicate that the HY-2B/SMR SST products need to be improved for the period from January 15, 2019 to June 15, 2019.
{"title":"Evaluation of the Initial Sea Surface Temperature From the HY-2B Scanning Microwave Radiometer","authors":"Lei Zhang, Hong Yu, Zhenzhan Wang, X. Yin, Liang Yang, Hua-dong Du, Bin Li, Y. Wang, Wu Zhou","doi":"10.1109/LGRS.2020.2968635","DOIUrl":"https://doi.org/10.1109/LGRS.2020.2968635","url":null,"abstract":"Haiyang-2B (HY-2B) is the second marine dynamic environment satellite of China. Sea surface temperature (SST) products from the scanning microwave radiometer (SMR) onboard HY-2B satellite are evaluated against in situ measurements. Approximately, ten months of data are used for the initial evaluation, from January 15, 2019 to November 15, 2019. The temporal and spatial windows for collocation are 30 min and 25 km, respectively, which produce 450 416 matchup pairs between HY-2B/SMR and in situ SSTs. The statistical comparison of the entire data set shows that the mean bias is −0.13 °C (SMR minus buoy), and the corresponding root-mean-square error (RMSE) is 1.06 °C. Time series of collocations for the SST difference shows that a good agreement is found between HY-2B/SMR and in situ SSTs after June 15, revealing a mean bias and an RMSE of only 0.09 °C and 0.72 °C, respectively. A three-way error analysis is conducted between the SMR, Global Precipitation Measurement Microwave Imager (GMI), and in situ SSTs. Individual standard deviations are found to be 0.41 °C for the GMI SST, 0.15 °C for the in situ SST, and 1.03 °C for the SMR SST. The results indicate that the HY2B/SMR SST products need to be improved during the period from January 15, 2019 to June 15, 2019.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"137-141"},"PeriodicalIF":4.8,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2020.2968635","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62473216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kirsch Direction Template Despeckling Algorithm of High-Resolution SAR Images Based on Structural Information Detection
Pub Date: 2021-01-01 | DOI: 10.1109/LGRS.2020.2966369
S. Hou, Zengguo Sun, Liu Yang, Yunjing Song
To overcome the drawback that traditional Kirsch template despeckling uses fixed windows, an improved Kirsch direction template despeckling algorithm based on structural information detection is proposed for high-resolution synthetic aperture radar (SAR) images. First, point targets are detected and preserved in the current region. Second, the window is enlarged adaptively based on the statistical characteristics of the local region. Finally, the resulting window is classified: the averaging filter is applied directly if the region is homogeneous; otherwise, the Kirsch template filter is used. By combining point target detection, adaptive windowing, and region classification, the proposed algorithm effectively improves on traditional Kirsch direction template despeckling. Despeckling experiments on simulated and real high-resolution SAR images demonstrate that the algorithm not only sufficiently suppresses speckle in homogeneous and edge regions but also effectively preserves point targets and edge information, leading to good despeckling results.
{"title":"Kirsch Direction Template Despeckling Algorithm of High-Resolution SAR Images-Based on Structural Information Detection","authors":"S. Hou, Zengguo Sun, Liu Yang, Yunjing Song","doi":"10.1109/LGRS.2020.2966369","DOIUrl":"https://doi.org/10.1109/LGRS.2020.2966369","url":null,"abstract":"In order to overcome the drawback of the traditional Kirsch template despeckling usings fixed windows, an improved Kirsch direction template despeckling algorithm, based on structural information detection, is proposed for high-resolution synthetic aperture radar (SAR) images. First, the point targets are detected and preserved in the current region. Second, the window is enlarged adaptively based on the statistical characteristics of the local region. Finally, the window finally obtained is classified. The averaged filter is directly adopted if the region is homogeneous, or else the Kirsch template filter is used. Combining point target detection, adaptive windowing, and region classification, altogether the proposed algorithm can effectively improve the performance of the traditional Kirsch direction template despeckling. Despeckling experiments on simulated and real high-resolution SAR images demonstrate that the Kirsch direction template despeckling algorithm based on structural information detection can not only sufficiently suppress speckle in homogenous and edge regions, but also effectively preserve point targets and edge information, leading to good despeckling results.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"177-181"},"PeriodicalIF":4.8,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2020.2966369","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62472413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}