Pointwise Mutual Information-Based Graph Laplacian Regularized Sparse Unmixing
Pub Date: 2021-10-21 | DOI: 10.36227/techrxiv.16831330.v1 | IEEE Geoscience and Remote Sensing Letters, pp. 1-5
Sefa Kucuk, S. E. Yuksel
Sparse unmixing (SU) aims to express the observed image signatures as a linear combination of pure spectra known a priori, and over the past decade it has become a very popular technique, with promising results, for analyzing hyperspectral images (HSIs). In SU, utilizing spatial–contextual information allows for more realistic abundance estimation. To make full use of the spatial–spectral information, in this letter, we propose a pointwise mutual information (PMI)-based graph Laplacian (GL) regularization for SU. Specifically, we construct the affinity matrices via PMI by modeling the association between neighboring image features through a statistical framework, and we then use them in the GL regularizer. We also adopt a double reweighted $\ell_{1}$-norm minimization scheme to promote the sparsity of the fractional abundances. Experimental results on simulated and real datasets demonstrate the effectiveness of the proposed method and its superiority over competing algorithms in the literature.
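The PMI affinity construction lends itself to a short sketch. Below is a minimal numpy illustration, assuming pixel spectra have already been quantized into discrete feature labels; the letter's exact neighborhood model, binning, and regularizer weighting are not reproduced, and `pmi_affinity`/`graph_laplacian` are hypothetical helper names.

```python
import numpy as np

def pmi_affinity(labels, n_bins, eps=1e-12):
    """Positive PMI between quantized feature labels of 4-adjacent pixels.

    labels : (H, W) integer array of quantized spectral features.
    Returns an (n_bins, n_bins) affinity table; an entry is high when two
    feature labels co-occur as neighbors more often than chance predicts.
    """
    co = np.zeros((n_bins, n_bins))
    h_pairs = (labels[:, :-1].ravel(), labels[:, 1:].ravel())   # horizontal neighbors
    v_pairs = (labels[:-1, :].ravel(), labels[1:, :].ravel())   # vertical neighbors
    for u, v in (h_pairs, v_pairs):
        np.add.at(co, (u, v), 1)
        np.add.at(co, (v, u), 1)                                # keep the table symmetric
    p_joint = co / co.sum()
    p_marg = p_joint.sum(axis=1)
    pmi = np.log((p_joint + eps) / (np.outer(p_marg, p_marg) + eps))
    return np.maximum(pmi, 0.0)                                 # positive PMI as affinity

def graph_laplacian(W):
    """L = D - W for a symmetric affinity matrix W."""
    return np.diag(W.sum(axis=1)) - W

# With abundances A (endmembers x pixels), the GL penalty tr(A L A^T)
# encourages strongly associated pixels to share similar abundance vectors.
```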
{"title":"Pointwise Mutual Information-Based Graph Laplacian Regularized Sparse Unmixing","authors":"Sefa Kucuk, S. E. Yuksel","doi":"10.36227/techrxiv.16831330.v1","DOIUrl":"https://doi.org/10.36227/techrxiv.16831330.v1","url":null,"abstract":"Sparse unmixing (SU) aims to express the observed image signatures as a linear combination of pure spectra known a priori and has become a very popular technique with promising results in analyzing hyperspectral images (HSIs) over the past ten years. In SU, utilizing the spatial–contextual information allows for more realistic abundance estimation. To make full use of the spatial–spectral information, in this letter, we propose a pointwise mutual information (PMI)-based graph Laplacian (GL) regularization for SU. Specifically, we construct the affinity matrices via PMI by modeling the association between neighboring image features through a statistical framework and then we use them in the GL regularizer. We also adopt a double reweighted $ell _{1}$ norm minimization scheme to promote the sparsity of fractional abundances. Experimental results on simulated and real datasets prove the effectiveness of the proposed method and its superiority over competing algorithms in the literature.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":" ","pages":"1-5"},"PeriodicalIF":4.8,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48381364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When CNNs Meet Vision Transformer: A Joint Framework for Remote Sensing Scene Classification
Pub Date: 2021-09-09 | DOI: 10.1109/lgrs.2021.3109061 | IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1-5
Peifang Deng, Kejie Xu, Hong Huang
Scene classification is an indispensable part of remote sensing image interpretation, and various convolutional neural network (CNN)-based methods have been explored to improve classification accuracy. Although they show good classification performance on high-resolution remote sensing (HRRS) images, the discriminative ability of the extracted features is still limited. In this letter, a high-performance joint framework combining CNNs and a vision transformer (ViT), named CTNet, is proposed to further boost the discriminative ability of features for HRRS scene classification. CTNet contains two modules: the ViT stream (T-stream) and the CNN stream (C-stream). In the T-stream, flattened image patches are fed into a pretrained ViT model to mine semantic features in HRRS images. To complement the T-stream, a pretrained CNN is transferred to extract local structural features in the C-stream. The semantic and structural features are then concatenated to predict the labels of unknown samples. Finally, a joint loss function is developed to optimize the joint model and increase intraclass aggregation. The highest accuracies obtained by CTNet on the aerial image dataset (AID) and the Northwestern Polytechnical University (NWPU)-RESISC45 dataset are 97.70% and 95.49%, respectively. The classification results show that the proposed method achieves high classification performance compared with other state-of-the-art (SOTA) methods.
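As a rough illustration of the two-stream design, here is a schematic PyTorch sketch: a CNN branch and a ViT branch are fused by concatenation, and the joint loss pairs cross-entropy with a center-style pull term as one plausible way to increase intraclass aggregation. The backbones, feature dimensions, and the `centers` table are assumptions, not the letter's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamNet(nn.Module):
    """C-stream (CNN) + T-stream (ViT) features, concatenated for classification."""
    def __init__(self, cnn_backbone, vit_backbone, cnn_dim, vit_dim, n_classes):
        super().__init__()
        self.cnn = cnn_backbone      # e.g., a pretrained ResNet with its head removed
        self.vit = vit_backbone      # e.g., a pretrained ViT returning CLS features
        self.head = nn.Linear(cnn_dim + vit_dim, n_classes)

    def forward(self, x):
        f_struct = self.cnn(x)                      # local structural features
        f_sem = self.vit(x)                         # global semantic features
        f = torch.cat([f_struct, f_sem], dim=1)     # feature-level fusion
        return self.head(f), f

def joint_loss(logits, feats, labels, centers, lam=0.1):
    """Cross-entropy plus a pull toward per-class feature means.

    centers : (n_classes, feat_dim) tensor of class means (e.g., an
    nn.Parameter updated during training); the pull term shrinks
    intraclass scatter in the fused feature space.
    """
    ce = F.cross_entropy(logits, labels)
    pull = ((feats - centers[labels]) ** 2).sum(dim=1).mean()
    return ce + lam * pull
```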
{"title":"When CNNs Meet Vision Transformer: A Joint Framework for Remote Sensing Scene Classification","authors":"Peifang Deng, Kejie Xu, Hong Huang","doi":"10.1109/lgrs.2021.3109061","DOIUrl":"https://doi.org/10.1109/lgrs.2021.3109061","url":null,"abstract":"Scene classification is an indispensable part of remote sensing image interpretation, and various convolutional neural network (CNN)-based methods have been explored to improve classification accuracy. Although they have shown good classification performance on high-resolution remote sensing (HRRS) images, discriminative ability of extracted features is still limited. In this letter, a high-performance joint framework combined CNNs and vision transformer (ViT) (CTNet) is proposed to further boost the discriminative ability of features for HRRS scene classification. The CTNet method contains two modules, including the stream of ViT (T-stream) and the stream of CNNs (C-stream). For the T-stream, flattened image patches are sent into pretrained ViT model to mine semantic features in HRRS images. To complement with T-stream, pretrained CNN is transferred to extract local structural features in the C-stream. Then, semantic features and structural features are concatenated to predict labels of unknown samples. Finally, a joint loss function is developed to optimize the joint model and increase the intraclass aggregation. The highest accuracies on the aerial image dataset (AID) and Northwestern Polytechnical University (NWPU)-RESISC45 datasets obtained by the CTNet method are 97.70% and 95.49%, respectively. The classification results reveal that the proposed method achieves high classification performance compared with other state-of-the-art (SOTA) methods.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"19 1","pages":"1-5"},"PeriodicalIF":4.8,"publicationDate":"2021-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62481570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Deep Learning Framework for the Detection of Tropical Cyclones From Satellite Images
Pub Date: 2021-09-01 | DOI: 10.36227/techrxiv.16432641 | IEEE Geoscience and Remote Sensing Letters, pp. 1-5
A. Nair, K. S. Srujan, Sayali Kulkarni, Kshitij Alwadhi, Navya Jain, H. Kodamana, S. Sukumaran, V. John
Tropical cyclones (TCs) are the most destructive weather systems that form over the tropical oceans, with about 90 storms forming globally every year. Timely detection and tracking of TCs are important for issuing advance warnings to the affected regions. As these storms form over the open oceans far from the continents, remote sensing plays a crucial role in detecting them. Here, we present an automated TC detection system for satellite images based on a novel deep learning technique. Specifically, we propose a multistage deep learning framework for the detection of TCs, comprising 1) a detector, a Mask region-based convolutional neural network (Mask R-CNN); 2) a wind speed filter; and 3) a classifier, a convolutional neural network (CNN). The hyperparameters of the entire pipeline are tuned using Bayesian optimization. Results indicate that the proposed approach yields high precision (97.10%), specificity (97.59%), and accuracy (86.55%) on test images.
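The three-stage cascade can be summarized in a few lines. The sketch below stubs out the detector, the wind field lookup, and the classifier; the interfaces and the 17 m/s gate (the conventional tropical storm threshold) are illustrative assumptions, not values taken from the paper.

```python
def detect_tropical_cyclones(image, detector, wind_field, classifier,
                             wind_thresh_ms=17.0):
    """Three-stage cascade: region proposals -> wind speed gate -> CNN verdict.

    detector(image)  -> iterable of (box, mask) candidates, box = (x1, y1, x2, y2)
                        (e.g., a Mask R-CNN);
    wind_field(box)  -> maximum wind speed (m/s) inside the box;
    classifier(crop) -> probability that the crop contains a TC.
    """
    confirmed = []
    for box, mask in detector(image):
        if wind_field(box) < wind_thresh_ms:
            continue                                 # drop weak-circulation candidates
        crop = image[box[1]:box[3], box[0]:box[2]]   # assumes a numpy-like image array
        if classifier(crop) > 0.5:
            confirmed.append((box, mask))
    return confirmed
```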
{"title":"A Deep Learning Framework for the Detection of Tropical Cyclones From Satellite Images","authors":"A. Nair, K. S. Srujan, Sayali Kulkarni, Kshitij Alwadhi, Navya Jain, H. Kodamana, S. Sukumaran, V. John","doi":"10.36227/techrxiv.16432641","DOIUrl":"https://doi.org/10.36227/techrxiv.16432641","url":null,"abstract":"Tropical cyclones (TCs) are the most destructive weather systems that form over the tropical oceans, with about 90 storms forming globally every year. The timely detection and tracking of TCs are important for advanced warning to the affected regions. As these storms form over the open oceans far from the continents, remote sensing plays a crucial role in detecting them. Here we present an automated TC detection from satellite images based on a novel deep learning technique. In this study, we propose a multistaged deep learning framework for the detection of TCs, including, 1) a detector—Mask region-convolutional neural network (R-CNN); 2) a wind speed filter; and 3) a classifier—convolutional neural network (CNN). The hyperparameters of the entire pipeline are optimized to showcase the best performance using Bayesian optimization. Results indicate that the proposed approach yields high precision (97.10%), specificity (97.59%), and accuracy (86.55%) for test images.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":" ","pages":"1-5"},"PeriodicalIF":4.8,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43343042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimation of Flood Inundation and Depth During Hurricane Florence Using Sentinel-1 and UAVSAR Data
Pub Date: 2021-08-26 | DOI: 10.1002/essoar.10507902.1 | IEEE Geoscience and Remote Sensing Letters, pp. 1-5
S. Kundu, V. Lakshmi, R. Torres
We studied the temporal and spatial changes in floodwater elevation and the variation in surface extent due to flooding from Hurricane Florence (September 2018), using L-band observations from an unmanned aerial vehicle synthetic aperture radar (UAVSAR) and C-band synthetic aperture radar (SAR) observations from Sentinel-1. The novelty of this study lies in estimating the changes in flood depth during the hurricane and identifying the best method for doing so. Overall, flood depths from SAR were well correlated with the spatially distributed ground-based observations ($R^{2} = 0.79$–$0.96$). The corresponding change in water level ($\partial h/\partial t$) also compared well between the remote sensing approach and the ground observations ($R^{2} = 0.90$). This study highlights the potential of SAR remote sensing for inundated landscapes (and locations with scarce ground observations), and it emphasizes the need for more frequent SAR observations during flood inundation to provide spatially distributed, high-temporal-repeat observations that characterize flood dynamics.
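A minimal numpy sketch of the depth and $\partial h/\partial t$ computation, assuming a SAR-derived inundation mask, a water surface elevation raster (e.g., interpolated from flood edges against the terrain), and a co-registered DEM; the study's actual water surface interpolation scheme is not reproduced, and `flood_depth`/`dh_dt` are hypothetical helper names.

```python
import numpy as np

def flood_depth(water_surface_elev, dem, inundation_mask):
    """Depth = water surface elevation minus ground elevation, inside the
    SAR-derived flood mask; dry pixels are set to NaN."""
    depth = np.where(inundation_mask, water_surface_elev - dem, np.nan)
    return np.clip(depth, 0.0, None)   # small negatives are mask/DEM noise

def dh_dt(depth_t0, depth_t1, dt_hours):
    """Per-pixel water level change rate between two SAR acquisitions."""
    return (depth_t1 - depth_t0) / dt_hours
```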
{"title":"Estimation of Flood Inundation and Depth During Hurricane Florence Using Sentinel-1 and UAVSAR Data","authors":"S. Kundu, V. Lakshmi, R. Torres","doi":"10.1002/essoar.10507902.1","DOIUrl":"https://doi.org/10.1002/essoar.10507902.1","url":null,"abstract":"We studied the temporal and spatial changes in flood water elevation and variation in the surface extent due to flooding resulting from Hurricane Florence (September 2018) using the L-band observation from an unmanned aerial vehicle synthetic aperture radar (UAVSAR) and C-band synthetic aperture radar (SAR) sensors on Sentinel-1. The novelty of this study lies in the estimation of the changes in the flood depth during the hurricane and investigating the best method. Overall, flood depths from SAR were observed to be well-correlated with the spatially distributed ground-based observations (<inline-formula> <tex-math notation=\"LaTeX\">$R^{2} = 0.79$ </tex-math></inline-formula>–0.96). The corresponding change in water level (<inline-formula> <tex-math notation=\"LaTeX\">$partial text{h}/partial text{t}$ </tex-math></inline-formula>) also compared well between the remote sensing approach and the ground observations (<inline-formula> <tex-math notation=\"LaTeX\">$R^{2} = 0.90$ </tex-math></inline-formula>). This study highlights the potential use of SAR remote sensing for inundated landscapes (and locations with scarce ground observations), and it emphasizes the need for more frequent SAR observations during flood inundation to provide spatially distributed and high temporal repeat observations of inundation to characterize flood dynamics.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":" ","pages":"1-5"},"PeriodicalIF":4.8,"publicationDate":"2021-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/essoar.10507902.1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45282019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Contrasting YOLOv5, Transformer, and EfficientDet Detectors for Crop Circle Detection in Desert
Pub Date: 2021-06-14 | DOI: 10.1109/LGRS.2021.3085139 | IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1-5
M. L. Mekhalfi, Carlo Nicolò, Y. Bazi, Mohamad Mahmoud Al Rahhal, Norah A. Alsharif, E. Maghayreh
Ongoing discoveries of water reserves have fostered the increasing adoption of crop circles in the desert in several countries. Automatically quantifying and surveying the layout of crop circles in remote areas can be of great use to stakeholders in managing the expansion of farming land. This letter compares the latest deep learning models for crop circle detection and counting, namely the Detection Transformer, EfficientDet, and YOLOv5. To this end, we build two datasets via Google Earth Pro, corresponding to two large crop circle hot spots in Egypt and Saudi Arabia. The images were captured at an altitude of 20 km above the targets. The models are assessed in within-domain and cross-domain scenarios and show promising detection performance and inference speed.
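Comparing detectors of this kind typically reduces to IoU-based matching of predicted and ground-truth boxes. Here is a sketch under common defaults (greedy matching, 0.5 IoU threshold), which is not necessarily the letter's exact evaluation protocol:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def score(preds, gts, thr=0.5):
    """Greedy one-to-one matching; returns (TP, FP, FN) for one image.
    Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    unmatched = list(gts)
    tp = 0
    for p in preds:
        hit = next((g for g in unmatched if iou(p, g) >= thr), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    return tp, len(preds) - tp, len(unmatched)
```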
{"title":"Contrasting YOLOv5, Transformer, and EfficientDet Detectors for Crop Circle Detection in Desert","authors":"M. L. Mekhalfi, Carlo Nicolò, Y. Bazi, Mohamad Mahmoud Al Rahhal, Norah A. Alsharif, E. Maghayreh","doi":"10.1109/LGRS.2021.3085139","DOIUrl":"https://doi.org/10.1109/LGRS.2021.3085139","url":null,"abstract":"Ongoing discoveries of water reserves have fostered an increasing adoption of crop circles in the desert in several countries. Automatically quantifying and surveying the layout of crop circles in remote areas can be of great use for stakeholders in managing the expansion of the farming land. This letter compares latest deep learning models for crop circle detection and counting, namely Detection Transformers, EfficientDet and YOLOv5 are evaluated. To this end, we build two datasets, via Google Earth Pro, corresponding to two large crop circle hot spots in Egypt and Saudi Arabia. The images were drawn at an altitude of 20 km above the targets. The models are assessed in within-domain and cross-domain scenarios, and yielded plausible detection potential and inference response.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"19 1","pages":"1-5"},"PeriodicalIF":4.8,"publicationDate":"2021-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2021.3085139","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62479064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Infrared Small Target Tracking via Gaussian Curvature-Based Compressive Convolution Feature Extraction
Pub Date: 2021-02-12 | DOI: 10.1109/LGRS.2021.3051183 | IEEE Geoscience and Remote Sensing Letters, pp. 1-5
Minjie Wan, Xiaobo Ye, Xiaojie Zhang, Yunkai Xu, G. Gu, Qian Chen
The precision of infrared (IR) small target tracking is severely limited by the lack of texture information and the interference of background clutter. The key to robust tracking is to exploit generic feature representations of IR small targets under different types of background. In this letter, we present a new IR small target tracking method based on compressive convolution feature (CCF) extraction. First, a Gaussian curvature-based feature map is calculated to suppress clutter, so that the contrast between target and background is markedly improved. Then, a three-layer compressive convolutional network, consisting of a simple layer, a compressive layer, and a complex layer, is designed to represent each candidate target by a CCF vector. Based on the proposed feature extraction mechanism, a support vector machine (SVM) classifier with continuous probabilistic output is trained to compute the likelihood probability of each candidate. Finally, long-term tracking of IR small targets is implemented under the framework of an inverse sparse representation-based particle filter. Both qualitative and quantitative experiments on real IR sequences verify that our method achieves more satisfactory performance in terms of precision and robustness than other typical visual trackers.
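The Gaussian curvature map of the image surface follows the standard differential geometry formula; below is a numpy sketch with finite differences standing in for whatever derivative operator the letter uses.

```python
import numpy as np

def gaussian_curvature(img):
    """K = (f_xx * f_yy - f_xy^2) / (1 + f_x^2 + f_y^2)^2 via finite differences.

    Small bright targets produce high-|K| blobs, while slowly varying
    background clutter has near-zero curvature, which is why thresholding
    this map suppresses clutter before feature extraction.
    """
    fy, fx = np.gradient(img.astype(np.float64))   # first derivatives (rows, cols)
    fxy, fxx = np.gradient(fx)                     # second derivatives of f_x
    fyy, _ = np.gradient(fy)                       # second derivative of f_y
    return (fxx * fyy - fxy ** 2) / (1.0 + fx ** 2 + fy ** 2) ** 2
```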
{"title":"Infrared Small Target Tracking via Gaussian Curvature-Based Compressive Convolution Feature Extraction","authors":"Minjie Wan, Xiaobo Ye, Xiaojie Zhang, Yunkai Xu, G. Gu, Qian Chen","doi":"10.1109/LGRS.2021.3051183","DOIUrl":"https://doi.org/10.1109/LGRS.2021.3051183","url":null,"abstract":"The precision of infrared (IR) small target tracking is seriously limited due to lack of texture information and interference of background clutter. The key issue of robust tracking is to exploit generic feature representations of IR small targets under different types of background. In this letter, we present a new IR small target tracking method via compressive convolution feature (CCF) extraction. First, a Gaussian curvature-based feature map is calculated to suppress clutters so that the contrast between target and background can be obviously improved. Then, a three-layer compressive convolutional network, which consists of a simple layer, a compressive layer, and a complex layer, is designed to represent each candidate target by a CCF vector. Based on the proposed mechanism of feature extraction, a support vector machine (SVM) classifier with continuous probabilistic output is trained to compute the likelihood probability of each candidate. Finally, the long-term tracking for IR small target is implemented under the framework of the inverse sparse representation-based particle filter. Both qualitative and quantitative experiments based on real IR sequences verify that our method can achieve more satisfactory performances in terms of precision and robustness compared with other typical visual trackers.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"26 1","pages":"1-5"},"PeriodicalIF":4.8,"publicationDate":"2021-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2021.3051183","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62476742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of the Initial Sea Surface Temperature From the HY-2B Scanning Microwave Radiometer
Pub Date: 2021-01-01 | DOI: 10.1109/LGRS.2020.2968635 | IEEE Geoscience and Remote Sensing Letters, vol. 18, pp. 137-141
Lei Zhang, Hong Yu, Zhenzhan Wang, X. Yin, Liang Yang, Hua-dong Du, Bin Li, Y. Wang, Wu Zhou
Haiyang-2B (HY-2B) is the second marine dynamic environment satellite of China. Sea surface temperature (SST) products from the scanning microwave radiometer (SMR) onboard the HY-2B satellite are evaluated against in situ measurements. Approximately ten months of data, from January 15, 2019 to November 15, 2019, are used for the initial evaluation. The temporal and spatial collocation windows are 30 min and 25 km, respectively, which produce 450 416 matchup pairs between HY-2B/SMR and in situ SSTs. The statistical comparison of the entire dataset shows a mean bias of −0.13 °C (SMR minus buoy) and a corresponding root-mean-square error (RMSE) of 1.06 °C. The time series of the SST differences shows good agreement between HY-2B/SMR and in situ SSTs after June 15, with a mean bias and an RMSE of only 0.09 °C and 0.72 °C, respectively. A three-way error analysis is conducted between the SMR, the Global Precipitation Measurement Microwave Imager (GMI), and in situ SSTs. The individual standard deviations are found to be 0.41 °C for the GMI SST, 0.15 °C for the in situ SST, and 1.03 °C for the SMR SST. The results indicate that the HY-2B/SMR SST products need to be improved for the period from January 15, 2019 to June 15, 2019.
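The three-way (triple collocation) error analysis recovers each sensor's individual error standard deviation from the variances of the pairwise differences, assuming independent errors. A sketch, with the three SST arrays assumed already collocated:

```python
import numpy as np

def three_way_errors(a, b, c):
    """Individual error standard deviations of three collocated SST
    estimates (e.g., SMR, GMI, buoy), assuming independent errors:
        var_A = (V_AB + V_AC - V_BC) / 2, and cyclically for B and C,
    where V_XY is the variance of the difference X - Y."""
    v_ab, v_ac, v_bc = np.var(a - b), np.var(a - c), np.var(b - c)
    var_a = 0.5 * (v_ab + v_ac - v_bc)
    var_b = 0.5 * (v_ab + v_bc - v_ac)
    var_c = 0.5 * (v_ac + v_bc - v_ab)
    # clip tiny negatives caused by sampling noise before the square root
    return tuple(np.sqrt(max(v, 0.0)) for v in (var_a, var_b, var_c))

# The simpler matchup statistics are one-liners:
# bias = np.mean(smr - buoy); rmse = np.sqrt(np.mean((smr - buoy) ** 2))
```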
{"title":"Evaluation of the Initial Sea Surface Temperature From the HY-2B Scanning Microwave Radiometer","authors":"Lei Zhang, Hong Yu, Zhenzhan Wang, X. Yin, Liang Yang, Hua-dong Du, Bin Li, Y. Wang, Wu Zhou","doi":"10.1109/LGRS.2020.2968635","DOIUrl":"https://doi.org/10.1109/LGRS.2020.2968635","url":null,"abstract":"Haiyang-2B (HY-2B) is the second marine dynamic environment satellite of China. Sea surface temperature (SST) products from the scanning microwave radiometer (SMR) onboard HY-2B satellite are evaluated against in situ measurements. Approximately, ten months of data are used for the initial evaluation, from January 15, 2019 to November 15, 2019. The temporal and spatial windows for collocation are 30 min and 25 km, respectively, which produce 450 416 matchup pairs between HY-2B/SMR and in situ SSTs. The statistical comparison of the entire data set shows that the mean bias is −0.13 °C (SMR minus buoy), and the corresponding root-mean-square error (RMSE) is 1.06 °C. Time series of collocations for the SST difference shows that a good agreement is found between HY-2B/SMR and in situ SSTs after June 15, revealing a mean bias and an RMSE of only 0.09 °C and 0.72 °C, respectively. A three-way error analysis is conducted between the SMR, Global Precipitation Measurement Microwave Imager (GMI), and in situ SSTs. Individual standard deviations are found to be 0.41 °C for the GMI SST, 0.15 °C for the in situ SST, and 1.03 °C for the SMR SST. The results indicate that the HY2B/SMR SST products need to be improved during the period from January 15, 2019 to June 15, 2019.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"137-141"},"PeriodicalIF":4.8,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2020.2968635","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62473216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kirsch Direction Template Despeckling Algorithm of High-Resolution SAR Images Based on Structural Information Detection
Pub Date: 2021-01-01 | DOI: 10.1109/LGRS.2020.2966369 | IEEE Geoscience and Remote Sensing Letters, vol. 18, pp. 177-181
S. Hou, Zengguo Sun, Liu Yang, Yunjing Song
To overcome the drawback of traditional Kirsch template despeckling, which uses fixed windows, an improved Kirsch direction template despeckling algorithm based on structural information detection is proposed for high-resolution synthetic aperture radar (SAR) images. First, point targets are detected and preserved in the current region. Second, the window is enlarged adaptively based on the statistical characteristics of the local region. Finally, the resulting window is classified: the mean filter is applied directly if the region is homogeneous; otherwise, the Kirsch template filter is used. By combining point target detection, adaptive windowing, and region classification, the proposed algorithm effectively improves on traditional Kirsch direction template despeckling. Despeckling experiments on simulated and real high-resolution SAR images demonstrate that the proposed algorithm not only sufficiently suppresses speckle in homogeneous and edge regions, but also effectively preserves point targets and edge information, leading to good despeckling results.
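The region classification step can be sketched as a coefficient-of-variation test against the theoretical speckle level, switching between a plain mean filter and a directional average. The adaptive window growth rule is omitted, and the reduced four-direction line set below is an illustrative stand-in for the eight 3x3 Kirsch direction templates.

```python
import numpy as np

def despeckle_pixel(img, r, c, win=3, looks=4):
    """Filter one pixel of an L-look SAR intensity image.

    Homogeneity test: the local coefficient of variation is compared with
    the theoretical speckle value 1/sqrt(L); homogeneous windows get the
    mean filter, structured windows get a minimum-variance directional
    average. Assumes the window fits inside the image (no border handling).
    """
    h = win // 2
    patch = img[r - h:r + h + 1, c - h:c + h + 1].astype(np.float64)
    cov = patch.std() / (patch.mean() + 1e-12)
    if cov <= 1.0 / np.sqrt(looks):              # homogeneous: plain average
        return patch.mean()
    # edge/texture: average along the direction with the least variance
    lines = [patch[h, :], patch[:, h],           # horizontal, vertical
             patch.diagonal(),                   # main diagonal
             np.fliplr(patch).diagonal()]        # anti-diagonal
    return min(lines, key=np.std).mean()
```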
{"title":"Kirsch Direction Template Despeckling Algorithm of High-Resolution SAR Images-Based on Structural Information Detection","authors":"S. Hou, Zengguo Sun, Liu Yang, Yunjing Song","doi":"10.1109/LGRS.2020.2966369","DOIUrl":"https://doi.org/10.1109/LGRS.2020.2966369","url":null,"abstract":"In order to overcome the drawback of the traditional Kirsch template despeckling usings fixed windows, an improved Kirsch direction template despeckling algorithm, based on structural information detection, is proposed for high-resolution synthetic aperture radar (SAR) images. First, the point targets are detected and preserved in the current region. Second, the window is enlarged adaptively based on the statistical characteristics of the local region. Finally, the window finally obtained is classified. The averaged filter is directly adopted if the region is homogeneous, or else the Kirsch template filter is used. Combining point target detection, adaptive windowing, and region classification, altogether the proposed algorithm can effectively improve the performance of the traditional Kirsch direction template despeckling. Despeckling experiments on simulated and real high-resolution SAR images demonstrate that the Kirsch direction template despeckling algorithm based on structural information detection can not only sufficiently suppress speckle in homogenous and edge regions, but also effectively preserve point targets and edge information, leading to good despeckling results.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"177-181"},"PeriodicalIF":4.8,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2020.2966369","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62472413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ship Detection and Direction Finding Based on Time-Frequency Analysis for Compact HF Radar
Pub Date: 2021-01-01 | DOI: 10.1109/LGRS.2020.2967387 | IEEE Geoscience and Remote Sensing Letters, vol. 18, pp. 72-76
Jiajia Cai, Hao Zhou, Weimin Huang, B. Wen
Ship detection at the sea surface is important for monitoring human marine activities. Most existing ship detection methods for high-frequency surface wave radar (HFSWR) are based on peak detection and constant false alarm rate (CFAR) detection and require a coherent integration time (CIT) of several minutes. Over such a long period, however, the target may not be stationary. To account for this nonstationarity, a time-frequency analysis (TFA)-based ship detection and direction finding (DF) method is proposed for HFSWR. Target ridges on the time-frequency representation (TFR) of the echo data are detected first. Next, array snapshots are formed by sampling the extracted ridges and are used to estimate the direction of arrival (DOA). Processing results for radar data collected at Dongshan, Fujian Province, China, show that the proposed method outperforms the CFAR method, with both increased detection rates and decreased DF errors, especially under relatively low signal-to-noise ratio (SNR) scenarios.
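A compact sketch of the TFR ridge extraction and snapshot sampling using scipy's STFT; ridge tracking is reduced to a per-frame argmax, and the downstream DOA estimator (e.g., MUSIC on the sampled snapshots) is left out. The function name and the single-ridge assumption are illustrative simplifications.

```python
import numpy as np
from scipy.signal import stft

def ridge_snapshots(echo, fs, nperseg=256):
    """Extract the dominant time-frequency ridge of one range cell and
    sample the multi-antenna data along it to form array snapshots.

    echo : (n_antennas, n_samples) complex baseband data of one range cell.
    Returns the ridge frequencies per frame and an (n_antennas, n_frames)
    snapshot matrix for a DOA estimator.
    """
    f, t, Z = stft(echo, fs=fs, nperseg=nperseg,
                   return_onesided=False)          # Z: (ant, freq, time)
    power = (np.abs(Z) ** 2).sum(axis=0)           # noncoherent sum over antennas
    ridge = power.argmax(axis=0)                   # strongest freq bin per frame
    snapshots = Z[:, ridge, np.arange(len(t))]     # sample each frame on its ridge
    return f[ridge], snapshots
```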
{"title":"Ship Detection and Direction Finding Based on Time-Frequency Analysis for Compact HF Radar","authors":"Jiajia Cai, Hao Zhou, Weimin Huang, B. Wen","doi":"10.1109/LGRS.2020.2967387","DOIUrl":"https://doi.org/10.1109/LGRS.2020.2967387","url":null,"abstract":"Ship detection at the sea surface is important for improving human marine activities. Most existing ship detection methods for high-frequency surface wave radar (HFSWR) are based on peak and constant false alarm rate (CFAR) detection and require a coherent integration time (CIT) of several minutes. However, in such a long period, the target may not be stationary. To account for the nonstationary property, a time-frequency analysis (TFA)-based ship detection and direction finding (DF) method is proposed for HFSWR. Target ridges on the TF representation (TFR) of the echo data are detected first. Next, array snapshots are formed by sampling the extracted ridges and are used to estimate the direction of arrival (DOA). The processing results of the radar data collected at Dongshan, Fujian Province, China, show that the proposed method outperforms the CFAR method with both increased detection rates and decreased DF errors, especially under relatively low signal-to-noise ratio (SNR) scenarios.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"72-76"},"PeriodicalIF":4.8,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2020.2967387","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62473135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object-Oriented Mangrove Species Classification Using Hyperspectral Data and 3-D Siamese Residual Network
Pub Date: 2020-12-01 | DOI: 10.1109/LGRS.2019.2962723 | IEEE Geoscience and Remote Sensing Letters, vol. 17, pp. 2150-2154
Zhi He, Q. Shi, Kai Liu, Jingjing Cao, Wen Zhan, B. Cao
Mangrove species classification is of particular importance for coastal conservation and restoration. However, it is challenging to distinguish species-level differences with limited training data. In this letter, we propose an object-oriented classification method for mangrove forests using hyperspectral images (HSIs) and a 3-D Siamese residual network. First, superpixel segmentation is used to obtain objects of various shapes and scales. Second, 3-D patches of each object are extracted from the original HSI, and the patches containing training samples are used to train the network in a pairwise manner. 3-D spatial pyramid pooling (3-D-SPP) is added to the network to extract features at multiple scales. Finally, the abstract features of test samples are computed by the trained network, and labels are assigned by a nearest neighbor classifier in the metric space. Experiments on real mangrove hyperspectral data demonstrate the effectiveness of the proposed method for mangrove species classification.
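A compact PyTorch sketch of the pairwise (Siamese) training objective and the nearest neighbor labeling in the learned metric space; the 3-D residual blocks and the 3-D-SPP layer are abstracted into a generic embedding network `net`, and the contrastive loss is one common choice for pairwise training, not necessarily the letter's exact objective.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, same, margin=1.0):
    """Pull embeddings of same-species patch pairs together; push
    different-species pairs at least `margin` apart.
    same : float tensor of 1s (same class) and 0s (different class)."""
    d = F.pairwise_distance(z1, z2)
    return torch.mean(same * d ** 2 +
                      (1 - same) * torch.clamp(margin - d, min=0) ** 2)

def nn_classify(net, train_patches, train_labels, test_patches):
    """Label each test patch by its nearest training embedding."""
    with torch.no_grad():
        zt = net(train_patches)           # (n_train, d) embeddings
        zq = net(test_patches)            # (n_test, d)
        dist = torch.cdist(zq, zt)        # pairwise Euclidean distances
        return train_labels[dist.argmin(dim=1)]
```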
{"title":"Object-Oriented Mangrove Species Classification Using Hyperspectral Data and 3-D Siamese Residual Network","authors":"Zhi He, Q. Shi, Kai Liu, Jingjing Cao, Wen Zhan, B. Cao","doi":"10.1109/LGRS.2019.2962723","DOIUrl":"https://doi.org/10.1109/LGRS.2019.2962723","url":null,"abstract":"Mangrove species classification is of particular importance for coastal conservation and restoration. However, it is challenging to distinguish species-level differences with limited training data. In this letter, we propose an object-oriented classification method for mangrove forests by using the hyperspectral image (HSI) and the 3-D Siamese residual network. First, superpixel segmentation is utilized to obtain objects with various shapes and scales. Second, 3-D patches of each object are extracted from the original HSI, and those patches containing training samples are adopted to pairwise train the network. The 3-D spatial pyramid pooling (3-D-SPP) is added in the network to extract features in multiple scales. Finally, the abstract features of test samples are learned by the trained network, and the labels are determined by the nearest neighbor classifier within the metric space. Experiments on real mangrove hyperspectral data demonstrate the effectiveness of the proposed method in species classification of mangroves.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"17 1","pages":"2150-2154"},"PeriodicalIF":4.8,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2019.2962723","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45575045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}