Pub Date: 2018-08-01 | DOI: 10.1109/PRRS.2018.8486287
Song Cui, Yanfei Zhong
Automatic matching of multi-modal remote sensing images remains a challenging task in remote sensing image analysis due to significant non-linear radiometric differences between these images. This paper introduces the phase congruency model, which is invariant to illumination and contrast, for image matching, and extends it to a novel image registration method named multi-scale phase congruency (MS-PC). The Euclidean distance between MS-PC descriptors is used as the similarity metric to establish correspondences. The proposed method is evaluated on four pairs of multi-modal remote sensing images. The experimental results show that MS-PC is more robust to radiometric differences between images and outperforms two popular methods (SIFT and SAR-SIFT) in both registration accuracy and number of tie points.
Title: Multi-Modal Remote Sensing Image Registration Based on Multi-Scale Phase Congruency (2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS))
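The correspondence step described in the abstract — Euclidean distance between descriptors as the similarity metric — can be sketched as a nearest-neighbour search. This is a minimal illustration, not the authors' MS-PC descriptor itself; the toy descriptors are invented for the example.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist=np.inf):
    """Greedy nearest-neighbour matching by Euclidean distance.

    desc_a: (n, d) descriptors from image A; desc_b: (m, d) from image B.
    Returns (i, j) index pairs linking each A-descriptor to its
    closest B-descriptor within max_dist.
    """
    # Pairwise Euclidean distances, shape (n, m).
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i in range(dists.shape[0]):
        j = int(np.argmin(dists[i]))
        if dists[i, j] <= max_dist:
            matches.append((i, j))
    return matches

# Toy example: three 4-dimensional descriptors per image.
a = np.array([[0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2]], dtype=float)
b = np.array([[2.1, 2, 2, 2], [0.1, 0, 0, 0], [1, 1.1, 1, 1]], dtype=float)
pairs = match_descriptors(a, b)   # -> [(0, 1), (1, 2), (2, 0)]
```

In practice a ratio test or cross-check would typically follow to reject ambiguous matches before estimating the transformation.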
Pub Date: 2018-08-01 | DOI: 10.1109/PRRS.2018.8486322
E. Michaelsen, U. Soergel
In man-made structures, regularities and repetitions prevail. In building facades in particular, lattices are common, in which windows and other elements are repeated in vertical columns as well as in horizontal rows. In very-high-resolution space-borne radar images, such lattices appear saliently; even untrained observers see the structure instantaneously. However, automatic perceptual grouping is rarely attempted. This contribution applies a new lattice grouping method to such data. The utilization of knowledge about the particular mapping process of such radar data is distinguished from the use of Gestalt laws, the latter being universally applicable to all kinds of pictorial data. An example with so-called permanent scatterers in the city of Berlin shows what can be achieved with automatic perceptual grouping alone and what can be gained using domain knowledge. Keywords: perceptual grouping, SAR, permanent scatterers, façade recognition
Title: Reconstructing Lattices from Permanent Scatterers on Facades
Pub Date: 2018-08-01 | DOI: 10.1109/PRRS.2018.8486395
M. Rezaee, Yun Zhang, Rakesh K. Mishra, Fei Tong, Hengjian Tong
Acquiring information about forest stands, such as individual tree species, is crucial for monitoring forests. To date, such information has been assessed by human interpreters using airborne or Unmanned Aerial Vehicle (UAV) imagery, which is time-consuming and costly. Recent advances in remote sensing image acquisition, such as WorldView-3, have increased the spatial resolution to 30 cm and the spectral resolution to 16 bands. This advancement has significantly increased the potential for Individual Tree Species Detection (ITSD). To use single-source WorldView-3 images, our proposed method first segments the image to delineate trees and then detects tree species using a VGG-16 network. We developed a pipeline that feeds the deep CNN with information from all eight visible and near-infrared bands and trained it. The result is compared with two state-of-the-art ensemble classifiers, namely Random Forest (RF) and Gradient Boosting (GB). Results demonstrate that the VGG-16 outperforms all other methods, reaching an accuracy of 92.13%.
Title: Using a VGG-16 Network for Individual Tree Species Detection with an Object-Based Approach
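Feeding all eight bands to a CNN requires the raw digital numbers to be scaled per band first. A minimal sketch of that preprocessing step, with an invented 64x64 patch standing in for a segmented tree crown (the paper does not specify its normalisation):

```python
import numpy as np

def normalize_bands(patch):
    """Per-band min-max normalisation of a multispectral patch.

    patch: (H, W, B) array, e.g. B = 8 WorldView-3 VNIR bands.
    Returns values scaled to [0, 1] per band, suitable as CNN input.
    """
    mins = patch.min(axis=(0, 1), keepdims=True)
    maxs = patch.max(axis=(0, 1), keepdims=True)
    return (patch - mins) / np.maximum(maxs - mins, 1e-8)

# Hypothetical 64x64 patch with 8 bands of raw digital numbers.
rng = np.random.default_rng(0)
patch = rng.integers(0, 2048, size=(64, 64, 8)).astype(float)
x = normalize_bands(patch)
```

The first convolutional layer of the VGG-16 would then be configured with 8 input channels instead of the usual 3.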
Pub Date: 2018-08-01 | DOI: 10.1109/PRRS.2018.8486164
Mengmeng Zhang, Wei Li, Xueling Wei, Xiang Li
Currently, how to efficiently exploit useful information from multi-source remote sensing data for better Earth observation is an interesting but challenging problem. In this paper, we propose a collaborative classification framework for hyperspectral image (HSI) and Light Detection and Ranging (LiDAR) data via an image-to-image convolutional neural network (CNN). An image-to-image mapping learns a representation from the input source (i.e., HSI) to the output source (i.e., LiDAR). The extracted features are thus expected to capture characteristics of both HSI and LiDAR data, and collaborative classification is implemented by integrating hidden layers of the deep CNN. Experimental results on two real remote sensing data sets demonstrate the effectiveness of the proposed framework.
Title: Collaborative Classification of Hyperspectral and LIDAR Data Using Unsupervised Image-to-Image CNN
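The core idea — reading fused features off the hidden layers of a network trained to map HSI toward LiDAR — can be illustrated with a toy fully connected layer. Everything here (sizes, random weights) is invented for illustration; the paper's actual network is a deep image-to-image CNN.

```python
import numpy as np

def hidden_features(x, W1, b1):
    """Return hidden-layer activations as fused features.

    In a trained HSI-to-LiDAR network, these activations would
    encode both spectral (input) and elevation (target) structure.
    """
    return np.maximum(0.0, x @ W1 + b1)   # ReLU hidden representation

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 16))       # 5 pixels, 16 toy HSI bands
W1 = rng.normal(size=(16, 8))      # maps spectra toward a LiDAR-like output
b1 = np.zeros(8)
feats = hidden_features(x, W1, b1) # (5, 8) fused feature vectors
```

A downstream classifier would then be trained on `feats` rather than on the raw spectra.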
Pub Date: 2018-08-01 | DOI: 10.1109/PRRS.2018.8486193
Pel Pengcheng, Shi Yue, Wan ChengBo, Ma Xinming, Guo Wa, Qiao Rongbo
Since the SVM is sensitive to noise and outliers in the training set, a new SVM algorithm based on an affinity Grey-Sigmoid kernel is proposed in this paper. The cluster membership is defined not only by the distance from the cluster center but also by the affinity among samples. The affinity among samples is measured by the minimum hypersphere containing the maximum number of samples, and the grey degree of each sample is then defined by its position in the hypersphere. Compared with the SVM based on the traditional Sigmoid kernel, experimental results show that the Grey-Sigmoid kernel is more robust and efficient.
Title: The UAV Image Classification Method Based on the Grey-Sigmoid Kernel Function Support Vector Machine
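The membership idea above — down-weighting samples far from the cluster centre so that outliers influence the SVM less — can be sketched as follows. The sigmoid-shaped decay and the centroid-based sphere radius are assumed forms for illustration, not the paper's exact definitions.

```python
import numpy as np

def membership_weights(X, beta=1.0):
    """Distance-based membership: samples far from the cluster centre
    get lower weight, reducing the influence of noise and outliers.

    X: (n, d) training samples of one class.
    Returns weights in (0, 1), lowest for the most distant sample.
    """
    centre = X.mean(axis=0)
    d = np.linalg.norm(X - centre, axis=1)
    r = d.max() + 1e-8          # radius of an enclosing sphere (approx.)
    # Sigmoid-like decay in the normalised distance d/r (assumed form).
    return 1.0 / (1.0 + np.exp(beta * (d / r - 0.5)))

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])  # last = outlier
w = membership_weights(X)       # outlier receives the smallest weight
```

These weights would then scale each sample's slack penalty in the SVM objective, as in fuzzy SVM formulations.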
Pub Date: 2018-08-01 | DOI: 10.1109/PRRS.2018.8486304
Meng Zhang, L. Hong
The spectral and spatial resolution of hyperspectral images is continuously improving, providing rich information for interpreting remote sensing images. How to improve image classification accuracy has become the focus of many studies. Deep learning can extract discriminative high-level abstract features for image classification, and some interesting results have been achieved in image processing. However, when deep learning is applied to the classification of hyperspectral remote sensing images, spectral-based classification methods lack spatial and scale information, while image patch-based classification methods ignore the rich spectral information provided by hyperspectral images. In this study, a multi-scale feature fusion hyperspectral image classification method based on deep learning is proposed. Firstly, multi-scale features are obtained by multi-scale segmentation. Then the multi-scale features are input into a convolutional neural network to extract high-level features. Finally, the high-level features are used for classification. Experimental results show that classification with the fused multi-scale features outperforms classification with single-scale features or regional features alone.
Title: Deep Learning Integrated with Multiscale Pixel and Object Features for Hyperspectral Image Classification
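The multi-scale feature extraction step can be sketched by pooling spectra over windows of several sizes around a pixel and concatenating the results. This window-based pooling is a simplified stand-in for the paper's multi-scale segmentation; the cube and window sizes are invented.

```python
import numpy as np

def multiscale_features(img, pixel, scales=(3, 7, 15)):
    """Concatenate mean spectra over windows of several sizes
    around one pixel, giving that pixel spatial context at
    multiple scales.

    img: (H, W, B) hyperspectral cube; pixel: (row, col).
    Returns a vector of length B * len(scales).
    """
    r, c = pixel
    feats = []
    for s in scales:
        h = s // 2
        win = img[max(r - h, 0):r + h + 1, max(c - h, 0):c + h + 1]
        feats.append(win.mean(axis=(0, 1)))   # one B-vector per scale
    return np.concatenate(feats)

cube = np.arange(20 * 20 * 4, dtype=float).reshape(20, 20, 4)
f = multiscale_features(cube, (10, 10))       # length 4 * 3 = 12
```

Vectors of this form, one per pixel, would then be fed to the CNN for high-level feature extraction and classification.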
Pub Date: 2018-08-01 | DOI: 10.1109/PRRS.2018.8486181
Yanming Chen, Xiaoqiang Liu, Mengru Yao, Liang Cheng, Manchun Li
Light Detection and Ranging (LiDAR), as an active remote sensing technology, can be mounted on satellites, aircraft, vehicles, tripods, and other platforms to efficiently acquire three-dimensional information about the Earth's surface. However, it is difficult to obtain omnidirectional three-dimensional information of the Earth's surface using a LiDAR system from a single platform, so the integration of multi-platform LiDAR data, in which registration is a core step, has become an important topic in geospatial information processing. In this paper, an iterative closest common ground points registration method is proposed. Firstly, candidate common ground points of the mobile and airborne LiDAR data are extracted. An adaptive octree structure is then used to thin the LiDAR ground points so that the mobile and airborne ground points have the same point density. Finally, the fine registration parameters are calculated by the iterative closest point (ICP) method, with the thinned ground points from the two sources as input. The innovation of this method is that the common ground points and the adaptive octree structure are used to optimize the input to ICP, which overcomes the registration difficulty caused by the different perspectives and resolutions of mobile and airborne LiDAR. The proposed method was tested and can effectively achieve fine registration of mobile and airborne LiDAR data, making the façade points acquired by mobile LiDAR and the roof points acquired by airborne LiDAR fit together better.
Title: Fine Registration of Mobile and Airborne LiDAR Data Based on Common Ground Points
Pub Date: 2018-08-01 | DOI: 10.1109/PRRS.2018.8486325
L. Li, C. Yu, T. Sun, Z. Han, X. Tang
For high-resolution borehole images obtained by a digital panoramic borehole camera system, a method for recognizing soil layers based on color features is proposed. Owing to the obvious color difference between soil layers and common rock layers, a soil layer detection model based on the HSV color space is established, and a binarized image of the soil layer is obtained using this model. Secondly, the binary image is filtered to suppress noise. Then the binarized image of the soil layer is segmented, and the pixel density in each segment is calculated to determine the depth, area, and direction of the soil layer, so that soil layers in the digital borehole image can be identified. Verification of this method on many actual borehole images, compared against the corresponding borehole radar images, illustrates that it can identify all soil layers throughout a whole borehole digital optical image automatically and quickly. It provides a new, reliable method for the automatic identification of borehole structural planes in engineering applications.
Title: Automatic Identification of Soil Layer from Borehole Digital Optical Image and GPR Based on Color Features
Pub Date: 2018-08-01 | DOI: 10.1109/PRRS.2018.8486195
Qingling Jia, Xue Wan, Baoqin Hei, Shengyang Li
Recent work has shown that convolutional neural networks can successfully solve stereo matching problems in man-made scenes containing buildings, roads, and so on. However, whether they are suitable for remote sensing stereo image matching in featureless areas, such as the lunar surface, is uncertain. This paper exploits the ability of DispNet, an end-to-end disparity estimation algorithm based on a convolutional neural network, for image matching in featureless lunar surface areas. Experiments using image pairs from the NASA Polar Stereo Dataset demonstrate that DispNet outperforms three traditional stereo matching methods (SGM, BM, and SAD) in matching accuracy, disparity continuity, and speed. It thus has potential for future planetary exploration tasks such as visual odometry for rover navigation and image matching for precision landing.
Title: DispNet Based Stereo Matching for Planetary Scene Depth Estimation Using Remote Sensing Images
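Of the baselines named above, SAD is the simplest to sketch: for each left-image pixel, try each candidate shift of the right image and keep the shift with the lowest sum of absolute differences. This minimal version uses a 1x1 window on invented synthetic images; real implementations aggregate the cost over a neighbourhood.

```python
import numpy as np

def sad_disparity(left, right, max_disp=4):
    """Per-pixel disparity by sum-of-absolute-differences (SAD).

    left, right: (H, W) grayscale arrays of a rectified pair.
    Returns (H, W) integer disparities (left pixel x matches
    right pixel x - d).
    """
    H, W = left.shape
    disp = np.zeros((H, W), dtype=int)
    best = np.full((H, W), np.inf)
    for d in range(max_disp + 1):
        shifted = np.full((H, W), np.inf)
        shifted[:, d:] = right[:, :W - d]        # shift right image by d
        cost = np.abs(left - shifted)            # SAD with a 1x1 window
        better = cost < best
        disp[better] = d
        best[better] = cost[better]
    return disp

right = np.tile(np.arange(8, dtype=float), (4, 1))
left = np.roll(right, 2, axis=1)    # left is right shifted by 2 pixels
d = sad_disparity(left, right)      # interior columns recover disparity 2
```

DispNet replaces this hand-crafted cost search with a learned end-to-end regression, which is what gives it smoother disparities in low-texture areas.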
Pub Date: 2018-08-01 | DOI: 10.1109/PRRS.2018.8486182
Shaokun Zhang, Zhiyou Hong, Yiping Chen, Zejian Kang, Zhipeng Luo, Jonathan Li
Underground cavities can cause ground collapse, which poses a serious threat to people's safety and property, so it is of great significance to inspect for underground cavities beneath urban streets and road subgrades. In practical engineering applications, ground penetrating radar (GPR) has shown promise for the detection of underground cavities. In this paper, we propose a novel encoding-based back projection (EBP) algorithm to detect underground holes. The proposed method has an inherent filtering function and avoids trailing artifacts, which makes target localization more accurate. The experiments use simulation data generated with GPR numerical simulation software (gprMax) and measured data collected with the Latvia radar system. The results demonstrate that the proposed method has superior performance.
Title: An Encoding-Based Back Projection Algorithm for Underground Holes Detection via Ground Penetrating Radar
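The classical delay-and-sum back projection that EBP builds on can be sketched as follows: each image pixel accumulates the trace samples whose two-way travel time matches the antenna-to-pixel distance. The encoding step of EBP itself is not reproduced here, and the geometry, speed, and synthetic point target are all invented for the example.

```python
import numpy as np

def back_project(traces, antenna_x, t_axis, grid_x, grid_z, v=0.1):
    """Plain delay-and-sum back projection for GPR B-scans.

    traces: (n_ant, n_t) A-scans; antenna_x: (n_ant,) antenna positions;
    t_axis: (n_t,) uniformly sampled two-way travel times;
    grid_x, grid_z: image pixel coordinates; v: wave speed (assumed units).
    """
    dt = t_axis[1] - t_axis[0]
    img = np.zeros((len(grid_z), len(grid_x)))
    for zi, z in enumerate(grid_z):
        for xi, x in enumerate(grid_x):
            for ai, ax in enumerate(antenna_x):
                t = 2.0 * np.hypot(x - ax, z) / v   # two-way delay
                ti = int(round((t - t_axis[0]) / dt))
                if 0 <= ti < traces.shape[1]:
                    img[zi, xi] += traces[ai, ti]
    return img

# Synthetic point target at (x, z) = (1.0, 0.5) seen by 3 antennas.
v = 0.1
ant = np.array([0.0, 1.0, 2.0])
t_axis = np.arange(0.0, 60.0, 0.5)
traces = np.zeros((3, len(t_axis)))
for ai, ax in enumerate(ant):
    t = 2.0 * np.hypot(1.0 - ax, 0.5) / v
    traces[ai, int(round(t / 0.5))] = 1.0
grid_x = np.array([0.5, 1.0, 1.5])
grid_z = np.array([0.25, 0.5, 0.75])
img = back_project(traces, ant, t_axis, grid_x, grid_z, v=v)
# The brightest pixel coincides with the true target position.
```

The trailing artifacts the abstract mentions arise because every pixel on a constant-delay hyperbola receives the same contribution; EBP's encoding is presented as a way to suppress that smearing.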