2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS): Latest Publications

Multi-Modal Remote Sensing Image Registration Based on Multi-Scale Phase Congruency
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486287
Song Cui, Yanfei Zhong
Automatic matching of multi-modal remote sensing images remains a challenging task in remote sensing image analysis because of the significant non-linear radiometric differences between such images. This paper introduces the phase congruency model, which is invariant to illumination and contrast, for image matching, and extends it into a novel image registration method named multi-scale phase congruency (MS-PC). The Euclidean distance between MS-PC descriptors is used as the similarity metric to establish correspondences. The proposed method is evaluated on four pairs of multi-modal remote sensing images. The experimental results show that MS-PC is more robust to radiometric differences between images and outperforms two popular methods (SIFT and SAR-SIFT) in both registration accuracy and the number of tie points.
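The correspondence step lends itself to a short illustration. The sketch below is not the authors' implementation; it only shows nearest-neighbour matching of fixed-length descriptors by Euclidean distance with a ratio test, and the random arrays merely stand in for MS-PC descriptors extracted from an optical/SAR pair.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with a Lowe-style ratio test."""
    # Pairwise Euclidean distances between the two descriptor sets.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        nearest = np.argsort(row)[:2]
        # Accept only unambiguous correspondences.
        if row[nearest[0]] < ratio * row[nearest[1]]:
            matches.append((i, int(nearest[0])))
    return matches

# Toy data standing in for MS-PC descriptors from an image pair.
rng = np.random.default_rng(0)
desc_optical = rng.normal(size=(40, 64))
desc_sar = desc_optical + 0.05 * rng.normal(size=(40, 64))
print(len(match_descriptors(desc_optical, desc_sar)), "tie points")
```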
Cited by: 6
Reconstructing Lattices from Permanent Scatterers on Facades
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486322
E. Michaelsen, U. Soergel
Regularities and repetitions prevail in man-made structures. In building facades in particular, lattices are common, in which windows and other elements repeat in vertical columns as well as in horizontal rows. In very-high-resolution space-borne radar images such lattices appear saliently; even untrained subjects see the structure instantaneously. However, automatic perceptual grouping is rarely attempted. This contribution applies a new lattice grouping method to such data. The use of knowledge about the particular mapping process of such radar data is distinguished from the use of Gestalt laws, which apply universally to all kinds of pictorial data. An example with so-called permanent scatterers in the city of Berlin shows what can be achieved with automatic perceptual grouping alone and what can be gained by using domain knowledge. Keywords: perceptual grouping, SAR, permanent scatterers, façade recognition
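As a toy illustration of lattice grouping (not the paper's method, which combines Gestalt grouping with knowledge of the SAR mapping process), the sketch below clusters simulated facade scatterers into rows and columns purely from their coordinates.

```python
import numpy as np

def group_into_lines(coords, tol=1.5):
    """Group 1-D coordinates into clusters ("rows" or "columns") within tol."""
    order = np.argsort(coords)
    groups, current = [], [order[0]]
    for idx in order[1:]:
        if coords[idx] - coords[current[-1]] <= tol:
            current.append(idx)
        else:
            groups.append(current)
            current = [idx]
    groups.append(current)
    return groups

# Toy facade: a 4 x 6 lattice of scatterers with small positional noise.
rng = np.random.default_rng(1)
grid = np.array([(x, y) for y in range(4) for x in range(6)], dtype=float) * 5.0
pts = grid + 0.3 * rng.normal(size=grid.shape)
rows = group_into_lines(pts[:, 1])   # horizontal rows share a y coordinate
cols = group_into_lines(pts[:, 0])   # vertical columns share an x coordinate
print(len(rows), len(cols))          # expect 4 rows and 6 columns
```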
Cited by: 1
Using a VGG-16 Network for Individual Tree Species Detection with an Object-Based Approach
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486395
M. Rezaee, Yun Zhang, Rakesh K. Mishra, Fei Tong, Hengjian Tong
Acquiring information about forest stands, such as individual tree species, is crucial for monitoring forests. To date, such information has been assessed by human interpreters using airborne or Unmanned Aerial Vehicle (UAV) imagery, which is time- and cost-consuming. Recent advances in remote sensing image acquisition, such as WorldView-3, have increased the spatial resolution to 30 cm and the spectral resolution to 16 bands. These advances have significantly increased the potential for Individual Tree Species Detection (ITSD). To use single-source WorldView-3 images, our proposed method first segments the image to delineate trees and then classifies the trees with a VGG-16 network. We developed a pipeline that feeds the deep CNN with the information from all eight visible and near-infrared bands and trained it. The result is compared with two state-of-the-art ensemble classifiers, namely Random Forest (RF) and Gradient Boosting (GB). Results demonstrate that the VGG-16 outperforms all the other methods, reaching an accuracy of about 92.13%.
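A minimal sketch of the kind of network adaptation the abstract implies, assuming a recent torchvision: a stock VGG-16 with its first convolution widened to the eight WorldView-3 bands and its last layer resized to the number of species. The species count and the input chips are placeholders, not the authors' setup.

```python
import torch
import torch.nn as nn
from torchvision import models

n_bands, n_species = 8, 5                 # n_species is an arbitrary placeholder
net = models.vgg16(weights=None)          # plain VGG-16, no pretrained weights
# Accept 8 spectral bands instead of 3 RGB channels.
net.features[0] = nn.Conv2d(n_bands, 64, kernel_size=3, padding=1)
# One output unit per tree species.
net.classifier[6] = nn.Linear(4096, n_species)

# One image chip per delineated tree crown (dummy values).
chips = torch.randn(4, n_bands, 224, 224)
logits = net(chips)
print(logits.shape)                       # torch.Size([4, 5])
```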
Cited by: 18
Collaborative Classification of Hyperspectral and LIDAR Data Using Unsupervised Image-to-Image CNN
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486164
Mengmeng Zhang, Wei Li, Xueling Wei, Xiang Li
How to efficiently exploit useful information from multi-source remote sensing data for better Earth observation is currently an interesting but challenging problem. In this paper, we propose a collaborative classification framework for hyperspectral image (HSI) and Light Detection and Ranging (LiDAR) data based on an image-to-image convolutional neural network (CNN). The network learns an image-to-image mapping, i.e., a representation from the input source (HSI) to the output source (LiDAR). The extracted features are therefore expected to capture characteristics of both HSI and LiDAR data, and collaborative classification is implemented by integrating the hidden layers of the deep CNN. Experimental results on two real remote sensing data sets demonstrate the effectiveness of the proposed framework.
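A compact PyTorch sketch of the image-to-image idea, with an invented toy architecture rather than the network used in the paper: the model is trained to predict a LiDAR-like raster from HSI input, and its hidden features, which then reflect both sources, could be passed on to a classifier.

```python
import torch
import torch.nn as nn

class HSI2LiDAR(nn.Module):
    """Tiny image-to-image CNN: HSI bands in, one LiDAR-like band out."""
    def __init__(self, n_bands=100):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, x):
        feats = self.encoder(x)            # hidden features reusable for classification
        return self.decoder(feats), feats

model = HSI2LiDAR(n_bands=100)
hsi = torch.randn(2, 100, 64, 64)          # dummy hyperspectral patches
lidar = torch.randn(2, 1, 64, 64)          # dummy co-registered LiDAR rasters
pred, feats = model(hsi)
loss = nn.functional.mse_loss(pred, lidar) # no class labels needed for this step
loss.backward()
print(pred.shape, feats.shape)
```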
Cited by: 3
The UAV Image Classification Method Based on the Grey-Sigmoid Kernel Function Support Vector Machine
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486193
Pel Pengcheng, Shi Yue, Wan ChengBo, Ma Xinming, Guo Wa, Qiao Rongbo
Since the SVM is sensitive to noise and outliers in the training set, a new SVM algorithm based on an affinity Grey-Sigmoid kernel is proposed in this paper. Cluster membership is defined not only by the distance from the cluster center but also by the affinity among samples. The affinity among samples is measured by the minimum hypersphere containing the maximum number of samples, and the grey degree of each sample is then defined by its position within this hypersphere. Compared with the SVM based on the traditional Sigmoid kernel, experimental results show that the Grey-Sigmoid kernel is more robust and efficient.
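The sketch below shows only the general mechanism, not the paper's Grey-Sigmoid kernel: scikit-learn's SVC accepts a callable kernel, so a sigmoid (tanh) kernel can be plugged in, and per-sample weights, here a crude distance-to-centroid stand-in for grey degrees derived from a minimum enclosing hypersphere, down-weight likely outliers.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

def sigmoid_kernel(X, Y, gamma=0.1, coef0=0.0):
    """Plain sigmoid (tanh) kernel matrix between two sample sets."""
    return np.tanh(gamma * X @ Y.T + coef0)

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Crude stand-in for a grey degree: samples far from the data centroid
# (more likely to be noise or outliers) receive a smaller weight.
dist = np.linalg.norm(X - X.mean(axis=0), axis=1)
grey_weight = 1.0 / (1.0 + dist / dist.mean())

clf = SVC(kernel=sigmoid_kernel)
clf.fit(X, y, sample_weight=grey_weight)
print("training accuracy:", clf.score(X, y))
```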
Cited by: 0
Deep Learning Integrated with Multiscale Pixel and Object Features for Hyperspectral Image Classification
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486304
Meng Zhang, L. Hong
The spectral and spatial resolution of hyperspectral images is continuously improving, providing rich information for interpreting remote sensing imagery, and improving image classification accuracy has become the focus of many studies. Deep learning can extract discriminative high-level abstract features for image classification tasks, and some interesting results have been obtained in image processing. However, when deep learning is applied to the classification of hyperspectral remote sensing images, spectrum-based classification methods lack spatial and scale information, while image patch-based methods ignore the rich spectral information provided by hyperspectral images. In this study, a multi-scale feature fusion hyperspectral image classification method based on deep learning is proposed. Firstly, multi-scale features are obtained by multi-scale segmentation. These features are then fed into a convolutional neural network to extract high-level features, which are finally used for classification. Experimental results show that classification with the fused multi-scale features outperforms classification with single-scale features or regional features alone.
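To make the fusion idea concrete, here is a NumPy sketch in which mean spectra over windows of increasing size stand in for the object features from multi-scale segmentation; the per-pixel spectrum and the multi-scale context are simply concatenated before being handed to a classifier. The paper itself uses segmentation and a CNN, so this is only an illustrative stand-in.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Dummy hyperspectral cube: 64 x 64 pixels, 30 bands.
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 30)).astype(np.float32)

# Pixel feature: the spectrum itself. "Object" features at several scales are
# approximated here by mean spectra over windows of increasing size.
scales = (3, 7, 15)
features = [cube] + [uniform_filter(cube, size=(s, s, 1)) for s in scales]
fused = np.concatenate(features, axis=-1)   # per-pixel fused feature vector
print(fused.shape)                          # (64, 64, 120)
```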
Cited by: 2
Fine Registration of Mobile and Airborne LiDAR Data Based on Common Ground Points
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486181
Yanming Chen, Xiaoqiang Liu, Mengru Yao, Liang Cheng, Manchun Li
Light Detection and Ranging (LiDAR), as an active remote sensing technology, can be mounted on satellites, aircraft, vehicles, tripods and other platforms to acquire three-dimensional information about the Earth's surface efficiently. However, it is difficult to obtain omnidirectional three-dimensional information of the Earth's surface with a LiDAR system on a single platform, so the integration of multi-platform LiDAR data, in which data registration is a core step, has become an important topic in geospatial information processing. In this paper, an iterative closest common ground points registration method is proposed. Firstly, the possible common ground points of the mobile and airborne LiDAR data are extracted. Then an adaptive octree structure is used to thin the LiDAR ground points so that the mobile and airborne ground points have the same point density. Finally, the fine registration parameters are calculated with the iterative closest point (ICP) method, taking the thinned ground points from the two sources as input. The innovation of this method is that the common ground points and the adaptive octree structure are used to optimize the input data of ICP, which overcomes the registration difficulty caused by the different perspectives and resolutions of mobile and airborne LiDAR. Tests show that the proposed method can effectively achieve fine registration of mobile and airborne LiDAR data and makes the façade points acquired by mobile LiDAR and the roof points acquired by airborne LiDAR fit together better.
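The sketch below illustrates the two main ingredients in simplified form, density equalization by thinning and point-to-point ICP on ground points; it uses a plain voxel grid instead of the paper's adaptive octree and simulated terrain instead of real LiDAR.

```python
import numpy as np
from scipy.spatial import cKDTree

def voxel_thin(points, voxel=0.5):
    """Keep one point per voxel so both clouds end up with a similar density."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[idx]

def icp_align(src, dst, iters=25):
    """Point-to-point ICP: iteratively move src onto dst via closest points."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, nn = tree.query(cur)            # closest common (ground) points
        q = dst[nn]
        pc, qc = cur.mean(0), q.mean(0)
        U, _, Vt = np.linalg.svd((cur - pc).T @ (q - qc))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:           # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        cur = (cur - pc) @ R.T + qc        # apply the rigid update
    return cur

def make_terrain(step):
    xx, yy = np.meshgrid(np.arange(0, 40, step), np.arange(0, 40, step))
    zz = np.sin(xx / 4.0) * np.cos(yy / 5.0)
    return np.column_stack([xx.ravel(), yy.ravel(), zz.ravel()])

airborne = make_terrain(0.5)               # sparse airborne ground points
a = np.radians(1.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
mobile = make_terrain(0.25) @ Rz.T + [0.4, -0.3, 0.1]   # dense, misaligned

src, dst = voxel_thin(mobile, 0.5), voxel_thin(airborne, 0.5)
aligned = icp_align(src, dst)
print("mean residual after ICP:", round(cKDTree(dst).query(aligned)[0].mean(), 3))
```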
Cited by: 2
Automatic Identification of Soil Layer from Borehole Digital Optical Image and GPR Based on Color Features
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486325
L. Li, C. Yu, T. Sun, Z. Han, X. Tang
For high-resolution borehole images obtained with a digital panoramic borehole camera system, a method for recognizing soil layers based on color features is proposed. Because of the obvious difference in color between soil layers and common rock layers, a soil layer detection model based on the HSV color space is established, and a binarized image of the soil layer is obtained with this model. The binary image is then filtered to suppress noise. Next, the binarized image of the soil layer is segmented, and the pixel density of each segment is calculated to determine the depth, area and orientation of the soil layer, so that soil layers in the digital borehole image can be identified. Verification with many actual borehole images, compared against the corresponding borehole radar images, shows that this method can identify all of the soil layers throughout the whole borehole digital optical image automatically and quickly. It provides a new and reliable method for the automatic identification of borehole structural planes in engineering applications.
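A small OpenCV sketch of the HSV thresholding pipeline described above, run on a synthetic borehole image; the HSV bounds are placeholders that would have to be tuned on real imagery and are not the thresholds used in the paper.

```python
import cv2
import numpy as np

# Synthetic stand-in for an unrolled borehole image (BGR): greyish "rock"
# background with a brownish horizontal "soil" band.
img = np.full((400, 360, 3), (150, 150, 150), np.uint8)
img[180:230] = (40, 80, 150)                      # brownish band in BGR

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Hue/saturation window for brownish soil; illustrative bounds only.
mask = cv2.inRange(hsv, (5, 80, 50), (30, 255, 220))
mask = cv2.medianBlur(mask, 5)                    # suppress isolated noise

# Connected components give depth (row extent) and area of each soil region.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):                             # label 0 is background
    x, y, w, h, area = stats[i]
    print(f"soil region: rows {y}-{y + h}, area {area} px")
```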
Cited by: 1
DispNet Based Stereo Matching for Planetary Scene Depth Estimation Using Remote Sensing Images
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486195
Qingling Jia, Xue Wan, Baoqin Hei, Shengyang Li
Recent work has shown that convolutional neural networks can successfully solve stereo matching problems in man-made scenes containing buildings, roads and so on. However, whether they are suitable for remote sensing stereo image matching in featureless areas, for example the lunar surface, is uncertain. This paper exploits the ability of DispNet, an end-to-end disparity estimation algorithm based on a convolutional neural network, for image matching in featureless lunar surface areas. Experiments using image pairs from the NASA Polar Stereo Dataset demonstrate that DispNet outperforms three traditional stereo matching methods, SGM, BM and SAD, in matching accuracy, disparity continuity and speed. It therefore has potential for application in future planetary exploration tasks such as visual odometry for rover navigation and image matching for precise landing.
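DispNet itself is too large to reproduce here, but the classical baselines it is compared against are easy to sketch with OpenCV on a synthetic rectified pair with a known shift; the data and parameters below are illustrative, not the NASA Polar Stereo setup.

```python
import cv2
import numpy as np

# Synthetic rectified pair: a smooth textured image and a copy shifted by a
# known disparity, standing in for a low-texture lunar-surface stereo pair.
rng = np.random.default_rng(0)
left = (rng.random((256, 320)) * 255).astype(np.uint8)
left = cv2.GaussianBlur(left, (7, 7), 0)
true_disp = 12
right = np.roll(left, -true_disp, axis=1)

# Classical baselines the paper compares against (BM and SGM).
bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
sgm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)

disp_bm = bm.compute(left, right).astype(np.float32) / 16.0    # fixed-point output
disp_sgm = sgm.compute(left, right).astype(np.float32) / 16.0

valid = disp_sgm > 0
print("median SGM disparity:", np.median(disp_sgm[valid]))     # should be close to 12
```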
Cited by: 4
An Encoding-Based Back Projection Algorithm for Underground Holes Detection via Ground Penetrating Radar
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486182
Shaokun Zhang, Zhiyou Hong, Yiping Chen, Zejian Kang, Zhipeng Luo, Jonathan Li
Underground cavities can cause ground collapse, which poses a serious threat to people's safety and property, so it is of great significance to inspect urban streets and road subgrades for underground cavities. In practical engineering applications, ground penetrating radar (GPR) has shown promise for the detection of underground cavities. In this paper, we propose a novel encoding-based back projection (EBP) algorithm to detect underground holes. The proposed method has an inherent filtering effect and avoids trailing artifacts, which makes target localization more accurate. The experiments use simulation data generated with the GPR numerical simulation software GprMax and measured data collected with the Latvia radar system, and the results demonstrate that the proposed method has superior performance.
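For orientation, the sketch below implements plain delay-and-sum back projection of a simulated B-scan onto a subsurface grid; the encoding step that gives the paper's EBP algorithm its filtering behaviour is not reproduced, and the geometry, wave speed and point target are toy assumptions.

```python
import numpy as np

c = 0.1            # wave speed in the medium, m/ns (roughly dry soil)
dt = 0.2           # time sampling, ns
xs = np.arange(0, 2.0, 0.05)          # antenna positions along the survey line, m
ts = np.arange(0, 60.0, dt)           # two-way travel-time axis, ns

# Simulate a B-scan: a buried point scatterer at (1.0 m, 0.6 m depth)
# produces the familiar diffraction hyperbola.
target = np.array([1.0, 0.6])
bscan = np.zeros((len(xs), len(ts)))
for i, x in enumerate(xs):
    r = np.hypot(x - target[0], target[1])
    k = int(round(2 * r / c / dt))
    bscan[i, k - 1:k + 2] = 1.0

# Back projection: each image pixel accumulates the samples recorded at its
# own two-way travel time from every antenna position.
zx = np.arange(0, 2.0, 0.02)
zz = np.arange(0.05, 1.2, 0.02)
image = np.zeros((len(zz), len(zx)))
for i, x in enumerate(xs):
    r = np.hypot(zx[None, :] - x, zz[:, None])    # pixel-to-antenna distance
    idx = np.clip(np.round(2 * r / c / dt).astype(int), 0, len(ts) - 1)
    image += bscan[i, idx]

peak = np.unravel_index(image.argmax(), image.shape)
print("focused target near x=%.2f m, depth=%.2f m" % (zx[peak[1]], zz[peak[0]]))
```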
Cited by: 0