
Latest publications: 2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)

Multi-Modal Remote Sensing Image Registration Based on Multi-Scale Phase Congruency
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486287
Song Cui, Yanfei Zhong
Automatic matching of multi-modal remote sensing images remains a challenging task in remote sensing image analysis due to the significant non-linear radiometric differences between such images. This paper introduces the phase congruency model, with its invariance to illumination and contrast, for image matching, and extends the model into a novel image registration method named multi-scale phase congruency (MS-PC). The Euclidean distance between MS-PC descriptors is used as the similarity metric to establish correspondences. The proposed method is evaluated on four pairs of multi-modal remote sensing images. The experimental results show that MS-PC is more robust to radiometric differences between images and outperforms two popular methods (SIFT and SAR-SIFT) in both registration accuracy and the number of tie points.
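The descriptor-matching step the abstract mentions, nearest neighbours under Euclidean distance, can be sketched as below. This is an illustrative sketch only: the function name and the Lowe-style 0.8 ratio test are assumptions, not details from the paper.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match two descriptor sets by Euclidean distance.

    desc_a: (n, d) array, desc_b: (m, d) array; returns a list of (i, j)
    index pairs. A match is accepted only if the best distance beats the
    second best by the given ratio (an illustrative choice, not from MS-PC).
    """
    # Pairwise Euclidean distances, shape (n, m).
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        best, second = order[0], order[1]
        if row[best] < ratio * row[second]:
            matches.append((i, int(best)))
    return matches
```

The accepted pairs would then feed the transformation estimation stage of a registration pipeline.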
Citations: 6
Reconstructing Lattices from Permanent Scatterers on Facades
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486322
E. Michaelsen, U. Soergel
In man-made structures, regularities and repetitions prevail. In building facades in particular, lattices are common, with windows and other elements repeated in vertical columns as well as in horizontal rows. In very-high-resolution space-borne radar images such lattices appear saliently; even untrained subjects see the structure instantaneously. However, automatic perceptual grouping is rarely attempted. This contribution applies a new lattice grouping method to such data. Utilization of knowledge about the particular mapping process of such radar data is distinguished from the use of Gestalt laws, the latter being universally applicable to all kinds of pictorial data. An example with so-called permanent scatterers in the city of Berlin shows what can be achieved with automatic perceptual grouping alone, and what can be gained using domain knowledge. Keywords: perceptual grouping, SAR, permanent scatterers, façade recognition
Citations: 1
Using a VGG-16 Network for Individual Tree Species Detection with an Object-Based Approach
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486395
M. Rezaee, Yun Zhang, Rakesh K. Mishra, Fei Tong, Hengjian Tong
Acquiring information about forest stands, such as individual tree species, is crucial for monitoring forests. To date, such information is assessed by human interpreters using airborne or Unmanned Aerial Vehicle (UAV) imagery, which is time- and cost-consuming. Recent advances in remote sensing image acquisition, such as WorldView-3, have increased the spatial resolution to 30 cm and the spectral resolution to 16 bands. This advancement has significantly increased the potential for Individual Tree Species Detection (ITSD). To use single-source WorldView-3 images, our proposed method first segments the image to delineate trees and then detects tree species using a VGG-16 network. We developed a pipeline that feeds the deep CNN with information from all eight visible and near-infrared bands and trained it. The result is compared with two state-of-the-art ensemble classifiers, namely Random Forest (RF) and Gradient Boosting (GB). Results demonstrate that VGG-16 outperforms all the other methods, reaching an accuracy of about 92.13%.
Citations: 18
Collaborative Classification of Hyperspectral and LIDAR Data Using Unsupervised Image-to-Image CNN
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486164
Mengmeng Zhang, Wei Li, Xueling Wei, Xiang Li
Currently, how to efficiently exploit useful information from multi-source remote sensing data for better Earth observation is an interesting but challenging problem. In this paper, we propose a collaborative classification framework for hyperspectral image (HSI) and Light Detection and Ranging (LIDAR) data via an image-to-image convolutional neural network (CNN). An image-to-image mapping learns a representation from the input source (i.e., HSI) to the output source (i.e., LIDAR). The extracted features are then expected to possess characteristics of both HSI and LIDAR data, and collaborative classification is implemented by integrating hidden layers of the deep CNN. Experimental results on two real remote sensing data sets demonstrate the effectiveness of the proposed framework.
Citations: 3
The UAV Image Classification Method Based on the Grey-Sigmoid Kernel Function Support Vector Machine
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486193
Pel Pengcheng, Shi Yue, Wan ChengBo, Ma Xinming, Guo Wa, Qiao Rongbo
Since SVM is sensitive to noises and outliers in the training set, a new SVM algorithm based on an affinity Grey-Sigmoid kernel is proposed in this paper. Cluster membership is defined not only by the distance from the cluster center but also by the affinity among samples. The affinity among samples is measured by the minimum enclosing hypersphere that contains the maximum number of samples, and the Grey degree of each sample is then defined by its position in that hypersphere. Compared with the SVM based on the traditional Sigmoid kernel, experimental results show that the Grey-Sigmoid kernel is more robust and efficient.
Citations: 0
Deep Learning Integrated with Multiscale Pixel and Object Features for Hyperspectral Image Classification
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486304
Meng Zhang, L. Hong
The spectral and spatial resolution of hyperspectral images are continuously improving, providing rich information for interpreting remote sensing imagery. How to improve image classification accuracy has become the focus of many studies. Deep learning can extract discriminative high-level abstract features for image classification tasks, and interesting results have been obtained in image processing. However, when deep learning is applied to the classification of hyperspectral remote sensing images, spectral-based classification methods lack spatial and scale information, while image-patch-based classification methods ignore the rich spectral information that hyperspectral images provide. In this study, a multi-scale feature fusion hyperspectral image classification method based on deep learning is proposed. First, multi-scale features are obtained by multi-scale segmentation. These features are then input into a convolutional neural network to extract high-level features, which are finally used for classification. Experimental results show that classification with fused multi-scale features outperforms classification with single-scale features or regional features.
Citations: 2
A Comparative Study on Airborne Lidar Waveform Decomposition Methods
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486228
Qinghua Li, S. Ural, J. Shan
This paper applies pattern recognition methods to airborne lidar waveform decomposition, comparing parametric and nonparametric approaches. The popular Gaussian mixture model (GMM) with the expectation-maximization (EM) decomposition algorithm is selected as the parametric approach; the nonparametric mixture model (NMM) and fuzzy mean-shift (FMS) serve as the nonparametric approach. We first run our experiment on simulated waveforms. The experimental setup favors the parametric approach because GMM is used to generate the waveforms. We show that both parametric and nonparametric approaches return satisfactory results on the simulated mixture of Gaussian components. In the second experiment, real data acquired with an airborne lidar are used. We find that NMM fits the data better than GMM because the Gaussian assumption is not well satisfied in the real dataset. Considering that the emitted signals of a laser scanner may not even satisfy the Gaussian assumption, we conclude that nonparametric approaches should generally be preferred for practical applications.
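The parametric GMM+EM decomposition the paper evaluates can be sketched as below: the waveform amplitudes are treated as sample weights over the time axis and a 1-D Gaussian mixture is fitted by weighted EM. This is a minimal sketch under that interpretation; initialization, convergence checks, and model selection are simplified relative to any real decomposition pipeline.

```python
import numpy as np

def em_gaussian_mixture(t, w, mu, sigma, pi, iters=200):
    """Weighted EM for a 1-D Gaussian mixture.

    t: time-bin positions; w: waveform amplitudes used as sample weights;
    mu/sigma/pi: initial component means, widths, and mixing weights.
    Returns the refined (mu, sigma, pi).
    """
    w = w / w.sum()
    for _ in range(iters):
        # E-step: responsibility of each component for each time bin.
        dens = (pi[None, :] / sigma[None, :]) * np.exp(
            -0.5 * ((t[:, None] - mu[None, :]) / sigma[None, :]) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: amplitude-weighted parameter updates.
        nk = (w[:, None] * resp).sum(axis=0)
        mu = (w[:, None] * resp * t[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((w[:, None] * resp *
                         (t[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk)
        pi = nk
    return mu, sigma, pi
```

On a simulated two-return waveform with peaks at bins 20 and 60, the fitted means and widths recover the generating parameters closely, matching the paper's observation that GMM+EM does well when the data really are Gaussian mixtures.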
Citations: 0
End-to-End Road Centerline Extraction via Learning a Confidence Map
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486185
Wei Yujun, Xiangyun Hu, Gong Jinqi
Road extraction from aerial and satellite imagery is one of the complex and challenging tasks in the remote sensing field, required for a wide range of applications such as autonomous driving, urban planning, and automatic mapping for GIS data collection. Most approaches cast road extraction as image segmentation and use a thinning algorithm to obtain the road centerline. However, these methods easily produce spurs around the true centerline, which degrades the accuracy of centerline extraction, and they lack the topology of the road network. In this paper, we propose a novel method to extract accurate road centerlines directly from aerial images and to construct the topology of the road network. First, an end-to-end regression network based on a convolutional neural network is designed to learn and predict a road centerline confidence map, a 2D representation of the probability of each pixel lying on a road centerline. Our network combines multi-scale and multi-level feature information to produce a refined confidence map. A Canny-like non-maximum suppression then yields the accurate road centerline. Finally, we use a spoke wheel to find the road direction at each initialized road center point and exploit road tracking to construct the topology of the road network. Results on the Massachusetts Roads dataset show a significant improvement in the positional accuracy of the extracted road centerlines.
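The Canny-like non-maximum suppression step applied to the confidence map can be sketched as follows: each pixel is kept only if it is at least as large as its two neighbours along the local gradient direction, quantised to 0/45/90/135 degrees. This is a generic textbook sketch, not the paper's exact implementation.

```python
import numpy as np

def nms_confidence_map(conf):
    """Canny-style non-maximum suppression on a 2-D confidence map.

    conf: (h, w) float array. Returns a map where non-ridge pixels are
    zeroed; border pixels are ignored for simplicity.
    """
    gy, gx = np.gradient(conf)
    # Gradient orientation folded into [0, 180) degrees.
    angle = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0
    offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
    out = np.zeros_like(conf)
    h, w = conf.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # Nearest quantised direction (circular distance on 180 degrees).
            d = min(offsets, key=lambda a: min(abs(angle[i, j] - a),
                                               180 - abs(angle[i, j] - a)))
            di, dj = offsets[d]
            if conf[i, j] >= conf[i + di, j + dj] and conf[i, j] >= conf[i - di, j - dj]:
                out[i, j] = conf[i, j]
    return out
```

On a synthetic vertical ridge, only the central column survives, which is the thinning behaviour the centerline stage relies on.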
Citations: 5
Automatic Identification of Soil Layer from Borehole Digital Optical Image and GPR Based on Color Features
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486325
L. Li, C. Yu, T. Sun, Z. Han, X. Tang
For the high-resolution borehole images obtained by a digital panoramic borehole camera system, a method for recognizing soil layers based on color features is proposed. Owing to the obvious color difference between soil layers and common rock layers, a soil-layer detection model based on the HSV color space is established, and a binarized image of the soil layer is obtained with this model. Next, the binary image is filtered to suppress noise effects. Then, the binarized image of the soil layer is partitioned and the pixel density within each segment is computed to determine the depth, area, and orientation of the soil layer, thereby achieving soil-layer identification in the digital borehole image. Verification of this method on many actual borehole images, compared against the corresponding borehole radar images, shows that it can identify all soil layers throughout the whole borehole digital optical image automatically and quickly. It provides a new, reliable method for the automatic identification of borehole structural planes in engineering applications.
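The HSV-based binarization step can be sketched as below. The hue/saturation/value thresholds here are illustrative stand-ins for brownish soil tones and the function name is an assumption; the paper's actual thresholds are not given in the abstract.

```python
import colorsys
import numpy as np

def soil_mask(rgb, h_range=(0.02, 0.12), s_min=0.25, v_min=0.15):
    """Binarise an RGB borehole image into a soil/non-soil mask via HSV.

    rgb: (h, w, 3) float array in [0, 1]. A pixel is marked as soil when
    its hue falls in h_range and its saturation and value exceed the
    minima (illustrative thresholds, not the paper's).
    """
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            hh, ss, vv = colorsys.rgb_to_hsv(*rgb[i, j])
            mask[i, j] = h_range[0] <= hh <= h_range[1] and ss >= s_min and vv >= v_min
    return mask
```

A brownish pixel passes the test while a grey (low-saturation) rock pixel does not, which is the color separation the detection model exploits; the mask would then be filtered and segmented as the abstract describes.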
Citations: 1
DispNet Based Stereo Matching for Planetary Scene Depth Estimation Using Remote Sensing Images
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486195
Qingling Jia, Xue Wan, Baoqin Hei, Shengyang Li
Recent work has shown that convolutional neural networks can successfully solve stereo matching problems in artificial scenes containing buildings, roads, and so on. However, whether they are suitable for matching remote sensing stereo images of featureless areas, such as the lunar surface, is uncertain. This paper exploits the ability of DispNet, an end-to-end disparity estimation algorithm based on a convolutional neural network, for image matching in featureless lunar surface areas. Experiments using image pairs from the NASA Polar Stereo Dataset demonstrate that DispNet outperforms three traditional stereo matching methods, SGM, BM, and SAD, in matching accuracy, disparity continuity, and speed. It thus has potential for application in future planetary exploration tasks such as visual odometry for rover navigation and image matching for precise landing.
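The SAD baseline that DispNet is compared against can be sketched as a brute-force block matcher: for each pixel, search leftwards over candidate disparities and keep the one minimizing the sum of absolute differences over a small window. A minimal sketch for rectified grayscale pairs, not any particular library's implementation.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=3):
    """Per-pixel disparity via sum-of-absolute-differences block matching.

    left, right: rectified (h, w) grayscale arrays; returns an integer
    disparity map (0 on the unprocessed borders).
    """
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for i in range(r, h - r):
        for j in range(r + max_disp, w - r):
            patch = left[i - r:i + r + 1, j - r:j + r + 1]
            # Cost of each candidate disparity d (shift in the right image).
            costs = [np.abs(patch - right[i - r:i + r + 1,
                                          j - d - r:j - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[i, j] = int(np.argmin(costs))
    return disp
```

On a synthetic pair where the left image is the right image shifted by four pixels, the matcher recovers a disparity of 4; learned methods like DispNet aim to beat this baseline exactly where such local windows fail, in low-texture (featureless) regions.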
{"title":"DispNet Based Stereo Matching for Planetary Scene Depth Estimation Using Remote Sensing Images","authors":"Qingling Jia, Xue Wan, Baoqin Hei, Shengyang Li","doi":"10.1109/PRRS.2018.8486195","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486195","url":null,"abstract":"Recent work has shown that convolutional neural network can solve the stereo matching problems in artificial scene successfully, such as buildings, roads and so on. However, whether it is suitable for remote sensing stereo image matching in featureless area, for example lunar surface, is uncertain. This paper exploits the ability of DispNet, an end-to-end disparity estimation algorithm based on convolutional neural network, for image matching in featureless lunar surface areas. Experiments using image pairs from NASA Polar Stereo Dataset demonstrate that DispNet has superior performance in the aspects of matching accuracy, the continuity of disparity and speed compared to three traditional stereo matching methods, SGM, BM and SAD. Thus it has the potential for the application in future planetary exploration tasks such as visual odometry for rover navigation and image matching for precise landing","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124693711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
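Of the three classical baselines the abstract above compares DispNet against, SAD block matching is the simplest to illustrate. The following is a minimal, unoptimized sketch (no subpixel refinement, left-right check, or border handling) that picks, for each pixel, the disparity minimizing the sum of absolute differences over a small window; the window size and disparity range are illustrative assumptions.

```python
import numpy as np

def sad_block_matching(left, right, max_disp=8, win=3):
    """Classical SAD block-matching stereo: for each left-image pixel,
    choose the disparity d minimizing the sum of absolute differences
    between the (2*win+1)^2 window around (y, x) in the left image and
    the window around (y, x - d) in the right image."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    L = np.pad(left.astype(np.float32), win, mode='edge')
    R = np.pad(right.astype(np.float32), win, mode='edge')
    for y in range(h):
        for x in range(w):
            patch_l = L[y:y + 2 * win + 1, x:x + 2 * win + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                patch_r = R[y:y + 2 * win + 1, x - d:x - d + 2 * win + 1]
                cost = np.abs(patch_l - patch_r).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Right image is random texture; the left image sees it shifted 2 px,
# so the recovered disparity should be ~2 away from the wrap boundary.
rng = np.random.default_rng(0)
right = rng.integers(0, 255, size=(20, 30)).astype(np.float32)
left = np.roll(right, 2, axis=1)
disp = sad_block_matching(left, right, max_disp=5)
```

The O(h * w * max_disp * win^2) cost of this brute-force loop is exactly why the paper's speed comparison against an end-to-end network is interesting; production SAD/BM implementations vectorize the cost volume instead of looping per pixel.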
Journal: 2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)