2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS): Latest Publications

Extracting Rural Residential Areas from High-Resolution Remote Sensing Images in the Coastal Area of Shandong, China Based on Fast Acquisition of Training Samples and Fully Convoluted Network
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486368
Chen-Gui Lu, Xiaomei Yang, Zhihua Wang, Yueming Liu
Automatic extraction of rural residential areas from high-resolution remote sensing images over large regions is a challenging task, because an extraction method must effectively exclude all kinds of background features, such as roads, greenhouses, and urban areas. For unsupervised extraction methods, it is difficult to manually design features that are sensitive only to residential areas. Supervised methods, in contrast, use training samples to learn the discrimination between rural residential areas and background features; however, manual labeling over large regions is tedious and time-consuming. These drawbacks limit the application of existing extraction methods in large regions. We therefore propose a novel methodology for extracting rural residential areas in large regions based on fast acquisition of training samples and a fully convolutional network (FCN). A block-based method is first used to extract rural residential areas rapidly and to acquire training samples. The large set of training samples is then used to train the FCN for rural residential area extraction. Finally, all ZY-3 satellite images of the coastal area of Shandong, China are fed into the FCN to obtain the extraction result.
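The block-based sample-acquisition step can be illustrated with a minimal numpy sketch. The block size, the scoring heuristic, and the confidence threshold below are hypothetical stand-ins, not the paper's actual method: the idea is simply to tile the scene, score each block cheaply, and keep only confidently labeled blocks as FCN training samples.

```python
import numpy as np

def acquire_block_samples(image, block=64, thresh=0.6):
    """Tile the scene into blocks, score each block with a cheap heuristic,
    and keep confidently labeled blocks as (patch, label) training samples.
    The mean-intensity score is an illustrative placeholder."""
    h, w = image.shape[:2]
    samples = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = image[i:i + block, j:j + block]
            score = patch.mean()  # stand-in for a real residential-area score
            # keep only blocks the heuristic is confident about
            if score >= thresh or score <= 1.0 - thresh:
                samples.append((patch, int(score >= thresh)))
    return samples
```

Blocks with ambiguous scores are simply skipped, which is what makes the labeling fast: only easy, confident samples feed the FCN.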
Citations: 2
Extraction of Altered Mineral from Remote Sensing Data in Gold Exploration Based on the Nonlinear Analysis Technology
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486248
Han Hai-hui, Wang Yilin, Zhang Zhuan, Ren Guang-li, Yang Min
Researchers have found that the mixed pixels that exist under complex geological conditions often distort the spectral curves of altered minerals, reducing the accuracy with which altered minerals can be extracted from remote sensing data. Nonlinear analysis offers a feasible solution. In this paper, by analyzing the nonlinear characteristics of geological anomalies, the Fractal Dimension Change Point Method (FDCPM) is used to extract the altered minerals' threshold from multispectral imagery. The theory and mechanism of the model are elaborated through an experiment with ASTER data over the Xinjinchang and Laojinchang gold deposits. The results produced by FDCPM agree well with a mounting body of evidence from different perspectives. An extraction accuracy of over 86% shows that FDCPM is an effective method for extracting remote sensing alteration anomalies, and it could serve as a useful tool for mineral exploration in similar areas of the Beishan mineralization belt in northwest China.
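The core of a fractal-dimension change-point scheme can be sketched as follows: estimate a box-counting dimension for the mask produced at each candidate threshold, then pick the threshold where the dimension curve jumps most sharply. The box sizes and the largest-jump rule are illustrative assumptions, not the paper's exact statistic:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16)):
    """Box-counting fractal dimension of a binary mask (simplified sketch)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[:h - h % s, :w - w % s]
        # count boxes of side s containing at least one foreground pixel
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(max(int(blocks.sum()), 1))
    # slope of log(count) vs. log(1/size) estimates the dimension
    return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]

def fdcpm_threshold(band, thresholds):
    """Pick the threshold at the largest jump of the dimension curve."""
    dims = [box_counting_dimension(band >= t) for t in thresholds]
    jumps = np.abs(np.diff(dims))
    return thresholds[int(np.argmax(jumps)) + 1]
```

A filled region yields a dimension near 2, while an empty or point-like mask yields a dimension near 0, so a sharp drop in the curve marks where the anomaly separates from the background.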
Citations: 1
Preliminary Investigation on Single Remote Sensing Image Inpainting Through a Modified GAN
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486163
S. Lou, Q. Fan, Feng Chen, Cheng Wang, Jonathan Li
Because of impacts such as sensor malfunction and cloud cover, remotely sensed imagery usually contains a great many missing regions (pixels). To make full use of the affected imagery, different algorithms for remote sensing image inpainting have been proposed. In this paper, an unsupervised convolutional neural network (CNN) context-generation model is modified to recover the affected (or unrecorded) pixels in a single image without auxiliary information. Unlike existing nonparametric algorithms, in which pixels in the surrounding region are used to estimate an unrecorded pixel, the proposed method generates content directly with a neural network. To ensure high-quality recovery, a modified reconstruction loss combining a structural similarity index (SSIM) loss and an L1 loss is used in training. The proposed model is compared with bilinear interpolation in terms of relative error, and the performance of the two methods in scenes of differing complexity is discussed. Results show that the proposed model performs better than the traditional method in simple (i.e., relatively homogeneous) scenes. Meanwhile, corrupted images in the blue channel are recovered more accurately than those in the green and red channels. The relationship between scene complexity and channel shows that the same scene has different complexities in different channels. Scene complexity correlates significantly with the recovered results: highly complex images are always accompanied by poor recovery. This suggests that recovery accuracy depends on scene complexity.
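The modified reconstruction loss (SSIM loss blended with L1 loss) can be sketched in numpy. The global single-window SSIM and the blend weight `alpha` below are assumptions for illustration; the paper's actual loss may use windowed SSIM and a different weighting:

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Global (single-window) SSIM -- a simplification of the windowed index."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def reconstruction_loss(pred, target, alpha=0.84):
    """Blend of SSIM loss and L1 loss; alpha is an assumed weight."""
    return alpha * (1.0 - ssim_global(pred, target)) + \
           (1.0 - alpha) * np.abs(pred - target).mean()
```

The SSIM term penalizes structural disagreement while the L1 term keeps per-pixel intensities close; a perfect reconstruction drives both terms to zero.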
Citations: 11
2D-DFrFT Based Deep Network for Ship Classification in Remote Sensing Imagery
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486413
Qiaoqiao Shi, Wei Li, R. Tao
Ship classification in optical remote sensing images is a fundamental but challenging problem with a wide range of applications. Deep convolutional neural networks (CNNs) have shown excellent performance in object classification; however, the limited number of available training samples hinders their use for ship classification. In this paper, a novel ship-classification framework consisting of a two-branch CNN and the two-dimensional discrete fractional Fourier transform (2D-DFrFT) is proposed. First, the amplitude and phase information of the ship image under the 2D-DFrFT is extracted. Because different orders of the 2D-DFrFT contribute differently to feature extraction, the amplitude (M) and phase (P) values obtained at different orders are used as the input of the two-branch CNN, which learns high-level features automatically. After multiple features are learned, decision-level fusion is adopted for the final classification. The remote sensing image dataset BCCT200-resize is used for validation. Compared with existing state-of-the-art algorithms, the proposed method achieves superior performance.
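The amplitude/phase inputs can be illustrated with a short sketch. A full DFrFT implementation is beyond this outline, so the code computes only the order a = 1 case, where the fractional transform reduces to the ordinary 2-D FFT; the fractional orders used in the paper would replace `np.fft.fft2` here:

```python
import numpy as np

def amp_phase_features(img):
    """Amplitude and phase of the 2-D spectrum (DFrFT order a=1, i.e. the FFT).
    These two maps would feed the two CNN branches."""
    F = np.fft.fft2(img)
    return np.abs(F), np.angle(F)
```

Each branch then sees a real-valued map of the same spatial size as the input chip, so standard convolutional layers apply unchanged.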
Citations: 9
A Novel Ship Segmentation Method Based on Kurtosis Test in Complex-Valued SAR Imagery
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486227
Xiangguang Leng, K. Ji, Shilin Zhou
Traditional ship segmentation methods for synthetic aperture radar (SAR) imagery are mainly based on intensity/amplitude information and cannot take full advantage of the complex-valued information in SAR imagery. This paper proposes a novel ship segmentation method based on a kurtosis test in complex-valued SAR imagery, which exploits that complex information. The segmentation rationale is that sea clutter usually obeys a Gaussian distribution, while ship targets usually obey a super-Gaussian distribution, so their kurtosis differs. Kurtosis is invariant to location shifts and positive scale changes; it follows that the kurtosis of sea clutter remains approximately constant even as the amplitude decreases with increasing incidence angle. Preliminary experimental results on Gaofen-3 and Sentinel-1 data show that the proposed method achieves good performance.
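The rationale can be sketched with a simple windowed kurtosis test on real-valued samples; the window size and decision threshold below are illustrative, not the paper's test statistic:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for Gaussian data, > 0 for super-Gaussian."""
    x = np.asarray(x, dtype=float).ravel()
    m = x.mean()
    s2 = x.var()
    return ((x - m) ** 4).mean() / s2**2 - 3.0

def ship_mask(img, win=8, k_thresh=1.0):
    """Flag windows whose excess kurtosis exceeds a threshold
    (super-Gaussian statistics suggest a ship target)."""
    h, w = img.shape
    mask = np.zeros_like(img, dtype=bool)
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            if excess_kurtosis(img[i:i + win, j:j + win]) > k_thresh:
                mask[i:i + win, j:j + win] = True
    return mask
```

Because kurtosis is invariant to shifts and positive scaling, the same threshold keeps working as clutter amplitude falls off with incidence angle, which is exactly the property the abstract highlights.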
Citations: 4
Modified Extinction Profiles for Hyperspectral Image Classification
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486259
Wei Li, Zhongjian Wang, Lu Li, Q. Du
Spectral-spatial features are helpful for hyperspectral image classification. One of the most successful morphology-based approaches is Extinction Profiles (EPs), which are constructed from component trees (max-tree/min-tree) and can accurately extract spatial and contextual information from remote sensing images. However, the dimension of the features extracted by EPs with component trees is large, which potentially causes high redundancy. To reduce redundant information and achieve better feature extraction, we propose a modified EP based on the topological tree (inclusion tree). The proposed method is evaluated on two commonly used hyperspectral datasets captured over northwestern Indiana and Salinas, California. The results show that the proposed method improves significantly in both accuracy and complexity while halving the feature dimension compared with the original EPs.
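Tree-based extinction profiles are too involved to reproduce here, but the general flavor of a multiscale morphological feature stack can be sketched with a classical morphological profile (openings at increasing scales), used purely as a stand-in for the attribute-tree machinery:

```python
import numpy as np

def grey_open(img, size):
    """Greyscale opening with a square window: erosion (min) then dilation (max)."""
    pad = size // 2
    def filt(a, op):
        out = np.empty_like(a)
        p = np.pad(a, pad, mode='edge')
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = op(p[i:i + size, j:j + size])
        return out
    return filt(filt(img, np.min), np.max)

def morphological_profile(band, sizes=(3, 5, 7)):
    """Stack of openings at increasing scales -- a classical morphological
    profile, standing in for the tree-based extinction profile."""
    return np.stack([grey_open(band, s) for s in sizes])
```

Each scale removes bright structures smaller than its window, so the stacked responses encode how image structures "survive" across scales, which is the same intuition extinction profiles formalize with extinction values on component trees.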
Citations: 2
Urban Local Climate Zone Classification with a Residual Convolutional Neural Network and Multi-Seasonal Sentinel-2 Images
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486155
C. Qiu, M. Schmitt, Lichao Mou, Xiaoxiang Zhu
This study presents a classification framework for urban Local Climate Zones (LCZs) based on a Residual Convolutional Neural Network (ResNet) architecture. To make full use of the temporal and spectral information contained in modern Earth observation data, multi-seasonal Sentinel-2 images are exploited. After the ResNet is trained, independent predictions are made from the multi-seasonal images; the seasonal predictions are then fused in a decision-fusion step based on majority voting. A systematic experiment is carried out in a large-scale study area in the center of Europe. A significant accuracy improvement is achieved by applying majority voting to the multi-seasonal predictions. Based on the results, the main challenges of urban LCZ classification and possible solutions are discussed further, providing guidance for large-scale urban LCZ mapping.
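The decision-fusion step described above (per-pixel majority vote over the seasonal class maps) can be sketched as follows; the tie-breaking rule toward the lower class label is an assumption:

```python
import numpy as np

def majority_vote(seasonal_preds):
    """Fuse per-season class maps by per-pixel majority vote."""
    preds = np.stack(seasonal_preds)              # (seasons, H, W) integer labels
    n_classes = int(preds.max()) + 1
    onehot = np.eye(n_classes, dtype=int)[preds]  # (seasons, H, W, C)
    # per-pixel vote counts; argmax breaks ties toward the lower label
    return onehot.sum(axis=0).argmax(axis=-1)
```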
Citations: 7
Fusion of Panchromatic and Multispectral Images via Morphological Operator and Improved PCNN in Mixed Multiscale Domain
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486292
Jiao Jiao, Wu Lingda
To effectively combine the spectral information of the multispectral (MS) image with the spatial details of the panchromatic (PAN) image and improve fusion quality, a fusion method based on a morphological operator and an improved pulse-coupled neural network (PCNN) in a mixed multiscale (MM) domain is proposed. First, the MS and PAN images are decomposed by the nonsubsampled shearlet transform (NSST) into low- and high-frequency coefficients. Second, morphological filter-based intensity modulation (MFIM) and the stationary wavelet transform (SWT) are applied to the fusion of the low-frequency coefficients, while an improved PCNN model is employed to fuse the high-frequency coefficients. Third, the final coefficients are reconstructed with the inverse NSST. Experimental results on QuickBird satellite imagery demonstrate that the proposed method is superior to five traditional and popular methods: IHS, PCA, SWT, NSCT-PCNN, and NSST-PCNN. The proposed method improves spatial resolution effectively while preserving spectral information, and it outperforms the other methods in both visual effect and objective evaluations.
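The intensity-modulation idea behind MFIM can be sketched in a few lines: inject PAN detail into each MS band as the ratio of PAN to a low-pass version of itself. A box blur stands in here for the morphological filter the paper actually uses, so this is an illustrative simplification, not the proposed pipeline:

```python
import numpy as np

def box_blur(img, size=3):
    """Simple box filter (stand-in for the morphological low-pass filter)."""
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for di in range(size):
        for dj in range(size):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / size**2

def intensity_modulation(ms_band, pan, eps=1e-6):
    """Intensity-modulation pansharpening sketch: where PAN exceeds its
    low-pass version (fine detail), the MS band is brightened, and vice versa."""
    return ms_band * (pan / (box_blur(pan) + eps))
```

When PAN is locally smooth the ratio is close to 1 and the MS spectral values pass through nearly unchanged, which is how ratio-based modulation preserves spectral information.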
Citations: 5
Feature Fusion Through Multitask CNN for Large-scale Remote Sensing Image Segmentation
Pub Date : 2018-07-24 DOI: 10.1109/PRRS.2018.8486170
Shihao Sun, Lei Yang, Wenjie Liu, Ruirui Li
In recent years, Fully Convolutional Networks (FCNs) have been widely used in semantic segmentation tasks, including on multi-modal remote sensing imagery. How to fuse multi-modal data to improve segmentation performance has long been a research hotspot. In this paper, a novel end-to-end fully convolutional neural network is proposed for semantic segmentation of natural-color imagery, infrared imagery, and Digital Surface Models (DSMs). It is based on a modified DeepUNet and performs segmentation in a multi-task way: the channels are clustered into groups and processed on different task pipelines. After a series of segmentation and fusion steps, the shared and private features of the groups are successfully merged. Experimental results show that the feature-fusion network is efficient, and our approach achieves good performance in the ISPRS 2D Semantic Labeling Contest.
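The shared/private feature merging can be sketched structurally: each channel group passes through its own "private" branch plus one "shared" branch, and the resulting feature maps are concatenated along the channel axis. The branch functions below are placeholders for the paper's CNN task pipelines:

```python
import numpy as np

def fuse_features(groups, shared_fn, private_fns):
    """Apply one shared branch to every channel group and a private branch
    per group, then concatenate all feature maps along the channel axis.
    `groups` are (C, H, W) arrays; branches are placeholder callables."""
    shared = [shared_fn(g) for g in groups]
    private = [fn(g) for fn, g in zip(private_fns, groups)]
    return np.concatenate(shared + private, axis=0)
```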
Citations: 14