
2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS): Latest Publications

A Method of Building Extraction Using Object Based Analysis of High Resolution Remote Sensing Images
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486404
Wang Yan, Tao Zui, Lyu Fenghua
High spatial resolution remote sensing images are increasingly available today. Higher resolution captures more detail and makes it possible to exploit the spatial relationships among ground objects. To extract buildings from high-resolution remote sensing images, this paper proposes a method based on Geographic Object-Based Image Analysis (GEOBIA) that combines the relationships among shadows, green space, and buildings with the characteristics of the buildings themselves to extract all buildings in the scene. Experiments on the ISPRS sample images as the study area confirm the validity of the method.
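The abstract gives no implementation details, so the following is a minimal, hypothetical GEOBIA-style sketch: superpixel objects from scikit-image's SLIC are labelled as roof candidates by simple brightness and greenness rules and kept only if they touch a dark (shadow) object. The thresholds and the greenness cue are placeholder assumptions, not the authors' rules.

```python
import numpy as np
from skimage.measure import regionprops
from skimage.segmentation import slic

def neighbour_pairs(segments):
    """Set of (label_a, label_b) pairs for 4-connected neighbouring segments."""
    pairs = set()
    for a, b in ((segments[:, :-1], segments[:, 1:]), (segments[:-1, :], segments[1:, :])):
        diff = a != b
        pairs.update(zip(a[diff].ravel(), b[diff].ravel()))
    return pairs

def building_candidates(rgb, n_segments=800):
    """rgb: HxWx3 float image in [0, 1]. Returns a boolean building mask."""
    segments = slic(rgb, n_segments=n_segments, compactness=10, start_label=1)
    brightness = rgb.mean(axis=2)
    greenness = rgb[..., 1] - 0.5 * (rgb[..., 0] + rgb[..., 2])   # crude vegetation cue

    shadows, candidates = set(), set()
    for region in regionprops(segments, intensity_image=brightness):
        mean_b = region.mean_intensity
        mean_g = greenness[segments == region.label].mean()
        if mean_b < 0.2:                        # dark object -> treat as shadow
            shadows.add(region.label)
        elif mean_b > 0.45 and mean_g < 0.05:   # bright, non-vegetated -> roof candidate
            candidates.add(region.label)

    pairs = neighbour_pairs(segments)
    keep = {c for c in candidates
            if any((c, s) in pairs or (s, c) in pairs for s in shadows)}
    return np.isin(segments, list(keep))
```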
Citations: 0
Research on Automatic Generation and Data Organization Method of Control Points
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486252
Lai Guangling, Z. Yongsheng, Tong Xiaochong, Li Kai, Ding Lu
High-precision control points are indispensable for improving the geometric positioning accuracy of aerial and satellite images. At present, most control points have to be installed manually; points obtained this way are fixed to a specific area and incur high installation and maintenance costs. A satellite can only correct its orbit and attitude in real time when it passes over an area that contains control points, so setting up control points in this way offers poor flexibility and does little to improve satellite positioning accuracy. To address this problem, an automatic control-point generation algorithm based on automatic recognition and detection of natural ground objects is proposed. First, typical ground objects such as playgrounds and road intersections are automatically identified with the YOLO algorithm, and features are extracted with the classic SIFT operator on the basis of the detections. Then the feature extraction results, together with the target attributes, locations, and other information, are stored in an agreed format. Finally, the control-point data are organized with a quadruplication-based multi-scale integer coding method to improve the efficiency of data storage and access. This approach makes full use of high-precision surveying and mapping satellite imagery to set up control points around the world; satellites can correct their orbit and attitude at any time as needed, which greatly improves the positioning accuracy of images.
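The abstract does not define its quadruplication-based multi-scale integer coding, so the sketch below shows one common way to index geolocated points with quadtree-style integer codes at several levels; the level choices and the control-point record are hypothetical.

```python
from collections import defaultdict

def quadtree_code(lat, lon, level):
    """Return an integer code for (lat, lon) after `level` quadtree subdivisions."""
    code = 0
    lat_min, lat_max = -90.0, 90.0
    lon_min, lon_max = -180.0, 180.0
    for _ in range(level):
        lat_mid = (lat_min + lat_max) / 2
        lon_mid = (lon_min + lon_max) / 2
        quadrant = ((lat >= lat_mid) << 1) | (lon >= lon_mid)
        code = (code << 2) | quadrant           # append 2 bits per level
        if lat >= lat_mid: lat_min = lat_mid
        else:              lat_max = lat_mid
        if lon >= lon_mid: lon_min = lon_mid
        else:              lon_max = lon_mid
    return code

# Multi-scale organization: store each control point under its code at several
# levels so that queries can pick the resolution they need.
index = defaultdict(list)
control_points = [{"id": 1, "lat": 34.25, "lon": 108.95}]   # hypothetical record
for pt in control_points:
    for level in (8, 12, 16):
        index[(level, quadtree_code(pt["lat"], pt["lon"], level))].append(pt["id"])
```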
Citations: 1
An Elegant End-to-End Fully Convolutional Network (E3FCN) for Green Tide Detection Using MODIS Data
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486160
Haoyu Yin, Yingjian Liu, Qiang Chen
Using remote sensing (RS) data to monitor the onset, proliferation, and decline of green tide (GT) is of great significance for disaster warning, trend prediction, and decision support. However, remote sensing images vary with observing conditions, which poses major challenges to detection. This paper proposes an accurate green tide detection method based on an Elegant End-to-End Fully Convolutional Network (E3FCN) using Moderate Resolution Imaging Spectroradiometer (MODIS) data. In preprocessing, RS images are first split into subimages by a sliding window. To detect GT pixels more efficiently, the original Fully Convolutional Network (FCN) architecture is modified into E3FCN, which can be trained end to end. The E3FCN model consists of two parts, a contracting path and an expanding path: the contracting path extracts high-level features, and the expanding path produces a pixel-level prediction using skip connections. The prediction for the whole image is generated by merging the predictions of the subimages, which further improves the final performance. Experimental results show that the average precision of E3FCN over the whole data set is 98.06%, compared with 73.27% for Support Vector Regression (SVR), 71.75% for the Normalized Difference Vegetation Index (NDVI), and 64.41% for the Enhanced Vegetation Index (EVI).
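As a rough illustration of the contracting/expanding structure with a skip connection, here is a tiny PyTorch encoder-decoder; the channel widths, depth, and the assumption of 4 input bands are placeholders and not the published E3FCN configuration.

```python
import torch
import torch.nn as nn

class TinyE3FCN(nn.Module):
    def __init__(self, in_ch=4, n_classes=2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, n_classes, 1))

    def forward(self, x):
        f1 = self.enc1(x)                        # contracting path, full resolution
        f2 = self.enc2(self.pool(f1))            # high-level features at 1/2 resolution
        up = self.up(f2)                         # expanding path back to full resolution
        return self.dec(torch.cat([up, f1], 1))  # skip connection -> pixel-level logits

logits = TinyE3FCN()(torch.randn(1, 4, 64, 64))  # e.g. a 4-band 64x64 subimage
```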
Citations: 3
Developing Process Detection of Red Tide Based on Multi-Temporal GOCI Images
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486244
Zhang Feng, Yang Xuying, Sun Xiaoxiao, Du Zhenhong, L. Renyi
Red tide, one of the major marine disasters in coastal waters, exhibits distinct temporal and spatial characteristics and patterns. A better understanding of how red tides evolve supports early prediction and emergency decision-making. The Geostationary Ocean Color Imager (GOCI), with its wide spatial coverage and high temporal resolution, can fully meet the monitoring needs of a rapidly changing red tide. In this paper we analyze the spectral characteristics of red tide water, highly turbid water, and clean water from GOCI imagery and propose a red tide extraction index, RrcH, that incorporates the fluorescence line height (FLH). Comparison with buoy monitoring data validates the accuracy and reliability of the RrcH algorithm. The cases show that the formation of red tides in a highly turbid water environment can be detected and monitored with GOCI, which benefits disaster prevention and mitigation.
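The exact RrcH formulation is the paper's own, but the fluorescence line height it builds on is a standard baseline-subtraction computation; the sketch below assumes GOCI-like band centres at 660, 680, and 745 nm and an arbitrary detection threshold.

```python
import numpy as np

def flh(r660, r680, r745, lam=(660.0, 680.0, 745.0)):
    """Per-pixel fluorescence line height from three reflectance arrays."""
    l1, l2, l3 = lam
    baseline = r660 + (r745 - r660) * (l2 - l1) / (l3 - l1)  # linear baseline at 680 nm
    return r680 - baseline

# Toy usage with random Rayleigh-corrected reflectances (Rrc)
h, w = 4, 4
rrc = {b: np.random.rand(h, w) * 0.05 for b in (660, 680, 745)}
red_tide_score = flh(rrc[660], rrc[680], rrc[745])
candidate_mask = red_tide_score > 0.005   # placeholder threshold, not from the paper
```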
Citations: 3
An Automatic Image Enhancement Method Based on the Improved HCTLS
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486212
Junyu Chen, Jiahang Liu, Chenghu Zhou, F. Zhu, Tieqiao Chen, Hang Zhang
Remote sensing images often suffer from low contrast, and efficient, robust contrast enhancement for them remains a challenge. To meet application requirements, Liu et al. recently proposed a self-adaptive contrast enhancement method (HCTLS) based on the histogram compacting transform (HCT). In that method, gray levels whose frequency falls below a reference value are merged into adjacent levels to obtain a compact level distribution. However, when the merged levels correspond to pixels that form connected regions, the local contrast of those regions decreases or even disappears. This paper presents an improved enhancement method (DPHCT) for remote sensing images based on HCTLS that preserves more local detail and contrast. First, the connected regions in which local contrast is reduced or lost are extracted from the HCT-enhanced result and adaptively decomposed into inner regions and boundary regions. Then pixel values are reconstructed with a unified brightness function to maintain the contrast of the inner connected regions, while a weighted fusion splicing algorithm removes stitching lines and eliminates the border artifacts caused by intensity roughness. Finally, the image is normalized to [0, 255] by a linear stretch. Experimental results indicate that the proposed algorithm not only enhances global contrast but also preserves local contrast and details.
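A minimal sketch of the histogram-compacting idea, assuming low-frequency gray levels are merged into the nearest surviving level and the surviving levels are stretched back over [0, 255]; the reference count and the mapping details are placeholders, not the published HCT/DPHCT algorithm.

```python
import numpy as np

def compact_histogram(img, min_count=50):
    """img: 2-D uint8 array. Returns the remapped image and the 256-entry LUT."""
    hist = np.bincount(img.ravel(), minlength=256)
    keep = hist >= min_count                      # levels that keep their own bin
    keep[np.argmax(hist)] = True                  # always keep at least one level
    kept_levels = np.flatnonzero(keep)

    lut = np.empty(256, dtype=np.uint8)
    for g in range(256):
        nearest = kept_levels[np.argmin(np.abs(kept_levels - g))]  # merge into nearest kept level
        lut[g] = np.searchsorted(kept_levels, nearest)             # pack kept levels contiguously
    # stretch the compacted levels back over the full dynamic range
    lut = (lut.astype(np.float32) * 255 / max(len(kept_levels) - 1, 1)).astype(np.uint8)
    return lut[img], lut

img = (np.random.rand(128, 128) * 80 + 60).astype(np.uint8)   # low-contrast toy image
enhanced, lut = compact_histogram(img)
```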
Citations: 1
A Method of Interactively Extracting Region Objects from High-Resolution Remote Sensing Image Based on Full Connection CRF
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486175
Zhang Chun-sen, Yu Zhen, Hu Yan
For region objects in high-resolution remote sensing images, this paper proposes an interactive region-object extraction method based on fully connected conditional random fields. The foreground model is estimated from user-supplied interaction markers. The input image is first over-segmented with the SLIC algorithm; combining color and texture features, maximum similarity region merging (MSRM) is then used to expand the foreground region and to establish the global information of the image described by the fully connected conditional random field. Model inference is carried out by mean-field approximation implemented with high-dimensional Gaussian filtering, from which the contours of the region objects are obtained. Experimental results show that the method effectively extracts region features such as water, woodland, terraces, and bare land from high-resolution remote sensing images.
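The mean-field inference via high-dimensional Gaussian filtering is the step implemented by the third-party pydensecrf package; the sketch below uses user scribbles as a stand-in for the SLIC + MSRM foreground model described above, and all kernel parameters are placeholder values.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_labels

def refine_with_dense_crf(rgb_uint8, scribbles, n_labels=2, iters=5):
    """rgb_uint8: HxWx3 uint8 image; scribbles: HxW int array, 0 = unknown, 1..n = labels."""
    h, w = scribbles.shape
    crf = dcrf.DenseCRF2D(w, h, n_labels)
    unary = unary_from_labels(scribbles, n_labels, gt_prob=0.8, zero_unsure=True)
    crf.setUnaryEnergy(unary)
    crf.addPairwiseGaussian(sxy=3, compat=3)                     # smoothness kernel
    crf.addPairwiseBilateral(sxy=60, srgb=13,
                             rgbim=np.ascontiguousarray(rgb_uint8), compat=10)
    q = crf.inference(iters)                                     # mean-field updates
    return np.argmax(q, axis=0).reshape(h, w)                    # per-pixel label map
```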
Citations: 2
Multi-Branch Regression Network For Building Classification Using Remote Sensing Images
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486177
Yuanyuan Gui, Xiang Li, Wei Li, Anzhi Yue
Convolutional neural networks (CNNs) are widely used for processing high-resolution remote sensing images in tasks such as segmentation and classification, and have demonstrated excellent performance in recent years. In this paper, a novel segmentation-based classification framework called the Multi-Branch Regression network (MBR-Net) is proposed. The method generates multiple losses from training images at different scales of information. In addition, a complete training strategy for classifying remote sensing images is developed that reduces the influence of imbalanced samples. Experimental results on the Inria aerial dataset demonstrate that the proposed framework yields much better results than the state-of-the-art U-Net and generates fine-grained prediction maps.
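A hedged sketch of "multiple losses at different scales": a toy PyTorch network with a full-resolution head and a half-resolution auxiliary head, trained against resized targets (deep supervision). The branch structure and loss weights are assumptions, not the actual MBR-Net.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSeg(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.head_full = nn.Conv2d(16, 1, 1)   # full-resolution branch
        self.head_half = nn.Conv2d(32, 1, 1)   # half-resolution auxiliary branch

    def forward(self, x):
        f1 = self.stem(x)
        f2 = self.down(f1)
        return self.head_full(f1), self.head_half(f2)

def multi_scale_loss(pred_full, pred_half, target):
    """target: Nx1xHxW binary building mask."""
    t_half = F.interpolate(target, scale_factor=0.5, mode="nearest")
    return (F.binary_cross_entropy_with_logits(pred_full, target)
            + 0.5 * F.binary_cross_entropy_with_logits(pred_half, t_half))

x = torch.randn(2, 3, 64, 64)
y = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = multi_scale_loss(*TwoBranchSeg()(x), y)
```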
Citations: 2
High-precision Centroid Extraction and PSF Calculation on Remote Sensing Image of Point Source Array
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486240
Li Kai, Z. Yongsheng, Z. Zhenchao, Xu Lin
High-precision measurement of geometric and radiometric information in remote sensing images is an important basis for geometric and radiometric processing. Based on image degradation theory, this paper describes a method for generating simulated degraded images of a point source array using prior information. The shortcomings of traditional Point Spread Function (PSF) parameter estimation methods are then analyzed, and a new PSF parameter estimation algorithm is proposed on that basis. Experimental results show that, on simulated degraded images, the accuracy of the point-source geometric center and the full width at half maximum (FWHM) of the PSF obtained by the proposed method is better than that of traditional algorithms. When the SNR is 40 dB, the RMSE of the geometric position of the point source is only 0.01 pixels and the RMSE of the PSF FWHM is only 0.03 pixels. The results further show that multiphase point source arrays can effectively improve the accuracy of the PSF parameters. The paper demonstrates that point sources can provide both high-precision geometric and radiometric information for remote sensing images and are potentially an ideal tool for joint geometric and radiometric calibration.
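A simple illustration of sub-pixel centroid and FWHM estimation from image moments, using the Gaussian relation FWHM = 2*sqrt(2 ln 2)*sigma; the paper's estimator is more elaborate, and the synthetic PSF below is only for demonstration.

```python
import numpy as np

def centroid_and_fwhm(patch):
    """patch: small 2-D array containing one point-source response."""
    patch = patch.astype(np.float64)
    patch = patch - patch.min()                      # crude background removal
    total = patch.sum()
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    cy = (ys * patch).sum() / total                  # intensity-weighted centroid
    cx = (xs * patch).sum() / total
    var = (((ys - cy) ** 2 + (xs - cx) ** 2) * patch).sum() / (2 * total)  # isotropic sigma^2
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * np.sqrt(var)
    return (cy, cx), fwhm

# Toy usage: synthesize a Gaussian PSF and recover its parameters
yy, xx = np.mgrid[0:21, 0:21]
psf = np.exp(-(((yy - 10.3) ** 2 + (xx - 9.7) ** 2)) / (2 * 1.5 ** 2))
print(centroid_and_fwhm(psf))   # approximately ((10.3, 9.7), 2.355 * 1.5)
```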
Citations: 0
Ship Detection by Modified RetinaNet
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486308
Yingying Wang, Wei Li, Xiang Li, Xu Sun
Ship detection in optical remote sensing imagery has been a hot topic in recent years and has achieved promising performance. However, several problems remain when detecting ships of various sizes. The key to precise localization at all scales is to obtain feature maps with both high spatial resolution and rich semantic information. Based on this idea, a modified RetinaNet (M-RetinaNet) is proposed that builds dense connections between shallow and deep feature maps to address the problems caused by ships of different sizes. It consists of a baseline residual network and a modified multi-scale network; the latter includes a top-down pathway and a bottom-up pathway, both built on the multi-scale base network. The benefits of this model are twofold: first, by introducing dense lateral connections from deep to shallow layers, it generates feature maps with high semantic information at every layer; second, it maintains high spatial resolution in the deep layers. Comprehensive evaluation on a ship dataset and comparison with several state-of-the-art approaches demonstrate the effectiveness of the proposed network.
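As a sketch of the top-down plus bottom-up multi-scale neck described above, here is a small PyTorch module with lateral connections in both directions; the backbone channel widths and feature-map sizes are placeholders rather than the authors' M-RetinaNet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoWayFPN(nn.Module):
    def __init__(self, in_chs=(128, 256, 512), out_ch=64):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in in_chs)
        self.smooth = nn.ModuleList(nn.Conv2d(out_ch, out_ch, 3, padding=1) for _ in in_chs)
        self.downsample = nn.ModuleList(nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1)
                                        for _ in in_chs[:-1])

    def forward(self, c3, c4, c5):
        # top-down pathway: propagate semantics from deep to shallow layers
        p5 = self.lateral[2](c5)
        p4 = self.lateral[1](c4) + F.interpolate(p5, scale_factor=2, mode="nearest")
        p3 = self.lateral[0](c3) + F.interpolate(p4, scale_factor=2, mode="nearest")
        p3, p4, p5 = (s(p) for s, p in zip(self.smooth, (p3, p4, p5)))
        # bottom-up pathway: push high-resolution detail back into deep layers
        n3 = p3
        n4 = p4 + self.downsample[0](n3)
        n5 = p5 + self.downsample[1](n4)
        return n3, n4, n5

feats = TwoWayFPN()(torch.randn(1, 128, 64, 64),
                    torch.randn(1, 256, 32, 32),
                    torch.randn(1, 512, 16, 16))
```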
Citations: 7
SAR Image Matching Area Selection Based on Actual Flight Real-Time Image
Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486416
Wang Jianmei, Wang Zhong, Zhang Shaoming, F. Tiantian, Dong Jihui
Matching suitability analysis is a key issue in the INS/SAR integrated navigation mode. Existing suitability-area selection methods use simulated real-time images to compute the matching probability of a scene area and then label it as suitable or unsuitable. A selection model based on simulated real-time images works well when the imaging mode of the simulated image matches that of the real image; otherwise it is impractical. To address this issue, a novel method is proposed in this paper. The sample dataset is built from actual flight real-time images, and a hybrid feature selection method based on D-Score and SVM is used to select the suitability features and build the suitability-area selection model simultaneously. Experimental results show that the consistency between the model's predictions and the experts' labels reaches 81.92%.
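The abstract does not define D-Score, so the sketch below uses a Fisher-style separability score as a stand-in, ranks features with it, and evaluates nested subsets with an SVM via scikit-learn cross-validation; the data and feature count are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))            # 200 scene areas, 12 suitability features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

def separability(x, y):
    """Fisher-style score: between-class mean gap over within-class spread."""
    a, b = x[y == 0], x[y == 1]
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-9)

scores = np.array([separability(X[:, j], y) for j in range(X.shape[1])])
order = np.argsort(scores)[::-1]          # best-ranked features first

best_subset, best_acc = None, 0.0
for k in range(1, X.shape[1] + 1):        # wrapper step: grow the ranked subset
    subset = order[:k]
    acc = cross_val_score(SVC(kernel="rbf", C=1.0), X[:, subset], y, cv=5).mean()
    if acc > best_acc:
        best_subset, best_acc = subset, acc
print(best_subset, round(best_acc, 3))
```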
Citations: 1