
2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP): Latest Publications

PolSAR image classification using discriminative clustering
Pub Date : 2017-05-18 DOI: 10.1109/RSIP.2017.7958798
Haixia Bi, Jian Sun, Zongben Xu
This paper presents a novel unsupervised image classification method for polarimetric synthetic aperture radar (PolSAR) data. The proposed method is based on a discriminative clustering framework that explicitly relies on a discriminative supervised classification technique to perform unsupervised clustering. To implement this idea, we design an energy function for unsupervised PolSAR image classification by combining a supervised softmax regression model with a Markov Random Field (MRF) smoothness constraint. In this model, both the pixel-wise class labels and the classifiers are taken as unknown variables to be optimized. Starting from initial class labels generated by Cloude-Pottier decomposition and the K-Wishart distribution hypothesis, we iteratively optimize the classifiers and class labels by alternately minimizing the energy function with respect to each. Finally, the optimized class labels are taken as the classification result, and the classifiers for the different classes are also derived as a by-product. We apply this approach to real PolSAR benchmark data. Extensive experiments show that our approach effectively classifies PolSAR images in an unsupervised way and produces higher accuracies than the compared state-of-the-art methods.
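The alternating scheme described above can be illustrated with a minimal sketch on synthetic two-class data, assuming plain softmax regression trained by gradient descent; the MRF smoothness term is omitted and a simple feature threshold stands in for the Cloude-Pottier/K-Wishart initialization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-pixel PolSAR features: two Gaussian blobs.
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (100, 3))])

# Crude label initialization (the paper uses Cloude-Pottier / K-Wishart).
labels = (X[:, 0] > X[:, 0].mean()).astype(int)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W, b = np.zeros((3, 2)), np.zeros(2)
for _ in range(10):
    # Step 1: fit the softmax classifier to the current labels.
    for _ in range(50):
        P = softmax(X @ W + b)
        Y = np.eye(2)[labels]
        W -= 0.5 * X.T @ (P - Y) / len(X)
        b -= 0.5 * (P - Y).mean(axis=0)
    # Step 2: relabel every pixel with the current classifier.
    labels = np.argmax(X @ W + b, axis=1)

# Cluster purity against the ground-truth blob split (labels may be flipped).
acc = ((labels[:100] == 0).mean() + (labels[100:] == 1).mean()) / 2
purity = max(acc, 1 - acc)
```

The two unknowns of the paper's energy function, the labels and the classifier parameters, are each optimized while the other is held fixed; this alternation is the essence of discriminative clustering.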
Citations: 5
An enhanced deep convolutional neural network for densely packed objects detection in remote sensing images
Pub Date : 2017-05-18 DOI: 10.1109/RSIP.2017.7958800
Zhipeng Deng, Lin Lei, Hao Sun, H. Zou, Shilin Zhou, Juanping Zhao
Faster Region-based Convolutional Neural Networks (FRCN) have shown great success in object detection in recent years. However, their performance degrades on densely packed objects in real remote sensing applications. To address this problem, an enhanced deep-CNN-based method is developed in this paper. Following the common pipeline of "CNN feature extraction + region proposal + region classification", our method is primarily based on the latest Residual Networks (ResNets) and consists of two sub-networks: an object proposal network and an object detection network. For detecting densely packed objects, the outputs of multi-scale layers are combined to enhance the resolution of the feature maps. Our method is trained on the VHR-10 dataset with limited samples and successfully tested on large-scale Google Earth images, such as aircraft boneyards and tank farms, containing a substantial number of densely packed objects.
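The multi-scale combination step can be sketched as upsample-and-concatenate of two feature maps, with NumPy arrays standing in for CNN layer outputs (the channel counts and the 2x ratio are illustrative, not the paper's exact architecture):

```python
import numpy as np

def upsample2x(fmap):
    # Nearest-neighbour 2x upsampling of a (C, H, W) feature map.
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

fine = np.random.rand(64, 32, 32)     # shallow layer: high resolution
coarse = np.random.rand(128, 16, 16)  # deep layer: low resolution, richer semantics

# Bring the coarse map up to the fine map's resolution and stack channels,
# so small, densely packed objects keep support in the fused feature map.
fused = np.concatenate([fine, upsample2x(coarse)], axis=0)  # (192, 32, 32)
```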
Citations: 24
Airborne Ka-band digital beamforming SAR system and flight test
Pub Date : 2017-05-18 DOI: 10.1109/RSIP.2017.7958791
Hui Wang, Shoulun Dai, Shichao Zheng
In this paper, a Ka-band digital beamforming (DBF) SAR demonstrator system, successfully demonstrated in 2016, is introduced. The system has two modes: strip mode and GMTI mode. In GMTI mode, the receiving antenna of the demonstrator is divided into 24 channels. Eight channels in range are used to realize the DBF-SCORE technique, which improves the signal-to-noise ratio of the system, and three channels in azimuth are used to realize GMTI. The system design and architecture are described. Finally, the flight test results are presented, together with some real-data processing results.
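Range-channel DBF of this kind reduces to weighting each receive channel with a steering vector and summing; a minimal narrowband sketch with 8 channels (the carrier frequency and half-wavelength spacing are assumed values, not the demonstrator's parameters):

```python
import numpy as np

C = 3e8
F = 35e9            # assumed Ka-band carrier (Hz)
LAM = C / F
D = LAM / 2         # assumed half-wavelength element spacing
N_CHAN = 8          # 8 range (elevation) channels, as in the paper

def beamform(snapshots, theta):
    """Steer the array toward angle theta (rad) and sum the channels coherently."""
    k = np.arange(N_CHAN)
    w = np.exp(-1j * 2 * np.pi * D * k * np.sin(theta) / LAM)
    return snapshots @ np.conj(w)

# A unit-amplitude return from 10 degrees: coherent gain equals N_CHAN,
# which is where the SNR improvement of DBF-SCORE comes from.
theta0 = np.deg2rad(10.0)
k = np.arange(N_CHAN)
s = np.exp(-1j * 2 * np.pi * D * k * np.sin(theta0) / LAM)[None, :]
gain = np.abs(beamform(s, theta0))[0]
```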
Citations: 3
Fast vehicle detection in UAV images
Pub Date : 2017-05-18 DOI: 10.1109/RSIP.2017.7958795
Tianyu Tang, Zhipeng Deng, Shilin Zhou, Lin Lei, H. Zou
Fast and accurate vehicle detection in unmanned aerial vehicle (UAV) images remains a challenge, due to their very high spatial resolution and very few annotations. Although numerous vehicle detection methods exist, most cannot achieve real-time detection across different scenes. Recently, deep learning algorithms have achieved excellent detection performance in computer vision, especially the regression-based convolutional neural network YOLOv2, which performs well in both accuracy and speed, outperforming other state-of-the-art detection methods. This paper is the first to investigate the use of YOLOv2 for vehicle detection in UAV images, and it also explores a new method for data annotation. Our method starts with image annotation and data augmentation. The CSK tracking method is used to help annotate vehicles in images captured from simple scenes. Subsequently, the regression-based single convolutional neural network YOLOv2 is used to detect vehicles in UAV images. To evaluate our method, UAV video images were taken over several urban areas, and experiments were conducted on this dataset and the Stanford Drone dataset. The experimental results show that our data preparation strategy is useful and that YOLOv2 is effective for real-time vehicle detection in UAV video images.
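A typical primitive of the data-augmentation step is a horizontal flip with the bounding boxes remapped to match; a small sketch (the [x_min, y_min, x_max, y_max] box convention is an assumption, not necessarily the paper's format):

```python
import numpy as np

def hflip_with_boxes(image, boxes):
    """Horizontally flip an image and its [x_min, y_min, x_max, y_max] boxes."""
    w = image.shape[1]
    flipped = image[:, ::-1].copy()
    fb = boxes.astype(float).copy()
    fb[:, [0, 2]] = w - boxes[:, [2, 0]]  # x_min' = w - x_max, x_max' = w - x_min
    return flipped, fb

img = np.arange(5 * 8).reshape(5, 8)
boxes = np.array([[1, 0, 3, 2]])
fimg, fboxes = hflip_with_boxes(img, boxes)  # box becomes [5, 0, 7, 2]
```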
Citations: 55
A modified faster R-CNN based on CFAR algorithm for SAR ship detection
Pub Date : 2017-05-18 DOI: 10.1109/RSIP.2017.7958815
Miao Kang, Xiangguang Leng, Zhao Lin, K. Ji
SAR ship detection is essential to marine monitoring. Recently, with the development of deep neural networks and the rapid growth of available SAR images, SAR ship detection based on deep neural networks has become a trend. However, multi-scale ships in SAR images cause undesirable differences in features, which decrease the accuracy of ship detection based on deep learning methods. To address this problem, this paper modifies Faster R-CNN, a state-of-the-art object detection network, with the traditional constant false alarm rate (CFAR) algorithm. Taking the object proposals generated by Faster R-CNN as the guard windows of the CFAR algorithm, the method picks up small-sized targets. By reevaluating the bounding boxes that have relatively low classification scores in the detection network, the method achieves better detection performance.
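The CFAR side of the hybrid can be illustrated with a classic 1-D cell-averaging CFAR over a range profile (in the paper the guard windows come from the network's proposals and the data is 2-D; the parameters here are illustrative):

```python
import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, scale=8.0):
    """Cell-averaging CFAR: a cell is a detection if it exceeds `scale` times
    the mean of the training cells outside its guard window."""
    n = len(power)
    half = n_train // 2 + n_guard
    det = np.zeros(n, dtype=bool)
    for i in range(half, n - half):
        train = np.concatenate([power[i - half : i - n_guard],
                                power[i + n_guard + 1 : i + half + 1]])
        det[i] = power[i] > scale * train.mean()
    return det

rng = np.random.default_rng(1)
profile = rng.exponential(1.0, 200)  # sea-clutter-like background power
profile[100] += 50.0                 # one bright, ship-like return
hits = ca_cfar(profile)
```

Because the threshold adapts to the local clutter average, the false-alarm rate stays roughly constant across clutter levels, which is the property the paper exploits.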
Citations: 168
Monitoring ghost cities at prefecture level from multi-source remote sensing data
Pub Date : 2017-05-18 DOI: 10.1109/RSIP.2017.7958810
Xiaolong Ma, Zhaoting Ma, X. Tong, Sicong Liu
Monitoring urban spatial information is important for tracking the process of urbanization and keeping a balance between human activity and the environment. To extend the application of remote sensing technology to the topic of ghost cities, an effective method is proposed to monitor and evaluate the "ghost city" phenomenon in prefecture-level cities of China by taking advantage of multi-source remote sensing datasets, namely Defense Meteorological Satellite Program-Operational Linescan System (DMSP-OLS) nighttime light data and auxiliary data such as Landsat images and land-cover/land-use datasets. Based on several indexes related to urban expansion and landscape pattern, experiments were conducted with the proposed approach in Weihai, as classified from statistics and Landsat images. Compared with the Optimized-Sample-Selection (OSS) method, the proposed method achieved better performance, with relatively fewer errors and a better visual display of the spatial dynamics of urban expansion in Weihai during 2000-2010, thus revealing the specific characteristics of urban expansion patterns in those periods.
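The paper does not spell out its index formulas, but a representative urban-expansion index, the average annual expansion rate between two mapping dates, is easy to state (the area figures below are invented for illustration):

```python
def annual_expansion_rate(area_start, area_end, years):
    """Average annual urban expansion rate in percent,
    assuming compound (geometric) growth between the two dates."""
    return ((area_end / area_start) ** (1.0 / years) - 1.0) * 100.0

# Hypothetical built-up areas (km^2) mapped for 2000 and 2010.
rate = annual_expansion_rate(120.0, 180.0, 10)
```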
Citations: 3
The development of deep learning in synthetic aperture radar imagery
Pub Date : 2017-05-18 DOI: 10.1109/RSIP.2017.7958802
C. Schwegmann, W. Kleynhans, B. P. Salmon
The usage of remote sensing to observe environments necessitates interdisciplinary approaches to derive effective, impactful research. One remote sensing technique, Synthetic Aperture Radar (SAR), has shown significant benefits over traditional remote sensing techniques, but at the price of additional complexity. To cope with this, researchers have begun to apply advanced machine learning techniques known as deep learning to Synthetic Aperture Radar data. Deep learning represents the next stage in the evolution of machine intelligence, placing the onus of identifying salient features on the network rather than the researcher. This paper outlines machine learning techniques as previously used on SAR; what deep learning is and where it fits in compared to traditional machine learning; what benefits can be derived by applying it to Synthetic Aperture Radar imagery; and finally some obstacles that still need to be overcome in order to obtain consistent and long-term results from deep learning in SAR.
Citations: 6
Change detection of SAR images based on supervised contractive autoencoders and fuzzy clustering
Pub Date : 2017-05-18 DOI: 10.1109/RSIP.2017.7958819
Jie Geng, Hongyu Wang, Jianchao Fan, Xiaorui Ma
In this paper, supervised contractive autoencoders (SCAEs) combined with fuzzy c-means (FCM) clustering are developed for change detection in synthetic aperture radar (SAR) images, aiming to take advantage of deep neural networks to capture changed features. Given two original SAR images, a Lee filter is used in preprocessing and the difference image (DI) is obtained by the log-ratio method. Then, FCM is adopted to analyze the DI, which yields pseudo-labels for guiding the training of the SCAEs. Finally, the SCAEs learn changed features from the bitemporal images and the DI, which yields discriminative features and improves detection accuracy. Experiments on three datasets demonstrate that the proposed method outperforms several related approaches.
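The front end of this pipeline, a log-ratio DI followed by FCM pseudo-labelling, is compact enough to sketch end-to-end on synthetic speckled images; the gamma speckle model, the change factor, and the minimal m = 2 FCM are illustrative assumptions, and the SCAE stage is omitted:

```python
import numpy as np

def fcm_1d(x, c=2, iters=60):
    """Minimal fuzzy c-means (fuzzifier m=2) on a 1-D sample vector."""
    centers = np.linspace(x.min(), x.max(), c)  # spread the initial centers
    for _ in range(iters):
        d2 = (x[:, None] - centers[None, :]) ** 2 + 1e-12
        u = (1.0 / d2) / (1.0 / d2).sum(axis=1, keepdims=True)  # memberships
        centers = (u**2).T @ x / (u**2).sum(axis=0)             # weighted means
    d2 = (x[:, None] - centers[None, :]) ** 2 + 1e-12
    u = (1.0 / d2) / (1.0 / d2).sum(axis=1, keepdims=True)
    return u, centers

rng = np.random.default_rng(2)
img1 = rng.gamma(16.0, 1.0 / 16.0, (32, 32))         # speckled "before" image
img2 = img1 * rng.gamma(16.0, 1.0 / 16.0, (32, 32))  # "after": same scene, new speckle
img2[8:16, 8:16] *= 10.0                             # inject a changed patch

di = np.abs(np.log(img2 / img1)).ravel()             # log-ratio difference image
u, centers = fcm_1d(di)
# Pixels most attached to the larger-valued center form the pseudo "changed" class.
changed = (u.argmax(axis=1) == centers.argmax()).reshape(32, 32)
```

The log-ratio makes multiplicative speckle additive, which is why it is the standard difference operator for SAR change detection.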
Citations: 20
3-D imaging of high-speed moving space target
Pub Date : 2017-05-18 DOI: 10.1109/RSIP.2017.7958797
Yu-Xue Sun, Ying Luo, Qun Zhang, Song Zhang
The high-speed motion of space targets introduces distortion and migration into the range profile, which has a negative effect on three-dimensional (3-D) imaging of targets. In this paper, a 3-D imaging method for high-speed moving space targets based on parametric sparse representation is proposed. First, the impact of high speed on the range profile is analyzed. Then, based on an L-shaped three-antenna interferometric system, a dynamic joint parametric sparse representation model of the echoes from the three antennas is established. The dictionary matrix is refined by iterative estimation of velocity. Finally, interferometric processing is conducted to obtain the 3-D image of the target scatterers. Simulation results verify the effectiveness of the proposed method.
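The interferometric step at the end recovers a scatterer coordinate from the phase of the conjugate product between a pair of antennas; a one-scatterer sketch under a far-field phase model (the wavelength, baseline, and range are assumed round numbers, and the sparse-representation stage is omitted):

```python
import numpy as np

LAM = 0.03    # assumed wavelength (m)
B = 1.0       # assumed interferometric baseline (m)
R = 1000.0    # assumed slant range (m)

# Far-field model: cross-range position x produces an interferometric
# phase phi = 2*pi*B*x / (LAM*R) between the two antennas.
x_true = 0.75
phi = 2 * np.pi * B * x_true / (LAM * R)

s1 = np.exp(1j * 0.4)            # return at antenna 1 (arbitrary common phase)
s2 = s1 * np.exp(-1j * phi)      # antenna 2 sees the extra path-length phase

phi_est = np.angle(s1 * np.conj(s2))   # conjugate product cancels the common phase
x_est = phi_est * LAM * R / (2 * np.pi * B)
```

With two orthogonal baselines (the L-shape), the same inversion gives two cross-range coordinates, and range completes the 3-D position.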
Citations: 0
Hyperspectral image classification based on spectral-spatial feature extraction
Pub Date : 2017-05-18 DOI: 10.1109/RSIP.2017.7958808
Zhen Ye, Li-ling Tan, Lin Bai
A novel hyperspectral classification algorithm based on spectral-spatial feature extraction is proposed. First, spectral-spatial features are extracted by Gabor transform in the PCA-projected space. Following that, the Gabor-feature bands are partitioned into multiple subsets. Afterwards, the adjacent features in each subset are fused. Finally, the fused features are processed by recursive filtering before being fed into a support vector machine (SVM) classifier. Experimental results demonstrate that the proposed algorithm substantially outperforms traditional and state-of-the-art methods.
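The Gabor stage can be sketched as a small oriented filter bank applied to a hypothetical first-principal-component band; the per-pixel magnitude responses are the kind of spectral-spatial features that would go on to the SVM (the kernel size, frequency, and sigma are illustrative):

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Real Gabor kernel: Gaussian envelope times an oriented cosine carrier."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def filter_image(img, kern):
    """Circular 2-D convolution via FFT, output the same size as the input."""
    pad = np.zeros_like(img)
    kh, kw = kern.shape
    pad[:kh, :kw] = kern
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

# A stand-in "first PC" band: vertical stripes of spatial frequency 1/8.
img = np.cos(2 * np.pi * 0.125 * np.arange(64))[None, :] * np.ones((64, 1))

# Four orientations; stacked magnitudes form a per-pixel feature vector.
feats = np.stack([np.abs(filter_image(img, gabor_kernel(15, 0.125, t, 3.0)))
                  for t in np.linspace(0, np.pi, 4, endpoint=False)], axis=-1)
```

The 0-degree filter is tuned to the stripes' frequency and orientation, so its response dominates the bank, which is exactly the selectivity that makes Gabor features useful spatial descriptors.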
Citations: 8