
2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC): Latest Publications

Analyzing Gully Planform Changes in GIS Based on Multi-level Topological Relations
Pub Date : 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177461
Feng Guoqiang, Leng Liang, Ye Yinghui, Han Dong-liang
Topological relations describe qualitative geometric position relations between spatial objects in the geospatial world, and play important roles in spatial query, spatial analysis, and spatial reasoning. They can be applied to describe the morphological changes of real objects, such as changes of cadastral parcels, rivers, and water systems. Gully planform changes (GPCs) reflect the state of surface soil erosion, so it is important and valuable to describe GPCs in detail. In this paper, building on a hierarchical topological relation description method and the features of GPCs in GIS, we propose a simple hierarchical topological relation description method for GPCs. The method describes GPCs completely, and is more concise and efficient at describing them than the earlier hierarchical method.
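Although the paper's own hierarchical description scheme is not reproduced here, the kind of qualitative relation it builds on can be computed directly. The sketch below, a minimal illustration assuming the shapely library and made-up polygon coordinates, derives the DE-9IM topological relation between a gully planform at two survey dates.

```python
# Illustrative sketch (not the authors' method): comparing a gully polygon
# at two survey dates via the DE-9IM topological relation, using shapely.
# The coordinates below are made-up placeholders.
from shapely.geometry import Polygon

gully_t1 = Polygon([(0, 0), (4, 0), (4, 2), (0, 2)])        # planform at time t1
gully_t2 = Polygon([(-1, -1), (5, -1), (5, 3), (-1, 3)])    # expanded planform at t2

# The DE-9IM matrix encodes how interiors, boundaries, and exteriors intersect.
print("DE-9IM:", gully_t1.relate(gully_t2))

# Named predicates give a coarse top-level classification of the change.
for name, pred in [("equals", gully_t1.equals),
                   ("within", gully_t1.within),
                   ("contains", gully_t1.contains),
                   ("overlaps", gully_t1.overlaps)]:
    if pred(gully_t2):
        print("top-level relation:", name)   # prints "within": the gully grew
```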
{"title":"Analyzing Gully Planform Changes in GIS Based on Multi-level Topological Relations","authors":"Feng Guoqiang, Leng Liang, Ye Yinghui, Han Dong-liang","doi":"10.1109/ICIVC50857.2020.9177461","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177461","url":null,"abstract":"Topological relations can be used to describe qualitative geometric position relations between spatial objects in geospatial world, which plays important roles in spatial query, spatial analysis and spatial reasoning. People can apply topological relations to describe the morphological changes of real objects, such as changes of cadastral parcels, rivers, water systems, etc. Gully planform changes (GPCs) reflect the state of surface soil erosion, so it is important and valuable to describe GPCs in detail. In this paper, based on a hierarchical topological relation description method and combined with the features of GPCs in GIS, we propose a simple hierarchical topological relationship description method to describe GPCs. This method can be used to completely describe GPCs, and is more concise and efficient than the former hierarchical topological relation description method in describing GPCs.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"29 1","pages":"292-295"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82513028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploiting Sparse Topics Mining for Temporal Event Summarization
Pub Date : 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177457
Zhen Yang, Yingzhe Yao, Shanshan Tu
The information explosion, in both cyberspace and the real world, has created pressing needs for comprehensive summaries of information. The challenge in constructing a quality summary lies in filtering out information of low relevance and mining highly sparse relevant topics from a vast sea of data. This is a typical imbalanced learning task, and a precise summary of a temporal event requires an accurate description and definition of the useful and the redundant information. In response to this challenge, we introduce: (1) a uniform framework for temporal event summarization with minimal-residual-optimization matrix factorization as its key part; and (2) a novel neighborhood preserving semantic measure (NPS) to capture the sparse candidate topics under that low-rank matrix factorization model. To evaluate the effectiveness of the proposed solution, a series of experiments is conducted on an annotated KBA corpus. The results show that the proposed solution improves the quality of temporal summarization compared with the established baselines.
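The minimal-residual factorization itself is not specified in the abstract; as a rough stand-in, the sketch below mines sparse topics from a toy corpus with ordinary low-rank NMF from scikit-learn. The corpus, component count, and hyperparameters are all illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch, not the paper's minimal-residual method: mining a
# small number of topics from a document-term matrix with low-rank NMF.
# The toy corpus is a placeholder for the annotated KBA stream.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = ["storm hits coast", "coast storm damage reported",
        "election results announced", "storm relief effort starts"]

tfidf = TfidfVectorizer().fit_transform(docs)           # documents x terms
model = NMF(n_components=2, init="nndsvd", random_state=0)
W = model.fit_transform(tfidf)                          # document-topic weights
H = model.components_                                   # topic-term weights

# Residual of the low-rank approximation; the paper minimizes a related
# residual term to isolate the sparse, event-relevant topics.
print("reconstruction error:", model.reconstruction_err_)
```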
{"title":"Exploiting Sparse Topics Mining for Temporal Event Summarization","authors":"Zhen Yang, Yingzhe Yao, Shanshan Tu","doi":"10.1109/ICIVC50857.2020.9177457","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177457","url":null,"abstract":"Information explosion, both in cyberspace world and real world, nowadays has brought about pressing needs for comprehensive summary of information. The challenge for constructing a quality one lies in filtering out information of low relevance and mining out highly sparse relevant topics in the vast sea of data. It is a typical imbalanced learning task and we need to achieve a precise summary of temporal event via an accurate description and definition of the useful information and redundant information. In response to such challenge, we introduced: (1) a uniform framework of temporal event summarization with minimal residual optimization matrix factorization as its key part; and (2) a novel neighborhood preserving semantic measure (NPS) to capture the sparse candidate topics under that low-rank matrix factorization model. To evaluate the effectiveness of the proposed solution, a series of experiments are conducted on an annotated KBA corpus. The results of these experiments show that the solution proposed in this study can improve the quality of temporal summarization as compared with the established baselines.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"074 1","pages":"322-331"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89799160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Straw Burning Detection Method Based on Improved Frame Difference Method and Deep Learning
Pub Date : 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177456
Shiwei Wang, Feng Yu, Changlong Zhou, Minghua Jiang
Straw burning seriously pollutes the air, and the pollution can only be stopped by locating where the burning occurs. Detection can start from two aspects: flame and smoke. Because straw burning is usually accompanied by strong smoke, we determine whether straw is burning through smoke. Existing smoke detection methods all have various shortcomings, such as ignoring the dynamic characteristics of smoke and relying on inefficient, complex processing. This paper therefore proposes a smoke detection method based on an improved frame difference method and Faster R-CNN: the improved frame difference method first extracts candidate regions, and the Faster R-CNN model then performs smoke detection. For the extracted candidate areas, this paper proposes a variety of schemes to expand the candidate areas, ensuring that complete smoke information is obtained to the maximum extent; the best expansion scheme is determined experimentally. Experiments show that the improved frame difference method has an obvious effect: compared with the plain Faster R-CNN method, the maximum accuracy rate is improved by 10.6%.
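As a sketch of the first stage only (the Faster R-CNN detector and the paper's specific expansion schemes are omitted), the following OpenCV code extracts and pads candidate motion regions with a plain two-frame difference. The file names, threshold, and 20% padding ratio are assumptions for illustration.

```python
# Minimal sketch of the candidate-region stage, under assumed file names;
# the Faster R-CNN classification stage is omitted.
import cv2

prev = cv2.imread("frame_prev.jpg", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_curr.jpg", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(curr, prev)                          # inter-frame difference
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
mask = cv2.dilate(mask, None, iterations=2)             # close small gaps

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    # Expand each candidate box (20% here is a guessed ratio) so the full
    # smoke plume is retained before it is passed to the detector.
    pad_w, pad_h = int(0.2 * w), int(0.2 * h)
    x0, y0 = max(x - pad_w, 0), max(y - pad_h, 0)
    print("candidate region:", (x0, y0, w + 2 * pad_w, h + 2 * pad_h))
```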
{"title":"Straw Burning Detection Method Based on Improved Frame Difference Method and Deep Learning","authors":"Shiwei Wang, Feng Yu, Changlong Zhou, Minghua Jiang","doi":"10.1109/ICIVC50857.2020.9177456","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177456","url":null,"abstract":"Straw burning has serious pollution to the air. Only by finding the location of straw burning can we stop the pollution caused by straw burning. The detection of straw burning can start from two aspects: flame and smoke. Because straw burning is usually accompanied by strong smoke, we decide to determine whether there is straw burning through smoke. The existing smoke detection methods all has various shortcomings, such as not using the dynamic characteristics of smoke, and inefficient and complex processing. Therefore, this paper proposes a smoke detection method based on improved frame difference method and Faster R-CNN. For smoke detection, first uses the improved frame difference method to extracts candidate regions, and then uses the Faster R-CNN model for smoke detection. For the extracted candidate areas, this paper proposes a variety of schemes to expands the candidate areas to ensure that the complete smoke information could be obtained to the maximum extent. Through the experiment, we get the best expansion scheme. Experiments shows that the improved frame difference method has obvious effects, compared to Faster R-CNN model method, the maximum accuracy rate has improved by 10.6%.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"37 1","pages":"29-33"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88345097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A New Method for Polygon Detection Based on Hough Parameter Space and USAN Region
Pub Date : 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177469
Li Shupei, Z. Hui, Zhang Zhisheng, Xia Zhijie
This paper proposes a new approach that combines the Hough Transform (HT) and corner detection to detect polygons, considering integrated rather than individual characteristics. We establish a Polygon Parameter Space (PPS) to fit and characterize polygons, consisting of angles, coordinates, USAN values, and the intersections of every two lines. Firstly, the Canny operator is used to extract an edge map, HT is applied to detect lines along the edges of the polygon shape, and the PPS is computed. Secondly, corner detection among intersections is realized by comparing the USAN value with the angle of intersection; an adaptive threshold and an adjusted brightness of the USAN nucleus are introduced to obtain accurate vertices from the corners. Finally, we propose an algorithm based on Depth First Search (DFS) to fit the set of vertices, whether they form convex polygons (CVPs) or concave polygons (CCPs), according to the parameters in the PPS. The experimental results show that the proposed approach detects polygons effectively with less running time and higher accuracy, and has an advantage in detecting CVP and CCP shapes with broken vertices.
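A minimal sketch of the opening steps, assuming OpenCV and a placeholder input image: Canny edges, Hough line detection, and pairwise line intersections as candidate vertices. The USAN-based corner filtering and DFS vertex fitting that distinguish the paper's method are not reproduced here.

```python
# Illustrative first stage only: edges -> Hough lines -> line intersections.
import cv2
import numpy as np

img = cv2.imread("polygon.png", cv2.IMREAD_GRAYSCALE)   # assumed input image
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=80)

def intersect(l1, l2):
    """Intersection of two lines in (rho, theta) form, or None if parallel."""
    (r1, t1), (r2, t2) = l1, l2
    a = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(a)) < 1e-6:
        return None
    return np.linalg.solve(a, np.array([r1, r2]))       # (x, y)

if lines is not None:
    params = [tuple(l[0]) for l in lines]
    for i in range(len(params)):
        for j in range(i + 1, len(params)):
            p = intersect(params[i], params[j])
            if p is not None:
                print("candidate vertex:", p)
```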
{"title":"A New Method for Polygon Detection Based on Hough Parameter Space and USAN Region","authors":"Li Shupei, Z. Hui, Zhang Zhisheng, Xia Zhijie","doi":"10.1109/ICIVC50857.2020.9177469","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177469","url":null,"abstract":"This paper propose a new approach that combine Hough Transform (HT) and corner detection to detect polygons, which consider integrated characteristics not the individual characteristics. We establish a Polygon Parameter Space (PPS) to fit and characterize polygons, which consist of angles, coordinates, USAN values and every two lines of intersections. Firstly, canny operator is used to extract edges map, applied HT to detect line along edges of polygon shape and compute PPS. Secondly, corner detection among intersections is realized by comparing USAN value with angle of intersections, an adaptive threshold and adjusted brightness of nucleus of USAN is introduced to obtain accurate vertices from corners. Finally, we propose an algorithm based on Deep First Search (DFS) to fit the set of vertices regardless convex polygons (CVPs) or concave polygons (CCPs) according to parameters in PPS. The experimental results show that the proposed approach can effectively detect polygons with a less running time and higher accuracy, and shows the advantage of detecting the CVP and CCP shapes of broken vertices.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"1 1","pages":"44-49"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76291715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring the Properties of Points Generation Network
Pub Date : 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177470
Di Chen, Yi Wu
With the development of deep learning, learning-based 3D reconstruction has attracted substantial attention and various single-image 3D reconstruction networks have been proposed. However, due to self-occlusion, the information captured in a single image is highly limited, resulting in inaccurate and unstable reconstruction results. In this paper, a feature combination module is proposed to enable existing single-image 3D reconstruction networks to perform 3D reconstruction from multiview images. In addition, we study the impact of the number of input multiview images and of network output points on reconstruction quality, in order to determine how many input images and output points a reasonable reconstruction requires. In experiments, point clouds are generated with different numbers of input images and output points. The results show that the Chamfer distance decreases by 20%-30% with an optimal number of five input multiview images and at least 1000 output points.
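The Chamfer distance used as the evaluation metric has a compact definition; below is a minimal numpy version, with random point clouds standing in for the predicted and ground-truth sets (the generation network itself is not shown).

```python
# Minimal numpy definition of the symmetric Chamfer distance between two
# point sets; random clouds are placeholders for predicted/ground-truth data.
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N,3) and q (M,3)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
pred, gt = rng.random((1000, 3)), rng.random((1000, 3))
print("chamfer:", chamfer_distance(pred, gt))
```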
{"title":"Exploring the Properties of Points Generation Network","authors":"Di Chen, Yi Wu","doi":"10.1109/ICIVC50857.2020.9177470","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177470","url":null,"abstract":"With the development of deep learning, learning-based 3D reconstruction has attracted a substantial amount of attention and various single-image 3D reconstruction networks have been proposed. However, due to self-occlusion, the information captured in a single image is highly limited, resulting in inaccuracy and instability in reconstruction results. In this paper, a feature combination module is proposed to enable existing single-image 3D reconstruction networks to perform 3D reconstruction from multiview images. In addition, we study the impact of the number of the input multiview images as well as the network output points on reconstruction quality, in order to determine the required number of the input multiview images and the output points for reasonable reconstruction. In experiment, point cloud generations with different number of input images and output points are conducted. Experimental results show that the Chamfer distance decreases by 20%∼30% with the optimal number of input multiview images of five and at least 1000 output points.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"39 6 1","pages":"272-277"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80911067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improved Three-Frame-Difference Algorithm for Infrared Moving Target
Pub Date : 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177468
X. Luo, Ke-bin Jia, Pengyu Liu, Daoquan Xiong, Xiuchen Tian
An improved three-frame-difference algorithm is proposed. The algorithm contains two sub-algorithms: a double-layer three-frame-difference algorithm, and a history location data statistics and analysis algorithm. The double-layer three-frame-difference algorithm fills in incomplete parts of the target, and the history location data statistics and analysis algorithm eliminates noise. Two examples are chosen, containing one target (about 10*40 pixels) and two targets (about 80*160 pixels) respectively. The results prove that the improved three-frame-difference algorithm resolves the problems of the traditional one-layer three-frame-difference algorithm and obtains accurate results.
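For orientation, this is the classic single-layer three-frame difference that the paper improves on, sketched with OpenCV under assumed frame file names: two consecutive absolute differences are thresholded and AND-ed, which often leaves the incomplete target masks the double-layer variant is designed to fill.

```python
# Baseline single-layer three-frame difference (the method being improved),
# with placeholder file names and a guessed threshold value.
import cv2

f1 = cv2.imread("ir_frame1.png", cv2.IMREAD_GRAYSCALE)
f2 = cv2.imread("ir_frame2.png", cv2.IMREAD_GRAYSCALE)
f3 = cv2.imread("ir_frame3.png", cv2.IMREAD_GRAYSCALE)

d12 = cv2.absdiff(f2, f1)
d23 = cv2.absdiff(f3, f2)
_, b12 = cv2.threshold(d12, 20, 255, cv2.THRESH_BINARY)
_, b23 = cv2.threshold(d23, 20, 255, cv2.THRESH_BINARY)

motion = cv2.bitwise_and(b12, b23)     # target mask at the middle frame; often
                                       # incomplete, which the double layer fills
cv2.imwrite("motion_mask.png", motion)
```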
{"title":"Improved Three-Frame-Difference Algorithm for Infrared Moving Target","authors":"X. Luo, Ke-bin Jia, Pengyu Liu, Daoquan Xiong, Xiuchen Tian","doi":"10.1109/ICIVC50857.2020.9177468","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177468","url":null,"abstract":"An improved three-frame-difference algorithm is proposed. The algorithm contains two sub-algorithm‐‐‐‐double-layer three-frame-difference algorithm, and history location data statistics and analysis algorithm. Double-layer three-frame-difference algorithm can fill the incomplete parts, and the history location data statistics and analysis algorithm can eliminate noise. Two examples containing one target (with size about 10*40 pixels) and two targets (with size about 80*160 pixels) respectively are chosen. Results of them prove that the improved three-frame-difference algorithm can resolve the problems of traditional one-layer three-frame-difference algorithm, and get the accurate results.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"66 1","pages":"108-112"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80218848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
A Novel 3D Surface Reconstruction Method with Posterior Constraints of Edge Detection
Pub Date : 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177450
Hongtao Wu, Ying Meng, Bingqing Niu
This paper proposes a novel 3D surface reconstruction method with posterior constraints from edge detection, applicable to general digital cameras. The intrinsic parameters are calibrated with Zhang's calibration method. After matching the images taken at two different orientations, the fundamental matrix and the corresponding motion parameters between the two orientations are estimated, the optical center coordinate system of the left camera is selected as the world coordinate system, and the projection matrices corresponding to the two orientations are obtained. Finally, the 3D coordinates of object feature points are computed and the object surface is displayed with VRML technology. The system is simple and, in addition, the proposed method is suitable for general digital cameras.
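A minimal sketch of the geometry pipeline the abstract describes, assuming matched point arrays pts1/pts2 and an intrinsic matrix K from Zhang calibration are already available; the edge-detection posterior constraints and VRML display are not modeled.

```python
# Sketch under stated assumptions: pts1, pts2 are (N, 2) float arrays of
# matched points (N >= 8), K is the (3, 3) intrinsic matrix.
import cv2
import numpy as np

def reconstruct(pts1, pts2, K):
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    E = K.T @ F @ K                                     # essential matrix
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)      # motion between views

    # Left-camera optical-center frame is the world frame, as in the paper.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T                     # (N, 3) world points
```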
{"title":"A Novel 3D Surface Reconstruction Method with Posterior Constraints of Edge Detection","authors":"Hongtao Wu, Ying Meng, Bingqing Niu","doi":"10.1109/ICIVC50857.2020.9177450","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177450","url":null,"abstract":"This paper proposed a novel 3D surface reconstruction method with posterior constraints of edge detection applying to general digital camera. The intrinsic parameters are calibrated with Zhang calibration method. After matching the images taken at two different orientations, the fundamental matrix and corresponding motion parameters by two different orientations are estimated, selecting optical center coordinate system of left camera as world coordinate system, and the projection matrix corresponding the two orientations is obtained. At last, the 3D coordinates of object feature point is computed and object surface is displayed with VRML technology. This system is simple, in addition, the proposed method is suit for general digital camera.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"441 1","pages":"55-58"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82918862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Recognition Method for Underwater Acoustic Target Based on DCGAN and DenseNet
Pub Date : 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177493
Yingjie Gao, Yuechao Chen, Fangyong Wang, Yalong He
The scarcity and access difficulty of labeled underwater acoustic samples have created a bottleneck for introducing deep learning methods into recognition tasks for underwater acoustic targets. In this paper, a recognition method based on the combination of a Deep Convolutional Generative Adversarial Network (DCGAN) and Densely Connected Convolutional Networks (DenseNet) is proposed to address these problems. While meeting the input-form requirements of the deep learning model, a sample set of wavelet time-frequency graphs for underwater acoustic targets was constructed, combined with prior knowledge from conventional sonar signal processing. A DCGAN model for generating underwater acoustic samples and a DenseNet model for recognizing underwater acoustic targets are designed, and the quality of the generated samples is optimized through three stages of iterative training, thereby expanding the training set and improving the recognition of underwater acoustic targets.
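The abstract does not give the network configuration, so the following PyTorch generator is a generic DCGAN sketch with illustrative layer sizes, producing 64x64 single-channel images standing in for wavelet time-frequency graphs; it is not the paper's architecture.

```python
# Generic DCGAN generator sketch (layer sizes are assumptions, not the
# paper's): maps a latent vector to a 64x64 single-channel image.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),   # 64x64 output
        )

    def forward(self, z):                            # z: (batch, z_dim, 1, 1)
        return self.net(z)

fake = Generator()(torch.randn(8, 100, 1, 1))        # 8 synthetic samples
print(fake.shape)                                    # torch.Size([8, 1, 64, 64])
```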
{"title":"Recognition Method for Underwater Acoustic Target Based on DCGAN and DenseNet","authors":"Yingjie Gao, Yuechao Chen, Fangyong Wang, Yalong He","doi":"10.1109/ICIVC50857.2020.9177493","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177493","url":null,"abstract":"The scarcity and access difficulty of labeled underwater acoustic samples have created a bottleneck in introducing deep learning methods into recognition tasks of underwater acoustic targets. In this paper, a recognition method based on the combination of Deep Convolutional Generative Adversarial Network (DCGAN) and Densely Connected Convolutional Networks (DenseNet) for underwater acoustic targets is proposed aiming at these problems. On the basis of meeting the adaption requirements of the deep learning model for the input form, the sample set of wavelet time-frequency graph for the underwater acoustic target was constructed, combined with the prior knowledge of conventional sonar signal processing. The DCGAN model for generation of underwater acoustic sample and the DenseNet model for recognition of underwater acoustic target are designed, and the quality of generated samples is optimized through three stages of iterative training, thus expanding the training set, and improving the recognition effect of underwater acoustic target.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"1 1","pages":"215-221"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82970421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Improved Digital Orthorectification Map Generation Approach Using the Integrating of ZY3 and GF3 Image
Pub Date : 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177473
Li Guo, Xia Wang, Mingyu Yue
Integrating ZY3 and GF3 satellite images enables long-term acquisition of DOM (digital orthorectification map) products. However, point selection in SAR images is a difficult task, and edge accuracy cannot be guaranteed if the optical and SAR images are produced individually, so an improved DOM generation approach is proposed to ensure the accuracy and efficiency of DOM generation. 29 scenes of ZY3 imagery and 20 scenes of GF3 imagery over Hainan island were selected as experimental images. The results show that 73.47% of the orthorectification results have a horizontal accuracy better than 1 pixel (10 m) and 26.53% better than 2 pixels (20 m), which meets the horizontal accuracy requirement of 1:50000 scale surveying and mapping in China. At the same time, an improved approach using dodging and mosaic line editing is proposed to integrate the orthorectified images; the color transition in the ZY3 data region and the hue transition in the GF3 data region are more natural, and little manual editing is required. The efficiency and accuracy of the improved DOM generation approach can therefore be guaranteed over large areas, serving as a reference for users generating DOMs from integrated ZY3 and GF3 imagery.
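The reported accuracy statistics can be reproduced from check-point residuals; the sketch below, with placeholder coordinates in metres, grades horizontal errors against the 1-pixel (10 m) and 2-pixel (20 m) thresholds used in the paper.

```python
# Minimal sketch of the implied accuracy assessment; the coordinate pairs
# are made-up placeholders for measured vs. reference check points (metres).
import numpy as np

measured = np.array([[100.2, 200.1], [305.5, 410.8], [512.0, 620.3]])
reference = np.array([[100.0, 200.0], [305.0, 411.0], [511.0, 621.0]])

err = np.linalg.norm(measured - reference, axis=1)   # horizontal error (m)
print("within 1 pixel (10 m): %.2f%%" % (np.mean(err <= 10) * 100))
print("within 2 pixels (20 m): %.2f%%" % (np.mean(err <= 20) * 100))
```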
{"title":"Improved Digital Orthorectification Map Generation Approach Using the Integrating of ZY3 and GF3 Image","authors":"Li Guo, Xia Wang, Mingyu Yue","doi":"10.1109/ICIVC50857.2020.9177473","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177473","url":null,"abstract":"The integrating of ZY3 and GF3 satellite image can achieved long-term acquisition of DOM product, however, the selection of point in SAR image was a difficult task, and the edge error cannot be guaranteed if the optical and SAR images were produced individually, an improved DOM product generation approach was proposed in order to ensure the accuracy and efficiency of DOM generation. 29 scenes of ZY3 image and 20 scenes of GF3 image in Hainan island were selected as experimental image, and the results showed that the horizontal accuracy of 73.47% of orthorectification result were better than 1 pixel (10 m), and 26.53% of orthorectification result were better than 2 pixels (20 m), which can meet the horizontal accuracy requirement of 1:50000 scale surveying and mapping in China. At the same time, an improved approach using the dodging and mosaic line editing was proposed to integrate the orthrectification image, it can be seen from this article that the color transition of ZY3 data region and the hue transition of GF3 data region was more natural, and the manual editing was not big. Therefore, the efficiency and accuracy of the improved approach of DOM generation proposed in this paper can be guaranteed in large areas, which can be used as a reference for users when generating the DOM using the integrating image of ZY3 and GF3.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"50 1","pages":"82-85"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84158370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0