
Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition: Latest Publications

A Spatial Attention-Enhanced Multi-Timescale Graph Convolutional Network for Skeleton-Based Action Recognition
Shuqiong Zhu, Xiaolu Ding, Kai Yang, Wai Chen
How to effectively extract discriminative spatial and temporal features is important for skeleton-based action recognition. However, current research on skeleton-based action recognition mainly focuses on the natural connections of the skeleton and the original temporal sequences of skeleton frames, ignoring the inter-related relations of non-adjacent joints and the varying velocities of action instances. To overcome these limitations and thereby enhance spatial and temporal feature extraction for action recognition, we propose a novel Spatial Attention-Enhanced Multi-Timescale Graph Convolutional Network (SA-MTGCN) for skeleton-based action recognition. Specifically, as the relations of non-adjacent but inter-related joints are beneficial for action recognition, we propose an Attention-Enhanced Spatial Graph Convolutional Network (A-SGCN) that uses both the natural connections and the inter-related relations of joints. Furthermore, a Multi-Timescale (MT) structure is proposed to enhance temporal feature extraction by combining different network layers to model the different velocities of action instances. Experimental results on the two widely used NTU and Kinetics datasets demonstrate the effectiveness of our approach.
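The idea of augmenting the skeleton's natural adjacency with learned links between non-adjacent joints can be sketched as a single graph-convolution layer. This is a minimal illustration with a hypothetical additive attention mask, not the authors' A-SGCN implementation:

```python
import numpy as np

def a_sgcn_layer(X, A, attention, W):
    """One spatial graph-convolution step with an additive attention mask.
    `attention` is a hypothetical learned (N, N) mask that adds links
    between non-adjacent but inter-related joints."""
    N = A.shape[0]
    A_aug = A + np.eye(N) + attention          # natural links + self-loops + attention
    D_inv = np.diag(1.0 / A_aug.sum(axis=1))   # degree normalization
    return D_inv @ A_aug @ X @ W               # normalized neighborhood aggregation

# toy skeleton: three joints in a chain, joints 0 and 2 not adjacent
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
att = np.zeros((3, 3))
att[0, 2] = att[2, 0] = 0.5                    # attention links the end joints
X = np.array([[1., 0.], [0., 1.], [2., 2.]])   # 2-D feature per joint
out = a_sgcn_layer(X, A, att, W=np.eye(2))
print(out.shape)  # (3, 2)
```

With `att` set to zero the layer reduces to a plain skeleton graph convolution; the mask is what lets joint 0 aggregate features from the non-adjacent joint 2.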
DOI: 10.1145/3430199.3430213 (published 2020-06-26)
Citations: 0
Map relative localization based on road lane matching with Iterative Closest Point algorithm
A. Evlampev, I. Shapovalov, S. Gafurov
Accurate and reliable localization is necessary for autonomous driving. Existing GNSS-based localization systems cannot always provide lane-level accuracy. This paper proposes a method that improves vehicle localization by matching road lanes recognized from a camera against a digital map. Iterative Closest Point (ICP) matching is performed on the generated point clouds to minimize lateral error. A neural network is used for lane detection; the detections are post-processed and fitted to a polynomial. Modifications that improve ICP matching are described. Finally, we perform an experiment with a GPS RTK signal as ground truth and demonstrate that the proposed method achieves a position error of less than 0.5 m for vehicle localization.
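A generic point-to-point 2D ICP of the kind described (nearest-neighbour matching followed by a closed-form SVD alignment) can be sketched as follows; this is an illustration, not the paper's modified ICP:

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal point-to-point ICP in 2D: match each source point to its
    nearest destination point, then solve the rigid alignment in closed
    form (Kabsch), repeating for a fixed number of iterations."""
    src = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # nearest-neighbour correspondence
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # closed-form rigid alignment via SVD
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return src, R_total, t_total

# lane points observed with a 0.4 m lateral offset: ICP recovers the offset
lane = np.array([[x, 0.0] for x in np.linspace(0, 10, 21)])
observed = lane + np.array([0.0, 0.4])
aligned, R, t = icp_2d(observed, lane)
print(float(np.abs(aligned[:, 1]).max()))
```

For a straight lane only the lateral component is observable, which is exactly the error the paper aims to minimize.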
DOI: 10.1145/3430199.3430229 (published 2020-06-26)
Citations: 2
An Image Watermarking Scheme Based on Voting Mechanism in Balanced Multiwavelet Domain
Shaobao Wu, Zhihua Wu, Guodong Wang, Dongsheng Shen
A digital image watermarking algorithm based on the balanced multiwavelet transform and a voting mechanism is proposed in this paper. The algorithm embeds pre-processed binary watermark image bits into the low-pass sub-band coefficients of the multiwavelet transform domain. Because the four low-pass sub-bands carry virtually identical energy, the binary watermark bits are embedded four times, once into the coefficients of each low-pass sub-band. Since each low-pass coefficient block has different characteristics, the largest singular value of each selected block is adaptively quantized with a different quantization step to embed the watermark information. Finally, a voting mechanism is introduced during watermark extraction. Experimental results show that the watermarking algorithm not only has good invisibility but is also robust against common image processing operations such as JPEG compression, noise addition, and filtering.
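The per-block quantization and the extraction-time vote can be illustrated with a scalar quantization-index sketch. The coefficients and `step` values below are made up; the actual scheme quantizes the largest singular value of each multiwavelet sub-band block:

```python
def embed_bit(value, bit, step):
    """Quantize a coefficient (stand-in for a block's largest singular
    value) so that its quantization cell parity encodes one watermark bit."""
    q = int(value / step)
    if q % 2 != bit:
        q += 1
    return q * step + step / 2.0   # center of the chosen cell

def extract_bit(value, step):
    """Recover the bit from the parity of the quantization cell."""
    return int(value / step) % 2

def vote(bits):
    """Majority vote over the four redundant embeddings."""
    return int(sum(bits) > len(bits) / 2)

# embed bit 1 into four sub-band coefficients with different steps
coeffs = [13.7, 22.1, 9.4, 30.8]
steps = [2.0, 2.5, 1.5, 3.0]
marked = [embed_bit(c, 1, s) for c, s in zip(coeffs, steps)]
# distort one copy enough to flip its bit, then recover by voting
marked[2] += 0.9 * steps[2]
recovered = vote([extract_bit(m, s) for m, s in zip(marked, steps)])
print(recovered)  # 1
```

The vote is what gives robustness: even with one of the four redundant copies corrupted, the majority still reads the correct bit.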
DOI: 10.1145/3430199.3430240 (published 2020-06-26)
Citations: 0
Efficient Deep CNN-BiLSTM Model for Network Intrusion Detection
Jay Sinha, M. Manollas
The need for Network Intrusion Detection Systems has risen as cloud technologies have become mainstream. With ever-growing network traffic, Network Intrusion Detection is a critical part of network security, and a highly efficient NIDS is a must, given that new varieties of attacks arise frequently. These intrusion detection systems are built on either a pattern-matching system or an AI/ML-based anomaly detection system. Pattern-matching methods usually have high False Positive Rates, whereas AI/ML-based methods rely on finding a metric/feature, or a correlation between a set of metrics/features, to predict the possibility of an attack. The most common of these, such as KNN and SVM, operate on a limited set of features, have lower accuracy, and still suffer from high False Positive Rates. In this paper, we propose a deep learning model combining the distinct strengths of a Convolutional Neural Network and a Bi-directional LSTM to learn both spatial and temporal features of the data. We use the publicly available NSL-KDD and UNSW-NB15 datasets to train and test the model. The proposed model offers a high detection rate and a comparatively low False Positive Rate, and performs better than many state-of-the-art Network Intrusion Detection systems based on Machine Learning/Deep Learning models.
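A CNN feeding a bidirectional LSTM can be sketched in PyTorch as below. The layer sizes and kernel choices are illustrative assumptions, not the authors' configuration; only the overall CNN-then-BiLSTM shape follows the abstract:

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Illustrative CNN + BiLSTM intrusion-detection model."""
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        # 1-D convolution extracts local (spatial) feature patterns
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # bidirectional LSTM models ordering across the feature sequence
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, n_features)
        z = self.conv(x.unsqueeze(1))      # (batch, 32, n_features // 2)
        out, _ = self.lstm(z.transpose(1, 2))
        return self.fc(out[:, -1, :])      # last timestep -> class logits

model = CNNBiLSTM(n_features=41, n_classes=2)   # 41 features as in NSL-KDD
logits = model(torch.randn(8, 41))
print(logits.shape)  # torch.Size([8, 2])
```

In practice the output layer would have one logit per attack category rather than a binary normal/attack split.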
DOI: 10.1145/3430199.3430224 (published 2020-06-26)
Citations: 34
An Improved Method of Object Detection Based on Chip
Ji-Xiang Wei, Tongwei Lu, Zhimeng Xin
Although object detection methods based on convolutional neural networks are widespread, they share a problem: an unmeasurable proportion of object information is lost during convolution. The reason is that as the network downsamples to obtain more abstract features, each pixel in the feature map corresponds to a larger region of the original image, so less content can be referred to. To handle this problem, an improved object detection method based on YOLOv3 is demonstrated. Our approach is composed of three steps: an initial detector, an adaptive chip generator, and a secondary detector. First, it determines which chips in the image are worth detecting. Second, it screens the best associations to reduce the number of duplicate detections across these chips. Finally, detection runs on each chip and the outputs are summarized. As a result, this method achieves significant performance gains, especially on medium and large objects.
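Summarizing per-chip outputs requires mapping each detection back to image coordinates and suppressing duplicates where chips overlap. A greedy NMS sketch of that merge step (the box format and threshold are illustrative, not the paper's association-screening procedure):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def merge_chip_detections(dets, thr=0.5):
    """Shift per-chip boxes into image coordinates, then greedily keep
    the highest-scoring box and drop overlapping duplicates."""
    # dets: list of (chip_offset, box_in_chip, score)
    boxes = sorted(
        (((x1 + ox, y1 + oy, x2 + ox, y2 + oy), s)
         for (ox, oy), (x1, y1, x2, y2), s in dets),
        key=lambda p: -p[1])
    kept = []
    for box, score in boxes:
        if all(iou(box, k) < thr for k, _ in kept):
            kept.append((box, score))
    return kept

# the same object seen by two overlapping chips -> one detection survives
dets = [((0, 0), (40, 40, 80, 80), 0.9),
        ((30, 0), (12, 41, 51, 79), 0.8)]
merged = merge_chip_detections(dets)
print(len(merged))  # 1
```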
DOI: 10.1145/3430199.3430236 (published 2020-06-26)
Citations: 0
Experimental and Theoretical Scrutiny of the Geometric Derivation of the Fundamental Matrix
T. Basta
In this paper, we prove mathematically that the geometric derivation of the fundamental matrix F of the two-view reconstruction problem is flawed. Although the fundamental-matrix approach is quite classic, it is still taught in universities around the world; thus, analyzing the derivation of F remains a non-trivial subject. The geometric derivation of E is based on the cross product of vectors in R3. The cross product (or vector product) of two vectors x = ⟨x1, x2, x3⟩ and y = ⟨y1, y2, y3⟩ in R3 is x × y. The relationship between the skew-symmetric matrix of a vector t in R3 and the cross product is [t]×y = t × y for any vector y in R3. In the derivation of the essential matrix we have E = [t]×R, which is the result of replacing t × R by [t]×R; but t × R, the cross product of a vector t and a 3×3 matrix R, is an undefined operation, and therefore the essential-matrix derivation is flawed. The derivation of F is based on the assertion that the set of all points in the first image and their corresponding points in the second image are projectively equivalent, and that therefore there exists a homography Hπ between the two images. This assertion does not hold for 3D non-planar scenes.
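The identity [t]×y = t × y that the abstract cites, and the fact that [t]×R is an ordinary (well-defined) matrix-matrix product, can be checked numerically:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]x satisfying skew(t) @ y == np.cross(t, y)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

t = np.array([1.0, 2.0, 3.0])
y = np.array([-2.0, 0.5, 4.0])
# the defining identity of the skew matrix
assert np.allclose(skew(t) @ y, np.cross(t, y))

# E = [t]x R is an ordinary matrix product of two 3x3 matrices,
# whereas a "cross product" of a vector with a 3x3 matrix has no definition.
R = np.eye(3)
E = skew(t) @ R
print(E)
```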
DOI: 10.1145/3430199.3430227 (published 2020-06-26)
Citations: 2
Image Retrieval Method of Bayonet Vehicle Based on the Improvement of Deep Learning Network
Zilong Wang, Ling Xiong, Yang Chen
To address the low accuracy, slow computation, large storage requirements, and difficulty of multi-target detection in existing bayonet (checkpoint) vehicle search, a multi-target staged image retrieval method based on Faster R-CNN preprocessing is proposed. First, a selective search network is used to obtain the probability vectors of the picture. Then, a compact semantic hash code of the image is used as a fingerprint encoding, allowing fast comparison that narrows the search to a candidate pool. Finally, the quantized hash matrix of the query image is rapidly compared against those of the images in the pool, and voting selects the most similar images from the pool as the output. Experimental results show that the design achieves end-to-end training. The average accuracy (0.829) and retrieval response time (0.698 s) are significantly improved over the conventional hash-based retrieval method on the BIT-Vehicle dataset, meeting the image-retrieval needs of the big-data era.
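The coarse-to-fine idea of filtering by hash fingerprints into a candidate pool and then ranking the pool can be sketched with Hamming distances. The filenames, hash strings, and radius are invented for illustration; the paper's pipeline operates on learned semantic hash codes:

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary hash strings."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_hash, database, radius=2, top_k=3):
    """Two-stage retrieval: a coarse Hamming-radius filter builds a
    candidate pool, then the pool is ranked by distance and the closest
    entries are returned."""
    pool = [(img, h) for img, h in database if hamming(query_hash, h) <= radius]
    pool.sort(key=lambda p: hamming(query_hash, p[1]))
    return [img for img, _ in pool[:top_k]]

db = [("car_a.jpg", "10110010"),
      ("car_b.jpg", "10110011"),
      ("truck.jpg", "01001101"),
      ("car_c.jpg", "10100010")]
results = retrieve("10110010", db)
print(results)  # ['car_a.jpg', 'car_b.jpg', 'car_c.jpg']
```

The coarse filter is what keeps the response time low: distant items (here `truck.jpg`) are discarded before any fine-grained comparison.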
DOI: 10.1145/3430199.3430209 (published 2020-06-26)
Citations: 0
A Mining Frequent Itemsets Algorithm in Stream Data Based on Sliding Time Decay Window
Xin Lu, Shaonan Jin, Xun Wang, Jiao Yuan, Kun Fu, Ke Yang
To reduce the time and memory consumption of frequent itemset mining in stream data, and to weaken the impact of historical transactions on data patterns, this paper proposes a frequent itemset mining algorithm, SWFIUT-stream, based on a sliding decay time window. In this algorithm, a time attenuation factor is introduced to assign a different weight to each window unit, weakening the influence of older units on the data patterns. To realize fast stream-data mining, a two-dimensional table is used to scan and decompose the itemsets synchronously, so that all frequent itemsets in the window are mined, and distributed parallel computation is carried out on the Storm framework. Experimental data show that the algorithm consumes less time and memory than conventional algorithms when mining frequent itemsets in stream data.
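The time-decay weighting of window units can be sketched with a simple enumeration-based counter. This is a simplified illustration (brute-force subset enumeration, made-up decay factor and threshold), not the SWFIUT-stream two-dimensional-table implementation:

```python
from itertools import combinations

def decayed_supports(windows, decay=0.5, min_support=1.0):
    """Time-decayed itemset counting: window unit i (0 = oldest) gets
    weight decay**(len(windows) - 1 - i), so old transactions fade."""
    counts = {}
    n = len(windows)
    for i, transactions in enumerate(windows):
        w = decay ** (n - 1 - i)
        for t in transactions:
            # count every non-empty itemset of the transaction
            for r in range(1, len(t) + 1):
                for itemset in combinations(sorted(t), r):
                    counts[itemset] = counts.get(itemset, 0.0) + w
    return {s: c for s, c in counts.items() if c >= min_support}

windows = [
    [{"a", "b"}, {"a"}],        # oldest unit, weight 0.25
    [{"a", "b"}],               # weight 0.5
    [{"b", "c"}, {"a", "b"}],   # newest unit, weight 1.0
]
frequent = decayed_supports(windows, decay=0.5, min_support=1.0)
print(frequent[("a", "b")])  # 1.75
```

The decayed support of ("a", "b") is 0.25 + 0.5 + 1.0 = 1.75: the same itemset counts less the further back in the stream it occurred.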
DOI: 10.1145/3430199.3430226 (published 2020-06-26)
Citations: 1
An Improved OFDM Time-Frequency Synchronization Algorithm Based on CAZAC Sequence
Xinming Xie, Bowei Wang, Pengfei Han
An improved OFDM time-frequency synchronization algorithm based on the CAZAC (Constant Amplitude Zero Auto-Correlation) sequence is proposed to solve the problem that traditional algorithms find it difficult to balance timing synchronization accuracy against computational complexity. The CAZAC sequence is introduced to improve the structure of the training sequence used by conventional algorithms. The conjugate symmetry of the receiver's training sequence in the time domain is used for timing estimation and fractional frequency-offset estimation. The effect of the integer frequency offset on the CAZAC sequence is then analyzed, and integer frequency-offset estimation is completed by computation over the CAZAC sequence. The algorithm achieves higher timing synchronization accuracy with lower computational complexity, and its frequency-offset estimation is also more accurate than that of traditional algorithms. Theory and simulation prove that the proposed algorithm has good timing estimation and frequency-offset estimation performance over a multipath fading channel.
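The defining CAZAC properties (constant amplitude, zero cyclic autocorrelation at nonzero lags) can be verified with a Zadoff-Chu sequence, the standard CAZAC family; the root index and length below are illustrative, not the paper's parameters:

```python
import cmath

def zadoff_chu(u, N):
    """Root-u Zadoff-Chu sequence of odd length N: constant amplitude,
    and zero cyclic autocorrelation at nonzero lags when gcd(u, N) = 1."""
    return [cmath.exp(-1j * cmath.pi * u * n * (n + 1) / N) for n in range(N)]

def cyclic_autocorr(s, lag):
    """Cyclic autocorrelation of a complex sequence at the given lag."""
    N = len(s)
    return sum(s[n] * s[(n + lag) % N].conjugate() for n in range(N))

s = zadoff_chu(u=25, N=63)   # gcd(25, 63) = 1
peak = abs(cyclic_autocorr(s, 0))   # full correlation at lag 0
side = abs(cyclic_autocorr(s, 5))   # vanishes at a nonzero lag
print(round(peak), round(side, 9))  # 63 0.0
```

The sharp lag-0 peak against zero sidelobes is what makes the sequence attractive for timing estimation, and the constant amplitude keeps the transmitted training symbol's peak-to-average power low.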
DOI: 10.1145/3430199.3430232 (published 2020-06-26)
Citations: 0
Detection of Key Structure of Auroral Images Based on Weakly Supervised Learning
Qian Wang, Tongxin Xue, Yi Wu, Fan Hu, Pengfei Han
Weakly supervised learning attracts wide interest and research because of the large savings in labeling costs. To reduce the high cost of manual labeling in aurora image detection research, an aurora multi-scale network for an aurora image dataset is proposed based on weakly supervised learning. First, the feature-learning mechanism of dynamic hierarchical mimicking is adopted to improve the classification performance of the convolutional neural network on aurora images. Then, a multi-scale constraint is imposed on the network through multi-branch inputs and outputs of different sizes. The final output is a class activation map of the auroral image with more ideal results, realizing key-structure detection of auroral images from image-level annotation alone. Experiments show that the algorithm in this paper effectively improves the class-activation-map results for auroral images and has an ideal detection effect on the key structures of auroral images.
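A class activation map of the kind the network outputs can be computed, in the standard CAM formulation, as a classifier-weighted sum of the last convolutional feature maps. This is a generic sketch with random stand-in tensors, not the paper's multi-scale network:

```python
import numpy as np

def class_activation_map(features, fc_weights, cls):
    """CAM: weight the final conv feature maps by one class's classifier
    weights and sum over channels, giving a coarse localization map."""
    # features: (C, H, W) last-conv activations; fc_weights: (n_classes, C)
    cam = np.tensordot(fc_weights[cls], features, axes=1)   # -> (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()                                    # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
features = rng.random((8, 7, 7))   # 8 channels on a 7x7 spatial grid
weights = rng.random((2, 8))       # 2 classes, e.g. key structure vs. background
cam = class_activation_map(features, weights, cls=1)
print(cam.shape, float(cam.max()))  # (7, 7) 1.0
```

Because the map needs only image-level class labels to train the classifier weights, it localizes structures without any pixel-level annotation, which is exactly the labeling saving the paper targets.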
{"title":"Detection of Key Structure of Auroral Images Based on Weakly Supervised Learning","authors":"Qian Wang, Tongxin Xue, Yi Wu, Fan Hu, Pengfei Han","doi":"10.1145/3430199.3430216","DOIUrl":"https://doi.org/10.1145/3430199.3430216","url":null,"abstract":"Weakly supervised learning has attracted wide interest and research because it greatly reduces labeling costs. To address the high cost of manual labeling in aurora image detection research, an aurora multi-scale network for aurora image datasets is proposed based on weakly supervised learning. First, the feature learning mechanism of dynamic hierarchical mimicking is adopted to improve the classification performance of the convolutional neural network on aurora images. Then, a multi-scale constraint is imposed on the network through multi-branch inputs and outputs of different sizes. The network's final class activation maps for auroral images are closer to the ideal, and key structure detection of auroral images based on image-level annotation is realized. Experiments show that the proposed algorithm effectively improves the class activation map results for auroral images and achieves a good detection effect on the key structures of auroral images.","PeriodicalId":371055,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","volume":"170 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116109044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
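The class activation maps referenced in the abstract follow a standard construction: the final convolutional feature maps are combined using the classifier weights of the target class, after global average pooling. A minimal NumPy sketch of that construction (the shapes, channel counts, and variable names are illustrative, not the paper's network):

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Compute a class activation map (CAM).

    features:   (C, H, W) final conv-layer feature maps
    fc_weights: (num_classes, C) weights of the classifier layer
                that follows global average pooling
    Returns an (H, W) map highlighting class-discriminative regions,
    normalized to [0, 1].
    """
    # Weighted sum over channels for the chosen class.
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)          # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize to [0, 1]
    return cam

# Toy example with random features standing in for a real network.
rng = np.random.default_rng(0)
feats = rng.random((8, 7, 7))
weights = rng.random((5, 8))
cam = class_activation_map(feats, weights, class_idx=2)
assert cam.shape == (7, 7)
```

Thresholding such a map yields coarse localization from image-level labels alone, which is the weakly supervised detection setting the paper builds on.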
Journal
Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition