
2021 IEEE International Conference on Computer Science, Artificial Intelligence and Electronic Engineering (CSAIEE): Latest Publications

Application of deep intensive learning in RNA secondary structure prediction
Jiaming Gu
With the spread of the novel coronavirus epidemic, virus detection and research have gradually become hot research directions. A virus consists mainly of a protein shell and ribonucleic acid (RNA). RNA is an important information-carrying biopolymer in biological cells that plays a key role in regulatory processes and transcription control. Studying RNA-related conditions, including those caused by the human immunodeficiency virus and the novel coronavirus, and even Alzheimer's and Parkinson's disease, requires an understanding of RNA structure and function. As a result, the study of RNA is becoming increasingly important in a range of applications spanning biology and medicine. The function of an RNA molecule is determined primarily by the thermodynamically driven tertiary folding of its nucleotide sequence. Hydrogen bonds between nucleotides provide the main driving force for forming the tertiary structure, and the smaller folds around these hydrogen bonds are called the RNA secondary structure. The tertiary structure determines the function and nature of RNA, but traditional experimental methods for determining it, such as X-ray crystal diffraction and nuclear magnetic resonance, while accurate and reliable, are labor-intensive and time-consuming. Accurate determination of the secondary structure strongly influences the study of tertiary structure and deeper analyses, and exploring RNA secondary structure with artificial intelligence can deliver more accurate, rapid, and efficient results. In the current field, artificial-intelligence approaches to RNA secondary structure prediction usually rely on deep learning, genetic algorithms, and other means, obtaining predictions through neural-network fitting. Such approaches are supervised, requiring a large amount of RNA secondary structure data to be collated before the study, and the trained models are not interpretable.
Since RNA folding is driven primarily by thermodynamics, can we train a model that learns the principles of RNA folding on its own from limited structural data? The main research direction of this paper is to explore RNA secondary structure autonomously using deep reinforcement learning, which transforms the prediction of the secondary structure into an intelligent sequential decision-making process that searches for optimal decisions. Given the limited training set and computing power, this paper explores the feasibility and development potential of deep reinforcement learning algorithms for RNA secondary structure prediction.
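The decision-process framing the abstract describes can be illustrated with a toy model in which the state is the set of bases paired so far, an action pairs two free bases, and each valid pair earns a reward. This is a hypothetical sketch, not the paper's actual formulation: the reward scheme, the `min_loop` constraint, and the greedy rollout are all illustrative assumptions.

```python
# Toy RNA-pairing decision process: state = current pairing, action = (i, j),
# reward = +1 per chemically valid pair formed (Nussinov-style counting).
VALID_PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
               ("G", "U"), ("U", "G")}

def legal_actions(seq, paired, min_loop=3):
    """Enumerate (i, j) actions: both bases free, a valid pair, and
    separated by at least min_loop unpaired positions (hairpin constraint)."""
    n = len(seq)
    return [(i, j) for i in range(n) for j in range(i + min_loop + 1, n)
            if paired[i] is None and paired[j] is None
            and (seq[i], seq[j]) in VALID_PAIRS]

def step(paired, action):
    """Apply a pairing action; return (new_state, reward)."""
    i, j = action
    new = list(paired)
    new[i], new[j] = j, i
    return new, 1.0

seq = "GGGAAAUCCC"
state = [None] * len(seq)
total = 0.0
# Greedy rollout: a trained agent would choose actions by learned value.
while legal_actions(seq, state):
    state, r = step(state, legal_actions(seq, state)[0])
    total += r
```

A reinforcement-learning agent would replace the greedy policy with one learned from the cumulative reward, which is the transformation into "intelligent decision-making" the abstract refers to.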
DOI: 10.1109/CSAIEE54046.2021.9543134 · Published: 2021-08-20
Citations: 1
Detecting Adversarial Samples with Neuron Coverage
Huayang Cao, Wei Kong, Xiaohui Kuang, Jianwen Tian
Deep learning technologies have shown impressive performance in many areas. However, deep learning systems can be deceived by intentionally crafted inputs known as adversarial samples. This inherent vulnerability limits their application in safety-critical domains such as automatic driving and military systems. As a defense measure, various approaches have been proposed to detect adversarial samples, but their efficiency must be improved further to meet practical application requirements. In this paper, we propose a neuron-coverage-based approach that detects adversarial samples by distinguishing the distribution features of activated neurons in the classifier layer. Analysis and experiments show that this approach achieves high accuracy with relatively low computation and storage cost.
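The core idea, recording which classifier-layer neurons activate on clean data and flagging inputs whose activation pattern deviates, can be sketched as follows. The binary-pattern representation, the activation threshold, and the exact-match scoring rule are simplifying assumptions for illustration, not the paper's method.

```python
# Neuron-coverage sketch: build a reference set of activation patterns from
# clean samples, then flag inputs whose pattern was never observed.

def activation_pattern(logits, threshold=0.0):
    """Binary coverage vector: which classifier-layer neurons fired."""
    return tuple(1 if v > threshold else 0 for v in logits)

def build_reference(clean_logits):
    """Collect the activation patterns seen on clean samples."""
    return {activation_pattern(l) for l in clean_logits}

def is_adversarial(logits, reference):
    """Flag a sample whose activation pattern lies outside the reference."""
    return activation_pattern(logits) not in reference

# Invented classifier-layer outputs for two clean samples.
clean = [[2.1, -0.3, 0.5], [1.8, -0.1, 0.4]]
ref = build_reference(clean)
```

A real detector would compare distributions statistically rather than exact-match discrete patterns, but the pipeline shape (reference from clean data, per-sample deviation test) is the same.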
DOI: 10.1109/CSAIEE54046.2021.9543451 · Published: 2021-08-20
Citations: 0
A Size Optimization Scheme of SRAM 6T Memory Cell
Wei Guo, Lirong Hu, Dong Yu
Threshold-voltage variations due to random doping fluctuation, together with the increasing disturbance caused by the large number of cells on a chip, affect the stability of the read, write, and hold operations of an SRAM circuit. As the smallest and most numerous module on the chip, the SRAM memory cell's stability, whether it is used in a configuration chain or in a storage array, is a prerequisite for correct operation of the whole chip. Most articles on the robustness of SRAM circuits focus on changing the circuit structure, for example using 7T, 8T, or 10T memory cells, or adding read and write assist circuits. However, such structural changes significantly increase the area and the chip leakage current. In this paper, the impact of transistor size and threshold voltage on operational stability is analyzed through quantitative analysis and simulation, and the optimal SRAM 6T memory-cell size is selected for a 28 nm CMOS process.
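The selection problem the abstract poses, choosing the smallest cell that still meets a stability target, can be sketched abstractly. Everything here is a placeholder: `stability_proxy` is a made-up monotone function standing in for a real static-noise-margin simulation, and the candidate ratios and margin are invented, not the paper's values.

```python
# Toy size-selection sweep: pick the smallest cell ratio (driver W/L over
# access W/L, hence smallest area) whose stability proxy clears a margin.

def stability_proxy(cell_ratio):
    """Placeholder: read stability grows with diminishing returns as the
    driver transistor widens relative to the access transistor."""
    return 1.0 - 1.0 / (1.0 + cell_ratio)

def pick_cell_ratio(candidates, margin=0.7):
    """Smallest candidate ratio meeting the stability margin, else None."""
    for cr in sorted(candidates):
        if stability_proxy(cr) >= margin:
            return cr
    return None

best = pick_cell_ratio([1.0, 1.5, 2.0, 2.5, 3.0])
```

In the paper this sweep would be driven by SPICE-level simulation of read/write/hold margins at 28 nm rather than a closed-form proxy.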
DOI: 10.1109/CSAIEE54046.2021.9543209 · Published: 2021-08-20
Citations: 0
The Impact of Compiler Optimizations on Symbolic Execution
Jiaorui Shen
Software testing plays an important role in software engineering, and dynamic symbolic execution (DSE) has become a popular technique in white-box testing. However, efficiency remains a major challenge for DSE, and compiler optimizations can have a large impact on it in some cases. In this paper, we first introduce two small examples that visually show the impact of compiler optimizations on DSE's constraint solving and path exploration. We then conduct a series of experiments using KLEE and the LLVM compiler to test real C programs from Coreutils-8.32. We use a simple model to assess the impact of different compiler optimizations, and we also study combinations of optimizations. The results show that compiler optimizations can have both positive and negative effects, and some optimizations, such as FI, may have greater influence than others. Moreover, some combinations of compiler optimizations improve the efficiency of DSE more than any single optimization, which merits further study.
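The "simple model to assess the impact" could take a form like the following: score each optimization setting by paths explored per unit of solver time, normalized to an unoptimized baseline. This is a hypothetical reconstruction; the metric, the setting names, and all numbers are invented for illustration, not taken from the paper's experiments.

```python
# Toy assessment model: relative DSE efficiency per optimization setting,
# measured as paths explored per second, normalized against -O0.

def efficiency(paths, seconds):
    return paths / seconds

def relative_scores(runs, baseline="O0"):
    """Map each setting to its efficiency relative to the baseline run."""
    base = efficiency(*runs[baseline])
    return {opt: efficiency(p, t) / base for opt, (p, t) in runs.items()}

# Invented (paths, seconds) measurements for three settings.
runs = {"O0": (100, 50.0), "O1": (140, 50.0), "FI": (90, 60.0)}
scores = relative_scores(runs)
```

A score above 1.0 marks an optimization that helped DSE; below 1.0, one that hurt it, matching the abstract's observation that effects go both ways.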
DOI: 10.1109/CSAIEE54046.2021.9543388 · Published: 2021-08-20
Citations: 0
Lightweight convolutional neural network of YOLO v3-Tiny algorithm on FPGA for target detection
Jitong Xin, Meiyi Cha, Luojia Shi, Chunyu Long, Hairong Li, Fangcong Wang, Peng Wang
With the rapid development of artificial intelligence, convolutional neural networks have been deployed across a range of industries, with applications in national defense security, transportation monitoring, and medical research. However, constraints on speed and power consumption still greatly limit convolutional neural networks in edge-computing and mobile-device deployments. For this reason, we design a lightweight convolutional neural network based on an FPGA. We use the YOLOv3-Tiny algorithm, which is fast to execute, light in computation, and small in size, making it suitable for deployment on embedded devices such as FPGAs. The design uses 16-bit fixed-point quantization, dedicated data storage, and a convolution hardware circuit written in Verilog hardware description language; it consumes 512 DSPs, takes 37.037 ms to recognize a single image frame, and draws 9.611 W. The system basically achieves the design goal of target detection.
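The 16-bit fixed-point quantization the design relies on can be sketched in software. The abstract does not state where the binary point sits, so the Q8.8 split below (8 integer bits, 8 fractional bits) is an assumption for illustration.

```python
# 16-bit fixed-point quantization sketch (assumed Q8.8 format): scale by
# 2^FRAC_BITS, round, and saturate to the signed 16-bit range.
FRAC_BITS = 8

def to_fixed(x):
    """Quantize a float to a signed 16-bit fixed-point integer."""
    q = int(round(x * (1 << FRAC_BITS)))
    return max(-(1 << 15), min((1 << 15) - 1, q))  # saturate to int16

def to_float(q):
    """Recover the real value a fixed-point word represents."""
    return q / (1 << FRAC_BITS)

q = to_fixed(3.14159)
```

On the FPGA the same arithmetic is done in Verilog with integer multipliers, which is what lets the convolution fit in DSP blocks instead of floating-point units.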
DOI: 10.1109/CSAIEE54046.2021.9543128 · Published: 2021-08-20
Citations: 0
Analysis of Stock Price Based On the XGBoost Algorithm With EMA-19 and SMA-15 Features
Gao Yuan, Tong Zhang, Wanlu Zhang, Hongsheng Li
At present, stock market prediction is one of the most popular and valuable research areas in finance. More and more scholars are engaged in stock market forecasting, exploring the laws of stock market movement, and new science and technology are constantly being applied to stock price prediction. In this paper, we propose a stock closing-price prediction model based on the XGBoost algorithm with GridSearchCV hyperparameter tuning. Experimental results show that our approach performs better than other machine learning methods: the RMSE is 1.39%, 2.43%, and 8.33% lower than that of the SVM, neural network, and LightGBM algorithms, respectively. In addition, we rank the importance of the features that affect the stock closing price and obtain some interesting and instructive findings. For example, the EMA-9 feature has the largest impact on stock prices and the SMA-15 feature the smallest, which can guide our future work.
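The two moving-average feature families named in the title are standard and can be computed as below. The tiny price series and the window of 3 are illustrative stand-ins; the paper's features use windows of 15 (SMA) and longer EMAs on real closing prices.

```python
# Moving-average features of the kind fed to the XGBoost model.

def sma(prices, window):
    """Simple moving average over the trailing window."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def ema(prices, window):
    """Exponential moving average with the usual 2/(window+1) smoothing."""
    alpha = 2 / (window + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

prices = [10, 11, 12, 13, 14]   # invented closes
sma3 = sma(prices, 3)           # toy window; the paper uses SMA-15
ema3 = ema(prices, 3)
```

In the paper's pipeline these series (lagged so only past data is used) become feature columns, and GridSearchCV sweeps XGBoost hyperparameters such as tree depth and learning rate.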
DOI: 10.1109/CSAIEE54046.2021.9543136 · Published: 2021-08-20
Citations: 1
Deep Session Interest Network Based on the Time Interval Encoding for the Click-through Rate Prediction
Xi Sun, Z. Lv
Click-through rate (CTR) prediction is significant in the field of recommendation systems, especially advertising recommendation systems. At present, some deep-learning sequence models have been applied directly to CTR prediction to uncover patterns in user behavior and have achieved good results, but they ignore the influence of time information on those patterns. To solve this problem, we propose the Time Interval Encoding Deep Session Interest Network (TIED-DSIN). In TIED-DSIN, a time-interval encoding method integrates interval information into the sequence model, and a time-decay factor introduced during encoding lets the model fully account for temporal information when mining users' dynamic behavior patterns. A comparative experiment on the real Alimama public dataset shows that the TIED-DSIN model achieves better accuracy than other models commonly used for CTR prediction.
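One simple way to realize the decay idea described above is to weight each behavior by an exponential function of the time gap since the previous event, so stale clicks count less. This is an illustrative sketch in the spirit of the TIED idea; the paper's actual encoding and the decay constant here are not taken from it.

```python
import math

# Hypothetical time-interval weighting: exp(-decay * gap) per event.

def interval_weights(timestamps, decay=0.1):
    """Weight each event in a session by how fresh it is relative to the
    previous event; the first event gets full weight."""
    gaps = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    return [1.0] + [math.exp(-decay * g) for g in gaps]

# Invented session timestamps (e.g. minutes since session start).
w = interval_weights([0, 1, 5, 30])
```

In a full model these weights (or learned embeddings of the discretized gaps) would modulate the behavior embeddings before the session-interest layers.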
DOI: 10.1109/CSAIEE54046.2021.9543196 · Published: 2021-08-20
Citations: 0
Rule-Based Approach to the Automatic Detection of Individual Tree Crowns in RGB Satellite Images
Tanqiu Jiang, Ziyu Xiong
Along with the rapid growth of wildfire events around the globe, the call for better forest management strategies has recently grown stronger. "Tree delineation", the process of identifying each individual tree in an image, is a crucial element of forest management and remote sensing. Many efforts have been made to locate individual trees in images, but the vast majority of this research has not been based on RGB images, which are the most common and the most easily available at large scale. In our study, we used RGB satellite images from Google Earth and attempted to identify each tree with a rule-based methodology involving steps of recognizing vegetation, isolating trees, and locating local maxima. The result of our algorithm is comparable to labeling trees manually, and its robustness was confirmed by repeating the same approach on multiple images of different locations.
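The final "locating local maxima" step can be sketched on a toy brightness grid: a crown top is a cell brighter than all of its neighbours. The 4-neighbourhood and the invented grid are simplifying assumptions; a real pipeline would run this on a vegetation-masked greenness image with a larger window.

```python
# Local-maxima step of the rule-based pipeline: mark cells strictly brighter
# than their 4-neighbours as candidate tree-crown tops.

def local_maxima(grid):
    h, w = len(grid), len(grid[0])
    peaks = []
    for y in range(h):
        for x in range(w):
            neigh = [grid[y + dy][x + dx]
                     for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= y + dy < h and 0 <= x + dx < w]
            if all(grid[y][x] > v for v in neigh):
                peaks.append((y, x))
    return peaks

# Invented 3x3 "vegetation brightness" patch with one crown in the centre.
grid = [[1, 2, 1],
        [2, 5, 2],
        [1, 2, 1]]
```

Each detected peak then seeds one delineated crown, which is what makes per-tree counting possible without manual labeling.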
DOI: 10.1109/CSAIEE54046.2021.9543379 · Published: 2021-08-20
Citations: 0
A Novel Temperature Error Compensation method for MEMS Gyros Based on WOA-SVR
Gongliu Yang, Kun Zhang, Ruizhao Cheng, Yongfeng Zhang
Most MEMS inertial navigation systems (INS) must operate over a wide temperature range of -20 to +55 °C. The MEMS gyroscope is the core component of a MEMS INS, and its accuracy directly affects navigation performance, yet gyro output is inevitably affected by temperature. Because the traditional compensation method based on least-squares polynomial fitting offers limited accuracy, this paper presents a new temperature-error compensation method in which the Whale Optimization Algorithm (WOA) optimizes support vector regression (SVR). Simulation and MEMS gyro temperature-error compensation tests show that WOA-SVR effectively compensates the temperature error of MEMS gyros, with significantly improved accuracy compared with traditional methods.
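The pipeline shape, an optimizer searching SVR hyperparameters (typically C and gamma) to minimize compensation error, can be sketched generically. Two loud caveats: a plain random search stands in for the whale optimizer, and `surrogate_error` is a made-up objective with a known minimum rather than a trained SVR's validation error.

```python
import random

# Optimizer-over-SVR-hyperparameters sketch; random search replaces WOA.

def surrogate_error(C, gamma):
    """Placeholder objective with its minimum at C=10, gamma=0.1; in the
    paper this would be the SVR's temperature-compensation error."""
    return (C - 10.0) ** 2 + (gamma - 0.1) ** 2

def optimize(n_iter=200, seed=0):
    """Sample (C, gamma) candidates and keep the best-scoring pair."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_iter):
        cand = (rng.uniform(0.1, 100.0), rng.uniform(0.001, 1.0))
        if best is None or surrogate_error(*cand) < surrogate_error(*best):
            best = cand
    return best

C, gamma = optimize()
```

WOA would replace the independent random draws with its encircling/spiral position updates, but the outer loop, evaluate candidates and keep the best, is the same.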
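The WOA-SVR scheme above uses WOA to tune SVR's hyperparameters against a fitting-error objective. Below is a minimal, generic WOA sketch on a toy objective; the objective function, population size, iteration count, and the mapping to SVR parameters such as (C, gamma) are all assumptions, since the paper's exact configuration is not reproduced here:

```python
import numpy as np

def woa_minimize(f, bounds, n_whales=20, n_iter=100, seed=0):
    """Minimal Whale Optimization Algorithm sketch (encircling, random
    search, and spiral phases). In a WOA-SVR setting, f would map a
    hyperparameter vector, e.g. SVR's (C, gamma), to a validation loss."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(n_whales, dim))
    best = min(X, key=f).copy()
    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter                 # control parameter: 2 -> 0
        for i in range(n_whales):
            A = 2.0 * a * rng.random(dim) - a
            C = 2.0 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):          # exploit: encircle the best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                              # explore: chase a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                  # spiral toward the best
                D = np.abs(best - X[i])
                l = rng.uniform(-1.0, 1.0)
                X[i] = D * np.exp(l) * np.cos(2.0 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            if f(X[i]) < f(best):                  # greedy best-so-far update
                best = X[i].copy()
    return best
```

With the sphere function as a stand-in objective, `woa_minimize(lambda x: float(np.sum(x ** 2)), [(-5, 5), (-5, 5)])` converges near the origin; plugging in a cross-validated SVR loss instead recovers the paper's general idea.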
{"title":"A Novel Temperature Error Compensation method for MEMS Gyros Based on WOA-SVR","authors":"Gongliu Yang, Kun Zhang, Ruizhao Cheng, Yongfeng Zhang","doi":"10.1109/CSAIEE54046.2021.9543296","DOIUrl":"https://doi.org/10.1109/CSAIEE54046.2021.9543296","url":null,"abstract":"Most MEMS inertial navigation systems (INS) need to meet a wide working temperature range of - 20 ~+ 55°C. MEMS gyroscope is the core component of MEMS INS and the accuracy of MEMS gyros directly affects the navigation performance. However, the output of MEMS gyros is inevitably affected by temperature. Due to the limited temperature error compensation accuracy of traditional method based on least squares polynomial fitting, the paper presents a new temperature error compensation method based on Whale Optimization Algorithm (WOA) optimized support vector regression (SVR), which is achieved the optimization of SVR by WOA. After simulation and MEMS gyros temperature error compensation test. The results show that the WOA-SVR can effectively compensate the temperature error of MEMS gyros. And the accuracy is significantly improved compared with the traditional methods.","PeriodicalId":376014,"journal":{"name":"2021 IEEE International Conference on Computer Science, Artificial Intelligence and Electronic Engineering (CSAIEE)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124896572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
Basic Ensemble Learning of Encoder Representations from Transformer for Disaster-mentioning Tweets Classification
Yizhou Yang
Ensemble learning is a technique that trains multiple learning models and combines their results, treating them as a "committee" of decision makers. To explore the effect of ensemble learning, this paper applies two basic ensemble systems of encoder representations to natural language processing. To compare individual models against ensemble systems, the paper varies the number of models used to compute ensemble accuracy. The result is that the combined decision of all models usually has better overall accuracy, on average, than any single model, showing that an ensemble using all models generally performs best. An explanation of this result is given in the conclusion section.
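The "committee" idea in the abstract above can be illustrated with the simplest combination rule, a majority vote over each model's binary predictions. Majority voting is an assumption here — the abstract does not specify which two basic ensemble schemes were used:

```python
import numpy as np

def majority_vote(predictions):
    """Combine binary predictions from several models by majority vote,
    treating the models as an equal-weight 'committee'."""
    votes = np.asarray(predictions)        # shape: (n_models, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Three imperfect classifiers on five tweets (1 = disaster-mentioning):
p1 = [1, 0, 1, 1, 0]
p2 = [1, 1, 1, 0, 0]
p3 = [0, 0, 1, 1, 0]
print(majority_vote([p1, p2, p3]))  # -> [1 0 1 1 0]
```

When the individual models are better than chance and make partly independent errors, the committee's vote corrects some mistakes that any single member makes, which is consistent with the paper's finding that the full ensemble outperforms each single model.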
{"title":"Basic Ensemble Learning of Encoder Representations from Transformer for Disaster-mentioning Tweets Classification","authors":"Yizhou Yang","doi":"10.1109/CSAIEE54046.2021.9543322","DOIUrl":"https://doi.org/10.1109/CSAIEE54046.2021.9543322","url":null,"abstract":"Ensemble learning is a system which used to train multiple learning models and combine their results, treating them as a “committee” of decision makers. To explore effect of ensemble learning, this paper applied two basic ensemble systems of encoder to natural language processing. To compare the individual models and ensemble systems, this paper varied the number models which used to calculate ensemble accuracies. The result is that the decision of the model, with all models combined, usually have better overall accuracy, on average, than any single model. It shown that ensemble system used all models usually have better performance. This paper given explanation in the conclusion section of this result.","PeriodicalId":376014,"journal":{"name":"2021 IEEE International Conference on Computer Science, Artificial Intelligence and Electronic Engineering (CSAIEE)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114669459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0