Yangjun Zhou, Chenying Yi, Like Gao, Jiannan Ouyang, Wei Zhang
Distribution networks have complicated grid structures and many kinds of faults, which make current measures hard-pressed to locate fault zones accurately in field operation. This paper proposes a differential evolution algorithm that locates faults based on transient waveform recordings from the distribution network, using a cooperative-coevolution mechanism with a penalty factor to optimize both the solution set and the penalty factor itself. Simulation results show that the proposed method has good convergence and fault tolerance when single-point or multi-point faults occur in the distribution network. In addition, it can provide important technical support for fault location in distribution networks based on transient waveform data.
{"title":"Calculation and analysis method for distribution network fault location based on improved differential evolution algorithm","authors":"Yangjun Zhou, Chenying Yi, Like Gao, Jiannan Ouyang, Wei Zhang","doi":"10.1117/12.2667784","DOIUrl":"https://doi.org/10.1117/12.2667784","url":null,"abstract":"Distribution network has complicated grid structure and various kinds of faults,which make the current measures hard to accurately locate fault zones in field operation. This paper proposes a differential evolutionary algorithm to locate the fault based on the transient wave record data from the distribution network, using the mechanism of cooperative coevolution with penalty factor to optimize the solution set and punishment factor. The simulation result shows that the proposed method has good convergence and fault-tolerant ability when single point or multi-points fault occurs in the distribution network. In addition, it can provide important technical support for the fault position in the distribution network based on the transient wave data.","PeriodicalId":345723,"journal":{"name":"Fifth International Conference on Computer Information Science and Artificial Intelligence","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116950916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As the oldest and most famous cryptocurrency, Bitcoin has seen its price increase nearly two-million-fold over the last decade. As a result, predicting the price of Bitcoin with machine learning has attracted wide interest in recent years. This paper analyzes the correlation between the price of Bitcoin and the market and social factors that may affect it, and then uses the more strongly correlated factors to predict the price with an LSTM. Experiments show that, after introducing the external correlated factors, the mean absolute percentage error of the LSTM price prediction decreases from 11.52% to 10.16%, 9.79%, 9.73%, 9.59%, 8.82%, and 8.50%, respectively.
{"title":"Correlation analysis and prediction between bitcoin price and its influencing factors","authors":"Yinhao Liu","doi":"10.1117/12.2667868","DOIUrl":"https://doi.org/10.1117/12.2667868","url":null,"abstract":"As the oldest and most famous cryptocurrency, the price of Bitcoin has increased nearly 2 million times in the last decade. As a result, predicting the price of Bitcoin through machine learning has become a big hit in recent years. This paper analyzes the correlation between the price of Bitcoin and market or social factors that may affect the price of Bitcoin. Then the author uses these factors with higher correlation to predict the price of bitcoin by LSTM. The experiments show that the average absolute percentage error of the LSTM prediction of bitcoin price decreases from 11.52% to 10.16%, 9.79%, 9.73%, 9.59%, 8.82%, and 8.50%, respectively, after the introduction of external correlation factors.","PeriodicalId":345723,"journal":{"name":"Fifth International Conference on Computer Information Science and Artificial Intelligence","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125557691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shuang Yang, Haomiao Wang, Tian-Qi Wang, Zhibin Wang, Naidi Kang
Against today's background, grid intelligence is an inevitable trend in the development of power companies. In this process, the power industry must guard well against smart-grid information-security risks, so that excessive risk does not bring power companies losses. The power marketing system is a vital part of the power system. Starting from the marketing system and the current business situation, we study the development trend and overall construction plan of the new generation of marketing business application systems, and further study the system's security architecture. Implementing national and company requirements for personal-information protection through multi-faceted security management and control, and improving the information security of the new generation of marketing systems, has become an urgent problem to solve.
{"title":"Research on information security architecture under the new generation marketing system","authors":"Shuang Yang, Haomiao Wang, Tian-Qi Wang, Zhibin Wang, Naidi Kang","doi":"10.1117/12.2668310","DOIUrl":"https://doi.org/10.1117/12.2668310","url":null,"abstract":"Under the background of today's era, the development of power grid intelligence is an inevitable trend in the development of power companies. In this process, what the power industry needs to do is to do a good job in the prevention of smart grid information security risks, so as to avoid power companies due to excessive power grid information security risks. bring certain losses. The power marketing system is a vital part of the power system. Starting from the marketing system, combined with the current business situation, we will study the development trend and overall construction plan of the new generation of marketing business application systems, and further study the security architecture of the system. From the perspective of multi-faceted security management and control, to implement the relevant national and company-related personal information protection requirements, and to improve the information security of the new generation of marketing systems has become an urgent problem to be solved.","PeriodicalId":345723,"journal":{"name":"Fifth International Conference on Computer Information Science and Artificial Intelligence","volume":"12566 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128761215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the continuous development of machine learning research, especially progress in deep learning and the continuous improvement of computing power such as graphics processors, recognition technology based on biometric big data has gained wide attention and has been applied well in many fields such as face-to-ID matching, intelligent monitoring, and epidemic prevention and control. This paper analyzes the development trend of big-data biometric identification technology, summarizes the types of biometric features and the development and application of big-data-driven biometric identification, and discusses its future development trend.
{"title":"Application of data-driven feature extraction methods in biometrics","authors":"Huixing Li, Yan Xue, Xiancai Zeng","doi":"10.1117/12.2667771","DOIUrl":"https://doi.org/10.1117/12.2667771","url":null,"abstract":"With the continuous development of research in the field of machine learning, especially the progress in deep learning and the continuous improvement of arithmetic power such as image processors, the recognition technology using biometric big data has gained wide attention and has been well applied in many fields such as human-witness matching, intelligent monitoring and epidemic prevention and control. The development trend of big data biometric identification technology is analyzed, the types of biometric features and the development and application of big data-driven biometric identification technology are summarized, and the future development trend of big data biometric identification technology is discussed.","PeriodicalId":345723,"journal":{"name":"Fifth International Conference on Computer Information Science and Artificial Intelligence","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128777293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jing Yang, Yaohua Luo, Xuben Wang, Haoyu Tang, S. Rao
Rapid detection and identification of landslide areas are very important for disaster prevention and mitigation. To address the time-consuming, labor-intensive nature and low recognition efficiency of traditional landslide information extraction methods, a remote sensing landslide recognition method based on LinkNet and a convolutional attention module is proposed. The model adopts an encoder-decoder structure to improve operating efficiency, and the Convolutional Block Attention Module (CBAM) optimizes weight allocation in both the channel and spatial dimensions to highlight landslide feature information. Compared with the traditional U-Net and LinkNet models, the CBAM-LinkNet model performs excellently in remote sensing landslide identification, making rapid and accurate landslide identification possible.
{"title":"Remote sensing landslide recognition method based on LinkNet and attention mechanism","authors":"Jing Yang, Yaohua Luo, Xuben Wang, Haoyu Tang, S. Rao","doi":"10.1117/12.2667640","DOIUrl":"https://doi.org/10.1117/12.2667640","url":null,"abstract":"Rapid detection and identification of landslide areas are very important for disaster prevention and mitigation. Aiming at the problems of time-consuming and labor-intensive traditional landslide information extraction methods and low recognition efficiency, a remote sensing landslide recognition method based on LinkNet, and convolution attention module was proposed. The model adopts the coding-decoding structure to improve the operation efficiency. The Convolutional Block Attention Module (CBAM) is applied to optimize the weight allocation from both channel and spatial dimensions to highlight the landslide feature information. And compared with the traditional U-Net and LinkNet models. The results show that the CBAM-LinkNet model has excellent performance in remote sensing landslide identification, which provides the possibility for rapid and accurate landslide identification.","PeriodicalId":345723,"journal":{"name":"Fifth International Conference on Computer Information Science and Artificial Intelligence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130007880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research builds an adaptive English-translation model based on the attention mechanism. After analyzing the key factors that affect English-language translation, the attention mechanism extracts the detailed features of those factors in each region to form a feature sample set; the set is then fused and normalized to obtain a new feature sample set, which is input to build the English-language translation model and output translation results. From these results, the overall translation effect of the model is predicted. The results show that the model achieves high prediction accuracy in both training and testing.
{"title":"Research on adaptive model of English translation based on data fusion","authors":"Ruying Huang","doi":"10.1117/12.2667549","DOIUrl":"https://doi.org/10.1117/12.2667549","url":null,"abstract":"This research is based on the attention mechanism English translation adaptive model. After analyzing the key factors that affect English language translation, the attention mechanism is used to extract the detailed features of such factors in each region to form a feature sample set, and the feature sample set is fused and normalized, so as to obtain a brand-new feature sample set. Input to build an English language translation model and output the translation results, According to the results, the overall translation effect of the model is predicted. The results show that the prediction model of this method has high prediction accuracy in training and testing.","PeriodicalId":345723,"journal":{"name":"Fifth International Conference on Computer Information Science and Artificial Intelligence","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130163004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In Chinese sentiment analysis, sentiment words are just a drop in the ocean compared with the whole corpus. To address the shortage of emotion lexicons and prior knowledge, this paper proposes a neural-network method for predicting the emotion intensity of target words (Neural Network Embedding Score, NNES). Training on a small number of labeled samples, it uses a clustering algorithm to find seed words, computes the similarity between the target words and the seed words, and feeds the similarities into a neural network to predict the emotion intensity of unlabeled words; compared with traditional machine-learning regression models, it achieves a smaller mean squared error. In addition, a BiGRU model based on an attention mechanism and convolution is proposed that integrates the predicted emotion intensity with word vectors (Neural Network Embedding Score with CNN and Attention-BiGRU, NNESC-Att-BiGRU). Comparison with several popular models on product- and hotel-review datasets shows that the proposed model achieves better classification on Chinese sentiment classification tasks.
{"title":"Emotion analysis method based on emotion intensity fusion and BiGRU","authors":"Haoyang Zhang, Changming Zhu","doi":"10.1117/12.2667864","DOIUrl":"https://doi.org/10.1117/12.2667864","url":null,"abstract":"In Chinese sentiment analysis, sentiment words are just a drop in the ocean compared with the whole corpus. In order to solve the problem of insufficient emotion lexicon and prior knowledge, proposes a method to predict the emotion intensity of target words based on neural network model (Neural Network Emebdding Score, NNES). By training a small number of labeled samples, using clustering algorithm to find the seed words, calculate the similarity between the target words and the seed words, and using it as the input of neural network to predict the emotional intensity of the unlabeled words. Compared with the traditional machine learning regression models, it has smaller mean square error. Meanwhile, a BiGRU model based on attention mechanism and convolution is proposed by integrating the predicted emotion intensity with word vector (Neural Network Emebdding Score with CNN and Attention-BiGRU, NNESC-Att-BiGRU). To compare several popular models on product and hotel review data sets, and the proposed model has better classification effect on Chinese sentiment classification task.","PeriodicalId":345723,"journal":{"name":"Fifth International Conference on Computer Information Science and Artificial Intelligence","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121636194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
X. Mei, Wenjing Guo, Zhu Liu, Yu Min Liu, Wen J. Li, Pei Zhang
The emergence of cryptocurrencies has promoted the development of blockchain technology. However, the low performance and poor scalability of blockchains make the technology difficult to apply in production, and the essential cause lies mainly in the distributed consensus protocol: distributed consensus protocols provide data transparency, integrity, and immutability in a decentralized, untrusted environment, but their strong security greatly sacrifices scalability. To improve system performance and scalability, this paper first improves the Byzantine consensus protocol, raising the throughput of a single shard; on this basis, it designs an efficient shard-formation protocol that can safely assign nodes to shards, relying on trusted hardware (SGX) to achieve the consensus- and sharding-protocol performance improvements. Second, we design a transaction protocol that ensures transaction security and flexibility even when the transaction coordinator is malicious. Finally, our approach is evaluated extensively on local clusters and on Google Cloud Platform. The results show that the proposed consensus and shard-formation protocols outperform other advanced solutions at scale and scale the blockchain system well. More importantly, the scalable blockchain system based on the proposed sharding strategy achieves high throughput and can handle Visa-level workloads.
{"title":"Research on blockchain scalability based on sharding strategy","authors":"X. Mei, Wenjing Guo, Zhu Liu, Yu Min Liu, Wen J. Li, Pei Zhang","doi":"10.1117/12.2667350","DOIUrl":"https://doi.org/10.1117/12.2667350","url":null,"abstract":"The emergence of cryptocurrencies has promoted the development of blockchain technology. However, due to the low performance and poor scalability of the blockchain, it is difficult to apply the blockchain technology to production. Analysis of its essential reason is mainly caused by the distributed consensus protocol. Distributed consensus protocols provide data transparency, integrity, and immutability in a decentralized and untrusted environment, but good security greatly sacrifices scalability. In order to improve the performance and scalability of the system. This paper first improves the Byzantine consensus protocol and improves the throughput of a single shard; on this basis, an efficient shard formation protocol is designed, which can safely assign nodes to shards. This paper relies on trusted hardware (SGX) to achieve consensus and sharding protocol performance improvements. Second, we design a transaction protocol that ensures transaction security and flexibility even when the transaction coordinator is malicious; finally, our research is extensively evaluated on local clusters and on Google Cloud Platform. The results show that the consensus and shard formation protocol in this paper outperforms other advanced solutions in scale and can well scale the blockchain system through sharding and consensus formation protocol. 
More importantly, the scalable blockchain system based on the sharding strategy proposed in this paper achieves high throughput and can handle Visa-level workloads.","PeriodicalId":345723,"journal":{"name":"Fifth International Conference on Computer Information Science and Artificial Intelligence","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115981416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
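One ingredient of safely assigning nodes to shards is making placement deterministic yet unpredictable in advance, e.g. by hashing each node id with per-epoch randomness. The sketch below shows only that idea; the paper's SGX-backed shard-formation protocol involves considerably more (attestation, reconfiguration, Byzantine quorums), and the function names here are invented.

```python
import hashlib

def assign_shard(node_id, epoch, num_shards):
    """Deterministic, epoch-salted shard assignment: sha256 over the epoch
    randomness and the node id, reduced modulo the shard count. Because the
    epoch value is not known ahead of time, an adversary cannot pre-place
    its nodes into a target shard."""
    digest = hashlib.sha256(f"{epoch}:{node_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

def form_shards(node_ids, epoch, num_shards):
    """Group every node into its assigned shard for this epoch."""
    shards = {s: [] for s in range(num_shards)}
    for nid in node_ids:
        shards[assign_shard(nid, epoch, num_shards)].append(nid)
    return shards
```

Re-running with a new epoch value reshuffles membership, which bounds how long an adversary can target one shard.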
This paper analyzes the balance problem of a pulsating assembly line under a mixed production mode, decomposing it into balance problems within and between stations. Building on the balance between stations, the balance problem within stations is studied. Considering the versatility of personnel and their differing mastery of skills, a mathematical model is built with the goal of minimizing the total idle time of the assembly line. For the constructed model, an improved genetic algorithm based on two-bit coding is proposed, and an example verifies the effectiveness of the algorithm.
{"title":"Human resource scheduling technology based on improved genetic algorithm for pulse assembly beat balancing","authors":"Yaqi Cao, Aimin Wang, Tao Ding","doi":"10.1117/12.2667281","DOIUrl":"https://doi.org/10.1117/12.2667281","url":null,"abstract":"In this paper, the balance problem of pulsating assembly line under mixed production mode is analyzed, which is decomposed into the balance problem within and between stations. On the basis of the balance between stations, the balance problem within stations is studied. Considering the versatility of personnel and the different characteristics of personnel's mastery of skills, the mathematical model is built with the goal of minimizing the total idle time of the assembly line. In view of the constructed mathematical model, an improved genetic algorithm based on two-bit coding is proposed to solve the problem. Finally, an example is given to verify the effectiveness of the algorithm.","PeriodicalId":345723,"journal":{"name":"Fifth International Conference on Computer Information Science and Artificial Intelligence","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126848808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The long-sequence time-series forecasting (LSTF) problem attracts many organizations, since many prediction applications involve long sequences. In this setting, many researchers have tried models proven efficient in the natural language processing field, such as long short-term memory (LSTM) networks and Transformers, along with many improvements on the basic recurrent neural network and the Transformer. Recently, Informer, a model designed for LSTF, was proposed and claimed to improve prediction performance on long-sequence time-series forecasting. In later experiments, however, researchers found that Informer still cannot handle all LSTF problems. This paper examines how datasets affect the performance of different models. The experiment is carried out on a Bitcoin dataset with four features and one output. The result shows that Informer (a Transformer-like model) does not always perform well, so choosing a model with a simple architecture can sometimes give better results.
{"title":"Transformer and long short-term memory networks for long sequence time sequence forecasting problem","authors":"Wei Fang","doi":"10.1117/12.2667895","DOIUrl":"https://doi.org/10.1117/12.2667895","url":null,"abstract":"The long sequence time-sequence forecasting problem attracts a lot of organizations. Many prediction application scenes are about long sequence time-sequence forecasting problems. Under such circumstances, many researchers have tried to solve these problems by employing some models that have proved efficient in the Natural Language Processing field, like long short term memory networks and Transformers, etc. And there are a lot of improvements based on the primary recurrent neural network, and Transformer. Recently, a model called informer which is made for the LSTF was proposed. This model claimed that it improves prediction performance on the long sequence time-series forecasting problem. But in the later experiments, more and more researchers found that informers still cannot handle all the long sequence time-sequence forecasting problems. This paper is going to look at how datasets effect the performance of different models. The experiment is carried out on the Bitcoin dataset with four features and one output. 
The result shows that the Informer (transformer-like model) cannot always perform well so that sometimes choosing models with simple architecture may gain better results.","PeriodicalId":345723,"journal":{"name":"Fifth International Conference on Computer Information Science and Artificial Intelligence","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125230580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
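Comparisons like the one above need a common error metric and a simple-architecture baseline to beat. A minimal sketch follows; the percentage-style error matches the MAPE figures quoted elsewhere in this listing, and `naive_forecast` (a persistence baseline) is an illustrative helper, not a model from the paper.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent (actual values nonzero)."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def naive_forecast(series, horizon):
    """Persistence baseline: repeat the last observed value. Any candidate
    model (LSTM, Informer, ...) should at least beat this on held-out data."""
    return [series[-1]] * horizon
```

Evaluating every model with the same metric against such a baseline makes "simple sometimes wins" a measurable claim rather than an impression.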