
Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference: Latest Publications

Remote Pedestrian Detection Algorithm Based on Edge Information Input CNN
Chi Zhang, Nanlin Tan, Yingxia Lin
To solve the remote pedestrian detection problem, in which the target must be detected with little available information, a new pedestrian detection algorithm based on a Convolutional Neural Network (CNN) is proposed. The algorithm combines shallow edge features with grayscale images to replace the RGB color information of the original image as the input to the CNN, increasing the amount of effective information. Then, during deep-learning training, the cross entropy is combined with the learning rate to optimize the cross-entropy function. Finally, the improved CNN is trained on a mixture of four common pedestrian datasets and applied, via transfer learning, to remote pedestrian intrusion detection in the railway industry. Experimental results show that, compared with existing CNN-based remote pedestrian detection algorithms, the new method improves detection accuracy by 2% and generalizes well.
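The input construction the abstract describes (edge features plus grayscale replacing RGB) can be sketched as follows. The specific edge operator the authors use is not stated, so a Sobel gradient magnitude is assumed here, and the array shapes are purely illustrative:

```python
import numpy as np

def edge_gray_input(rgb):
    """Build a 2-channel (edge, grayscale) input of the kind the abstract
    describes. `rgb` is an (H, W, 3) uint8 array; a Sobel gradient
    magnitude is an assumed stand-in for the paper's edge features."""
    gray = rgb.astype(np.float64) @ np.array([0.299, 0.587, 0.114])
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    H, W = gray.shape
    pad = np.pad(gray, 1, mode="edge")
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for i in range(3):          # 3x3 cross-correlation, written out plainly
        for j in range(3):
            patch = pad[i:i + H, j:j + W]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    edges = np.hypot(gx, gy)
    # Stack as (channels, H, W): this pair replaces the 3 RGB channels.
    return np.stack([edges, gray], axis=0)

x = edge_gray_input(np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8))
print(x.shape)  # (2, 32, 32)
```

The 2-channel tensor would then feed the CNN in place of the usual 3-channel RGB input.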
{"title":"Remote Pedestrian Detection Algorithm Based on Edge Information Input CNN","authors":"Chi Zhang, Nanlin Tan, Yingxia Lin","doi":"10.1145/3341069.3342969","DOIUrl":"https://doi.org/10.1145/3341069.3342969","url":null,"abstract":"In order to solve remote pedestrian detection problem, the target need to be detected in the absence of information, a new pedestrian detection algorithm based on Convolution Neural Network (CNN) is proposed. The algorithm uses shallow layer edge features combined with grayscale images to replace the RGB color information of the original image, as an input to the Convolutional Neural Network to increase the amount of effective information. Then, in deep learning training process, the cross entropy is combined with the learning rate to optimize the cross entropy function. Finally, the improved Convolutional Neural Network is trained on four common pedestrian hybrid datasets to apply it to the remote pedestrian intrusion detection of the railway industry using transfer learning. The experimental results show that compared with the existing Convolutional Neural Network remote pedestrian detection algorithm, the new method can effectively improve the accuracy of detection 2% and has a good universality.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126617873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Research on Knowledge Management Technology of Aerospace Engineering Based on Big Data
Jun Liu
In the era of big data, the mass production, analysis, and application of data have become a new trend. Over their long-term design, production, operation, and testing processes, aerospace enterprises have generated a large amount of valuable data. Collecting and analyzing these data can improve the management of aerospace enterprises and yield competitive advantages. As the semi-structured and unstructured data produced by aerospace enterprises grow year by year, how to store and analyze the data, and how to mine and share knowledge, have become major problems. Existing knowledge management systems built on traditional database technology alone cannot meet the diversified needs of users; they must also incorporate distributed computing and storage technology to solve the problems of knowledge storage, sharing, mining, retrieval, and recommendation in a big data environment. Aerospace enterprises need a knowledge management system based on big data technology to support knowledge innovation and application. From the perspective of data operations, and relying on big data technology from the Hadoop ecosystem, this paper constructs a Hadoop-based knowledge management framework model for aerospace enterprises.
{"title":"Research on Knowledge Management Technology of Aerospace Engineering Based on Big Data","authors":"Jun Liu","doi":"10.1145/3341069.3342996","DOIUrl":"https://doi.org/10.1145/3341069.3342996","url":null,"abstract":"In the era of big data, mass production, analysis and application of data have become a new trend. In the long-term design, production, operation and testing process of aerospace enterprises, a large number of valuable data have been generated. Collection and analysis of these data can improve the management of aerospace enterprises and gain competitive advantages. With the increase of semi-structured and unstructured data produced by aerospace enterprises year by year, how to store and analyze data, how to mine and share knowledge has become a major problem. The existing knowledge management system cannot meet the diversified needs of users only by traditional database technology. It also needs to combine distributed computing and storage technology to solve the problems of knowledge storage, knowledge sharing, knowledge mining, knowledge retrieval and recommendation in big data environment. Aerospace enterprises need to build a knowledge management system based on big data technology to support knowledge innovation and knowledge application. 
From the perspective of data operation and relying on Hadoop ecosystem related big data technology, this paper constructs a knowledge management framework model for aerospace enterprises based on Hadoop.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131086822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Realizing Specific Weather Forecast through Machine Learning Enabled Prediction Model
I-Ching Chen, Shueh-Cheng Hu
For most people, it is more useful to know the weather at a specific location and time. However, current forecasting services offered by meteorological observation organizations provide only wide-range, coarse-grained forecasts. This work uses historical weather observation data and machine learning (ML) techniques to build models that enable location- and time-specific forecasts. Different model settings were applied, and the corresponding results were compared and analyzed in terms of training cost and prediction quality. The preliminary results indicate that the ML-enabled forecast model can serve as a supplementary source for people who need finer-grained weather conditions. To improve the quality of the ML forecasting models, besides further fine-tuning and algorithmic renovation, a large volume of long-term historical weather data is critical, since climate largely exhibits subtle periodic characteristics.
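The supervised framing behind point-specific forecasting can be illustrated with a minimal sketch. The data below are synthetic (an hourly temperature cycle), and plain least squares stands in for the paper's unspecified ML models:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for historical observations: hourly temperature with
# a 24-hour cycle plus noise (the paper's real features are not public).
t = np.arange(24 * 60)
temp = 20 + 8 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)

# Supervised framing: predict the next hour from the previous 6 hours.
LAG = 6
X = np.stack([temp[i:i + LAG] for i in range(temp.size - LAG)])
y = temp[LAG:]
X = np.hstack([X, np.ones((X.shape[0], 1))])  # bias column

w, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares "training"
pred = X @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(rmse)
```

With the periodic signal captured by the lagged inputs, the residual error sits near the injected noise level, which mirrors the abstract's point that long-term periodic history is what makes fine-grained forecasts learnable.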
{"title":"Realizing Specific Weather Forecast through Machine Learning Enabled Prediction Model","authors":"I-Ching Chen, Shueh-Cheng Hu","doi":"10.1145/3341069.3341084","DOIUrl":"https://doi.org/10.1145/3341069.3341084","url":null,"abstract":"To general people, it is more convenient to know weather condition at a specific location and particular time. However, current weather forecasting services offered by meteorological observation organizations only provide a wide-range or coarse-grained forecast. This research work tried to utilize historical weather observation data and machine learning (ML) techniques to build models enabling specific weather forecast. Different settings of models were applied and the corresponding results were compared and analyzed in terms of training cost and prediction quality. The preliminary results indicate that the ML-enabled forecast model can serve as a supplementary source for people who need to know finer-grained whether condition. To improve the quality of the ML forecasting models, besides more fine-tuning and algorithms renovation, large volume of long-term historical weather data are critical since climate changes to a large extent, possess subtle periodical characteristics.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121262338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bank Account Abnormal Transaction Recognition Based on Relief Algorithm and BalanceCascade
Yun-xiang Liu, Ze-Shen Tang, Qi Xu
With the rapid development of the banking industry, the number of transactions is growing exponentially. At the same time, abnormal transactions are also increasing, causing immeasurable losses and risks. To accurately identify suspicious transactions from massive customer information and bank account transaction data, this paper adopts a BalanceCascade algorithm based on Relief to address the class imbalance in identifying abnormal bank account transactions, and proposes an effective abnormal transaction identification model. AUC and the K-S statistic are used as performance metrics suited to imbalanced classification. On a bank account abnormal transaction dataset from the Kaggle data platform, the proposed model achieves an AUC of 0.90 and a K-S value as high as 0.64, showing that it reduces the false positive rate as much as possible while retaining high classification and recognition ability. The method has reference value for identifying abnormal bank account transactions and can help banks respond faster and improve their level of customer service.
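The Relief half of the pipeline (feature weighting before BalanceCascade's undersampling ensemble) can be sketched compactly. This is a generic binary Relief in numpy on synthetic data, not the authors' implementation:

```python
import numpy as np

def relief_scores(X, y, n_iter=200, seed=0):
    """Minimal binary Relief: reward features that separate a sample from
    its nearest miss (other class) and penalize features that differ from
    its nearest hit (same class)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0                      # avoid division by zero
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)    # L1 distance to all samples
        dist[i] = np.inf                       # exclude the sample itself
        same, diff = y == y[i], y != y[i]
        hit = np.where(same)[0][np.argmin(dist[same])]
        miss = np.where(diff)[0][np.argmin(dist[diff])]
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)                    # normal vs abnormal labels
X = rng.normal(0, 1, (300, 3))
X[:, 0] += 3 * y                               # one informative feature
scores = relief_scores(X, y)
print(scores.argmax())                         # the informative feature wins
```

In the paper's setting the weighted features would then feed BalanceCascade, which trains a cascade of classifiers on successively undersampled majority-class data.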
{"title":"Bank Account Abnormal Transaction Recognition Based on Relief Algorithm and BalanceCascade","authors":"Yun-xiang Liu, Ze-Shen Tang, Qi Xu","doi":"10.1145/3341069.3342981","DOIUrl":"https://doi.org/10.1145/3341069.3342981","url":null,"abstract":"With the rapid development of the banking industry, the number of transactions is exponential growth.At the same time, abnormal transactions are also increasing, causing immeasurable losses and risks.In terms of how to accurately identify suspicious transactions from massive customer information and bank account transaction data, this paper adopts the BalanceCascade algorithm based on Relief to solve the problem of unbalanced data in the identification of abnormal transactions in bank accounts, and proposes an effective abnormal transaction identification model.At the same time, the AUC and K-S index as the unbalanced data classification standards of performance evaluation, and finally to Kaggle data platform of bank accounts abnormal transaction data set, the results show that the proposed identification model of performance evaluation index of AUC 0.90 KS value at the same time also is as high as 0.64, shows that the model in as much as possible to reduce the rate of false positives and has high ability of classification and recognition, the method of bank accounts abnormal transaction identification has a certain reference value, enhance rapid response and improve the level of customer service for Banks have certain effect.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116255217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improve Theoretical Upper Bound of Jumpk Function by Evolutionary Multitasking
Y. Lian, Zhengxin Huang, Yuren Zhou, Zefeng Chen
Recently, the concept of evolutionary multitasking has emerged in the field of evolutionary computation as a promising approach to automatically exploit the latent synergies among distinct optimization problems. Many experimental studies have shown that the multifactorial evolutionary algorithm (MFEA), an implementation of evolutionary multitasking, can outperform traditional approaches that solve each task independently on synthetic and real-world multi-task optimization (MTO) problems, in terms of both solution quality and computation resources. However, as far as we know, no study has demonstrated the superiority of evolutionary multitasking through theoretical analysis. In this paper, we propose a simple (4+2) MFEA to optimize the benchmark Jump_k and LeadingOnes functions simultaneously. Our theoretical analysis shows that the upper bound on the expected running time of the proposed algorithm on the Jump_k function can be improved to O(n^2 + 2^k), while the best upper bound for single-task optimization on this problem is O(n^(k-1)). Moreover, the upper bound on the expected running time to optimize the LeadingOnes function is not increased. This result indicates that evolutionary multitasking is a promising approach for problems that traditional optimization methods cannot tackle well. This paper provides evidence, from the standpoint of theoretical analysis, for the effectiveness of evolutionary multitasking.
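The Jumpk benchmark at the center of the analysis is simple to state in code. The following is the standard definition (fitness k + |x|_1 outside the gap, n - |x|_1 inside it), not anything specific to the proposed MFEA:

```python
def jump_k(x, k):
    """Jump_k fitness on a bit string x (list of 0/1 values): equals
    k + |x|_1 when the number of ones is at most n - k or exactly n;
    inside the gap n - k < |x|_1 < n it drops to n - |x|_1, so an
    elitist algorithm must 'jump' k bits at once to reach the optimum."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones

n, k = 10, 3
print(jump_k([1] * n, k))              # optimum: k + n = 13
print(jump_k([1] * (n - 1) + [0], k))  # inside the gap: n - 9 = 1
print(jump_k([0] * n, k))              # all zeros: k + 0 = 3
```

The gap is exactly what makes the O(2^k) term in the claimed multitask bound plausible: crossing it requires flipping k specific bits, which knowledge transfer from the companion task can accelerate.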
{"title":"Improve Theoretical Upper Bound of Jumpk Function by Evolutionary Multitasking","authors":"Y. Lian, Zhengxin Huang, Yuren Zhou, Zefeng Chen","doi":"10.1145/3341069.3342982","DOIUrl":"https://doi.org/10.1145/3341069.3342982","url":null,"abstract":"Recently, the concept of evolutionary multitasking has emerged in the field of evolutionary computation as a promising approach to exploit the latent synergies among distinct optimization problems automatically. Many experimental studies have shown multifactorial evolutionary algorithm (MFEA), an implemented algorithm of evolutionary multitasking, can outperform the traditional optimization approaches of solving each task independently on handling synthetic and real-world multi-task optimization (MTO) problems in terms of solution quality and computation resource. However, as far as we know, there exists no study demonstrating the superiority of evolutionary multitasking from the aspect of theoretical analysis. In this paper, we propose a simple (4+2) MFEA to optimize the benchmarks Jumpk and LeadingOnes functions simultaneously. Our theoretical analysis shows that the upper bound of expected running time for the proposed algorithm on the Jumpk function can be improved to O(n2 + 2k) while the best upper bound for single-task optimization on this problem is O(nk-1). Moreover, the upper bound of expected running time to optimize LeadingOnes function is not increased. This result indicates that evolutionary multitasking is probably a promising approach to deal with some problems which traditional optimization methods can't well tackle. 
This paper provides an evidence of the effectiveness of the evolutionary multitasking from the aspect of theoretical analysis.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116652863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Resource-Aware Decentralized Adaptive Computational Offloading & Task-Caching for Multi-Access Edge Computing
Getenet Tefera, Kun She, F. Deeba, Awais Ahmed
Smart technologies and IoT devices are designed to execute intensive applications that demand substantial computational and other system resources, yet these devices are resource-constrained. To address this challenge, we adopt Multi-Access Edge Computing, a new paradigm that localizes Cloud services and capabilities at the edge of the Radio Access Network based on proximity to mobile subscribers. In this paper, we propose a Resource-Aware Decentralized Computing and Caching framework for Multi-Access Edge Computing, in which smart end-user devices work collaboratively and independently with resourceful edge or peer devices in close proximity, even over unreliable networks. These devices can offload intensive applications or access already-cached task results to achieve efficient resource utilization and a high quality of user experience. We formulate the problem using non-cooperative game theory, which is NP-hard to solve, and show that the game admits a Nash equilibrium. Our scheme optimizes computational and storage resources efficiently. Extensive observation shows that it outperforms the conventional scheme in storage capability, quality of user experience, and energy consumption.
{"title":"Resource-Aware Decentralized Adaptive Computational Offloading & Task-Caching for Multi-Access Edge Computing","authors":"Getenet Tefera, Kun She, F. Deeba, Awais Ahmed","doi":"10.1145/3341069.3341075","DOIUrl":"https://doi.org/10.1145/3341069.3341075","url":null,"abstract":"Smart technologies or IoT devices have been designed to execute intensive applications that request more computational and other computer system resources. However, those devices have a resource constraint. To address the challenge, we adopt Multi-Access Edge Computing which is a new paradigm that transforms and localize Cloud services and capabilities at the Edge of Radio-Access Network based on proximity for mobile subscribers. In this paper, we proposed a Resource-Aware Decentralized Computing and Caching framework for Multi-Access Edge Computing. So, smart end-user devices work collaboratively and independently with resourceful edge devices or peer devices in close proximity during the unreliable network. Moreover, those devices can offload intensive application or access completed cached tasks to provide efficient resource utilization & Quality of User Experience. The drawback is expressed based on Non-Cooperative Game Theory which is NP-hard to solve and we show that the game concedes a Nash Equilibrium. Our Scheme optimizes computational and storage resources efficiently. 
We have done exhaustive observation the outcome shows that our scheme provides better performance than the conventional scheme in terms of enhanced storage capability, high Quality of User Experience, and low energy consumption.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128168229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Research on Satellite Orbit Prediction Based on Neural Network Algorithm
H. Ren, Xiaolin Chen, Bei Guan, Yongji Wang, Tiantian Liu, Kongyang Peng
Satellite orbit prediction is a significant research problem for collision avoidance in space. However, current prediction methods are not accurate enough because of missing information such as space environment conditions. Traditional methods construct a perturbation model; because of the model's intrinsically low accuracy, low-order analytical solutions predict poorly, while high-order analytical solutions are extremely complex, computationally inefficient, and sometimes yield no solution at all. This paper presents a satellite orbit prediction method based on a neural network, which discovers the law of orbital variation by training on historical TLE data. Experimental results show that the proposed algorithm is feasible.
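The core framing (learn the orbital variation law from a historical element series) can be sketched with a synthetic stand-in. The real TLE data and the paper's network architecture are not public, so a single linear neuron trained by gradient descent is used here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a TLE-derived series (mean motion, revs/day):
# a slow periodic drift plus observation noise.
t = np.arange(400)
mm = 15.5 + 0.01 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 1e-4, t.size)

# Frame prediction as supervised learning over sliding windows.
LAG = 8
X = np.stack([mm[i:i + LAG] for i in range(mm.size - LAG)])
y = mm[LAG:]
mu, sd = X.mean(), X.std()
Xn, yn = (X - mu) / sd, (y - mu) / sd

# One linear neuron trained by gradient descent: the smallest possible
# "neural network" that can still learn the periodic variation.
w = np.zeros(LAG)
b = 0.0
for _ in range(2000):
    err = Xn @ w + b - yn
    w -= 0.1 * (Xn.T @ err) / len(yn)
    b -= 0.1 * err.mean()

rmse = sd * np.sqrt(np.mean((Xn @ w + b - yn) ** 2))
print(rmse)
```

In the paper's setting a deeper network plays the neuron's role, and the windows come from decades of archived TLE records rather than a synthetic sine.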
{"title":"Research on Satellite Orbit Prediction Based on Neural Network Algorithm","authors":"H. Ren, Xiaolin Chen, Bei Guan, Yongji Wang, Tiantian Liu, Kongyang Peng","doi":"10.1145/3341069.3342995","DOIUrl":"https://doi.org/10.1145/3341069.3342995","url":null,"abstract":"Satellite orbits predictions is a significant research problem for collision avoidance in space area. However, current prediction methods for satellite orbits are not accurate enough because of the lack of information such as space environment condition. The traditional methods tend to construct a perturbation model. Because of the intrinsic low accuracy of the perturbation model, the prediction accuracy of the low-order analytical solution is relatively low. While the high-order analytical solution is extremely complex, it results in low computational efficiency and even no solution. This paper presents a satellite orbit prediction method based on neural network algorithm, which discovers the orbital variation law by training historical TLE data to predict satellite orbit. The experiment results show that the proposed algorithm is feasible.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126767918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Multi-UAVs Cooperative Coverage Reconnaissance with Neural Network and Genetic Algorithm
Chang Liu, Wen-jun Xie, Peng Zhang, Qing Guo, Doujian Ding
Aiming at the problem of multi-UAV cooperative coverage reconnaissance mission planning, a planning method combining a neural network with a genetic algorithm is proposed. First, the relative positions among the UAVs, the position of each UAV relative to the boundary of the target area, and the motion performance of each UAV are taken as inputs to the neural network, whose output is a rough path for each UAV. Then, the weights and thresholds of the neural network are optimized with a genetic algorithm, and the optimal paths for cooperative regional reconnaissance are solved. Simulation results show that the method not only enables the UAVs to learn reconnaissance rules autonomously, but also plans cooperative reconnaissance paths for each UAV, achieving effective coverage of the target area with good reconnaissance efficiency.
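The idea of evolving a network's weights and thresholds with a GA can be sketched on a toy task. The XOR-style target below merely stands in for the paper's path-quality objective, and the population sizes and rates are illustrative, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # toy nonlinear target

def forward(w, x):
    # Tiny 2-3-1 network; w packs weights and thresholds (biases):
    # 6 hidden weights + 3 hidden biases + 3 output weights + 1 bias = 13.
    W1, b1 = w[:6].reshape(3, 2), w[6:9]
    W2, b2 = w[9:12], w[12]
    h = np.tanh(W1 @ x + b1)
    return np.tanh(W2 @ h + b2) * 0.5 + 0.5   # squash output into [0, 1]

def fitness(w):
    return -np.mean([(forward(w, x) - t) ** 2 for x, t in zip(X, y)])

pop = rng.normal(0, 1, (60, 13))
for gen in range(200):
    fit = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(fit)[::-1][:20]]            # keep the best 20
    parents = elite[rng.integers(0, 20, (40, 2))]
    mask = rng.random((40, 13)) < 0.5                  # uniform crossover
    children = np.where(mask, parents[:, 0], parents[:, 1])
    children += rng.normal(0, 0.3, children.shape)     # Gaussian mutation
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print(-fitness(best))  # mean squared error of the evolved network
```

In the paper's setting the GA evolves the weights of the path-generating network the same way, with path coverage quality taking the place of the toy error function.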
{"title":"Multi-UAVs Cooperative Coverage Reconnaissance with Neural Network and Genetic Algorithm","authors":"Chang Liu, Wen-jun Xie, Peng Zhang, Qing Guo, Doujian Ding","doi":"10.1145/3341069.3342968","DOIUrl":"https://doi.org/10.1145/3341069.3342968","url":null,"abstract":"Aiming at the problem of multi-UAVs cooperative coverage reconnaissance mission planning, a planning method combining neural network and genetic algorithm is proposed. Firstly, the relative position relationship between multiple UAVs, the position relationship between each UAV and the boundary of the target area and the motion performance of each UAV are taken as inputs of the neural network, and the output is rough path of each UAV. Then, the weights and thresholds of neural network are optimized by using genetic algorithm, and the optimal paths of multi-UAVs cooperative regional reconnaissance is solved. The simulation results show that the method can not only enable UAVs to learn reconnaissance rules autonomously, but also plan the cooperative reconnaissance paths of each UAV, achieve effective coverage of the target area, and have good reconnaissance efficiency.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122531388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Adaptive Sparse Matrix-Vector Multiplication on CPU-GPU Heterogeneous Architecture
Jing Nie, Chunlei Zhang, Dan Zou, Fei Xia, Lina Lu, Xiang Wang, Fei Zhao
SpMV is the core algorithm in solving sparse linear equations and is widely used in many research and engineering fields. The GPU is the most common coprocessor in high-performance computing and has proven practical value in accelerating various algorithms. Much related work has optimized parallel SpMV on CPU-GPU platforms, focusing mainly on reducing computing overhead on the GPU, including branch divergence and cache misses, while paying little attention to the overall efficiency of the heterogeneous platform. In this paper, we describe the design and implementation of an adaptive sparse matrix-vector multiplication (SpMV) on a CPU-GPU heterogeneous architecture. We propose a dynamic task scheduling framework to improve the utilization of both the CPU and the GPU, and a double-buffering scheme to hide the data transfer overhead between them. Two deeply optimized SpMV kernels are deployed for the CPU and the GPU respectively. Evaluation on typical sparse matrices indicates that the proposed algorithm achieves a significant performance increase and adapts to different types of sparse matrices.
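The row-parallel structure that makes SpMV partitionable between a CPU and a GPU is easiest to see in the CSR format. This sequential numpy version shows only the per-row kernel, with no claim about the paper's scheduling or optimized kernels:

```python
import numpy as np

def spmv_csr(indptr, indices, data, x):
    """Sequential CSR SpMV: y[i] = sum of A[i, j] * x[j] over the nonzeros
    of row i. Rows are independent, so a CPU thread pool and GPU thread
    blocks can each take a disjoint slice of rows."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        start, end = indptr[i], indptr[i + 1]   # nonzero range of row i
        y[i] = data[start:end] @ x[indices[start:end]]
    return y

# A = [[10, 0, 0],
#      [ 0, 0, 3],
#      [ 2, 0, 1]]   stored in CSR form:
indptr = np.array([0, 1, 2, 4])
indices = np.array([0, 2, 0, 2])
data = np.array([10.0, 3.0, 2.0, 1.0])
x = np.array([1.0, 2.0, 3.0])
print(spmv_csr(indptr, indices, data, x))  # [10.  9.  5.]
```

Irregular row lengths are what cause the branch divergence and cache misses the abstract mentions: consecutive rows touch unpredictable entries of `x`, and a GPU warp assigned to rows of very different lengths idles on the short ones.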
{"title":"Adaptive Sparse Matrix-Vector Multiplication on CPU-GPU Heterogeneous Architecture","authors":"Jing Nie, Chunlei Zhang, Dan Zou, Fei Xia, Lina Lu, Xiang Wang, Fei Zhao","doi":"10.1145/3341069.3341072","DOIUrl":"https://doi.org/10.1145/3341069.3341072","url":null,"abstract":"SpMV is the core algorithm in solving the sparse linear equations, which is widely used in many research and engineering application field. GPU is the most common coprocessor in high-performance computing domain, and has already been proven to researchers the practical value in accelerating various algorithms. A lot of related work has been carried out to optimize parallel SpMV on CPU-GPU platforms, which mainly focuses on reducing the computing overhead on the GPU, including branch divergence and cache misses, and little attention was paid to the overall efficiency of the heterogeneous platform. In this paper, we describe the design and implementation of an adaptive sparse matrix-vector multiplication (SpMV) on CPU-GPU heterogeneous architecture. We propose a dynamic task scheduling framework for CPU-GPU platform to improve the utilization of both CPU and GPU. A double buffering scheme is also presented to hide the data transfer overhead between CPU and GPU. Two deeply optimized SpMV kernels are deployed for CPU and GPU respectively. The evaluation on typical sparse matrices indicates that the proposed algorithm obtains both significant performance increase and adaptability to different types of sparse matrices.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126405936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Inventory Management of Automobile After-sales Parts Based on Data Mining 基于数据挖掘的汽车售后零部件库存管理
Qun Liu, Kehua Miao, Kaihong Lin
The inventory management of automotive aftermarket parts is of great significance to the after-sales activities of automobile dealers and to the reduction of operating costs. In view of the insufficient utilization of automobile after-sales service data, data mining methods are introduced to further analyze and mine the data. Taking the historical sales data of auto parts as the mining object, the K-means clustering algorithm and an LSTM recurrent neural network were applied, and Python tools were used to develop an automobile after-sales parts classification model and a parts inventory prediction model. The classification results can be used to analyze whether a dealer's inventory structure is reasonable, and the forecast results can predict the demand for parts in the next stage. Combining the classification and prediction results, the study provides a reference for auto dealers in determining the variety and quantity structure of auto parts.
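As a rough illustration of the classification step, a minimal K-means in numpy is shown below. The part features (monthly demand, unit cost) and the two-cluster split are hypothetical stand-ins, not data or parameters from the paper.

```python
import numpy as np

def kmeans(points, k, n_iter=20, seed=0):
    """Minimal K-means: returns (centroids, labels) after n_iter Lloyd iterations."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Hypothetical part features: [monthly demand, unit cost]
parts = np.array([[100.0, 5.0], [95.0, 6.0],    # fast-moving, cheap parts
                  [3.0, 200.0], [5.0, 180.0]])  # slow-moving, expensive parts
centroids, labels = kmeans(parts, k=2)
```

Clustering parts this way groups SKUs with similar demand/cost profiles, which is the kind of structure the study checks a dealer's inventory against; the demand-forecasting half (the LSTM) is not sketched here.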
{"title":"Inventory Management of Automobile After-sales Parts Based on Data Mining","authors":"Qun Liu, Kehua Miao, Kaihong Lin","doi":"10.1145/3341069.3342975","DOIUrl":"https://doi.org/10.1145/3341069.3342975","url":null,"abstract":"The inventory management of automotive aftermarket parts is of great significance to the after-sales activities of automobile dealers and the reduction of operating costs. In view of the problem of insufficient utilization of automobile after-sales service data, it is necessary to introduce data mining methods to further analyze and mine data. Taking the historical sales data of auto parts as the mining object, K-means clustering algorithm and LSTM recurrent neural network were applied, and the Python tool was used to develop the automobile after-sales parts classification model and the parts inventory prediction model. The classification results can be used to analyze whether the dealer's inventory structure is reasonable. The forecast results can predict the demand for parts in the next stage. Comprehensive classification and prediction results, the study provides reference for the auto dealer to determine the variety structure and quantity structure of the auto parts.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116877877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Journal
Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference