To solve the problem of remote pedestrian detection, in which the target must be detected with little available information, a new pedestrian detection algorithm based on a Convolutional Neural Network (CNN) is proposed. The algorithm combines shallow-layer edge features with grayscale images to replace the RGB color information of the original image as the input to the CNN, increasing the amount of effective information. Then, during deep learning training, the learning rate is incorporated into the cross-entropy function to optimize it. Finally, the improved CNN is trained on a hybrid of four common pedestrian datasets and applied, via transfer learning, to remote pedestrian intrusion detection in the railway industry. The experimental results show that, compared with existing CNN-based remote pedestrian detection algorithms, the new method improves detection accuracy by 2% and generalizes well.
{"title":"Remote Pedestrian Detection Algorithm Based on Edge Information Input CNN","authors":"Chi Zhang, Nanlin Tan, Yingxia Lin","doi":"10.1145/3341069.3342969","DOIUrl":"https://doi.org/10.1145/3341069.3342969","url":null,"abstract":"In order to solve remote pedestrian detection problem, the target need to be detected in the absence of information, a new pedestrian detection algorithm based on Convolution Neural Network (CNN) is proposed. The algorithm uses shallow layer edge features combined with grayscale images to replace the RGB color information of the original image, as an input to the Convolutional Neural Network to increase the amount of effective information. Then, in deep learning training process, the cross entropy is combined with the learning rate to optimize the cross entropy function. Finally, the improved Convolutional Neural Network is trained on four common pedestrian hybrid datasets to apply it to the remote pedestrian intrusion detection of the railway industry using transfer learning. The experimental results show that compared with the existing Convolutional Neural Network remote pedestrian detection algorithm, the new method can effectively improve the accuracy of detection 2% and has a good universality.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126617873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the era of big data, mass production, analysis and application of data have become a new trend. In the long-term design, production, operation and testing processes of aerospace enterprises, a large amount of valuable data has been generated; collecting and analyzing these data can improve the management of aerospace enterprises and yield competitive advantages. As the semi-structured and unstructured data produced by aerospace enterprises grow year by year, how to store and analyze the data, and how to mine and share knowledge, have become major problems. The existing knowledge management system cannot meet users' diversified needs with traditional database technology alone; it must also combine distributed computing and storage technology to solve the problems of knowledge storage, sharing, mining, retrieval and recommendation in a big data environment. Aerospace enterprises therefore need to build a knowledge management system based on big data technology to support knowledge innovation and application. From the perspective of data operations, and relying on big data technologies from the Hadoop ecosystem, this paper constructs a Hadoop-based knowledge management framework model for aerospace enterprises.
{"title":"Research on Knowledge Management Technology of Aerospace Engineering Based on Big Data","authors":"Jun Liu","doi":"10.1145/3341069.3342996","DOIUrl":"https://doi.org/10.1145/3341069.3342996","url":null,"abstract":"In the era of big data, mass production, analysis and application of data have become a new trend. In the long-term design, production, operation and testing process of aerospace enterprises, a large number of valuable data have been generated. Collection and analysis of these data can improve the management of aerospace enterprises and gain competitive advantages. With the increase of semi-structured and unstructured data produced by aerospace enterprises year by year, how to store and analyze data, how to mine and share knowledge has become a major problem. The existing knowledge management system cannot meet the diversified needs of users only by traditional database technology. It also needs to combine distributed computing and storage technology to solve the problems of knowledge storage, knowledge sharing, knowledge mining, knowledge retrieval and recommendation in big data environment. Aerospace enterprises need to build a knowledge management system based on big data technology to support knowledge innovation and knowledge application. 
From the perspective of data operation and relying on Hadoop ecosystem related big data technology, this paper constructs a knowledge management framework model for aerospace enterprises based on Hadoop.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131086822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For the general public, it is most convenient to know the weather at a specific location and a particular time. However, current weather forecasting services offered by meteorological organizations only provide wide-range, coarse-grained forecasts. This work uses historical weather observation data and machine learning (ML) techniques to build models that enable such specific forecasts. Different model settings were applied, and the corresponding results were compared and analyzed in terms of training cost and prediction quality. The preliminary results indicate that the ML-enabled forecast model can serve as a supplementary source for people who need finer-grained weather conditions. To improve the quality of the ML forecasting models, besides further fine-tuning and algorithmic refinement, a large volume of long-term historical weather data is critical, since climate patterns possess subtle periodic characteristics.
{"title":"Realizing Specific Weather Forecast through Machine Learning Enabled Prediction Model","authors":"I-Ching Chen, Shueh-Cheng Hu","doi":"10.1145/3341069.3341084","DOIUrl":"https://doi.org/10.1145/3341069.3341084","url":null,"abstract":"To general people, it is more convenient to know weather condition at a specific location and particular time. However, current weather forecasting services offered by meteorological observation organizations only provide a wide-range or coarse-grained forecast. This research work tried to utilize historical weather observation data and machine learning (ML) techniques to build models enabling specific weather forecast. Different settings of models were applied and the corresponding results were compared and analyzed in terms of training cost and prediction quality. The preliminary results indicate that the ML-enabled forecast model can serve as a supplementary source for people who need to know finer-grained whether condition. To improve the quality of the ML forecasting models, besides more fine-tuning and algorithms renovation, large volume of long-term historical weather data are critical since climate changes to a large extent, possess subtle periodical characteristics.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121262338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rapid development of the banking industry, the number of transactions is growing exponentially. At the same time, abnormal transactions are also increasing, causing immeasurable losses and risks. To accurately identify suspicious transactions from massive customer information and bank account transaction data, this paper adopts a BalanceCascade algorithm based on Relief to address the class-imbalance problem in identifying abnormal bank account transactions, and proposes an effective abnormal transaction identification model. The AUC and the K-S statistic, standard metrics for imbalanced classification, are used for performance evaluation on an abnormal bank account transaction dataset from the Kaggle platform. The results show that the proposed model achieves an AUC of 0.90 and a K-S value as high as 0.64, indicating that it reduces the false positive rate as far as possible while retaining strong classification and recognition ability. The method thus has reference value for identifying abnormal bank account transactions, and can help banks respond rapidly and improve customer service.
{"title":"Bank Account Abnormal Transaction Recognition Based on Relief Algorithm and BalanceCascade","authors":"Yun-xiang Liu, Ze-Shen Tang, Qi Xu","doi":"10.1145/3341069.3342981","DOIUrl":"https://doi.org/10.1145/3341069.3342981","url":null,"abstract":"With the rapid development of the banking industry, the number of transactions is exponential growth.At the same time, abnormal transactions are also increasing, causing immeasurable losses and risks.In terms of how to accurately identify suspicious transactions from massive customer information and bank account transaction data, this paper adopts the BalanceCascade algorithm based on Relief to solve the problem of unbalanced data in the identification of abnormal transactions in bank accounts, and proposes an effective abnormal transaction identification model.At the same time, the AUC and K-S index as the unbalanced data classification standards of performance evaluation, and finally to Kaggle data platform of bank accounts abnormal transaction data set, the results show that the proposed identification model of performance evaluation index of AUC 0.90 KS value at the same time also is as high as 0.64, shows that the model in as much as possible to reduce the rate of false positives and has high ability of classification and recognition, the method of bank accounts abnormal transaction identification has a certain reference value, enhance rapid response and improve the level of customer service for Banks have certain effect.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116255217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, the concept of evolutionary multitasking has emerged in the field of evolutionary computation as a promising approach to automatically exploit latent synergies among distinct optimization problems. Many experimental studies have shown that the multifactorial evolutionary algorithm (MFEA), an implementation of evolutionary multitasking, can outperform traditional approaches that solve each task independently on synthetic and real-world multi-task optimization (MTO) problems, in terms of both solution quality and computational resources. However, as far as we know, no study has demonstrated the superiority of evolutionary multitasking through theoretical analysis. In this paper, we propose a simple (4+2) MFEA to optimize the benchmark functions Jump_k and LeadingOnes simultaneously. Our theoretical analysis shows that the upper bound on the expected running time of the proposed algorithm on the Jump_k function can be improved to O(n^2 + 2^k), while the best upper bound for single-task optimization on this problem is O(n^(k-1)). Moreover, the upper bound on the expected running time to optimize the LeadingOnes function is not increased. This result indicates that evolutionary multitasking is a promising approach for problems that traditional optimization methods cannot tackle well. This paper provides evidence of the effectiveness of evolutionary multitasking from the perspective of theoretical analysis.
{"title":"Improve Theoretical Upper Bound of Jumpk Function by Evolutionary Multitasking","authors":"Y. Lian, Zhengxin Huang, Yuren Zhou, Zefeng Chen","doi":"10.1145/3341069.3342982","DOIUrl":"https://doi.org/10.1145/3341069.3342982","url":null,"abstract":"Recently, the concept of evolutionary multitasking has emerged in the field of evolutionary computation as a promising approach to exploit the latent synergies among distinct optimization problems automatically. Many experimental studies have shown multifactorial evolutionary algorithm (MFEA), an implemented algorithm of evolutionary multitasking, can outperform the traditional optimization approaches of solving each task independently on handling synthetic and real-world multi-task optimization (MTO) problems in terms of solution quality and computation resource. However, as far as we know, there exists no study demonstrating the superiority of evolutionary multitasking from the aspect of theoretical analysis. In this paper, we propose a simple (4+2) MFEA to optimize the benchmarks Jumpk and LeadingOnes functions simultaneously. Our theoretical analysis shows that the upper bound of expected running time for the proposed algorithm on the Jumpk function can be improved to O(n2 + 2k) while the best upper bound for single-task optimization on this problem is O(nk-1). Moreover, the upper bound of expected running time to optimize LeadingOnes function is not increased. This result indicates that evolutionary multitasking is probably a promising approach to deal with some problems which traditional optimization methods can't well tackle. 
This paper provides an evidence of the effectiveness of the evolutionary multitasking from the aspect of theoretical analysis.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116652863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
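The two benchmark functions analyzed above have short standard definitions; a sketch following the usual formulations of LeadingOnes and Jump_k from the runtime-analysis literature (on bitstrings of length n, Jump_k rewards up to n-k ones and the all-ones string, with a fitness valley of width k in between):

```python
def leading_ones(x):
    """Number of consecutive ones from the left of the bitstring."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def jump_k(x, k):
    """Jump_k: k + |x|_1 if |x|_1 <= n - k or x is all ones,
    otherwise n - |x|_1 (the 'gap' an algorithm must jump across)."""
    n = len(x)
    ones = sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones
```

The O(n^(k-1)) versus O(n^2 + 2^k) bounds in the abstract refer to the expected number of such fitness evaluations before the all-ones optimum of jump_k is found.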
Smart technologies and IoT devices are designed to execute intensive applications that demand substantial computational and other system resources, yet the devices themselves are resource-constrained. To address this challenge, we adopt Multi-Access Edge Computing, a new paradigm that localizes Cloud services and capabilities at the edge of the Radio Access Network, in close proximity to mobile subscribers. In this paper, we propose a Resource-Aware Decentralized Computing and Caching framework for Multi-Access Edge Computing, in which smart end-user devices work collaboratively and independently with resourceful edge devices or nearby peer devices even when the network is unreliable. These devices can offload intensive applications or access already-cached task results, providing efficient resource utilization and a high Quality of User Experience. The problem is formulated using non-cooperative game theory; it is NP-hard to solve, and we show that the game admits a Nash equilibrium. Our scheme uses computational and storage resources efficiently. Extensive evaluation shows that our scheme outperforms the conventional scheme in terms of enhanced storage capability, high Quality of User Experience, and low energy consumption.
{"title":"Resource-Aware Decentralized Adaptive Computational Offloading & Task-Caching for Multi-Access Edge Computing","authors":"Getenet Tefera, Kun She, F. Deeba, Awais Ahmed","doi":"10.1145/3341069.3341075","DOIUrl":"https://doi.org/10.1145/3341069.3341075","url":null,"abstract":"Smart technologies or IoT devices have been designed to execute intensive applications that request more computational and other computer system resources. However, those devices have a resource constraint. To address the challenge, we adopt Multi-Access Edge Computing which is a new paradigm that transforms and localize Cloud services and capabilities at the Edge of Radio-Access Network based on proximity for mobile subscribers. In this paper, we proposed a Resource-Aware Decentralized Computing and Caching framework for Multi-Access Edge Computing. So, smart end-user devices work collaboratively and independently with resourceful edge devices or peer devices in close proximity during the unreliable network. Moreover, those devices can offload intensive application or access completed cached tasks to provide efficient resource utilization & Quality of User Experience. The drawback is expressed based on Non-Cooperative Game Theory which is NP-hard to solve and we show that the game concedes a Nash Equilibrium. Our Scheme optimizes computational and storage resources efficiently. 
We have done exhaustive observation the outcome shows that our scheme provides better performance than the conventional scheme in terms of enhanced storage capability, high Quality of User Experience, and low energy consumption.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128168229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
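The paper's game formulation is not reproduced in the abstract, so the toy congestion game below (invented costs, linear congestion on the edge server) only illustrates the mechanism such results rely on: iterated best response converging to a pure Nash equilibrium in an offloading setting.

```python
def best_response_offloading(local_costs, edge_base, edge_congestion,
                             max_rounds=100):
    """Toy offloading game: device i pays local_costs[i] to compute locally,
    or edge_base + edge_congestion * (#offloaders) to offload. Iterated best
    response reaches a fixed point, i.e. a pure Nash equilibrium, in this
    congestion game."""
    n = len(local_costs)
    offload = [False] * n
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            others = sum(offload) - offload[i]
            edge_cost = edge_base + edge_congestion * (others + 1)
            want = edge_cost < local_costs[i]
            if want != offload[i]:
                offload[i] = want
                changed = True
        if not changed:          # no device wants to deviate: equilibrium
            return offload
    return offload
```

With costs [10, 1, 10], edge base cost 2 and congestion weight 1, the two expensive-to-run devices offload while the cheap one stays local, and no device can lower its cost by deviating.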
H. Ren, Xiaolin Chen, Bei Guan, Yongji Wang, Tiantian Liu, Kongyang Peng
Satellite orbit prediction is a significant research problem for collision avoidance in space. However, current orbit prediction methods are not accurate enough, because information such as space environment conditions is lacking. Traditional methods construct a perturbation model; because of the intrinsically low accuracy of the perturbation model, the prediction accuracy of the low-order analytical solution is relatively low, while the high-order analytical solution is extremely complex, resulting in low computational efficiency and sometimes no solution at all. This paper presents a satellite orbit prediction method based on a neural network, which discovers orbital variation patterns by training on historical TLE (two-line element) data to predict satellite orbits. The experimental results show that the proposed algorithm is feasible.
{"title":"Research on Satellite Orbit Prediction Based on Neural Network Algorithm","authors":"H. Ren, Xiaolin Chen, Bei Guan, Yongji Wang, Tiantian Liu, Kongyang Peng","doi":"10.1145/3341069.3342995","DOIUrl":"https://doi.org/10.1145/3341069.3342995","url":null,"abstract":"Satellite orbits predictions is a significant research problem for collision avoidance in space area. However, current prediction methods for satellite orbits are not accurate enough because of the lack of information such as space environment condition. The traditional methods tend to construct a perturbation model. Because of the intrinsic low accuracy of the perturbation model, the prediction accuracy of the low-order analytical solution is relatively low. While the high-order analytical solution is extremely complex, it results in low computational efficiency and even no solution. This paper presents a satellite orbit prediction method based on neural network algorithm, which discovers the orbital variation law by training historical TLE data to predict satellite orbit. The experiment results show that the proposed algorithm is feasible.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126767918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To address the problem of multi-UAV cooperative coverage reconnaissance mission planning, a planning method combining a neural network and a genetic algorithm is proposed. Firstly, the relative positions of the UAVs, the position of each UAV with respect to the boundary of the target area, and the motion performance of each UAV are taken as inputs to the neural network, and the outputs are the rough paths of the UAVs. Then, the weights and thresholds of the neural network are optimized using a genetic algorithm, and the optimal paths for multi-UAV cooperative regional reconnaissance are solved. The simulation results show that the method not only enables the UAVs to learn reconnaissance rules autonomously, but also plans the cooperative reconnaissance path of each UAV, achieving effective coverage of the target area with good reconnaissance efficiency.
{"title":"Multi-UAVs Cooperative Coverage Reconnaissance with Neural Network and Genetic Algorithm","authors":"Chang Liu, Wen-jun Xie, Peng Zhang, Qing Guo, Doujian Ding","doi":"10.1145/3341069.3342968","DOIUrl":"https://doi.org/10.1145/3341069.3342968","url":null,"abstract":"Aiming at the problem of multi-UAVs cooperative coverage reconnaissance mission planning, a planning method combining neural network and genetic algorithm is proposed. Firstly, the relative position relationship between multiple UAVs, the position relationship between each UAV and the boundary of the target area and the motion performance of each UAV are taken as inputs of the neural network, and the output is rough path of each UAV. Then, the weights and thresholds of neural network are optimized by using genetic algorithm, and the optimal paths of multi-UAVs cooperative regional reconnaissance is solved. The simulation results show that the method can not only enable UAVs to learn reconnaissance rules autonomously, but also plan the cooperative reconnaissance paths of each UAV, achieve effective coverage of the target area, and have good reconnaissance efficiency.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122531388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jing Nie, Chunlei Zhang, Dan Zou, Fei Xia, Lina Lu, Xiang Wang, Fei Zhao
SpMV is the core kernel in solving sparse linear systems and is widely used in many research and engineering fields. The GPU is the most common coprocessor in high-performance computing, and its practical value in accelerating various algorithms has been well established. Much related work has been carried out to optimize parallel SpMV on CPU-GPU platforms, mainly focusing on reducing computing overhead on the GPU, including branch divergence and cache misses, while little attention has been paid to the overall efficiency of the heterogeneous platform. In this paper, we describe the design and implementation of an adaptive sparse matrix-vector multiplication (SpMV) for CPU-GPU heterogeneous architectures. We propose a dynamic task scheduling framework to improve the utilization of both the CPU and the GPU. A double-buffering scheme is also presented to hide the data transfer overhead between CPU and GPU. Two deeply optimized SpMV kernels are deployed for the CPU and GPU, respectively. The evaluation on typical sparse matrices indicates that the proposed algorithm achieves both significant performance gains and adaptability to different types of sparse matrices.
{"title":"Adaptive Sparse Matrix-Vector Multiplication on CPU-GPU Heterogeneous Architecture","authors":"Jing Nie, Chunlei Zhang, Dan Zou, Fei Xia, Lina Lu, Xiang Wang, Fei Zhao","doi":"10.1145/3341069.3341072","DOIUrl":"https://doi.org/10.1145/3341069.3341072","url":null,"abstract":"SpMV is the core algorithm in solving the sparse linear equations, which is widely used in many research and engineering application field. GPU is the most common coprocessor in high-performance computing domain, and has already been proven to researchers the practical value in accelerating various algorithms. A lot of reletead work has been carried out to optimize parallel SpMV on CPU-GPU platforms, which mainly focuses on reducing the computing overhead on the GPU, including branch divergence and cache missing, and little attention was paid to the overall efficiency of the heterogeneous platform. In this paper, we describe the design and implementation of an adaptive sparse matrix-vector multiplication (SpMV) on CPU-GPU heterogeneous architecture. We propose a dynamic task scheduling framework for CPU-GPU platform to improve the utilization of both CPU and GPU. A double buffering scheme is also presented to hide the data transfer overhead between CPU and GPU. Two deeply optimized SpMV kernels are deployed for CPU and GPU respectively. 
The evaluation on typical sparse matrices indicates that the proposed algorithm obtains both significant performance increase and adaptability to different types of sparse matrices.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126405936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
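The SpMV kernel itself is compact; a reference CSR implementation in Python is shown below. The paper's optimized CPU/GPU kernels, scheduling, and double buffering are not reproduced here — this only illustrates the storage format and the per-row dot product that each worker computes once rows are partitioned between devices.

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for A stored in CSR form: nonzero values, their column
    indices, and row_ptr[i]:row_ptr[i+1] delimiting row i's nonzeros."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# A = [[4, 0, 9],
#      [0, 7, 0],
#      [0, 0, 5]]
values = np.array([4.0, 9.0, 7.0, 5.0])
col_idx = np.array([0, 2, 1, 2])
row_ptr = np.array([0, 2, 3, 4])
y = spmv_csr(values, col_idx, row_ptr, np.array([1.0, 2.0, 3.0]))
```

A row-wise split of `row_ptr` is the natural unit for the kind of dynamic CPU/GPU task scheduling the paper proposes, since each row's product is independent.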
The inventory management of automotive aftermarket parts is of great significance to automobile dealers' after-sales activities and to reducing operating costs. Given that automobile after-sales service data is underutilized, data mining methods are needed to further analyze and mine these data. Taking the historical sales data of auto parts as the mining object, the K-means clustering algorithm and an LSTM recurrent neural network were applied, and Python was used to develop an automobile after-sales parts classification model and a parts inventory prediction model. The classification results can be used to analyze whether the dealer's inventory structure is reasonable, and the forecast results predict the demand for parts in the next period. Combining the classification and prediction results, the study provides a reference for auto dealers in determining the variety and quantity structure of their parts inventory.
{"title":"Inventory Management of Automobile After-sales Parts Based on Data Mining","authors":"Qun Liu, Kehua Miao, Kaihong Lin","doi":"10.1145/3341069.3342975","DOIUrl":"https://doi.org/10.1145/3341069.3342975","url":null,"abstract":"The inventory management of automotive aftermarket parts is of great significance to the after-sales activities of automobile dealers and the reduction of operating costs. In view of the problem of insufficient utilization of automobile after-sales service data, it is necessary to introduce data mining methods to further analyze and mine data. Taking the historical sales data of auto parts as the mining object, K-means clustering algorithm and LSTM recurrent neural network were applied, and the Python tool was used to develop the automobile after-sales parts classification model and the parts inventory prediction model. The classification results can be used to analyze whether the dealer's inventory structure is reasonable. The forecast results can predict the demand for parts in the next stage. Comprehensive classification and prediction results, the study provides reference for the auto dealer to determine the variety structure and quantity structure of the auto parts.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116877877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}