{"title":"Pipeline-based Optimization Method for Large-Scale End-to-End Inference","authors":"Caili Gao, Y. Dou, P. Qiao","doi":"10.1145/3611450.3611463","DOIUrl":null,"url":null,"abstract":"Enhancing the utilization of computing resources is a crucial technical challenge within the realm of deep learning model deployment and application. It holds significant importance in effectively leveraging various deep learning models. However, when it comes to actual deployment and operation, deep learning models face an urgent task—processing large-scale data. This processing flow is an end-to-end procedure that typically involves three essential steps: preprocessing, model inference, and postprocessing. Presently, existing research mainly focuses on the optimization of deep learning model algorithms, and rarely considers the coordinated utilization of CPU and accelerator resources after model deployment, resulting in low resource utilization and execution efficiency. In order to solve this problem, in this study, we comprehensively analyzed the demand for computing resources and the mutual adaptation relationship between the end-to-end processing flow in the model application and designed a general algorithm based on the pipeline idea to Realize the overlapping of CPU processing and accelerator operation process. Through this scheme, the serial execution flow of the end-to-end processing can be performed in parallel, resulting in a significant reduction in accelerator latency. We extensively conducted experiments on two specific tasks, and the outcomes demonstrated that our proposed method considerably enhances the accelerator’s utilization rate and program execution efficiency. Specifically, the utilization rate of the accelerator surged from 26% to over 97%, while the program’s execution efficiency witnessed a remarkable improvement of 3.41 to 5.54 times.","PeriodicalId":289906,"journal":{"name":"Proceedings of the 2023 3rd International Conference on Artificial Intelligence, Automation and Algorithms","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 3rd International Conference on Artificial Intelligence, Automation and Algorithms","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3611450.3611463","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Enhancing the utilization of computing resources is a crucial technical challenge in the deployment and application of deep learning models, and it is essential for effectively leveraging such models in practice. In actual deployment and operation, deep learning models face an urgent task: processing large-scale data. This processing flow is an end-to-end procedure that typically involves three essential steps: preprocessing, model inference, and postprocessing. Existing research mainly focuses on optimizing deep learning model algorithms and rarely considers the coordinated use of CPU and accelerator resources after model deployment, resulting in low resource utilization and execution efficiency. To solve this problem, in this study we comprehensively analyzed the computing-resource demands of the end-to-end processing flow in model applications and the mutual adaptation among its stages, and designed a general pipeline-based algorithm that overlaps CPU processing with accelerator operation. Through this scheme, the formerly serial end-to-end execution flow can be performed in parallel, resulting in a significant reduction in accelerator latency. We conducted extensive experiments on two specific tasks, and the outcomes demonstrate that our proposed method considerably enhances the accelerator's utilization rate and program execution efficiency. Specifically, the utilization rate of the accelerator surged from 26% to over 97%, while the program's execution efficiency improved by a factor of 3.41 to 5.54.
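To make the pipeline idea concrete, the following is a minimal sketch, not the paper's actual implementation: three worker threads connected by bounded queues, so that CPU-bound preprocessing of batch i+1 and postprocessing of batch i-1 can overlap with accelerator-bound inference of batch i. The functions `preprocess`, `infer`, and `postprocess` are hypothetical placeholders supplied by the caller.

```python
import queue
import threading

SENTINEL = object()  # end-of-stream marker


def _stage(fn, in_q, out_q):
    """Repeatedly take a batch from in_q, apply fn, and forward the result."""
    while True:
        item = in_q.get()
        if item is SENTINEL:
            out_q.put(SENTINEL)
            return
        out_q.put(fn(item))


def run_pipeline(batches, preprocess, infer, postprocess, depth=4):
    """Overlap CPU pre-/postprocessing with accelerator inference.

    While the accelerator runs `infer` on batch i, the CPU can already
    run `preprocess` on batch i+1 and `postprocess` on batch i-1.
    """
    q_in = queue.Queue(maxsize=depth)
    q_mid = queue.Queue(maxsize=depth)
    q_out = queue.Queue(maxsize=depth)
    q_done = queue.Queue()  # unbounded, so the feeder below never deadlocks

    stages = [
        threading.Thread(target=_stage, args=(preprocess, q_in, q_mid)),   # CPU
        threading.Thread(target=_stage, args=(infer, q_mid, q_out)),       # accelerator
        threading.Thread(target=_stage, args=(postprocess, q_out, q_done)),  # CPU
    ]
    for t in stages:
        t.start()

    for b in batches:
        q_in.put(b)          # feed the pipeline; blocks when the queue is full
    q_in.put(SENTINEL)

    results = list(iter(q_done.get, SENTINEL))  # drain until the sentinel arrives
    for t in stages:
        t.join()
    return results
```

Bounding the intermediate queues (`depth`) keeps memory usage in check while still allowing a few batches to be in flight, which is what keeps the accelerator busy between consecutive inferences; the paper's actual scheme may differ in how stages are scheduled and synchronized.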