{"title":"Towards SLO-Optimized LLM Serving via Automatic Inference Engine Tuning","authors":"Ke Cheng, Zhi Wang, Wen Hu, Tiannuo Yang, Jianguo Li, Sheng Zhang","doi":"arxiv-2408.04323","DOIUrl":null,"url":null,"abstract":"A service-level objective (SLO) is a target performance metric of service\nthat cloud vendors aim to ensure. Delivering optimized SLOs can enhance user\nsatisfaction and improve the competitiveness of cloud vendors. As large\nlanguage models (LLMs) are gaining increasing popularity across various fields,\nit is of great significance to optimize SLOs for LLM inference services. In\nthis paper, we observe that adjusting the parameters of LLM inference engines\ncan improve service performance, and the optimal parameter configurations of\ndifferent services are different. Therefore, we propose SCOOT, an automatic\nperformance tuning system to optimize SLOs for each LLM inference service by\ntuning the parameters of the inference engine. We first propose a generalized\nformulation of the tuning problem to handle various objectives and constraints\nbetween parameters, and SCOOT exploits the Bayesian optimization (BO) technique\nto resolve the problem via exploration and exploitation. Moreover, SCOOT adopts\na random forest to learn hidden constraints during the tuning process to\nmitigate invalid exploration. To improve the tuning efficiency, SCOOT utilizes\nthe parallel suggestion to accelerate the tuning process. Extensive experiments\ndemonstrate that SCOOT can significantly outperform existing tuning techniques\nin SLO optimization while greatly improving the tuning efficiency.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"184 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.04323","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
A service-level objective (SLO) is a target performance metric of a service that cloud vendors aim to guarantee. Delivering optimized SLOs can enhance user satisfaction and improve the competitiveness of cloud vendors. As large language models (LLMs) gain popularity across various fields, it is of great significance to optimize SLOs for LLM inference services. In this paper, we observe that adjusting the parameters of LLM inference engines can improve service performance, and that the optimal parameter configuration differs from one service to another. Therefore, we propose SCOOT, an automatic performance tuning system that optimizes SLOs for each LLM inference service by tuning the parameters of the inference engine. We first propose a generalized formulation of the tuning problem that handles various objectives and constraints between parameters, and SCOOT exploits Bayesian optimization (BO) to solve the problem via exploration and exploitation. Moreover, SCOOT adopts a random forest to learn hidden constraints during the tuning process and thereby mitigate invalid exploration. To improve tuning efficiency, SCOOT uses parallel suggestions to accelerate the tuning process. Extensive experiments demonstrate that SCOOT significantly outperforms existing tuning techniques in SLO optimization while greatly improving tuning efficiency.
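To make the described approach concrete, the sketch below illustrates the general pattern the abstract outlines: Bayesian optimization over inference-engine parameters, a random-forest classifier that learns hidden constraints (configurations that crash or violate limits) to down-weight invalid regions, and batched "parallel suggestions" per iteration. This is not the authors' SCOOT implementation; the parameter names, bounds, and benchmark hook are illustrative assumptions.

```python
# Minimal sketch of BO-based engine-parameter tuning with a learned feasibility model.
# Assumptions (not from the paper): the search space, the run_benchmark() hook, and the
# use of scikit-learn surrogates; SCOOT's actual formulation and components may differ.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Illustrative space: (max_batch_size, max_num_batched_tokens, kv_cache_fraction)
BOUNDS = np.array([[8, 256], [2048, 16384], [0.5, 0.95]])

def sample_candidates(n, rng):
    """Draw n random configurations uniformly inside the search space."""
    return rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(n, len(BOUNDS)))

def run_benchmark(cfg):
    """Placeholder: deploy the engine with cfg, replay a request trace, and return
    (slo_metric, feasible). A real system would launch the inference engine here."""
    raise NotImplementedError

def expected_improvement(mu, sigma, best):
    """EI acquisition for minimization (e.g., P99 latency)."""
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def tune(n_init=5, n_iter=20, batch=4, seed=0):
    rng = np.random.default_rng(seed)
    X, y, ok = [], [], []                       # configs, objectives, feasibility labels
    for cfg in sample_candidates(n_init, rng):  # random warm-up trials
        metric, feasible = run_benchmark(cfg)
        X.append(cfg); ok.append(int(feasible)); y.append(metric if feasible else np.nan)

    for _ in range(n_iter):
        X_arr, ok_arr, y_arr = np.array(X), np.array(ok), np.array(y)
        feas = ok_arr == 1
        # Surrogate fitted on feasible trials only (assumes warm-up found at least one).
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X_arr[feas], y_arr[feas])
        # Random forest learns the hidden constraint: which configs tend to be invalid.
        rf = RandomForestClassifier(n_estimators=100).fit(X_arr, ok_arr)

        cands = sample_candidates(2000, rng)
        mu, sigma = gp.predict(cands, return_std=True)
        if len(rf.classes_) == 2:
            p_ok = rf.predict_proba(cands)[:, 1]
        else:  # only one class observed so far
            p_ok = np.full(len(cands), float(rf.classes_[0]))
        score = expected_improvement(mu, sigma, np.nanmin(y_arr)) * p_ok

        # "Parallel suggestions": take the top-`batch` candidates per iteration;
        # a real tuner would evaluate them concurrently on separate replicas.
        for cfg in cands[np.argsort(-score)[:batch]]:
            metric, feasible = run_benchmark(cfg)
            X.append(cfg); ok.append(int(feasible)); y.append(metric if feasible else np.nan)

    best = int(np.nanargmin(y))
    return X[best], y[best]
```

Weighting expected improvement by the predicted feasibility probability is one common way to combine a BO surrogate with a learned constraint model; it steers exploration away from configurations the classifier expects to fail while still allowing exploitation of promising feasible regions.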