{"title":"Optimal Workload Placement on Multi-Instance GPUs","authors":"Bekir Turkkan, Pavankumar Murali, Pavithra Harsha, Rohan Arora, Gerard Vanloo, Chandra Narayanaswami","doi":"arxiv-2409.06646","DOIUrl":null,"url":null,"abstract":"There is an urgent and pressing need to optimize usage of Graphical\nProcessing Units (GPUs), which have arguably become one of the most expensive\nand sought after IT resources. To help with this goal, several of the current\ngeneration of GPUs support a partitioning feature, called Multi-Instance GPU\n(MIG) to allow multiple workloads to share a GPU, albeit with some constraints.\nIn this paper we investigate how to optimize the placement of Large Language\nModel (LLM)-based AI Inferencing workloads on GPUs. We first identify and\npresent several use cases that are encountered in practice that require\nworkloads to be efficiently placed or migrated to other GPUs to make room for\nincoming workloads. The overarching goal is to use as few GPUs as possible and\nto further minimize memory and compute wastage on GPUs that are utilized. We\nhave developed two approaches to address this problem: an optimization method\nand a heuristic method. We benchmark these with two workload scheduling\nheuristics for multiple use cases. Our results show up to 2.85x improvement in\nthe number of GPUs used and up to 70% reduction in GPU wastage over baseline\nheuristics. We plan to enable the SRE community to leverage our proposed method\nin production environments.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"410 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06646","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
There is an urgent need to optimize the usage of Graphics Processing Units (GPUs), which have arguably become one of the most expensive and sought-after IT resources. To help with this goal, several current-generation GPUs support a partitioning feature called Multi-Instance GPU (MIG) that allows multiple workloads to share a GPU, albeit with some constraints. In this paper, we investigate how to optimize the placement of Large Language Model (LLM)-based AI inference workloads on GPUs. We first identify and present several use cases encountered in practice that require workloads to be efficiently placed or migrated to other GPUs to make room for incoming workloads. The overarching goal is to use as few GPUs as possible and to further minimize memory and compute wastage on the GPUs that are utilized. We have developed two approaches to address this problem: an optimization method and a heuristic method. We benchmark these against two workload-scheduling heuristics for multiple use cases. Our results show up to a 2.85x improvement in the number of GPUs used and up to a 70% reduction in GPU wastage over the baseline heuristics. We plan to enable the SRE community to leverage our proposed method in production environments.
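
To make the placement problem concrete, the sketch below shows a simple first-fit-decreasing packing of inference workloads onto GPUs carved into MIG-like slices. This is an illustrative baseline of the kind the paper benchmarks against, not the authors' optimization or heuristic method; the profile names and sizes loosely follow NVIDIA A100-80GB MIG profiles, and the Workload/GPU abstractions, the profile table, and the wastage metric are assumptions made for illustration. Real MIG placement also imposes slice-position constraints on which profiles can coexist, which this sketch ignores.

```python
# Illustrative sketch (not the paper's algorithm): first-fit-decreasing
# packing of LLM inference workloads onto GPUs using MIG-like slices.
# Profile sizes loosely follow A100-80GB MIG profiles (assumed values);
# real MIG slice-placement constraints are deliberately ignored here.

from dataclasses import dataclass, field

# (compute slices, memory in GB) per profile -- assumed for illustration.
MIG_PROFILES = {
    "1g.10gb": (1, 10),
    "2g.20gb": (2, 20),
    "3g.40gb": (3, 40),
    "4g.40gb": (4, 40),
    "7g.80gb": (7, 80),
}

GPU_COMPUTE_SLICES = 7   # total compute slices per GPU (assumed)
GPU_MEMORY_GB = 80       # total memory per GPU (assumed)


@dataclass
class GPU:
    free_compute: int = GPU_COMPUTE_SLICES
    free_memory: int = GPU_MEMORY_GB
    placements: list = field(default_factory=list)

    def fits(self, profile: str) -> bool:
        c, m = MIG_PROFILES[profile]
        return c <= self.free_compute and m <= self.free_memory

    def place(self, workload: str, profile: str) -> None:
        c, m = MIG_PROFILES[profile]
        self.free_compute -= c
        self.free_memory -= m
        self.placements.append((workload, profile))


def first_fit_decreasing(requests):
    """requests: list of (workload_name, mig_profile) pairs."""
    # Place the largest compute demands first, then reuse open GPUs greedily.
    ordered = sorted(requests, key=lambda r: MIG_PROFILES[r[1]][0], reverse=True)
    gpus = []
    for workload, profile in ordered:
        target = next((g for g in gpus if g.fits(profile)), None)
        if target is None:
            target = GPU()
            gpus.append(target)
        target.place(workload, profile)
    return gpus


if __name__ == "__main__":
    demo = [("llm-a", "3g.40gb"), ("llm-b", "2g.20gb"),
            ("llm-c", "1g.10gb"), ("llm-d", "4g.40gb")]
    for i, gpu in enumerate(first_fit_decreasing(demo)):
        wasted = gpu.free_compute / GPU_COMPUTE_SLICES
        print(f"GPU {i}: {gpu.placements}, unused compute fraction={wasted:.2f}")
```

Heuristics of this kind minimize neither GPU count nor wastage in general; the gap between such greedy baselines and a placement computed jointly (for example, via integer programming) is exactly what the reported 2.85x GPU-count and 70% wastage improvements quantify.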