{"title":"专家与精确度混合物用于调整服务质量","authors":"HamidReza Imani, Abdolah Amirany, Tarek El-Ghazawi","doi":"arxiv-2407.14417","DOIUrl":null,"url":null,"abstract":"The increasing demand for deploying large Mixture-of-Experts (MoE) models in\nresource-constrained environments necessitates efficient approaches to address\ntheir high memory and computational requirements challenges. Moreover, given\nthat tasks come in different user-defined constraints and the available\nresources change over time in multi-tenant environments, it is necessary to\ndesign an approach which provides a flexible configuration space. This paper\npresents an adaptive serving approach for the efficient deployment of MoE\nmodels, capitalizing on partial quantization of the experts. By dynamically\ndetermining the number of quantized experts and their distribution across CPU\nand GPU, our approach explores the Pareto frontier and offers a fine-grained\nrange of configurations for tuning throughput and model quality. Our evaluation\non an NVIDIA A100 GPU using a Mixtral 8x7B MoE model for three language\nmodelling benchmarks demonstrates that the throughput of token generation can\nbe adjusted from 0.63 to 13.00 token per second. This enhancement comes with a\nmarginal perplexity increase of 2.62 to 2.80, 6.48 to 7.24, and 3.24 to 3.53\nfor WikiText2, PTB, and C4 datasets respectively under maximum quantization.\nThese results highlight the practical applicability of our approach in dynamic\nand accuracy-sensitive applications where both memory usage and output quality\nare important.","PeriodicalId":501291,"journal":{"name":"arXiv - CS - Performance","volume":"69 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mixture of Experts with Mixture of Precisions for Tuning Quality of Service\",\"authors\":\"HamidReza Imani, Abdolah Amirany, Tarek El-Ghazawi\",\"doi\":\"arxiv-2407.14417\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The increasing demand for deploying large Mixture-of-Experts (MoE) models in\\nresource-constrained environments necessitates efficient approaches to address\\ntheir high memory and computational requirements challenges. Moreover, given\\nthat tasks come in different user-defined constraints and the available\\nresources change over time in multi-tenant environments, it is necessary to\\ndesign an approach which provides a flexible configuration space. This paper\\npresents an adaptive serving approach for the efficient deployment of MoE\\nmodels, capitalizing on partial quantization of the experts. By dynamically\\ndetermining the number of quantized experts and their distribution across CPU\\nand GPU, our approach explores the Pareto frontier and offers a fine-grained\\nrange of configurations for tuning throughput and model quality. Our evaluation\\non an NVIDIA A100 GPU using a Mixtral 8x7B MoE model for three language\\nmodelling benchmarks demonstrates that the throughput of token generation can\\nbe adjusted from 0.63 to 13.00 token per second. 
This enhancement comes with a\\nmarginal perplexity increase of 2.62 to 2.80, 6.48 to 7.24, and 3.24 to 3.53\\nfor WikiText2, PTB, and C4 datasets respectively under maximum quantization.\\nThese results highlight the practical applicability of our approach in dynamic\\nand accuracy-sensitive applications where both memory usage and output quality\\nare important.\",\"PeriodicalId\":501291,\"journal\":{\"name\":\"arXiv - CS - Performance\",\"volume\":\"69 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Performance\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2407.14417\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Performance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.14417","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Mixture of Experts with Mixture of Precisions for Tuning Quality of Service
The increasing demand for deploying large Mixture-of-Experts (MoE) models in resource-constrained environments requires efficient approaches to address their high memory and computational costs. Moreover, because tasks arrive with different user-defined constraints and the available resources change over time in multi-tenant environments, the serving approach must provide a flexible configuration space. This paper presents an adaptive serving approach for the efficient deployment of MoE models that capitalizes on partial quantization of the experts. By dynamically determining the number of quantized experts and their distribution across CPU and GPU, our approach explores the Pareto frontier and offers a fine-grained range of configurations for tuning throughput and model quality. Our evaluation on an NVIDIA A100 GPU using a Mixtral 8x7B MoE model on three language modelling benchmarks demonstrates that token-generation throughput can be adjusted from 0.63 to 13.00 tokens per second. This gain comes with a marginal perplexity increase, from 2.62 to 2.80 on WikiText2, 6.48 to 7.24 on PTB, and 3.24 to 3.53 on C4, under maximum quantization. These results highlight the practical applicability of our approach in dynamic, accuracy-sensitive applications where both memory usage and output quality matter.
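
To make the idea of partial expert quantization with CPU/GPU placement concrete, below is a minimal sketch, not the authors' implementation. The Expert container, the symmetric per-row int8 scheme, the quantize-the-first-k policy, and all function names (quantize_int8, configure, expert_matmul) are illustrative assumptions; it also assumes a CUDA device is available.

```python
# Sketch: partially quantize MoE experts and split them across CPU and GPU.
# Sweeping num_quantized from 0 to len(experts) traces a throughput/quality
# trade-off analogous to the configuration space described in the abstract.
from dataclasses import dataclass
from typing import Optional
import torch


@dataclass
class Expert:
    weight: Optional[torch.Tensor]            # fp16 expert weight (e.g. one FFN matrix)
    device: str = "cuda"                      # where the expert currently lives
    q_weight: Optional[torch.Tensor] = None   # int8 weights if quantized
    scale: Optional[torch.Tensor] = None      # per-row dequantization scales


def quantize_int8(w: torch.Tensor):
    """Symmetric per-row int8 quantization of a 2-D weight matrix."""
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.round(w / scale).to(torch.int8)
    return q, scale


def configure(experts, num_quantized: int, quantized_device: str = "cpu"):
    """Quantize the first `num_quantized` experts and park them on
    `quantized_device`; keep the remaining experts in fp16 on the GPU."""
    for i, e in enumerate(experts):
        if i < num_quantized:
            q, s = quantize_int8(e.weight.float())
            e.q_weight, e.scale = q.to(quantized_device), s.to(quantized_device)
            e.weight, e.device = None, quantized_device   # drop the fp16 copy
        else:
            e.weight, e.device = e.weight.half().to("cuda"), "cuda"
    return experts


def expert_matmul(e: Expert, x: torch.Tensor) -> torch.Tensor:
    """Apply one expert to activations x, dequantizing on the fly if needed."""
    if e.q_weight is not None:
        w = (e.q_weight.float() * e.scale).to(x.device, dtype=x.dtype)
        return x @ w.t()
    return x @ e.weight.to(x.device, dtype=x.dtype).t()
```

A serving loop built on this sketch could re-run configure whenever the user-defined quality target or the available GPU memory changes, which is the kind of flexible, fine-grained configuration space the paper argues for.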