LLM as a System Service on Mobile Devices
Wangsong Yin, Mengwei Xu, Yuanchun Li, Xuanzhe Liu
arXiv - CS - Operating Systems · Published 2024-03-18 · arXiv: 2403.11805
Citations: 0
Abstract
As LLMs become more powerful and more deeply involved in user-device interactions, there is growing demand to execute them on-device to better preserve user privacy. In this work, we propose a new paradigm of mobile AI: LLM as a system service on mobile devices (LLMaaS). Unlike traditional DNNs, which execute in a stateless manner, such a system service is stateful: LLM execution often needs to maintain persistent state (mainly the KV cache) across multiple invocations. To minimize LLM context-switching overhead under a tight device memory budget, this work presents LLMS, which decouples the memory management of app and LLM contexts around a key idea: fine-grained, chunk-wise, globally optimized KV cache compression and swapping. By fully exploiting the unique characteristics of the KV cache, LLMS introduces three novel techniques: (1) Tolerance-Aware Compression, which compresses chunks according to their measured accuracy tolerance to compression; (2) IO-Recompute Pipelined Loading, which introduces recomputation into swapping-in to accelerate it; and (3) Chunk Lifecycle Management, which optimizes the memory activities of chunks with ahead-of-time swapping-out and an LCTRU (Least Compression-Tolerable and Recently-Used) queue-based eviction. In evaluations on well-established traces and various edge devices, LLMS reduces context-switching latency by up to two orders of magnitude compared to competitive baseline solutions.
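
To make the abstract's mechanisms concrete, below is a minimal Python sketch of how tolerance-aware compression and an LCTRU-style eviction queue could fit together. It is not the paper's implementation: the names (KVChunk, choose_compression_level, LCTRUQueue), the discrete compression levels, and the exact eviction ordering are assumptions made for illustration only.

    # Minimal sketch (assumptions, not the paper's code): per-chunk tolerance-aware
    # compression plus an LCTRU-style eviction queue for KV-cache chunks.
    from dataclasses import dataclass, field
    import time

    @dataclass
    class KVChunk:
        chunk_id: int
        size_bytes: int
        tolerance: float             # measured accuracy tolerance to compression, in [0, 1]
        compression_level: int = 0   # 0 = uncompressed; higher = more aggressive (hypothetical scale)
        last_used: float = field(default_factory=time.monotonic)

    def choose_compression_level(chunk: KVChunk, num_levels: int = 4) -> int:
        # Tolerance-aware compression: chunks that tolerate more accuracy loss
        # receive a more aggressive (higher) compression level.
        return min(int(chunk.tolerance * num_levels), num_levels - 1)

    class LCTRUQueue:
        # Orders chunks by compression tolerance and recency; choosing the least
        # tolerable, then least recently used chunk as the victim is one
        # illustrative reading of "Least Compression-Tolerable and Recently-Used".
        def __init__(self) -> None:
            self.chunks: dict[int, KVChunk] = {}

        def touch(self, chunk: KVChunk) -> None:
            # Record a use of the chunk (e.g., on each LLM invocation).
            chunk.last_used = time.monotonic()
            self.chunks[chunk.chunk_id] = chunk

        def evict_until(self, used_bytes: int, budget_bytes: int) -> list[KVChunk]:
            # Swap out chunks until memory use fits the budget; an evicted chunk
            # would be compressed at its chosen level and written to storage.
            victims: list[KVChunk] = []
            while used_bytes > budget_bytes and self.chunks:
                victim = min(self.chunks.values(),
                             key=lambda c: (c.tolerance, c.last_used))
                del self.chunks[victim.chunk_id]
                victim.compression_level = choose_compression_level(victim)
                used_bytes -= victim.size_bytes
                victims.append(victim)
            return victims

    # Usage example (hypothetical sizes): the low-tolerance chunk is swapped out first.
    q = LCTRUQueue()
    q.touch(KVChunk(chunk_id=0, size_bytes=4 << 20, tolerance=0.8))
    q.touch(KVChunk(chunk_id=1, size_bytes=4 << 20, tolerance=0.2))
    swapped = q.evict_until(used_bytes=8 << 20, budget_bytes=4 << 20)

In the paper's design, IO-Recompute Pipelined Loading would sit on the swap-in path, overlapping storage reads of swapped chunks with on-device recomputation of others; that part is omitted from this sketch.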