QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead

Amir Zandieh, Majid Daliri, Insu Han
{"title":"QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead","authors":"Amir Zandieh, Majid Daliri, Insu Han","doi":"arxiv-2406.03482","DOIUrl":null,"url":null,"abstract":"Serving LLMs requires substantial memory due to the storage requirements of\nKey-Value (KV) embeddings in the KV cache, which grows with sequence length. An\neffective approach to compress KV cache is quantization. However, traditional\nquantization methods face significant memory overhead due to the need to store\nquantization constants (at least a zero point and a scale) in full precision\nper data block. Depending on the block size, this overhead can add 1 or 2 bits\nper quantized number. We introduce QJL, a new quantization approach that\nconsists of a Johnson-Lindenstrauss (JL) transform followed by sign-bit\nquantization. In contrast to existing methods, QJL eliminates memory overheads\nby removing the need for storing quantization constants. We propose an\nasymmetric estimator for the inner product of two vectors and demonstrate that\napplying QJL to one vector and a standard JL transform without quantization to\nthe other provides an unbiased estimator with minimal distortion. We have\ndeveloped an efficient implementation of the QJL sketch and its corresponding\ninner product estimator, incorporating a lightweight CUDA kernel for optimized\ncomputation. When applied across various LLMs and NLP tasks to quantize the KV\ncache to only 3 bits, QJL demonstrates a more than fivefold reduction in KV\ncache memory usage without compromising accuracy, all while achieving faster\nruntime. Codes are available at \\url{https://github.com/amirzandieh/QJL}.","PeriodicalId":501291,"journal":{"name":"arXiv - CS - Performance","volume":"35 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Performance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2406.03482","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Serving LLMs requires substantial memory due to the storage requirements of Key-Value (KV) embeddings in the KV cache, which grows with sequence length. An effective approach to compressing the KV cache is quantization. However, traditional quantization methods face significant memory overhead due to the need to store quantization constants (at least a zero point and a scale) in full precision per data block. Depending on the block size, this overhead can add 1 or 2 bits per quantized number. We introduce QJL, a new quantization approach that consists of a Johnson-Lindenstrauss (JL) transform followed by sign-bit quantization. In contrast to existing methods, QJL eliminates this memory overhead by removing the need to store quantization constants. We propose an asymmetric estimator for the inner product of two vectors and demonstrate that applying QJL to one vector and a standard JL transform without quantization to the other yields an unbiased estimator with minimal distortion. We have developed an efficient implementation of the QJL sketch and its corresponding inner product estimator, incorporating a lightweight CUDA kernel for optimized computation. When applied across various LLMs and NLP tasks to quantize the KV cache to only 3 bits, QJL achieves a more than fivefold reduction in KV cache memory usage without compromising accuracy, all while running faster. Code is available at https://github.com/amirzandieh/QJL.
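To make the asymmetric estimator concrete, below is a minimal NumPy sketch of the idea described in the abstract: a key vector is projected with a Gaussian JL matrix and reduced to its sign bits (plus its norm), while the query is projected with the same matrix but left unquantized. The dimensions, the stored per-key norm, and all names (d, m, qjl_quantize, qjl_inner_product) are illustrative assumptions for exposition, not the paper's CUDA implementation; see the repository for the actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 128, 256                      # head dimension and sketch dimension (illustrative values)
S = rng.standard_normal((m, d))      # shared Gaussian JL projection, generated once from a fixed seed

def qjl_quantize(k):
    """Quantize a key vector: project with S and keep only the sign bits (1 bit per coordinate).
    In this sketch the key's norm is kept as a single scalar, rather than per-block
    zero points and scales as in standard block-wise quantization."""
    return np.sign(S @ k), np.linalg.norm(k)

def qjl_inner_product(q, k_bits, k_norm):
    """Asymmetric estimator: the query is projected with the same S but not quantized.
    For Gaussian rows s_i, E[(s_i @ q) * sign(s_i @ k)] = sqrt(2/pi) * (q @ k) / ||k||,
    so the sqrt(pi/2) * ||k|| / m factor makes the average an unbiased estimate of q @ k."""
    return np.sqrt(np.pi / 2) / m * k_norm * (S @ q) @ k_bits

# quick sanity check of the estimator on random vectors
q, k = rng.standard_normal(d), rng.standard_normal(d)
print(np.dot(q, k), qjl_inner_product(q, *qjl_quantize(k)))
```

Because the sign bits need no per-block zero point or scale, the only extra stored quantity in this sketch is one norm per key vector, which is the "zero overhead" property the abstract refers to.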