Cost-Effective Extension of DRAM-PIM for Group-Wise LLM Quantization
Byeori Kim; Changhun Lee; Gwangsun Kim; Eunhyeok Park
IEEE Computer Architecture Letters, vol. 24, no. 1, pp. 53-56
DOI: 10.1109/LCA.2025.3532682
Published: 2025-02-13
https://ieeexplore.ieee.org/document/10886951/
Citations: 0
Abstract
Processing-in-Memory (PIM) is emerging as promising next-generation hardware for addressing memory bottlenecks in large language model (LLM) inference by exploiting internal memory bandwidth, enabling more energy-efficient on-device AI. However, the large footprint of LLMs makes accelerating them on PIM challenging because the available space is limited. Recent advances in weight-only quantization, especially group-wise weight quantization (GWQ), reduce LLM model sizes, allowing parameters to be stored at 4-bit precision or lower with minimal accuracy loss. Despite this, current PIM architectures suffer performance degradation when handling the additional computations required for quantized weights. While incorporating extra logic could mitigate this degradation, it is often prohibitively expensive due to the constraints of memory technology, so solutions with minimal area overhead are needed. This work introduces two key innovations, 1) scale cascading and 2) an INT2FP converter, to support GWQ-quantized LLMs on PIM with minimal dequantization latency and area overhead compared to FP16 GEMV. Experimental results show that the proposed approach adds less than 0.6% area overhead to the existing PIM unit and incurs only a 7% latency overhead for dequantization and GEMV with 4-bit GWQ at a group size of 128, compared to FP16 GEMV, while offering a 1.55× performance gain over baseline dequantization.
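For readers less familiar with GWQ, the NumPy sketch below illustrates what per-group dequantization adds to a GEMV. It is an illustrative software model only: the function names are hypothetical, and it assumes asymmetric 4-bit quantization with per-group FP16 scales and zero-points and a group size of 128 along the input dimension. It does not represent the paper's scale-cascading or INT2FP hardware datapath.

import numpy as np

def quantize_groupwise_int4(W, group_size=128):
    # Group-wise asymmetric 4-bit quantization of a weight matrix W (out_features x in_features).
    # Each contiguous group of `group_size` weights along the input dimension shares
    # one scale and one zero-point (stored in FP16, an assumed convention).
    out_f, in_f = W.shape
    assert in_f % group_size == 0
    Wg = W.reshape(out_f, in_f // group_size, group_size)
    w_min = Wg.min(axis=-1, keepdims=True)
    w_max = Wg.max(axis=-1, keepdims=True)
    scale = np.maximum((w_max - w_min) / 15.0, 1e-8)   # 4 bits -> 16 levels; avoid divide-by-zero
    zero = np.round(-w_min / scale)
    q = np.clip(np.round(Wg / scale) + zero, 0, 15).astype(np.uint8)
    return q, scale.astype(np.float16), zero.astype(np.float16)

def dequant_gemv(q, scale, zero, x, group_size=128):
    # GEMV y = W_hat @ x, dequantizing the 4-bit weights group by group on the fly.
    # The per-group subtract-and-multiply here models the extra work a PIM unit
    # must absorb when the weights are stored in GWQ form.
    out_f, n_groups, _ = q.shape
    xg = x.reshape(n_groups, group_size)
    y = np.zeros(out_f, dtype=np.float32)
    for g in range(n_groups):
        W_hat = (q[:, g, :].astype(np.float32) - zero[:, g].astype(np.float32)) \
                * scale[:, g].astype(np.float32)       # INT4 -> FP dequantization
        y += W_hat @ xg[g]
    return y

# Example usage (hypothetical layer size):
# W = np.random.randn(4096, 4096).astype(np.float32)
# x = np.random.randn(4096).astype(np.float32)
# q, s, z = quantize_groupwise_int4(W)
# y = dequant_gemv(q, s, z, x)

In this model, every group of 128 weights carries its own scale and zero-point, so the dot product cannot run directly on the packed INT4 values; the per-group dequantization arithmetic is the overhead that the paper's scale cascading and INT2FP converter are designed to hide at under 0.6% area and roughly 7% latency cost relative to FP16 GEMV.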
About the Journal
IEEE Computer Architecture Letters is a rigorously peer-reviewed forum for publishing early, high-impact results in the areas of uni- and multiprocessor computer systems, computer architecture, microarchitecture, workload characterization, performance evaluation and simulation techniques, and power-aware computing. Submissions are welcomed on any topic in computer architecture, especially but not limited to: microprocessor and multiprocessor systems, microarchitecture and ILP processors, workload characterization, performance evaluation and simulation techniques, compiler-hardware and operating system-hardware interactions, interconnect architectures, memory and cache systems, power and thermal issues at the architecture level, I/O architectures and techniques, independent validation of previously published results, analysis of unsuccessful techniques, domain-specific processor architectures (e.g., embedded, graphics, network, etc.), real-time and high-availability architectures, reconfigurable systems.