Ahmed Mujtaba, Wai Kong Lee, Byoung Chul Ko, Hyung Jin Chang, Seong Oun Hwang
AuGQ: Augmented quantization granularity to overcome accuracy degradation for sub-byte quantized deep neural networks
DOI: 10.1007/s10489-025-06495-1
Journal: Applied Intelligence, vol. 55, no. 7 (published 2025-03-27)
JCR: Q2, Computer Science, Artificial Intelligence (IF 3.5)
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10489-025-06495-1.pdf
Article page: https://link.springer.com/article/10.1007/s10489-025-06495-1
Citations: 0
Abstract
Deploying neural networks on IoT devices unlocks the potential for many innovative applications, but the sheer size and computational cost of deep learning (DL) networks have prevented their widespread use. Quantization mitigates this issue by reducing model precision, enabling deployment on resource-constrained edge devices. However, at extremely low bit-widths such as 2-bit and 4-bit, aggressive compression leads to significant accuracy degradation due to the reduced representational capacity of the network. A critical aspect of effective quantization is identifying the range of real (FP32) values that impacts model accuracy. To address accuracy loss at sub-byte levels, we introduce Augmented Quantization (AuGQ), a novel granularity technique tailored for low bit-width quantization. AuGQ segments the range of real-valued (FP32) weight and activation distributions into small uniform intervals and applies affine quantization within each interval to enhance accuracy. We evaluated AuGQ with both post-training quantization (PTQ) and quantization-aware training (QAT), achieving accuracy comparable to full-precision (32-bit) DL networks. Our findings demonstrate that AuGQ is agnostic to the training pipeline and to batch-normalization folding, distinguishing it from conventional quantization techniques. Furthermore, when integrated into state-of-the-art PTQ algorithms, AuGQ requires only 64 training samples for fine-tuning, \(16\times \) fewer than traditional methods. This reduction makes high-accuracy quantization practical at sub-byte bit-widths, suiting it to real-world IoT deployments and enhancing computational efficiency on edge devices.
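The core idea the abstract describes, splitting the FP32 range into uniform sub-intervals and quantizing each with its own affine (scale and zero-point) mapping, can be sketched as follows. This is a minimal illustration of interval-wise affine quantization in general, not the authors' AuGQ implementation; the function name, the round-trip (quantize-then-dequantize) simulation, and the choice of `num_intervals` are all assumptions for illustration.

```python
import numpy as np

def piecewise_affine_quantize(x, num_intervals=4, bits=2):
    """Hypothetical sketch: split the FP32 range of x into uniform
    sub-intervals and apply a separate affine quantizer in each one,
    then dequantize to simulate the quantization error."""
    lo, hi = float(x.min()), float(x.max())
    edges = np.linspace(lo, hi, num_intervals + 1)  # uniform interval edges
    qmax = 2 ** bits - 1                            # largest integer code per interval
    out = np.empty_like(x)
    for i in range(num_intervals):
        a, b = edges[i], edges[i + 1]
        # include the right edge only in the last interval
        if i == num_intervals - 1:
            mask = (x >= a) & (x <= b)
        else:
            mask = (x >= a) & (x < b)
        scale = (b - a) / qmax if b > a else 1.0
        q = np.round((x[mask] - a) / scale)  # affine quantize within [a, b]
        out[mask] = q * scale + a            # dequantize back to FP32
    return out
```

With 4 intervals at 2 bits over a range of width 2, each interval's quantization step is (2/4)/3 ≈ 0.167, so the round-trip error stays below half a step, whereas a single 2-bit affine quantizer over the same range has a step of 2/3 ≈ 0.667. This illustrates why finer interval granularity can recover accuracy at sub-byte bit-widths.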
Journal description:
With a focus on research in artificial intelligence and neural networks, this journal addresses real-life problems in manufacturing, defense, management, government, and industry that are too complex to be solved through conventional approaches and instead require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance.
The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.