VLCQ: Post-training quantization for deep neural networks using variable length coding
Reem Abdel-Salam, Ahmed H. Abdel-Gawad, Amr G. Wassal
Future Generation Computer Systems, vol. 166, Article 107654. Published 2024-12-11. DOI: 10.1016/j.future.2024.107654
Citations: 0
Abstract
Quantization plays a crucial role in efficiently deploying deep learning models on resource-constrained devices. Post-training quantization requires neither access to the original dataset nor retraining of the full model. Current methods that achieve high performance (near-baseline results) rely on INT8 fixed-point integers; pushing to lower bit-widths for higher model compression, however, causes significant performance degradation. In this paper, we propose VLCQ, which relaxes the fixed-point encoding constraint that prevents quantization techniques from quantizing the weights more effectively. VLCQ instead uses variable-length encoding, which opens up the whole space of quantization techniques and achieves results close to, or even better than, the baseline at lower bit-widths, without access to any training data and without fine-tuning the model. Extensive experiments were carried out on various deep-learning models for image classification, segmentation, and object detection tasks. Compared to state-of-the-art post-training quantization approaches, the experimental results show that the proposed method offers improved performance with better model compression (lower bit-rate). For per-channel quantization, our method surpassed the FP32 accuracy and the Piece-Wise Linear Quantization (PWLQ) method on most models, while achieving up to a 6X compression ratio relative to FP32 and up to 1.7X relative to PWLQ. When model compression is the priority and a small performance drop is acceptable, our method reaches up to a 12.25X compression ratio relative to FP32 within a 4% performance loss. For per-tensor quantization, our method is competitive with the Data-Free Quantization (DFQ) scheme in achieving the best performance, while being more flexible than DFQ in reaching lower bit rates across different tasks and models.
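The paper's actual encoding pipeline is not reproduced here; the sketch below only illustrates the idea stated in the abstract, namely that once weights are quantized per channel, a variable-length code can spend fewer bits on frequent quantization levels than a fixed-length (fixed-point) code would. The function names (quantize_per_channel, huffman_code_lengths), the 16-level uniform quantizer, the use of Huffman coding as a stand-in for "variable length coding", and the toy weight tensor are illustrative assumptions, not the authors' method.

# Hypothetical sketch (not the paper's algorithm): uniform per-channel
# quantization of a weight tensor, followed by Huffman code lengths over the
# quantized symbols to estimate the average bits per weight.
import heapq
from collections import Counter

import numpy as np


def quantize_per_channel(weights: np.ndarray, num_levels: int = 16):
    """Uniformly quantize each output channel (axis 0) to `num_levels` levels."""
    q = np.empty(weights.shape, dtype=np.int32)
    scales = np.empty(weights.shape[0])
    for c in range(weights.shape[0]):
        w = weights[c]
        scale = (w.max() - w.min()) / (num_levels - 1) or 1.0  # guard constant channels
        q[c] = np.round((w - w.min()) / scale).astype(np.int32)
        scales[c] = scale
    return q, scales


def huffman_code_lengths(symbols: np.ndarray) -> dict:
    """Return a {symbol: code length in bits} map built from symbol frequencies."""
    freq = Counter(symbols.tolist())
    if len(freq) == 1:  # degenerate case: a single symbol still needs 1 bit
        return {next(iter(freq)): 1}
    # Min-heap of (frequency, tiebreak, list of symbols in this subtree).
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freq}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:  # each merge adds one bit to every symbol in the merged subtrees
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, counter, s1 + s2))
        counter += 1
    return lengths


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "layer": 64 output channels x 512 weights, roughly Gaussian.
    w = rng.normal(0.0, 0.05, size=(64, 512)).astype(np.float32)
    q, _ = quantize_per_channel(w, num_levels=16)
    lengths = huffman_code_lengths(q.ravel())
    freq = Counter(q.ravel().tolist())
    total = sum(freq.values())
    avg_bits = sum(freq[s] * lengths[s] for s in freq) / total
    # A fixed-length code for 16 levels costs 4 bits/weight; the variable-length
    # code exploits the non-uniform symbol distribution to drop below that.
    print(f"average bits per weight: {avg_bits:.2f} (fixed-length: 4.00)")

Because quantized weights of trained networks are far from uniformly distributed, the entropy-coded average falls below the fixed bit-width of the same quantizer, which is the kind of saving the abstract attributes to relaxing the fixed-point encoding constraint.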
About the journal:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.