Progressive Bitwidth Assignment Approaches for Efficient Capsule Networks Quantization

IEEE Access · IF 3.6 · CAS Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-27 · DOI: 10.1109/ACCESS.2025.3534434
Mohsen Raji, Amir Ghazizadeh Ahsaei, Kimia Soroush, Behnam Ghavami
{"title":"Progressive Bitwidth Assignment Approaches for Efficient Capsule Networks Quantization","authors":"Mohsen Raji;Amir Ghazizadeh Ahsaei;Kimia Soroush;Behnam Ghavami","doi":"10.1109/ACCESS.2025.3534434","DOIUrl":null,"url":null,"abstract":"Capsule Networks (CapsNets) are a class of neural network architectures that can be used to more accurately model hierarchical relationships due to their hierarchical structure and dynamic routing algorithms. However, their high accuracy comes at the cost of significant memory and computational resources, making them less feasible for deployment on resource-constrained devices. In this paper, progressive bitwidth assignment approaches are introduced to efficiently quantize the CapsNets. Initially, a comprehensive and detailed analysis of parameter quantization in CapsNets is performed exploring various granularities, such as block-wise quantization and dynamic routing quantization. Then, three quantization approaches are applied to progressively quantize the CapsNet, considering various insights into the susceptibility of layers to quantization. The proposed approaches include Post-Training Quantization (PTQ) strategies that minimize the dependence on floating-point operations and incorporates layer-specific integer bit-widths based on quantization error analysis. PTQ strategies employ Power-of-Two (PoT) scaling factors to simplify computations, effectively utilizing hardware shifts and significantly reducing the computational complexity. This technique not only reduces the memory footprint but also maintains accuracy by introducing a range clipping method tailored to the hardware’s capabilities, obviating the need for data preprocessing. Our experimental results on ShallowCaps and DeepCaps networks across multiple datasets (MNIST, Fashion-MNIST, CIFAR-10, and SVHN) demonstrate the efficiency of our approach. Specifically, on the CIFAR-10 dataset using the DeepCaps architecture, we achieved a substantial memory reduction (<inline-formula> <tex-math>$7.02\\times $ </tex-math></inline-formula> for weights and <inline-formula> <tex-math>$3.74\\times $ </tex-math></inline-formula> for activations) with a minimal accuracy loss of only 0.09%. By using progressive bitwidth assignment and post-training quantization, this work optimizes CapsNets for efficient, real-time visual processing on resource-constrained edge devices, enabling applications in IoT, mobile platforms, and embedded systems.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"13 ","pages":"21533-21546"},"PeriodicalIF":3.6000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10854429","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Access","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10854429/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Capsule Networks (CapsNets) are a class of neural network architectures that model hierarchical relationships more accurately than conventional architectures, thanks to their hierarchical structure and dynamic routing algorithms. However, this accuracy comes at the cost of significant memory and computational resources, making CapsNets difficult to deploy on resource-constrained devices. In this paper, progressive bitwidth assignment approaches are introduced to quantize CapsNets efficiently. First, a comprehensive and detailed analysis of parameter quantization in CapsNets is performed, exploring various granularities such as block-wise quantization and dynamic routing quantization. Three quantization approaches are then applied to progressively quantize the CapsNet, drawing on insights into how susceptible each layer is to quantization. The proposed approaches include Post-Training Quantization (PTQ) strategies that minimize dependence on floating-point operations and incorporate layer-specific integer bitwidths derived from quantization error analysis. The PTQ strategies employ Power-of-Two (PoT) scaling factors to simplify computation, so that scaling can be implemented with hardware shifts, significantly reducing computational complexity. This technique not only reduces the memory footprint but also maintains accuracy through a range clipping method tailored to the hardware's capabilities, obviating the need for data preprocessing. Experimental results on the ShallowCaps and DeepCaps networks across multiple datasets (MNIST, Fashion-MNIST, CIFAR-10, and SVHN) demonstrate the efficiency of the approach. Specifically, on the CIFAR-10 dataset with the DeepCaps architecture, we achieve a substantial memory reduction ($7.02\times$ for weights and $3.74\times$ for activations) with an accuracy loss of only 0.09%. By combining progressive bitwidth assignment with post-training quantization, this work optimizes CapsNets for efficient, real-time visual processing on resource-constrained edge devices, enabling applications in IoT, mobile platforms, and embedded systems.
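The abstract does not include code, so the NumPy sketch below is only a minimal illustration of the two mechanisms it names: symmetric uniform post-training quantization with a Power-of-Two scaling factor and range clipping (so dequantization reduces to a bit shift), and a greedy, error-driven progressive bitwidth assignment across layers. All names here (`pot_shift`, `quantize_pot`, `assign_bitwidths`) and the average-bitwidth budget heuristic are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def pot_shift(tensor, num_bits):
    """Choose a power-of-two scale 2**shift for symmetric quantization.

    Rounding log2 of the ideal real-valued scale to the nearest integer
    means dequantization can later be done with a plain bit shift.
    (Illustrative heuristic, not the paper's exact scale selection.)
    """
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8 bits
    max_abs = float(np.max(np.abs(tensor))) + 1e-12
    return int(np.round(np.log2(qmax / max_abs)))

def quantize_pot(tensor, num_bits):
    """Uniform symmetric PTQ with a PoT scale and range clipping."""
    qmax = 2 ** (num_bits - 1) - 1
    shift = pot_shift(tensor, num_bits)
    q = np.round(tensor * (2.0 ** shift))       # scale up by 2**shift
    q = np.clip(q, -qmax - 1, qmax)             # clip to the integer range
    return q.astype(np.int32), shift

def dequantize(q, shift):
    """Map integers back to reals: divide by 2**shift (a right shift)."""
    return q.astype(np.float64) / (2.0 ** shift)

def quantization_error(tensor, num_bits):
    """Mean-squared error a given bitwidth would introduce for this tensor."""
    q, shift = quantize_pot(tensor, num_bits)
    return float(np.mean((dequantize(q, shift) - tensor) ** 2))

def assign_bitwidths(layers, budget_bits, candidates=(8, 6, 5, 4)):
    """Greedy progressive assignment over a dict {layer_name: weight_array}.

    Start every layer at the widest candidate, then repeatedly narrow the
    layer whose error at the next-lower bitwidth is smallest, until the
    average bitwidth meets the budget or nothing can be narrowed further.
    """
    bits = {name: candidates[0] for name in layers}
    while np.mean(list(bits.values())) > budget_bits:
        best_name, best_err = None, np.inf
        for name, weights in layers.items():
            idx = candidates.index(bits[name])
            if idx + 1 == len(candidates):
                continue                        # already at the narrowest setting
            err = quantization_error(weights, candidates[idx + 1])
            if err < best_err:
                best_name, best_err = name, err
        if best_name is None:
            break
        bits[best_name] = candidates[candidates.index(bits[best_name]) + 1]
    return bits
```

A hypothetical call such as `assign_bitwidths({"conv1": w1, "primary_caps": w2, "digit_caps": w3}, budget_bits=6)` would return a per-layer bitwidth map that a PTQ pass could then realize with `quantize_pot`; layers whose weights tolerate narrowing poorly keep wider integers, which is the intuition behind the layer-susceptibility analysis described above.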
Source Journal
IEEE Access
Categories: Computer Science, Information Systems; Engineering, Electrical & Electronic
CiteScore: 9.80
Self-citation rate: 7.70%
Annual article volume: 6673
Review time: 6 weeks
About the journal: IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest. IEEE Access publishes articles that are of high interest to readers: original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary": reviewers either Accept or Reject an article in the form it is submitted, in order to achieve rapid turnaround. Especially encouraged are submissions on: multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals; practical articles discussing new experiments or measurement techniques, or interesting solutions to engineering problems; development of new or improved fabrication or manufacturing techniques; and reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.
Latest Articles in This Journal
Named Entity Recognition With Clue-Word Tags From Patent Documents in Materials Science
Development of a Neural Network-Based Model to Generate an Absolute Luminance Map of an Interior Using a Camera Raw Image File
Reinforcement Learning-Based Fuzzer for 5G RRC Security Evaluation
Cite and Seek: Automated Literary Reference Mining at Corpus Scale
RSMA-Enabled RIS-Assisted Integrated Sensing and Communication for 6G: A Comprehensive Survey