I²NQ: Inter and Intra Nonuniform Quantization for Single Image Super-Resolution

IF 8.9 | CAS Region 1 (Computer Science) | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2024-11-18 | DOI: 10.1109/TNNLS.2024.3493155
Liting Sun;Jingwei Xin;Keyu Li;Jie Li;Nannan Wang;Xinbo Gao
Volume 36, Issue 7, pp. 12131-12145 | Journal Article | Not open access | Citations: 0
Article page: https://ieeexplore.ieee.org/document/10756200/

Abstract

Neural network quantization is an efficient model compression technique that converts weights and activations from floating point to integer. However, existing model quantization methods are designed primarily for high-level vision tasks and do not sufficiently account for the unique feature distributions of image super-resolution (SR) reconstruction models. On the one hand, the objective of SR is to restore high-frequency, fine-detail information while preserving the overall feature distribution; SR models therefore remove regularization techniques to maintain the original distribution. Vanilla quantization methods, in contrast, often employ regularization to normalize features for stable network training, which destroys the inherent information of the feature distribution. On the other hand, the feature distribution in SR models exhibits a nonuniform, bell-shaped form, whereas common quantization methods adopt a uniform strategy with equal quantization intervals, which fails to capture this nonuniform distribution effectively. To address these issues, we propose a novel method named Inter and Intra Nonuniform Quantization (I²NQ), which accounts for the specific characteristics of feature distributions in SR reconstruction models. Additionally, we propose a weight adjustment method called flex-scale-weight-adjust (FSWA), which maintains the diversity of weight information and reduces quantization errors. Extensive experiments demonstrate that our proposed method surpasses other quantization methods in both reconstruction metrics and visual reconstruction quality.
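The abstract's core argument is that uniform quantization (equal intervals) is a poor fit for the bell-shaped feature distributions of SR models. The paper's actual I²NQ scheme is not specified in the abstract, so the sketch below only illustrates that general point: it compares a generic 4-bit uniform quantizer against a simple quantile-based nonuniform quantizer on Gaussian ("bell-shaped") data. All function names, the bit width, and the quantile-based codebook are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Bell-shaped "activations" standing in for SR feature maps (illustrative only).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=100_000)

def quantize_uniform(x, bits=4):
    """Equal-width intervals over the observed range (the 'vanilla' strategy)."""
    levels = 2 ** bits
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (levels - 1)
    return np.round((x - lo) / step) * step + lo

def quantize_nonuniform(x, bits=4):
    """Quantile-based codebook: levels are denser where the data is denser."""
    levels = 2 ** bits
    # Codewords at the medians of equal-probability bins.
    codebook = np.quantile(x, (np.arange(levels) + 0.5) / levels)
    # Nearest-codeword assignment via decision boundaries between codewords.
    edges = (codebook[:-1] + codebook[1:]) / 2
    return codebook[np.searchsorted(edges, x)]

mse_uniform = np.mean((x - quantize_uniform(x)) ** 2)
mse_nonuniform = np.mean((x - quantize_nonuniform(x)) ** 2)
print(f"uniform MSE:    {mse_uniform:.5f}")
print(f"nonuniform MSE: {mse_nonuniform:.5f}")
```

On bell-shaped data the uniform quantizer wastes levels on the sparsely populated tails, so the density-aware codebook typically yields a lower mean-squared quantization error, which is the distribution mismatch the abstract points to.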
Source Journal
IEEE Transactions on Neural Networks and Learning Systems (Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture)
CiteScore: 23.80
Self-citation rate: 9.60%
Articles per year: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.
Latest articles in this journal
Learning From M-Tuple One-vs-All Confidence Comparison Data
Sparse Variational Student-t Processes for Heavy-Tailed Modeling
Prompt Then Refine: Prompt-Free SAM-Enhanced Collaborative Learning Network for Detecting Salient Objects in Underwater Images
CoreKD: A Context-Aware Local Region Structural Contrastive Knowledge Distillation Framework for Object Detection
Enhancing Stability of Probabilistic Model-Based Reinforcement Learning by Adaptive Noise Filtering