AdderIC: Towards Low Computation Cost Image Compression

Bowen Li, Xin Yao, Chao Li, Youneng Bao, Fanyang Meng, Yongsheng Liang
DOI: 10.1109/icassp43922.2022.9747652
Venue: ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Published: 2022-05-23
Citations: 1

Abstract

Recently, learned image compression methods have shown their outstanding rate-distortion performance when compared to traditional frameworks. Although numerous progress has been made in learned image compression, the computation cost is still at a high level. To address this problem, we propose AdderIC, which utilizes adder neural networks (AdderNet) to construct an image compression framework. According to the characteristics of image compression, we introduce several strategies to improve the performance of AdderNet in this field. Specifically, Haar Wavelet Transform is adopted to make AdderIC learn high-frequency information efficiently. In addition, implicit deconvolution with the kernel size of 1 is applied after each adder layer to reduce spatial redundancies. Moreover, we develop a novel Adder-ID-PixelShuffle cascade upsampling structure to remove checkerboard artifacts. Experiments demonstrate that our AdderIC model can largely outperform conventional AdderNet when applied in image compression and achieve comparable rate-distortion performance to that of its CNN baseline with about 80% multiplication FLOPs and 30% energy consumption reduction.
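The core idea of AdderNet is to replace the multiply-accumulate of an ordinary convolution with a negative L1 distance between the input patch and the kernel, so that the layer's response is computed with additions only. A minimal sketch of the two response functions (a generic illustration of the AdderNet operation, not the paper's code; function names are hypothetical):

```python
def adder_response(patch, kernel):
    # AdderNet similarity measure: negative L1 distance between the input
    # patch and the kernel. Computed with additions and subtractions only,
    # so it avoids the multiplications of a standard convolution.
    assert len(patch) == len(kernel)
    return -sum(abs(x - w) for x, w in zip(patch, kernel))

def conv_response(patch, kernel):
    # Standard cross-correlation response, shown for comparison; each term
    # is a multiplication.
    assert len(patch) == len(kernel)
    return sum(x * w for x, w in zip(patch, kernel))
```

Both functions are maximized when the patch matches the kernel; the adder response peaks at 0 for an exact match, which is why AdderNet pairs the operation with batch normalization in practice.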
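The Haar Wavelet Transform mentioned in the abstract separates a signal into a low-frequency band (pairwise averages) and a high-frequency band (pairwise differences), which is how AdderIC is exposed to high-frequency information. A one-level 1-D sketch (illustrative only; the paper applies the 2-D transform to images):

```python
def haar_1d(signal):
    # One level of the (unnormalized) Haar transform: pairwise averages give
    # the low-frequency subband, pairwise differences the high-frequency one.
    assert len(signal) % 2 == 0
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high
```

Flat regions of the input produce zeros in the high band, so the transform cheaply isolates the edges and textures a compression model must preserve.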
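The Adder-ID-PixelShuffle cascade ends in a pixel-shuffle step, which upsamples by rearranging channels into space instead of using transposed convolution, avoiding the uneven kernel overlap that causes checkerboard artifacts. A pure-Python sketch of the rearrangement for one output channel (a generic illustration of pixel shuffle, not the paper's implementation):

```python
def pixel_shuffle(maps, r):
    # maps: list of r*r feature maps (H x W nested lists) for one output
    # channel, ordered so maps[dy*r + dx] fills sub-position (dy, dx) of
    # each r x r output block. Returns one (H*r) x (W*r) map.
    H, W = len(maps[0]), len(maps[0][0])
    out = [[0] * (W * r) for _ in range(H * r)]
    for idx, m in enumerate(maps):
        dy, dx = divmod(idx, r)
        for y in range(H):
            for x in range(W):
                out[y * r + dy][x * r + dx] = m[y][x]
    return out
```

Because every output pixel is written exactly once, the upsampling is uniform by construction, unlike transposed convolution where some outputs receive more kernel contributions than others.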