Low-Rank Decomposition for Rate-Adaptive Deep Joint Source-Channel Coding

Man Xu, C. Lam, Yuanhui Liang, B. Ng, S. Im
2022 IEEE 8th International Conference on Computer and Communications (ICCC)
DOI: 10.1109/ICCC56324.2022.10065853
Published: 2022-12-09
Citations: 1

Abstract

Deep joint source-channel coding (DJSCC) has received extensive attention in the communications community. However, its high computational cost and storage requirements prevent the DJSCC model from being effectively deployed on embedded systems and mobile devices. Recently, convolutional neural network (CNN) compression via low-rank decomposition has achieved remarkable performance. In this paper, we conduct a comparative study of low-rank decomposition for lowering the computational complexity and storage requirements of Rate-Adaptive DJSCC, covering CANDECOMP/PARAFAC (CP) decomposition, Tucker (TK) decomposition, and Tensor-train (TT) decomposition. We evaluate the compression ratio, speedup ratio, and Peak Signal-to-Noise Ratio (PSNR) performance loss for the CP, TK, and TT decompositions with fine-tuning and pruning. From the experimental results, we found that, compared with TT decomposition, CP decomposition with fine-tuning lowers the PSNR performance degradation at the expense of a higher compression and speedup ratio.
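The abstract does not spell out how the compression and quality metrics are computed. The sketch below illustrates how per-layer parameter counts for CP, Tucker-2, and tensor-train factorizations of a convolution kernel are typically tallied, and how PSNR follows from mean squared error. The layer sizes and ranks are illustrative assumptions, not values from the paper.

```python
import math

def conv_params(c_out, c_in, k):
    """Parameter count of a standard k x k convolution kernel."""
    return c_out * c_in * k * k

def cp_params(c_out, c_in, k, rank):
    """4-way CP decomposition: one rank-R factor matrix per tensor mode."""
    return rank * (c_out + c_in + k + k)

def tucker2_params(c_out, c_in, k, r_out, r_in):
    """Tucker-2: compress only the channel modes, keep spatial dims in the core."""
    return c_in * r_in + r_in * r_out * k * k + r_out * c_out

def tt_params(c_out, c_in, k, r1, r2, r3):
    """Tensor-train over modes (c_out, c_in, k, k) with TT-ranks (r1, r2, r3)."""
    return c_out * r1 + r1 * c_in * r2 + r2 * k * r3 + r3 * k

def psnr(mse, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for a given mean squared error."""
    return 10.0 * math.log10(peak * peak / mse)

# Example: a 256 -> 256 channel, 3x3 conv layer with illustrative ranks.
full = conv_params(256, 256, 3)
for name, compressed in [
    ("CP (rank=64)", cp_params(256, 256, 3, 64)),
    ("TK (r=64,64)", tucker2_params(256, 256, 3, 64, 64)),
    ("TT (r=16,16,16)", tt_params(256, 256, 3, 16, 16, 16)),
]:
    print(f"{name}: compression ratio = {full / compressed:.1f}x")
```

The speedup ratio is counted analogously over multiply-accumulate operations rather than parameters; the actual wall-clock gain depends on how the factorized layer is implemented as a sequence of smaller convolutions.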