Efficient Topology Coding and Payload Partitioning Techniques for Neural Network Compression (NNC) Standard

Jaakko Laitinen, Alexandre Mercat, Jarno Vanne, H. R. Tavakoli, Francesco Cricri, Emre B. Aksu, M. Hannuksela
Published in: 2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)
Publication date: 2022-07-18
DOI: 10.1109/ICMEW56448.2022.9859467

Abstract

A Neural Network Compression (NNC) standard aims to define a set of coding tools for efficient compression and transmission of neural networks. This paper addresses the high-level syntax (HLS) of NNC and proposes three HLS techniques for network topology coding and payload partitioning. Our first technique provides an efficient way to code pruning topology information. It removes redundancy in the bitmask and thereby improves coding efficiency by 4–99% over existing approaches. The second technique processes bitmasks in larger chunks instead of one bit at a time. It is shown to reduce the computational complexity of NNC encoding by 63% and NNC decoding by 82%. Our third technique makes use of partial data counters to partition an NNC bitstream into uniformly sized units for more efficient data transmission. Even though the smaller partition sizes introduce some overhead, our network simulations show better throughput due to lower packet retransmission rates. To our knowledge, this is the first work to address the practical implementation aspects of HLS. The proposed techniques can be seen as key enabling factors for efficient adaptation and economical deployment of the NNC standard in a plurality of next-generation industrial and academic applications.
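The second technique's chunk-wise bitmask processing can be illustrated with a minimal sketch. This is not the NNC reference implementation: the function names, the 64-bit chunk width, and the software popcount are illustrative assumptions. The point is replacing a per-bit loop over the pruning bitmask with one word-sized operation per chunk.

```python
def count_pruned_bitwise(mask: bytes) -> int:
    """Baseline: walk the pruning bitmask one bit at a time."""
    count = 0
    for byte in mask:
        for i in range(8):
            count += (byte >> i) & 1
    return count


def count_pruned_chunked(mask: bytes, chunk_bytes: int = 8) -> int:
    """Chunked variant: consume 64-bit words, one popcount per word."""
    count = 0
    for off in range(0, len(mask), chunk_bytes):
        word = int.from_bytes(mask[off:off + chunk_bytes], "little")
        # bin(...).count("1") stands in for a single hardware popcount
        count += bin(word).count("1")
    return count
```

Both functions return the same count; the chunked variant simply touches the mask eight bytes at a time, which is the flavor of saving the paper quantifies as a 63%/82% complexity reduction for encoding/decoding.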
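Similarly, the third technique, partitioning a bitstream into uniformly sized units, can be sketched as follows. The unit header layout here (a unit index and a total-unit counter packed as two big-endian 16-bit fields) is an assumption for illustration, not the NNC syntax; the idea is that fixed-size, individually addressable units let a receiver re-request a single lost unit instead of retransmitting the whole payload.

```python
import struct

# Illustrative per-unit header: (unit index, total units), not the NNC syntax.
HEADER = struct.Struct(">HH")


def partition_payload(payload: bytes, unit_size: int) -> list:
    """Split a payload into uniformly sized units, each with a counter header."""
    total = max(1, -(-len(payload) // unit_size))  # ceiling division
    return [
        HEADER.pack(i, total) + payload[i * unit_size:(i + 1) * unit_size]
        for i in range(total)
    ]


def reassemble(units: list) -> bytes:
    """Order units by their index counter and strip the headers."""
    parsed = sorted((HEADER.unpack(u[:HEADER.size]), u[HEADER.size:]) for u in units)
    return b"".join(body for _, body in parsed)
```

Because every unit carries its own counter, the units may arrive out of order (or partially, triggering selective retransmission) and still be reassembled, which is what drives the throughput gain the simulations report despite the per-unit header overhead.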