Structured segment rescaling with Gaussian processes for parameter-efficient ConvNets

Journal of Systems Architecture · IF 3.7 · CAS Region 2 (Computer Science) · JCR Q1 (Computer Science, Hardware & Architecture) · Pub Date: 2024-07-25 · DOI: 10.1016/j.sysarc.2024.103246
Bilal Siddiqui , Adel Alaeddini , Dakai Zhu
Volume 154, Article 103246
Citations: 0

Abstract


We introduce a novel mechanism for structured pruning of ConvNet blocks and channels. Our mechanism, Structured Segment Rescaling (SSR), down-samples a ConvNet's dimensions using depth and width modifiers that remove whole blocks and channels, respectively. SSR is a systematic approach for constructing ConvNets that can replace arbitrary design heuristics. The SSR modifiers rescale logical partitions (segments) of a ConvNet with grouped layers. Different modifiers on segments yield many different architectures, each with unique rescales for its blocks. This diversity of architectures is then systematically explored using a Gaussian Process (GP) that optimizes for modifiers that maintain accuracy and reduce parameters. We analyze SSR in the context of resource-constrained environments using ResNets trained on the CIFAR datasets. An initial set of depth and width modifiers explores extreme rescales of ResNet segments, where we find up to 70% parameter reduction. The GP, trained on these initial rescales, then generalizes to predict the accuracy of other rescaled ConvNets given their segment modifiers. SSR produces over 10⁵ ConvNets that can be trained selectively based on their GP-predicted accuracy. The GP-enabled SSR pushes compression to over 80% with minimal accuracy impact. While both depth and width modifiers can reduce parameters, we show that reducing blocks is better for reducing latency, yielding up to 80% faster ConvNets. Using our mechanism, we can efficiently customize ConvNets using their parameter-accuracy trade-offs. SSR requires only about 10¹ GPU hours and modest engineering to yield efficient new ConvNets that can facilitate edge inference.
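The core idea of per-segment depth and width modifiers can be sketched as follows. This is a minimal illustration, not the authors' implementation: the segment layout, the modifier grid, and the crude parameter estimate are all assumptions made for the example; in the paper, a GP surrogate would then be fit on (modifier vector, accuracy) pairs to rank the resulting architectures before selective training.

```python
from itertools import product

# Hypothetical CIFAR ResNet described as three segments, each with a
# block count (depth) and a channel width, similar to ResNet-20.
base_segments = [
    {"blocks": 3, "channels": 16},
    {"blocks": 3, "channels": 32},
    {"blocks": 3, "channels": 64},
]

def param_count(segments):
    """Crude 3x3-conv parameter estimate (ignores shortcuts, BN, stem, head)."""
    total, in_ch = 0, 16  # assume a 16-channel stem
    for seg in segments:
        c = seg["channels"]
        for _ in range(seg["blocks"]):
            total += 9 * in_ch * c + 9 * c * c  # two 3x3 convs per block
            in_ch = c
    return total

def rescale(segments, depth_mods, width_mods):
    """Apply per-segment depth/width modifiers, keeping at least one block."""
    return [
        {"blocks": max(1, round(s["blocks"] * d)),
         "channels": max(1, round(s["channels"] * w))}
        for s, d, w in zip(segments, depth_mods, width_mods)
    ]

base = param_count(base_segments)
# Enumerate a small modifier grid; independent per-segment choices are what
# make the search space combinatorial (the paper reports over 10^5 nets).
candidates = []
for d in product([1.0, 0.66, 0.33], repeat=3):
    for w in product([1.0, 0.5], repeat=3):
        cfg = rescale(base_segments, d, w)
        candidates.append((d, w, 1 - param_count(cfg) / base))

d, w, red = max(candidates, key=lambda t: t[2])
print(f"most aggressive rescale removes {red:.0%} of parameters")
```

Even this toy grid of 3 depth and 2 width choices per segment yields 216 distinct architectures, which is why an accuracy-predicting surrogate such as a GP is needed to avoid training every candidate.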

Source journal
Journal of Systems Architecture (Engineering Technology / Computer Science: Hardware)
CiteScore: 8.70
Self-citation rate: 15.60%
Articles per year: 226
Review time: 46 days
About the journal: The Journal of Systems Architecture: Embedded Software Design (JSA) is a journal covering all design and architectural aspects related to embedded systems and software. It ranges from the microarchitecture level via the system software level up to the application-specific architecture level. Aspects such as real-time systems, operating systems, FPGA programming, programming languages, communications (limited to analysis and the software stack), mobile systems, parallel and distributed architectures, as well as additional subjects in the computer and system architecture area, fall within the scope of this journal. Technology will not be a main focus, but its use and relevance to particular designs will be. Case studies are welcome but must contribute more than just a design for a particular piece of software. Design automation of such systems, including methodologies, techniques, and tools for their design, as well as novel designs of software components, falls within the scope of this journal. Novel applications that use embedded systems are also central to this journal. While hardware is not a part of this journal, hardware/software co-design methods that consider the interplay between software and hardware components, with an emphasis on software, are also relevant here.
Latest articles in this journal
Non-interactive set intersection for privacy-preserving contact tracing
NLTSP: A cost model for tensor program tuning using nested loop trees
SAMFL: Secure Aggregation Mechanism for Federated Learning with Byzantine-robustness by functional encryption
ZNS-Cleaner: Enhancing lifespan by reducing empty erase in ZNS SSDs
Using MAST for modeling and response-time analysis of real-time applications with GPUs