RANP: Resource Aware Neuron Pruning at Initialization for 3D CNNs

Zhiwei Xu, Thalaiyasingam Ajanthan, Vibhav Vineet, R. Hartley
DOI: 10.1109/3DV50981.2020.00028
Published in: 2020 International Conference on 3D Vision (3DV), 2020-10-06
Citation count: 1

Abstract

Although 3D Convolutional Neural Networks (CNNs) are essential for most learning-based applications involving dense 3D data, their applicability is limited by excessive memory and computational requirements. Compressing such networks by pruning is therefore highly desirable. However, pruning 3D CNNs is largely unexplored, possibly because of the complex nature of typical pruning algorithms, which embed pruning into an iterative optimization paradigm. In this work, we introduce a Resource Aware Neuron Pruning (RANP) algorithm that prunes 3D CNNs at initialization to high sparsity levels. Specifically, the core idea is to obtain an importance score for each neuron based on its sensitivity to the loss function. This neuron importance is then reweighted according to the neuron's resource consumption in terms of FLOPs or memory. We demonstrate the effectiveness of our pruning method on 3D semantic segmentation with widely used 3D-UNets on ShapeNet and BraTS'18, as well as on video classification with MobileNetV2 and I3D on the UCF101 dataset. In these experiments, our RANP leads to a roughly 50%-95% reduction in FLOPs and a 35%-80% reduction in memory with negligible loss in accuracy compared to the unpruned networks. This significantly reduces the computational resources required to train 3D CNNs. The pruned network obtained by our algorithm can also be easily scaled up and transferred to another dataset for training.
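The core idea above can be illustrated with a small sketch: score each neuron by its sensitivity to the loss, reweight that score by the neuron's resource cost, and keep only the highest-scoring neurons. The reweighting rule below (scaling by mean cost over per-neuron cost, so expensive neurons need proportionally higher sensitivity to survive) is one plausible formulation for illustration, not the paper's exact formula, and all values are hypothetical.

```python
import numpy as np

# Hypothetical per-neuron sensitivity scores (magnitude of the loss
# gradient w.r.t. each neuron's gate); values are illustrative only.
sensitivity = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.1])

# Hypothetical per-neuron FLOP costs (e.g., wide mid-network layers
# are more expensive per neuron than early or late ones).
flops = np.array([1.0, 4.0, 4.0, 1.0, 1.0, 1.0])

# Resource-aware reweighting: divide by relative cost, so a neuron
# that is costly in FLOPs must be correspondingly more sensitive
# to the loss in order to be retained.
score = sensitivity * (flops.mean() / flops)

# Keep the top-k neurons by resource-aware score (50% sparsity here);
# the boolean mask marks which neurons survive pruning.
k = len(score) // 2
keep = np.argsort(score)[::-1][:k]
mask = np.zeros_like(score, dtype=bool)
mask[keep] = True
print(mask)
```

Note that with plain sensitivity scores neurons 0, 1, and 2 would survive; the resource-aware reweighting instead drops the two expensive neurons (indices 1 and 2) in favor of cheaper ones with comparable sensitivity, which is how the method trades a small amount of raw importance for large FLOP savings.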