PruneFaceDet: Pruning lightweight face detection network by sparsity training

Cognitive Computation and Systems · Q4 (Computer Science, Artificial Intelligence) · IF 1.2
Pub Date: 2022-06-09 · DOI: 10.1049/ccs2.12065
Nanfei Jiang, Zhexiao Xiong, Hui Tian, Xu Zhao, Xiaojie Du, Chaoyang Zhao, Jinqiao Wang
Cited by: 0

Abstract


Face detection is the basic step of many face analysis tasks. In practice, face detectors usually run on mobile devices with limited memory and computing resources, so it is important to keep them lightweight. To this end, current methods usually focus on directly designing lightweight detectors. Nevertheless, whether the resource consumption of these lightweight detectors can be suppressed further without sacrificing much accuracy has not been fully explored. In this study, we apply network pruning to a lightweight face detection network to further reduce its parameters and floating-point operations. To identify the less important channels, we train the network with sparsity regularisation on the channel scaling factors of each layer. After sparsity training, we remove the connections and corresponding weights whose scaling factors are near zero. We apply the proposed pruning pipeline to a state-of-the-art face detection method, EagleEye, and obtain a shrunken EagleEye model with fewer computing operations and parameters. The shrunken model achieves accuracy comparable to the unpruned model: by using the proposed method, the shrunken EagleEye achieves a 56.3% reduction in parameter size with almost no accuracy loss on the WiderFace dataset.
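The two-stage recipe described above (L1-regularise each layer's channel scaling factors during training, then prune the channels whose factors collapse to near zero) can be sketched in a few lines of plain Python. This is a hedged illustration only: the proximal soft-threshold update, the threshold value, and the toy factors are assumptions for exposition, not the paper's implementation, which trains the full detector's task loss jointly with the sparsity penalty.

```python
import math

def sparsity_step(gammas, lr=0.1, lam=0.1):
    """One proximal (ISTA-style) update of channel scaling factors under an
    L1 penalty lam * |gamma|: each factor shrinks toward zero by lr * lam
    per step and is clamped at exactly zero once it crosses it.
    (Illustrative only; the task-loss gradient is omitted here.)"""
    shrink = lr * lam
    return [math.copysign(max(abs(g) - shrink, 0.0), g) for g in gammas]

def prune_mask(gammas, threshold=1e-3):
    """Channels whose scaling factor stayed above the threshold are kept;
    the rest (and their incoming/outgoing weights) would be removed."""
    return [abs(g) > threshold for g in gammas]

# Toy run: two "important" channels (0.9, -0.5) and two redundant
# ones (0.03, 0.02) whose factors are driven to zero by the penalty.
gammas = [0.9, 0.03, -0.5, 0.02]
for _ in range(5):
    gammas = sparsity_step(gammas)
mask = prune_mask(gammas)
kept = sum(mask)  # 2 of 4 channels survive pruning
```

After five shrinkage steps the two small factors have been clamped to zero and are pruned, while the two large ones survive with most of their magnitude intact, which is the mechanism the paper relies on to identify removable channels.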

Source journal: Cognitive Computation and Systems (Computer Science, Computer Science Applications)
CiteScore: 2.50
Self-citation rate: 0.00%
Articles per year: 39
Review time: 10 weeks