{"title":"基于注意力的低复杂度卷积神经网络层剪枝新方法","authors":"Md. Bipul Hossain, Na Gong, Mohamed Shaban","doi":"10.1002/aisy.202400161","DOIUrl":null,"url":null,"abstract":"<p>Deep learning (DL) has been very successful for classifying images, detecting targets, and segmenting regions in high-resolution images such as whole slide histopathology images. However, analysis of such high-resolution images requires very high DL complexity. Several AI optimization techniques have been recently proposed that aim at reducing the complexity of deep neural networks and hence expedite their execution and eventually allow the use of low-power, low-cost computing devices with limited computation and memory resources. These methods include parameter pruning and sharing, quantization, knowledge distillation, low-rank approximation, and resource efficient architectures. Rather than pruning network structures including filters, layers, and blocks of layers based on a manual selection of a significance metric such as <i>l</i>1<i>-</i>norm and <i>l</i>2<i>-</i>norm of the filter kernels, novel highly efficient AI-driven DL optimization algorithms using variations of the squeeze and excitation in order to prune filters and layers of deep models such as VGG-16 as well as eliminate filters and blocks of residual networks such as ResNet-56 are introduced. The proposed techniques achieve significantly higher reduction in the number of learning parameters, the number of floating point operations, and memory space as compared to the-state-of-the-art methods.</p>","PeriodicalId":93858,"journal":{"name":"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)","volume":"6 11","pages":""},"PeriodicalIF":6.8000,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aisy.202400161","citationCount":"0","resultStr":"{\"title\":\"A Novel Attention-Based Layer Pruning Approach for Low-Complexity Convolutional Neural Networks\",\"authors\":\"Md. Bipul Hossain, Na Gong, Mohamed Shaban\",\"doi\":\"10.1002/aisy.202400161\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Deep learning (DL) has been very successful for classifying images, detecting targets, and segmenting regions in high-resolution images such as whole slide histopathology images. However, analysis of such high-resolution images requires very high DL complexity. Several AI optimization techniques have been recently proposed that aim at reducing the complexity of deep neural networks and hence expedite their execution and eventually allow the use of low-power, low-cost computing devices with limited computation and memory resources. These methods include parameter pruning and sharing, quantization, knowledge distillation, low-rank approximation, and resource efficient architectures. Rather than pruning network structures including filters, layers, and blocks of layers based on a manual selection of a significance metric such as <i>l</i>1<i>-</i>norm and <i>l</i>2<i>-</i>norm of the filter kernels, novel highly efficient AI-driven DL optimization algorithms using variations of the squeeze and excitation in order to prune filters and layers of deep models such as VGG-16 as well as eliminate filters and blocks of residual networks such as ResNet-56 are introduced. 
The proposed techniques achieve significantly higher reduction in the number of learning parameters, the number of floating point operations, and memory space as compared to the-state-of-the-art methods.</p>\",\"PeriodicalId\":93858,\"journal\":{\"name\":\"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)\",\"volume\":\"6 11\",\"pages\":\"\"},\"PeriodicalIF\":6.8000,\"publicationDate\":\"2024-06-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aisy.202400161\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/aisy.202400161\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/aisy.202400161","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
A Novel Attention-Based Layer Pruning Approach for Low-Complexity Convolutional Neural Networks
Deep learning (DL) has been highly successful at classifying images, detecting targets, and segmenting regions in high-resolution images such as whole-slide histopathology images. However, analyzing such high-resolution images requires models of very high complexity. Several optimization techniques have recently been proposed that aim to reduce the complexity of deep neural networks, expedite their execution, and ultimately enable deployment on low-power, low-cost computing devices with limited computation and memory resources. These methods include parameter pruning and sharing, quantization, knowledge distillation, low-rank approximation, and resource-efficient architectures. Rather than pruning network structures such as filters, layers, and blocks of layers based on a manually selected significance metric such as the l1-norm or l2-norm of the filter kernels, novel, highly efficient AI-driven DL optimization algorithms are introduced that use variations of the squeeze-and-excitation attention mechanism to prune filters and layers of deep models such as VGG-16, as well as to eliminate filters and blocks of residual networks such as ResNet-56. The proposed techniques achieve a significantly greater reduction in the number of learnable parameters, the number of floating-point operations, and memory footprint compared with state-of-the-art methods.
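To make the contrast concrete, the sketch below (PyTorch) illustrates the two kinds of pruning criteria the abstract mentions: a conventional l1-norm score computed from filter kernels, and a standard squeeze-and-excitation block whose learned channel gates could be reused as filter-importance scores. This is a minimal illustration only; the paper's exact attention formulation, layer-selection rule, and pruning threshold are not given in the abstract, so the `SqueezeExcitation` module, the batch-averaged gate scoring, and the 0.1 threshold here are assumptions for demonstration.

```python
# Minimal sketch of norm-based vs. attention-based filter scoring (assumptions noted below).
import torch
import torch.nn as nn


def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    """Conventional criterion: l1-norm of each output filter's kernel weights."""
    # weight shape: (out_channels, in_channels, kH, kW)
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))


class SqueezeExcitation(nn.Module):
    """Standard squeeze-and-excitation block. Reusing its channel gates as
    filter-importance scores is an assumption made for illustration; the
    paper's actual scoring mechanism may differ."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        # Squeeze: global average pooling over spatial dimensions -> (N, C)
        w = x.mean(dim=(2, 3))
        # Excitation: per-channel gates in [0, 1]
        w = self.fc(w)
        # Re-scale the feature map; also return batch-averaged gates as scores
        return x * w.view(x.size(0), -1, 1, 1), w.mean(dim=0)


if __name__ == "__main__":
    conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
    se = SqueezeExcitation(64)
    x = torch.randn(8, 3, 32, 32)
    y, scores = se(conv(x))
    keep = scores > 0.1  # hypothetical pruning threshold
    print("l1 scores:", l1_filter_scores(conv).shape, "| filters kept:", int(keep.sum()))
```

In such a scheme, filters (or, aggregated over a layer, entire layers or residual blocks) whose attention scores remain low after training would be candidates for removal, replacing the manual choice of a norm-based significance metric with a learned one.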