Hassen Louati, Ali Louati, Slim Bechikh, Elham Kariri
{"title":"卷积神经网络的联合滤波器和信道剪枝是一个双层优化问题","authors":"Hassen Louati, Ali Louati, Slim Bechikh, Elham Kariri","doi":"10.1007/s12293-024-00406-6","DOIUrl":null,"url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>Deep neural networks, specifically deep convolutional neural networks (DCNNs), have been highly successful in machine learning and computer vision, but a significant challenge when using these networks is choosing the right hyperparameters. As the number of layers in the network increases, the search space also becomes larger. To overcome this issue, researchers in deep learning have suggested using deep compression techniques to decrease memory usage and computational complexity. In this paper, we present a new approach for compressing deep CNNs by combining filter and channel pruning methods based on Evolutionary Algorithms (EA). This method involves eliminating filters and channels in order to decrease the number of parameters and computational complexity of the model. Additionally, we propose a bi-level optimization problem that interacts between the hyperparameters of the convolution layer. Bi-level optimization problems are known to be difficult as they involve two levels of optimization tasks, where only the optimal solutions to the lower-level problem are considered as feasible candidates for the upper-level problem. In this work, the upper-level problem is represented by a set of filters to be pruned in order to minimize the number of selected filters, while the lower-level problem is represented by a set of channels to be pruned in order to minimize the number of selected channels per filter. Our research has focused on developing a new method for solving bi-level problems, which we have named Bi-CNN-Pruning. To achieve this, we have adopted the Co-Evolutionary Migration-Based Algorithm (CEMBA) as our search engine. The Bi-CNN-Pruning method is then evaluated using image classification benchmarks on well-known datasets such as CIFAR-10 and CIFAR-100. The results of our evaluation demonstrate that our bi-level proposal outperforms state-of-the-art architectures, and we provide a detailed analysis of the results using commonly employed performance metrics.</p><h3 data-test=\"abstract-sub-heading\">Graphical abstract</h3>","PeriodicalId":48780,"journal":{"name":"Memetic Computing","volume":"3 1","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2024-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Joint filter and channel pruning of convolutional neural networks as a bi-level optimization problem\",\"authors\":\"Hassen Louati, Ali Louati, Slim Bechikh, Elham Kariri\",\"doi\":\"10.1007/s12293-024-00406-6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h3 data-test=\\\"abstract-sub-heading\\\">Abstract</h3><p>Deep neural networks, specifically deep convolutional neural networks (DCNNs), have been highly successful in machine learning and computer vision, but a significant challenge when using these networks is choosing the right hyperparameters. As the number of layers in the network increases, the search space also becomes larger. To overcome this issue, researchers in deep learning have suggested using deep compression techniques to decrease memory usage and computational complexity. In this paper, we present a new approach for compressing deep CNNs by combining filter and channel pruning methods based on Evolutionary Algorithms (EA). 
This method involves eliminating filters and channels in order to decrease the number of parameters and computational complexity of the model. Additionally, we propose a bi-level optimization problem that interacts between the hyperparameters of the convolution layer. Bi-level optimization problems are known to be difficult as they involve two levels of optimization tasks, where only the optimal solutions to the lower-level problem are considered as feasible candidates for the upper-level problem. In this work, the upper-level problem is represented by a set of filters to be pruned in order to minimize the number of selected filters, while the lower-level problem is represented by a set of channels to be pruned in order to minimize the number of selected channels per filter. Our research has focused on developing a new method for solving bi-level problems, which we have named Bi-CNN-Pruning. To achieve this, we have adopted the Co-Evolutionary Migration-Based Algorithm (CEMBA) as our search engine. The Bi-CNN-Pruning method is then evaluated using image classification benchmarks on well-known datasets such as CIFAR-10 and CIFAR-100. The results of our evaluation demonstrate that our bi-level proposal outperforms state-of-the-art architectures, and we provide a detailed analysis of the results using commonly employed performance metrics.</p><h3 data-test=\\\"abstract-sub-heading\\\">Graphical abstract</h3>\",\"PeriodicalId\":48780,\"journal\":{\"name\":\"Memetic Computing\",\"volume\":\"3 1\",\"pages\":\"\"},\"PeriodicalIF\":3.3000,\"publicationDate\":\"2024-02-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Memetic Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s12293-024-00406-6\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Memetic Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12293-024-00406-6","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Joint filter and channel pruning of convolutional neural networks as a bi-level optimization problem
Abstract
Deep neural networks, specifically deep convolutional neural networks (DCNNs), have been highly successful in machine learning and computer vision, but a significant challenge when using these networks is choosing the right hyperparameters, and the search space grows as the number of layers in the network increases. To overcome this issue, deep learning researchers have proposed compression techniques that reduce memory usage and computational complexity. In this paper, we present a new approach for compressing deep CNNs that combines filter and channel pruning based on Evolutionary Algorithms (EAs). The method eliminates filters and channels to reduce the number of parameters and the computational complexity of the model. Additionally, we formulate pruning as a bi-level optimization problem that captures the interaction between the hyperparameters of the convolution layers. Bi-level optimization problems are known to be difficult because they involve two nested optimization tasks, where only optimal solutions to the lower-level problem are considered feasible candidates for the upper-level problem. In this work, the upper-level problem is represented by a set of filters to be pruned, so as to minimize the number of selected filters, while the lower-level problem is represented by a set of channels to be pruned, so as to minimize the number of selected channels per filter. We have developed a new method for solving this bi-level problem, named Bi-CNN-Pruning, adopting the Co-Evolutionary Migration-Based Algorithm (CEMBA) as its search engine. Bi-CNN-Pruning is then evaluated on image classification benchmarks using well-known datasets such as CIFAR-10 and CIFAR-100. The results demonstrate that our bi-level proposal outperforms state-of-the-art architectures, and we provide a detailed analysis of the results using commonly employed performance metrics.
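To make the nested structure concrete, one plausible reading of the formulation described in the abstract (an assumption, not taken verbatim from the paper) is: the upper level minimizes the number of selected filters $|F|$, subject to the lower level returning, for each candidate $F$, a channel selection minimizing the per-filter channel count:

$$\min_{F}\ |F| \quad \text{s.t.} \quad C^{*}(F) \in \arg\min_{C}\ \sum_{f \in F} |C_f|,$$

with an accuracy constraint on the pruned network. The sketch below illustrates this nesting in code. It is a minimal toy, not the authors' CEMBA implementation: `toy_fitness`, the random-search stand-ins, and all parameters are hypothetical placeholders.

```python
import random

def toy_fitness(filter_mask, channel_masks):
    """Stand-in objective rewarding sparsity. A real evaluation would
    fine-tune the pruned CNN and trade accuracy against compression."""
    kept_filters = sum(filter_mask)
    kept_channels = sum(sum(m) for m in channel_masks)
    return -(kept_filters + 0.1 * kept_channels)

def lower_level_search(filter_mask, n_channels, evals=20):
    """Lower level: for a fixed filter mask, search for channel masks
    (one per kept filter). Plain random search stands in here for the
    co-evolutionary search (CEMBA) used in the paper."""
    best_masks, best_score = None, float("-inf")
    for _ in range(evals):
        cand = [[random.random() < 0.5 for _ in range(n_channels)]
                for keep in filter_mask if keep]
        score = toy_fitness(filter_mask, cand)
        if score > best_score:
            best_masks, best_score = cand, score
    return best_masks, best_score

def bi_level_prune(n_filters=16, n_channels=8, generations=30):
    """Upper level: sample binary filter masks; each candidate is scored
    by the best lower-level solution found for it, so only lower-level
    optima feed back into the upper-level search."""
    best = None
    for _ in range(generations):
        filter_mask = [random.random() < 0.7 for _ in range(n_filters)]
        if not any(filter_mask):
            continue  # keep at least one filter
        _, score = lower_level_search(filter_mask, n_channels)
        if best is None or score > best[1]:
            best = (filter_mask, score)
    return best

if __name__ == "__main__":
    mask, score = bi_level_prune()
    print(f"kept {sum(mask)}/16 filters, score={score:.2f}")
```

The defining bi-level property is that `bi_level_prune` never evaluates a filter mask in isolation: its fitness is always defined through the best channel-level solution found for it, mirroring the upper/lower interaction described in the abstract.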
Memetic Computing (Computer Science, Artificial Intelligence; Operations Research & Management Science)
CiteScore
6.80
Self-citation rate
12.80%
Articles published
31
Journal introduction:
Memes have been defined as basic units of transferrable information that reside in the brain and are propagated across populations through the process of imitation. From an algorithmic point of view, memes have come to be regarded as building-blocks of prior knowledge, expressed in arbitrary computational representations (e.g., local search heuristics, fuzzy rules, neural models, etc.), that have been acquired through experience by a human or machine, and can be imitated (i.e., reused) across problems.
The Memetic Computing journal welcomes papers incorporating the aforementioned socio-cultural notion of memes into artificial systems, with particular emphasis on enhancing the efficacy of computational and artificial intelligence techniques for search, optimization, and machine learning through explicit prior knowledge incorporation. The goal of the journal is thus to be an outlet for high quality theoretical and applied research on hybrid, knowledge-driven computational approaches that may be characterized under any of the following categories of memetics:
Type 1: General-purpose algorithms integrated with human-crafted heuristics that capture some form of prior domain knowledge; e.g., traditional memetic algorithms hybridizing evolutionary global search with a problem-specific local search (see the sketch after this list).
Type 2: Algorithms with the ability to automatically select, adapt, and reuse the most appropriate heuristics from a diverse pool of available choices; e.g., learning a mapping between global search operators and multiple local search schemes, given an optimization problem at hand.
Type 3: Algorithms that autonomously learn with experience, adaptively reusing data and/or machine learning models drawn from related problems as prior knowledge in new target tasks of interest; examples include, but are not limited to, transfer learning and optimization, multi-task learning and optimization, or any other multi-X evolutionary learning and optimization methodologies.
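As a concrete illustration of the Type 1 category, here is a minimal memetic algorithm sketch: an evolutionary global search whose offspring are refined by a simple hill-climbing local search standing in for a problem-specific heuristic. It is a generic toy on a sphere function, not any particular published algorithm; all names and parameters are illustrative.

```python
import random

def memetic_minimize(f, dim=5, pop_size=20, generations=50):
    """Minimal Type-1 memetic algorithm: evolutionary global search with
    offspring refined by greedy hill climbing (the 'meme')."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

    def local_search(x, steps=10, sigma=0.1):
        # Greedy hill climbing: accept a perturbation only if it improves f.
        best = list(x)
        for _ in range(steps):
            cand = [xi + random.gauss(0, sigma) for xi in best]
            if f(cand) < f(best):
                best = cand
        return best

    for _ in range(generations):
        pop.sort(key=f)                       # minimization: best first
        survivors = pop[: pop_size // 2]      # truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            # Arithmetic crossover plus Gaussian mutation.
            child = [(ai + bi) / 2 + random.gauss(0, 0.2)
                     for ai, bi in zip(a, b)]
            children.append(local_search(child))  # memetic refinement step
        pop = survivors + children
    return min(pop, key=f)

if __name__ == "__main__":
    sphere = lambda x: sum(v * v for v in x)
    best = memetic_minimize(sphere)
    print("best:", [round(v, 3) for v in best], "f:", round(sphere(best), 4))
```

The defining Type 1 ingredient is the `local_search` call applied to each offspring: the evolutionary loop supplies global exploration, while the hand-crafted refinement injects prior knowledge about the local structure of the problem.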