SE-SqueezeNet: SqueezeNet extension with squeeze-and-excitation block
S. Kajkamhaeng, C. Phongpensri
Int. J. Comput. Sci. Eng., published 2021-05-12
DOI: 10.1504/IJCSE.2021.115105
Citations: 5
Abstract
Convolutional neural networks have been widely used for image recognition tasks. Deep convolutional neural networks can yield high recognition accuracy, but training them can be very time-consuming. AlexNet was one of the first networks shown to be effective for these tasks; however, due to its large kernel sizes and fully connected layers, its training time is significant. SqueezeNet is known as a smaller network that matches AlexNet's performance. Based on SqueezeNet, we explore the effective insertion of the squeeze-and-excitation (SE) module into SqueezeNet to further improve performance and cost efficiency. A promising methodology and pattern of module insertion are explored. The experimental results for evaluating the module insertion show top-1 accuracy improvements of 1.55% and 3.32%, while the model size grows by up to 16% and 10%, on the CIFAR-100 and ILSVRC2012 datasets respectively.
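For readers unfamiliar with the SE module the abstract refers to, the following is a minimal NumPy sketch of the standard squeeze-and-excitation computation (Hu et al.): global average pooling per channel, a two-layer bottleneck with ReLU, and sigmoid gates that rescale each feature map. The weight shapes, the reduction ratio `r`, and the fire-module usage comment are illustrative assumptions; this is not the paper's implementation or its specific insertion pattern.

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-excitation forward pass.
    x: (batch, channels, height, width) feature maps;
    w1: (channels, channels // r), w2: (channels // r, channels)
    for an assumed reduction ratio r."""
    b, c, _, _ = x.shape
    s = x.mean(axis=(2, 3))                    # squeeze: (b, c) channel descriptor
    s = np.maximum(s @ w1 + b1, 0.0)           # excitation bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2 + b2)))   # per-channel gates in (0, 1)
    return x * s.reshape(b, c, 1, 1)           # reweight the input feature maps

# Hypothetical placement after a SqueezeNet fire module: gate the
# concatenated 1x1/3x3 expand outputs before the next module.
r = 2                                          # reduction ratio (assumed)
c = 4                                          # expand output channels (assumed)
w1, b1 = np.zeros((c, c // r)), np.zeros(c // r)
w2, b2 = np.zeros((c // r, c)), np.zeros(c)
fmap = np.ones((1, c, 8, 8))                   # stand-in fire-module output
gated = se_block(fmap, w1, b1, w2, b2)         # zero weights give gates of 0.5
```

The gates add only two small fully connected layers per insertion point, which is consistent with the modest model-size growth (up to 16% and 10%) the abstract reports.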