{"title":"Increasing Convolutional Neural Networks Training Speed by Incremental Complexity Learning","authors":"Miguel D. de S. Wanderley, R. Prudêncio","doi":"10.1109/BRACIS.2018.00026","DOIUrl":null,"url":null,"abstract":"Convolutional Neural Networks have been successfully applied in several image related tasks. On another hand, there are some overhead costs in most of the real applications. Often, the Deep Learning techniques demand a huge amount of data for training and also a crescent need for handling high definition images. For this reason, late network architectures are getting even more complex and deeper. These factors lead to a long training time even when specific hardware is available. In this paper, we present a novel incremental training procedure which is able to train faster with small performance losses, based on measuring and ordering the relative complexity of subsets of the training set. The findings reveal an expressive reduction in the number of training steps, without critical performance losses. Experiments showed that the proposed method can be about 40% faster, with less than 10% of accuracy loss.","PeriodicalId":405190,"journal":{"name":"2018 7th Brazilian Conference on Intelligent Systems (BRACIS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 7th Brazilian Conference on Intelligent Systems (BRACIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BRACIS.2018.00026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Convolutional Neural Networks have been successfully applied to several image-related tasks. On the other hand, most real applications carry significant overhead costs: Deep Learning techniques often demand huge amounts of training data, along with a growing need to handle high-definition images. For this reason, recent network architectures are becoming deeper and more complex. These factors lead to long training times, even when dedicated hardware is available. In this paper, we present a novel incremental training procedure that trains faster with only small performance losses, based on measuring and ordering the relative complexity of subsets of the training set. The findings reveal a significant reduction in the number of training steps without critical performance losses. Experiments showed that the proposed method can be about 40% faster, with less than 10% accuracy loss.
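The abstract describes the procedure only at a high level. The sketch below illustrates the general idea in PyTorch under assumptions not stated in the paper: per-example loss under the current model is used as a stand-in for the authors' complexity measure, and the names `complexity_scores` and `incremental_train`, along with the stage schedule and hyperparameters, are illustrative choices rather than the paper's implementation.

```python
# Minimal sketch of incremental-complexity training (curriculum-style).
# ASSUMPTION: the paper's complexity measure is not given in the abstract;
# per-example loss is used here as a proxy (a briefly pre-trained model
# would give more meaningful scores than a freshly initialized one).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset, TensorDataset


def complexity_scores(model, dataset, device="cpu"):
    """Score each example by its loss; higher loss ~ higher complexity."""
    model.eval()
    criterion = nn.CrossEntropyLoss(reduction="none")
    scores = []
    loader = DataLoader(dataset, batch_size=256)
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device))
            scores.append(criterion(logits, y.to(device)).cpu())
    return torch.cat(scores)


def incremental_train(model, dataset, num_stages=4, epochs_per_stage=2,
                      lr=1e-3, device="cpu"):
    """Train on progressively larger subsets, easiest examples first."""
    order = complexity_scores(model, dataset, device).argsort()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for stage in range(1, num_stages + 1):
        # Each stage widens the training pool with the next-hardest slice.
        cutoff = int(len(dataset) * stage / num_stages)
        loader = DataLoader(Subset(dataset, order[:cutoff].tolist()),
                            batch_size=64, shuffle=True)
        for _ in range(epochs_per_stage):
            for x, y in loader:
                opt.zero_grad()
                loss = criterion(model(x.to(device)), y.to(device))
                loss.backward()
                opt.step()
    return model


# Toy usage: a tiny CNN on random "images", just to exercise the loop.
if __name__ == "__main__":
    data = TensorDataset(torch.randn(512, 1, 28, 28),
                         torch.randint(0, 10, (512,)))
    net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Flatten(), nn.Linear(8 * 28 * 28, 10))
    incremental_train(net, data)
```

Because early stages run over only a fraction of the data, the total number of gradient steps drops relative to full-dataset training, which is consistent with the kind of speedup the abstract reports; the exact subset ordering and schedule in the paper may differ.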