{"title":"Optimal Deep Neural Networks by Maximization of the Approximation Power","authors":"Hector F. Calvo-Pardo, Tullio Mancini, Jose Olmo","doi":"10.2139/ssrn.3578850","DOIUrl":null,"url":null,"abstract":"We propose an optimal architecture for deep neural networks of given size. The optimal architecture obtains from maximizing the minimum number of linear regions approximated by a deep neural network with a ReLu activation function. The accuracy of the approximation function relies on the neural network structure, characterized by the number, dependence and hierarchy between the nodes within and across layers. For a given number of nodes, we show how the accuracy of the approximation improves as we optimally choose the width and depth of the network. More complex datasets naturally summon bigger-sized architectures that perform better applying our optimization procedure. A Monte-Carlo simulation exercise illustrates the outperformance of the optimised architecture against cross-validation methods and gridsearch for linear and nonlinear prediction models. The application of this methodology to the Boston Housing dataset confirms empirically the outperformance of our method against state-of the-art machine learning models.","PeriodicalId":114865,"journal":{"name":"ERN: Neural Networks & Related Topics (Topic)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ERN: Neural Networks & Related Topics (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3578850","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We propose an optimal architecture for deep neural networks of a given size. The optimal architecture is obtained by maximizing the minimum number of linear regions approximated by a deep neural network with a ReLU activation function. The accuracy of the approximating function depends on the neural network structure, characterized by the number of nodes and by the dependence and hierarchy between nodes within and across layers. For a given number of nodes, we show how the accuracy of the approximation improves as we optimally choose the width and depth of the network. More complex datasets naturally call for larger architectures, which perform better when our optimization procedure is applied. A Monte Carlo simulation exercise illustrates the outperformance of the optimized architecture over cross-validation methods and grid search for linear and nonlinear prediction models. The application of this methodology to the Boston Housing dataset empirically confirms the outperformance of our method over state-of-the-art machine learning models.
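The central trade-off described in the abstract, namely allocating a fixed budget of hidden nodes across depth and width so as to maximize a lower bound on the number of linear regions of a ReLU network, can be illustrated with a minimal sketch. The bound used below is the known one from Montufar, Pascanu, Cho and Bengio (2014); whether this is exactly the objective and search space adopted by the authors is an assumption, and the restriction to equal-width hidden layers is made purely for illustration.

```python
from math import comb

def region_lower_bound(n0, widths):
    """Lower bound on the number of linear regions of a ReLU network with
    input dimension n0 and hidden-layer widths `widths`
    (Montufar, Pascanu, Cho & Bengio, 2014, Theorem 5)."""
    if any(w < n0 for w in widths):
        return 0  # the bound requires every hidden layer to be at least as wide as the input
    bound = 1
    for w in widths[:-1]:
        bound *= (w // n0) ** n0          # contribution of each intermediate layer
    bound *= sum(comb(widths[-1], j) for j in range(n0 + 1))  # last hidden layer
    return bound

def best_architecture(n0, total_nodes):
    """Search equal-width architectures that use exactly `total_nodes` hidden
    units and return the depth/width split maximizing the lower bound.
    Equal widths are an illustrative assumption, not the paper's method."""
    best = None
    for depth in range(1, total_nodes + 1):
        width = total_nodes // depth
        if width < n0 or width * depth != total_nodes:
            continue  # keep the node budget exact and the bound valid
        widths = [width] * depth
        score = region_lower_bound(n0, widths)
        if best is None or score > best[0]:
            best = (score, widths)
    return best

if __name__ == "__main__":
    # Example: 2-dimensional input and a budget of 24 hidden ReLU nodes.
    score, widths = best_architecture(n0=2, total_nodes=24)
    print(f"best hidden widths: {widths}, linear-region lower bound >= {score}")
```

For a 2-dimensional input and 24 hidden nodes, the sketch selects four hidden layers of width 6 (lower bound 16,038) over a single layer of width 24 (lower bound 301), illustrating how, for a fixed number of nodes, the chosen depth/width split drives the approximation power.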