Using Different Cost Functions to Train Stacked Auto-Encoders

Telmo Amaral, Luís M. Silva, Luís A. Alexandre, Chetak Kandaswamy, Jorge M. Santos, J. M. D. Sá

2013 12th Mexican International Conference on Artificial Intelligence, 24 November 2013. DOI: 10.1109/MICAI.2013.20
Citations: 38
Abstract
Deep neural networks comprise several hidden layers of units, which can be pre-trained one at a time via an unsupervised greedy approach. A whole network can then be trained (fine-tuned) in a supervised fashion. One possible pre-training strategy is to regard each hidden layer in the network as the input layer of an auto-encoder. Since auto-encoders aim to reconstruct their own input, their training must be based on some cost function capable of measuring reconstruction performance. Similarly, the supervised fine-tuning of a deep network needs to be based on some cost function that reflects prediction performance. In this work we compare different combinations of cost functions in terms of their impact on layer-wise reconstruction performance and on supervised classification performance of deep networks. We employed two classic functions, namely the cross-entropy (CE) cost and the sum of squared errors (SSE), as well as the exponential (EXP) cost, inspired by the error entropy concept. Our results were based on a number of artificial and real-world data sets.
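For concreteness, the NumPy sketch below illustrates the three costs compared in the paper, evaluated for a single sample with target vector t and output vector y (for an auto-encoder, t is the input being reconstructed). The CE and SSE forms are the standard ones; the EXP parameterization with a scale parameter tau is an assumption based on the error-entropy literature the abstract cites as inspiration, not necessarily the exact form used in the paper.

```
import numpy as np

def cross_entropy(t, y, eps=1e-12):
    """CE cost between targets t and outputs y in (0, 1)."""
    y = np.clip(y, eps, 1.0 - eps)  # avoid log(0)
    return -np.sum(t * np.log(y) + (1.0 - t) * np.log(1.0 - y))

def sum_squared_errors(t, y):
    """SSE cost: sum of squared differences."""
    return np.sum((t - y) ** 2)

def exponential_cost(t, y, tau=1.0):
    """EXP cost in one common error-entropy-inspired form (assumed here):
    tau * exp(SSE / tau). The scale tau and its default are hypothetical."""
    return tau * np.exp(np.sum((t - y) ** 2) / tau)

# Example: reconstruction costs for one auto-encoder output.
t = np.array([0.0, 1.0, 1.0, 0.0])   # input to be reconstructed
y = np.array([0.1, 0.8, 0.9, 0.2])   # auto-encoder reconstruction
print(cross_entropy(t, y), sum_squared_errors(t, y), exponential_cost(t, y))
```

Any of these functions can serve both roles the abstract describes: as the reconstruction cost when pre-training each auto-encoder layer, and as the prediction cost when fine-tuning the stacked network in a supervised fashion.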