{"title":"Deep Learning under Model Uncertainty","authors":"M. Merz, Mario V. Wuthrich","doi":"10.2139/ssrn.3875151","DOIUrl":null,"url":null,"abstract":"Deep learning has proven to lead to very powerful predictive models, often outperforming classical regression models such as generalized linear models. Deep learning models perform representation learning, which means that they do covariate engineering themselves so that explanatory variables are optimally transformed for the predictive problem at hand. A crucial object in deep learning is the loss function (objective function) for model fitting which implicitly reflects the distributional properties of the observed samples. The purpose of this article is to discuss the choice of this loss function, in particular, we give a specific proposal of a loss function choice under model uncertainty. This proposal turns out to robustify representation learning and prediction.","PeriodicalId":406435,"journal":{"name":"CompSciRN: Other Machine Learning (Topic)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"CompSciRN: Other Machine Learning (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3875151","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Deep learning has proven to lead to very powerful predictive models, often outperforming classical regression models such as generalized linear models. Deep learning models perform representation learning, meaning that they carry out covariate engineering themselves so that the explanatory variables are optimally transformed for the predictive problem at hand. A crucial object in deep learning is the loss function (objective function) used for model fitting, which implicitly reflects the distributional properties of the observed samples. The purpose of this article is to discuss the choice of this loss function; in particular, we give a specific proposal for a loss function choice under model uncertainty. This proposal turns out to robustify representation learning and prediction.
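The abstract stresses that the loss function implicitly encodes the distributional assumptions made about the observations. The following minimal sketch, written in PyTorch with hypothetical synthetic data and network architecture, illustrates this point by fitting the same small network once under a Gaussian (squared-error) loss and once under a Poisson deviance-type loss; the paper's specific robust loss proposal under model uncertainty is not reproduced here.

```python
# Minimal sketch (not the authors' code): the choice of loss function encodes
# a distributional assumption when fitting a deep regression model.
# Data, network, and hyperparameters below are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical covariates and count-valued responses (e.g. claim counts).
X = torch.randn(512, 5)
true_rate = torch.exp(0.3 * X[:, 0] - 0.2 * X[:, 1])
y = torch.poisson(true_rate)

def make_net():
    # Small feed-forward net; the last layer outputs the (log-)mean.
    return nn.Sequential(nn.Linear(5, 16), nn.Tanh(), nn.Linear(16, 1))

def fit(net, loss_fn, log_input):
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        out = net(X).squeeze(-1)
        # For the Poisson loss the network output is interpreted as log(mean);
        # for the squared-error loss we exponentiate to get the mean itself.
        pred = out if log_input else torch.exp(out)
        loss = loss_fn(pred, y)
        loss.backward()
        opt.step()
    return loss.item()

# Gaussian assumption: squared-error loss on the fitted mean.
mse_final = fit(make_net(), nn.MSELoss(), log_input=False)
# Poisson assumption: Poisson negative log-likelihood on the log-mean.
poisson_final = fit(make_net(), nn.PoissonNLLLoss(log_input=True), log_input=True)

print(f"final squared-error loss: {mse_final:.3f}, final Poisson NLL: {poisson_final:.3f}")
```

The two fits generally produce different learned representations and predictions, which is the sense in which the loss function reflects the assumed distribution of the samples; the article's contribution concerns how to choose such a loss when that distribution is itself uncertain.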