{"title":"Forbidden Knowledge and Specialized Training: A Versatile Solution for the Two Main Sources of Overfitting in Linear Regression","authors":"Chris Rohlfs","doi":"10.1080/00031305.2022.2128874","DOIUrl":null,"url":null,"abstract":"Abstract Overfitting in linear regression is broken down into two main causes. First, the formula for the estimator includes “forbidden knowledge” about training observations’ residuals, and it loses this advantage when deployed out-of-sample. Second, the estimator has “specialized training” that makes it particularly capable of explaining movements in the predictors that are idiosyncratic to the training sample. An out-of-sample counterpart is introduced to the popular “leverage” measure of training observations’ importance. A new method is proposed to forecast out-of-sample fit at the time of deployment, when the values for the predictors are known but the true outcome variable is not. In Monte Carlo simulations and in an empirical application using MRI brain scans, the proposed estimator performs comparably to Predicted Residual Error Sum of Squares (PRESS) for the average out-of-sample case and unlike PRESS, also performs consistently across different test samples, even those that differ substantially from the training set.","PeriodicalId":342642,"journal":{"name":"The American Statistician","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The American Statistician","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/00031305.2022.2128874","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Overfitting in linear regression is broken down into two main causes. First, the formula for the estimator includes "forbidden knowledge" about training observations' residuals, and it loses this advantage when deployed out-of-sample. Second, the estimator has "specialized training" that makes it particularly capable of explaining movements in the predictors that are idiosyncratic to the training sample. An out-of-sample counterpart is introduced to the popular "leverage" measure of training observations' importance. A new method is proposed to forecast out-of-sample fit at the time of deployment, when the values for the predictors are known but the true outcome variable is not. In Monte Carlo simulations and in an empirical application using MRI brain scans, the proposed estimator performs comparably to the Predicted Residual Error Sum of Squares (PRESS) in the average out-of-sample case and, unlike PRESS, also performs consistently across different test samples, even those that differ substantially from the training set.
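For readers unfamiliar with the benchmark quantities the abstract references, the sketch below computes the standard in-sample leverage values (the diagonal of the hat matrix) and PRESS for an OLS fit. This is a minimal illustration of those textbook formulas using NumPy with simulated data; it does not reproduce the paper's proposed out-of-sample leverage counterpart or its deployment-time fit forecast.

```python
import numpy as np

def leverage_and_press(X, y):
    """Standard in-sample leverage and PRESS for an OLS regression.

    X: (n, p) design matrix (include a column of ones for an intercept).
    y: (n,) outcome vector.
    Returns (h, press), where h[i] is the i-th diagonal entry of the hat
    matrix X (X'X)^{-1} X' and press is the Predicted Residual Error Sum
    of Squares, sum_i (e_i / (1 - h_i))^2.
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # diagonal of the hat matrix
    beta = XtX_inv @ X.T @ y                      # OLS coefficient estimates
    resid = y - X @ beta                          # in-sample residuals
    press = np.sum((resid / (1.0 - h)) ** 2)      # leave-one-out prediction errors
    return h, press

# Hypothetical example on simulated data
rng = np.random.default_rng(0)
n, p = 100, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ rng.normal(size=p + 1) + rng.normal(size=n)
h, press = leverage_and_press(X, y)
print(f"max leverage: {h.max():.3f}, PRESS: {press:.2f}")
```

High-leverage training observations (large h[i]) are exactly the points whose idiosyncratic predictor values the fitted model is "specially trained" to explain, which is why PRESS deflates each residual by 1 - h[i] when approximating out-of-sample error.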