{"title":"离散选择模型中异质参数估计的深度学习","authors":"Stephan Hetzenecker, Maximilian Osterhaus","doi":"arxiv-2408.09560","DOIUrl":null,"url":null,"abstract":"This paper studies the finite sample performance of the flexible estimation\napproach of Farrell, Liang, and Misra (2021a), who propose to use deep learning\nfor the estimation of heterogeneous parameters in economic models, in the\ncontext of discrete choice models. The approach combines the structure imposed\nby economic models with the flexibility of deep learning, which assures the\ninterpretebility of results on the one hand, and allows estimating flexible\nfunctional forms of observed heterogeneity on the other hand. For inference\nafter the estimation with deep learning, Farrell et al. (2021a) derive an\ninfluence function that can be applied to many quantities of interest. We\nconduct a series of Monte Carlo experiments that investigate the impact of\nregularization on the proposed estimation and inference procedure in the\ncontext of discrete choice models. The results show that the deep learning\napproach generally leads to precise estimates of the true average parameters\nand that regular robust standard errors lead to invalid inference results,\nshowing the need for the influence function approach for inference. Without\nregularization, the influence function approach can lead to substantial bias\nand large estimated standard errors caused by extreme outliers. Regularization\nreduces this property and stabilizes the estimation procedure, but at the\nexpense of inducing an additional bias. The bias in combination with decreasing\nvariance associated with increasing regularization leads to the construction of\ninvalid inferential statements in our experiments. Repeated sample splitting,\nunlike regularization, stabilizes the estimation approach without introducing\nan additional bias, thereby allowing for the construction of valid inferential\nstatements.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"26 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep Learning for the Estimation of Heterogeneous Parameters in Discrete Choice Models\",\"authors\":\"Stephan Hetzenecker, Maximilian Osterhaus\",\"doi\":\"arxiv-2408.09560\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper studies the finite sample performance of the flexible estimation\\napproach of Farrell, Liang, and Misra (2021a), who propose to use deep learning\\nfor the estimation of heterogeneous parameters in economic models, in the\\ncontext of discrete choice models. The approach combines the structure imposed\\nby economic models with the flexibility of deep learning, which assures the\\ninterpretebility of results on the one hand, and allows estimating flexible\\nfunctional forms of observed heterogeneity on the other hand. For inference\\nafter the estimation with deep learning, Farrell et al. (2021a) derive an\\ninfluence function that can be applied to many quantities of interest. We\\nconduct a series of Monte Carlo experiments that investigate the impact of\\nregularization on the proposed estimation and inference procedure in the\\ncontext of discrete choice models. 
The results show that the deep learning\\napproach generally leads to precise estimates of the true average parameters\\nand that regular robust standard errors lead to invalid inference results,\\nshowing the need for the influence function approach for inference. Without\\nregularization, the influence function approach can lead to substantial bias\\nand large estimated standard errors caused by extreme outliers. Regularization\\nreduces this property and stabilizes the estimation procedure, but at the\\nexpense of inducing an additional bias. The bias in combination with decreasing\\nvariance associated with increasing regularization leads to the construction of\\ninvalid inferential statements in our experiments. Repeated sample splitting,\\nunlike regularization, stabilizes the estimation approach without introducing\\nan additional bias, thereby allowing for the construction of valid inferential\\nstatements.\",\"PeriodicalId\":501293,\"journal\":{\"name\":\"arXiv - ECON - Econometrics\",\"volume\":\"26 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - ECON - Econometrics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.09560\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - ECON - Econometrics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.09560","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep Learning for the Estimation of Heterogeneous Parameters in Discrete Choice Models
This paper studies the finite sample performance of the flexible estimation
approach of Farrell, Liang, and Misra (2021a), who propose to use deep learning
for the estimation of heterogeneous parameters in economic models, in the
context of discrete choice models. The approach combines the structure imposed
by economic models with the flexibility of deep learning, which ensures the
interpretability of results on the one hand and allows flexible functional
forms of observed heterogeneity to be estimated on the other.
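To fix ideas, the following is a minimal sketch of such a structured estimator,
assuming a binary logit whose parameters theta(x) = (alpha(x), beta(x)) are
produced by a small feed-forward network; the architecture, variable names, and
PyTorch usage are illustrative assumptions, not the authors' implementation.

    # Sketch: a network maps covariates x to theta(x) = (alpha(x), beta(x)),
    # which enter the economic model only through the logit index
    # alpha(x) + beta(x) * price. Layer sizes are arbitrary choices.
    import torch
    import torch.nn as nn

    class ThetaNet(nn.Module):
        def __init__(self, dim_x, hidden=32):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(dim_x, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 2),  # outputs (alpha(x), beta(x))
            )

        def forward(self, x):
            return self.body(x)

    def logit_nll(theta, price, y):
        # Negative log likelihood of the binary logit implied by the model;
        # the economic structure enters solely through this index.
        index = theta[:, 0] + theta[:, 1] * price
        return nn.functional.binary_cross_entropy_with_logits(index, y)

    # Training sketch: minimize the structural likelihood over the network
    # weights; weight_decay > 0 would add the kind of regularization whose
    # effect on inference the experiments below investigate.
    # net = ThetaNet(dim_x=5)
    # opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=0.0)
    # loss = logit_nll(net(x), price, y); loss.backward(); opt.step()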
For inference after estimation with deep learning, Farrell et al. (2021a)
derive an influence function that can be applied to many quantities of
interest.
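For an average parameter mu_0 = E[H(X, theta_0(X))], that influence function
takes a form like the following (notation adapted from Farrell et al. (2021a)
and stated here only as a sketch):

    \psi(y, x) = H\big(x, \theta_0(x)\big)
        - H_\theta\big(x, \theta_0(x)\big)' \, \Lambda(x)^{-1} \,
          \ell_\theta\big(y, x, \theta_0(x)\big) - \mu_0,
    \quad \text{with} \quad
    \Lambda(x) = \mathbb{E}\big[\ell_{\theta\theta}(Y, X, \theta_0(X)) \mid X = x\big],

where \ell is the per-observation loss (here the negative log likelihood),
\ell_\theta its gradient, and \ell_{\theta\theta} its Hessian in theta.
Averaging \psi over a held-out sample yields the point estimate, and its
sample variance yields the standard errors examined in the experiments.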
We conduct a series of Monte Carlo experiments that investigate the impact of
regularization on the proposed estimation and inference procedure in the
context of discrete choice models. The results show that the deep learning
approach generally yields precise estimates of the true average parameters,
and that conventional robust standard errors produce invalid inference
results, demonstrating the need for the influence function approach to
inference.
Without regularization, the influence function approach can suffer from
substantial bias and large estimated standard errors caused by extreme
outliers. Regularization mitigates this problem and stabilizes the estimation
procedure, but at the expense of inducing an additional bias. In our
experiments, this bias, combined with the variance reduction that accompanies
stronger regularization, leads to invalid inferential statements. Repeated
sample splitting, unlike regularization, stabilizes the estimation approach
without introducing additional bias, thereby allowing valid inferential
statements to be constructed.
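The repeated sample splitting scheme can be sketched as follows; the fold
count, the median aggregation rule, and the helpers fit_theta and psi are
hypothetical stand-ins for the procedures described above, not published code.

    # Sketch of influence-function inference with repeated sample splitting
    # (cross-fitting). fit_theta trains the deep learning estimator and psi
    # evaluates the influence function on held-out observations; both are
    # hypothetical helpers.
    import numpy as np

    def cross_fit(data, fit_theta, psi, n_folds=5, n_splits=10, seed=0):
        rng = np.random.default_rng(seed)
        n = len(data)
        mu_hats, se_hats = [], []
        for _ in range(n_splits):  # repeat over independent random splits
            folds = np.array_split(rng.permutation(n), n_folds)
            scores = np.empty(n)
            for fold in folds:
                train = np.setdiff1d(np.arange(n), fold)
                theta_hat = fit_theta(data[train])         # fit on complement
                scores[fold] = psi(data[fold], theta_hat)  # evaluate held out
            mu_hats.append(scores.mean())
            se_hats.append(scores.std(ddof=1) / np.sqrt(n))
        # Aggregating across splits (here by the median) damps the influence
        # of any single unlucky split without adding regularization bias.
        return np.median(mu_hats), np.median(se_hats)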