Deep Learning for the Estimation of Heterogeneous Parameters in Discrete Choice Models

Stephan Hetzenecker, Maximilian Osterhaus
arXiv - ECON - Econometrics · Published 2024-08-18 · DOI: arxiv-2408.09560
Citations: 0

Abstract

This paper studies the finite-sample performance, in the context of discrete choice models, of the flexible estimation approach of Farrell, Liang, and Misra (2021a), who propose to use deep learning for the estimation of heterogeneous parameters in economic models. The approach combines the structure imposed by economic models with the flexibility of deep learning, which assures the interpretability of results on the one hand and allows estimating flexible functional forms of observed heterogeneity on the other. For inference after estimation with deep learning, Farrell et al. (2021a) derive an influence function that can be applied to many quantities of interest. We conduct a series of Monte Carlo experiments that investigate the impact of regularization on the proposed estimation and inference procedure in the context of discrete choice models. The results show that the deep learning approach generally leads to precise estimates of the true average parameters and that regular robust standard errors lead to invalid inference results, showing the need for the influence function approach for inference. Without regularization, the influence function approach can lead to substantial bias and large estimated standard errors caused by extreme outliers. Regularization reduces this problem and stabilizes the estimation procedure, but at the expense of inducing an additional bias. In our experiments, this bias, combined with the decreasing variance associated with increasing regularization, leads to the construction of invalid inferential statements. Repeated sample splitting, unlike regularization, stabilizes the estimation approach without introducing an additional bias, thereby allowing for the construction of valid inferential statements.
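The core idea of the estimation step can be illustrated with a minimal sketch: a neural network maps an individual characteristic x to a choice-model coefficient θ(x), and the network weights are trained by maximizing the logit log-likelihood. The data-generating process, variable names, network size, and optimizer settings below are illustrative assumptions for exposition, not the design of the paper's Monte Carlo experiments or the exact architecture of Farrell et al. (2021a).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative DGP (an assumption, not the paper's design): binary logit
# with an observed-heterogeneity coefficient theta(x) = 1 + 0.5 * x.
n = 2000
x = rng.uniform(-1.0, 1.0, size=n)        # individual characteristic
price = rng.uniform(0.5, 2.0, size=n)     # choice attribute
theta_true = 1.0 + 0.5 * x                # heterogeneous coefficient
z = theta_true * price                    # utility index
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-z))).astype(float)

# Small MLP theta_hat(x): 1 -> H -> 1, trained by maximum likelihood
# with hand-rolled full-batch gradient descent (no regularization).
H = 8
W1 = rng.normal(scale=0.5, size=(H, 1)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(1, H)); b2 = np.zeros(1)

def forward(xv):
    h = np.tanh(xv[:, None] @ W1.T + b1)  # hidden layer, shape (n, H)
    theta = h @ W2.T + b2                 # coefficient theta(x), (n, 1)
    return h, theta[:, 0]

def nll(theta):
    # Numerically stable logistic negative log-likelihood.
    zz = theta * price
    return np.mean(np.log1p(np.exp(-np.abs(zz))) + np.maximum(zz, 0) - y * zz)

lr = 0.1
_, theta0 = forward(x)
initial_nll = nll(theta0)
for _ in range(2000):
    h, theta = forward(x)
    p_hat = 1.0 / (1.0 + np.exp(-theta * price))
    dtheta = (p_hat - y) * price / n      # dNLL/dtheta per observation
    dW2 = dtheta[None, :] @ h             # backprop through output layer
    db2 = dtheta.sum()
    dh = dtheta[:, None] * W2             # backprop through hidden layer
    dpre = dh * (1.0 - h ** 2)            # tanh derivative
    dW1 = dpre.T @ x[:, None]
    db1 = dpre.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, theta_hat = forward(x)
final_nll = nll(theta_hat)
print(theta_hat.mean(), theta_true.mean())
```

The average of theta_hat over the sample is the kind of "average parameter" whose estimation precision the Monte Carlo experiments assess; the paper's point is that inference on such averages requires the influence-function correction (or repeated sample splitting) rather than regular robust standard errors, which this sketch does not implement.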