The Application of Green GDP and Its Impact on Global Economy and Environment: Analysis of GGDP based on SEEA model
Mingpu Ma. arXiv:2409.02642 (2024-09-04)
This paper presents an analysis of Green Gross Domestic Product (GGDP) using the System of Environmental-Economic Accounting (SEEA) model to evaluate its impact on global climate mitigation and economic health. GGDP is proposed as a superior measure to traditional GDP by incorporating natural resource consumption, environmental pollution control, and degradation factors. The study develops a GGDP model and employs grey correlation analysis and grey prediction models to assess its relationship with these factors. Key findings demonstrate that replacing GDP with GGDP can positively influence climate change, particularly in reducing CO2 emissions and stabilizing global temperatures. The analysis further explores the implications of GGDP adoption across developed and developing countries, with specific predictions for China and the United States. The results indicate a potential increase in economic levels for developing countries, while developed nations may experience a decrease. Additionally, the shift to GGDP is shown to significantly reduce natural resource depletion and population growth rates in the United States, suggesting broader environmental and economic benefits. This paper highlights the universal applicability of the GGDP model and its potential to enhance environmental and economic policies globally.
{"title":"The Application of Green GDP and Its Impact on Global Economy and Environment: Analysis of GGDP based on SEEA model","authors":"Mingpu Ma","doi":"arxiv-2409.02642","DOIUrl":"https://doi.org/arxiv-2409.02642","url":null,"abstract":"This paper presents an analysis of Green Gross Domestic Product (GGDP) using\u0000the System of Environmental-Economic Accounting (SEEA) model to evaluate its\u0000impact on global climate mitigation and economic health. GGDP is proposed as a\u0000superior measure to tradi-tional GDP by incorporating natural resource\u0000consumption, environmental pollution control, and degradation factors. The\u0000study develops a GGDP model and employs grey correlation analysis and grey\u0000prediction models to assess its relationship with these factors. Key findings\u0000demonstrate that replacing GDP with GGDP can positively influence climate\u0000change, partic-ularly in reducing CO2 emissions and stabilizing global\u0000temperatures. The analysis further explores the implications of GGDP adoption\u0000across developed and developing countries, with specific predictions for China\u0000and the United States. The results indicate a potential increase in economic\u0000levels for developing countries, while developed nations may experi-ence a\u0000decrease. Additionally, the shift to GGDP is shown to significantly reduce\u0000natural re-source depletion and population growth rates in the United States,\u0000suggesting broader envi-ronmental and economic benefits. This paper highlights\u0000the universal applicability of the GGDP model and its potential to enhance\u0000environmental and economic policies globally.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distribution Regression Difference-In-Differences
Iván Fernández-Val, Jonas Meier, Aico van Vuuren, Francis Vella. arXiv:2409.02311 (2024-09-03)
We provide a simple distribution regression estimator for treatment effects in the difference-in-differences (DiD) design. Our procedure is particularly useful when the treatment effect differs across the distribution of the outcome variable. Our proposed estimator easily incorporates covariates and, importantly, can be extended to settings where the treatment potentially affects the joint distribution of multiple outcomes. Our key identifying restriction is that the counterfactual distribution of the treated in the untreated state has no interaction effect between treatment and time. This assumption results in a parallel trend assumption on a transformation of the distribution. We highlight the relationship between our procedure and assumptions with the changes-in-changes approach of Athey and Imbens (2006). We also reexamine two existing empirical examples which highlight the utility of our approach.
{"title":"Distribution Regression Difference-In-Differences","authors":"Iván Fernández-Val, Jonas Meier, Aico van Vuuren, Francis Vella","doi":"arxiv-2409.02311","DOIUrl":"https://doi.org/arxiv-2409.02311","url":null,"abstract":"We provide a simple distribution regression estimator for treatment effects\u0000in the difference-in-differences (DiD) design. Our procedure is particularly\u0000useful when the treatment effect differs across the distribution of the outcome\u0000variable. Our proposed estimator easily incorporates covariates and,\u0000importantly, can be extended to settings where the treatment potentially\u0000affects the joint distribution of multiple outcomes. Our key identifying\u0000restriction is that the counterfactual distribution of the treated in the\u0000untreated state has no interaction effect between treatment and time. This\u0000assumption results in a parallel trend assumption on a transformation of the\u0000distribution. We highlight the relationship between our procedure and\u0000assumptions with the changes-in-changes approach of Athey and Imbens (2006). We\u0000also reexamine two existing empirical examples which highlight the utility of\u0000our approach.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Variable selection in convex nonparametric least squares via structured Lasso: An application to the Swedish electricity market
Zhiqiang Liao. arXiv:2409.01911 (2024-09-03)
We study the problem of variable selection in convex nonparametric least squares (CNLS). Whereas the least absolute shrinkage and selection operator (Lasso) is a popular technique for least squares, its variable selection performance is unknown in CNLS problems. In this work, we investigate the performance of the Lasso CNLS estimator and find that it is usually unable to select variables efficiently. Exploiting the unique structure of the subgradients in CNLS, we develop a structured Lasso that combines the $\ell_1$-norm and the $\ell_{\infty}$-norm. To improve its predictive performance, we propose a relaxed version of the structured Lasso in which an additional tuning parameter controls the two effects, variable selection and model shrinkage, separately. A Monte Carlo study verifies the finite-sample performance of the proposed approaches. In an application to Swedish electricity distribution networks, where the regression model is assumed to be semi-nonparametric, our methods are extended to doubly penalized CNLS estimators. The results from the simulation and the application confirm that the proposed structured Lasso performs favorably, generally leading to sparser and more accurate predictive models relative to other variable selection methods in the literature.
{"title":"Variable selection in convex nonparametric least squares via structured Lasso: An application to the Swedish electricity market","authors":"Zhiqiang Liao","doi":"arxiv-2409.01911","DOIUrl":"https://doi.org/arxiv-2409.01911","url":null,"abstract":"We study the problem of variable selection in convex nonparametric least\u0000squares (CNLS). Whereas the least absolute shrinkage and selection operator\u0000(Lasso) is a popular technique for least squares, its variable selection\u0000performance is unknown in CNLS problems. In this work, we investigate the\u0000performance of the Lasso CNLS estimator and find out it is usually unable to\u0000select variables efficiently. Exploiting the unique structure of the\u0000subgradients in CNLS, we develop a structured Lasso by combining $ell_1$-norm\u0000and $ell_{infty}$-norm. To improve its predictive performance, we propose a\u0000relaxed version of the structured Lasso where we can control the two\u0000effects--variable selection and model shrinkage--using an additional tuning\u0000parameter. A Monte Carlo study is implemented to verify the finite sample\u0000performances of the proposed approaches. In the application of Swedish\u0000electricity distribution networks, when the regression model is assumed to be\u0000semi-nonparametric, our methods are extended to the doubly penalized CNLS\u0000estimators. The results from the simulation and application confirm that the\u0000proposed structured Lasso performs favorably, generally leading to sparser and\u0000more accurate predictive models, relative to the other variable selection\u0000methods in the literature.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"141 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Double Machine Learning at Scale to Predict Causal Impact of Customer Actions
Sushant More, Priya Kotwal, Sujith Chappidi, Dinesh Mandalapu, Chris Khawand. arXiv:2409.02332 (2024-09-03)
The Causal Impact (CI) of customer actions is broadly used across the industry to inform both short- and long-term investment decisions of various types. In this paper, we apply the double machine learning (DML) methodology to estimate CI values across hundreds of customer actions of business interest and hundreds of millions of customers. We operationalize DML through a causal ML library based on Spark with a flexible, JSON-driven model configuration approach to estimate CI at scale (i.e., across hundreds of actions and millions of customers). We outline the DML methodology and implementation, and the associated benefits over the traditional potential-outcomes-based CI model. We report population-level as well as customer-level CI values along with confidence intervals. The validation metrics show a 2.2% gain over the baseline methods and a 2.5x gain in computational time. Our contribution is to advance the scalable application of CI, while also providing an interface that allows faster experimentation, cross-platform support, the ability to onboard new use cases, and improved accessibility of the underlying code for partner teams.
{"title":"Double Machine Learning at Scale to Predict Causal Impact of Customer Actions","authors":"Sushant More, Priya Kotwal, Sujith Chappidi, Dinesh Mandalapu, Chris Khawand","doi":"arxiv-2409.02332","DOIUrl":"https://doi.org/arxiv-2409.02332","url":null,"abstract":"Causal Impact (CI) of customer actions are broadly used across the industry\u0000to inform both short- and long-term investment decisions of various types. In\u0000this paper, we apply the double machine learning (DML) methodology to estimate\u0000the CI values across 100s of customer actions of business interest and 100s of\u0000millions of customers. We operationalize DML through a causal ML library based\u0000on Spark with a flexible, JSON-driven model configuration approach to estimate\u0000CI at scale (i.e., across hundred of actions and millions of customers). We\u0000outline the DML methodology and implementation, and associated benefits over\u0000the traditional potential outcomes based CI model. We show population-level as\u0000well as customer-level CI values along with confidence intervals. The\u0000validation metrics show a 2.2% gain over the baseline methods and a 2.5X gain\u0000in the computational time. Our contribution is to advance the scalable\u0000application of CI, while also providing an interface that allows faster\u0000experimentation, cross-platform support, ability to onboard new use cases, and\u0000improves accessibility of underlying code for partner teams.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Double Machine Learning meets Panel Data -- Promises, Pitfalls, and Potential Solutions
Jonathan Fuhr, Dominik Papies. arXiv:2409.01266 (2024-09-02)
Estimating causal effects using machine learning (ML) algorithms can help to relax functional form assumptions if used within appropriate frameworks. However, most of these frameworks assume settings with cross-sectional data, whereas researchers often have access to panel data, which in traditional methods helps to deal with unobserved heterogeneity between units. In this paper, we explore how double/debiased machine learning (DML) (Chernozhukov et al., 2018) can be adapted to panel data in the presence of unobserved heterogeneity. This adaptation is challenging because DML's cross-fitting procedure assumes independent data, and the unobserved heterogeneity is not necessarily additively separable in settings with nonlinear observed confounding. We assess the performance of several intuitively appealing estimators in a variety of simulations. While we find violations of the cross-fitting assumptions to be largely inconsequential for the accuracy of the effect estimates, many of the considered methods fail to adequately account for the presence of unobserved heterogeneity. However, we find that using predictive models based on the correlated random effects approach (Mundlak, 1978) within DML leads to accurate coefficient estimates across settings, given a sample size that is large relative to the number of observed confounders. We also show that the influence of the unobserved heterogeneity on the observed confounders plays a significant role in the performance of most alternative methods.
{"title":"Double Machine Learning meets Panel Data -- Promises, Pitfalls, and Potential Solutions","authors":"Jonathan Fuhr, Dominik Papies","doi":"arxiv-2409.01266","DOIUrl":"https://doi.org/arxiv-2409.01266","url":null,"abstract":"Estimating causal effect using machine learning (ML) algorithms can help to\u0000relax functional form assumptions if used within appropriate frameworks.\u0000However, most of these frameworks assume settings with cross-sectional data,\u0000whereas researchers often have access to panel data, which in traditional\u0000methods helps to deal with unobserved heterogeneity between units. In this\u0000paper, we explore how we can adapt double/debiased machine learning (DML)\u0000(Chernozhukov et al., 2018) for panel data in the presence of unobserved\u0000heterogeneity. This adaptation is challenging because DML's cross-fitting\u0000procedure assumes independent data and the unobserved heterogeneity is not\u0000necessarily additively separable in settings with nonlinear observed\u0000confounding. We assess the performance of several intuitively appealing\u0000estimators in a variety of simulations. While we find violations of the\u0000cross-fitting assumptions to be largely inconsequential for the accuracy of the\u0000effect estimates, many of the considered methods fail to adequately account for\u0000the presence of unobserved heterogeneity. However, we find that using\u0000predictive models based on the correlated random effects approach (Mundlak,\u00001978) within DML leads to accurate coefficient estimates across settings, given\u0000a sample size that is large relative to the number of observed confounders. We\u0000also show that the influence of the unobserved heterogeneity on the observed\u0000confounders plays a significant role for the performance of most alternative\u0000methods.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"1583 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bandit Algorithms for Policy Learning: Methods, Implementation, and Welfare-performance
Toru Kitagawa, Jeff Rowley. arXiv:2409.00379 (2024-08-31)
Static supervised learning, in which experimental data serve as a training sample for the estimation of an optimal treatment assignment policy, is a commonly assumed framework of policy learning. An arguably more realistic but challenging scenario is a dynamic setting in which the planner performs experimentation and exploitation simultaneously with subjects that arrive sequentially. This paper studies bandit algorithms for learning an optimal individualised treatment assignment policy. Specifically, we study the applicability of the EXP4.P (Exponential weighting for Exploration and Exploitation with Experts) algorithm developed by Beygelzimer et al. (2011) to policy learning. Assuming that the class of policies has a finite Vapnik-Chervonenkis dimension and that the number of subjects to be allocated is known, we present a high-probability welfare-regret bound for the algorithm. To implement the algorithm, we use an incremental enumeration algorithm for hyperplane arrangements. We perform extensive numerical analysis to assess the algorithm's sensitivity to its tuning parameters and its welfare-regret performance. Further simulation exercises are calibrated to the National Job Training Partnership Act (JTPA) Study sample to determine how the algorithm performs when applied to economic data. Our findings highlight various computational challenges and suggest that the limited welfare gain from the algorithm is due to substantial heterogeneity in causal effects in the JTPA data.
{"title":"Bandit Algorithms for Policy Learning: Methods, Implementation, and Welfare-performance","authors":"Toru Kitagawa, Jeff Rowley","doi":"arxiv-2409.00379","DOIUrl":"https://doi.org/arxiv-2409.00379","url":null,"abstract":"Static supervised learning-in which experimental data serves as a training\u0000sample for the estimation of an optimal treatment assignment policy-is a\u0000commonly assumed framework of policy learning. An arguably more realistic but\u0000challenging scenario is a dynamic setting in which the planner performs\u0000experimentation and exploitation simultaneously with subjects that arrive\u0000sequentially. This paper studies bandit algorithms for learning an optimal\u0000individualised treatment assignment policy. Specifically, we study\u0000applicability of the EXP4.P (Exponential weighting for Exploration and\u0000Exploitation with Experts) algorithm developed by Beygelzimer et al. (2011) to\u0000policy learning. Assuming that the class of policies has a finite\u0000Vapnik-Chervonenkis dimension and that the number of subjects to be allocated\u0000is known, we present a high probability welfare-regret bound of the algorithm.\u0000To implement the algorithm, we use an incremental enumeration algorithm for\u0000hyperplane arrangements. We perform extensive numerical analysis to assess the\u0000algorithm's sensitivity to its tuning parameters and its welfare-regret\u0000performance. Further simulation exercises are calibrated to the National Job\u0000Training Partnership Act (JTPA) Study sample to determine how the algorithm\u0000performs when applied to economic data. Our findings highlight various\u0000computational challenges and suggest that the limited welfare gain from the\u0000algorithm is due to substantial heterogeneity in causal effects in the JTPA\u0000data.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weighted Regression with Sybil Networks
Nihar Shah. arXiv:2408.17426 (2024-08-30)
In many online domains, Sybil networks -- cases where a single user assumes multiple identities -- are a pervasive feature. This complicates experiments, as off-the-shelf regression estimators assume at least known network topologies (if not fully independent observations), whereas Sybil network topologies in practice are often unknown. The literature has focused exclusively on techniques to detect Sybil networks, leading many experimenters to exclude suspected networks entirely before estimating treatment effects. I present a more efficient solution in the presence of these suspected Sybil networks: a weighted regression framework that applies weights based on the probabilities that sets of observations are controlled by single actors. I show that the MSE-minimizing solution is to set the weight matrix equal to the inverse of the expected network topology. I demonstrate the methodology on simulated data, and then apply the technique to a competition with suspected Sybil networks run on the Sui blockchain, showing reductions in the standard error of the estimate of 6-24%.
{"title":"Weighted Regression with Sybil Networks","authors":"Nihar Shah","doi":"arxiv-2408.17426","DOIUrl":"https://doi.org/arxiv-2408.17426","url":null,"abstract":"In many online domains, Sybil networks -- or cases where a single user\u0000assumes multiple identities -- is a pervasive feature. This complicates\u0000experiments, as off-the-shelf regression estimators at least assume known\u0000network topologies (if not fully independent observations) when Sybil network\u0000topologies in practice are often unknown. The literature has exclusively\u0000focused on techniques to detect Sybil networks, leading many experimenters to\u0000subsequently exclude suspected networks entirely before estimating treatment\u0000effects. I present a more efficient solution in the presence of these suspected\u0000Sybil networks: a weighted regression framework that applies weights based on\u0000the probabilities that sets of observations are controlled by single actors. I\u0000show in the paper that the MSE-minimizing solution is to set the weight matrix\u0000equal to the inverse of the expected network topology. I demonstrate the\u0000methodology on simulated data, and then I apply the technique to a competition\u0000with suspected Sybil networks run on the Sui blockchain and show reductions in\u0000the standard error of the estimate by 6 - 24%.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
State Space Model of Realized Volatility under the Existence of Dependent Market Microstructure Noise
Toru Yano. arXiv:2408.17187 (2024-08-30)
Volatility, the degree of variation of a stock price, is an important quantity in finance. Realized Volatility (RV) is an estimator of volatility calculated from high-frequency observed prices, and it has lately attracted considerable attention in econometrics and mathematical finance. However, high-frequency data are known to include observation errors called market microstructure noise (MN). Nagakura and Watanabe [2015] proposed a state space model that decomposes RV into true volatility and the influence of MN. In this paper, we assume a dependent MN that autocorrelates and correlates with returns, as reported by Hansen and Lunde [2006], extend the results of Nagakura and Watanabe [2015], and compare models using simulated and actual data.
{"title":"State Space Model of Realized Volatility under the Existence of Dependent Market Microstructure Noise","authors":"Toru Yano","doi":"arxiv-2408.17187","DOIUrl":"https://doi.org/arxiv-2408.17187","url":null,"abstract":"Volatility means the degree of variation of a stock price which is important\u0000in finance. Realized Volatility (RV) is an estimator of the volatility\u0000calculated using high-frequency observed prices. RV has lately attracted\u0000considerable attention of econometrics and mathematical finance. However, it is\u0000known that high-frequency data includes observation errors called market\u0000microstructure noise (MN). Nagakura and Watanabe[2015] proposed a state space\u0000model that resolves RV into true volatility and influence of MN. In this paper,\u0000we assume a dependent MN that autocorrelates and correlates with return as\u0000reported by Hansen and Lunde[2006] and extends the results of Nagakura and\u0000Watanabe[2015] and compare models by simulation and actual data.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensitivity Analysis for Dynamic Discrete Choice Models
Chun Pong Lau. arXiv:2408.16330 (2024-08-29)
In dynamic discrete choice models, some parameters, such as the discount factor, are fixed instead of being estimated. This paper proposes two sensitivity analysis procedures for dynamic discrete choice models with respect to the fixed parameters. First, I develop a local sensitivity measure that estimates the change in the target parameter for a unit change in the fixed parameter. This measure is fast to compute as it does not require model re-estimation. Second, I propose a global sensitivity analysis procedure that uses model primitives to study the relationship between target parameters and fixed parameters. I show how to apply the sensitivity analysis procedures of this paper through two empirical applications.
{"title":"Sensitivity Analysis for Dynamic Discrete Choice Models","authors":"Chun Pong Lau","doi":"arxiv-2408.16330","DOIUrl":"https://doi.org/arxiv-2408.16330","url":null,"abstract":"In dynamic discrete choice models, some parameters, such as the discount\u0000factor, are being fixed instead of being estimated. This paper proposes two\u0000sensitivity analysis procedures for dynamic discrete choice models with respect\u0000to the fixed parameters. First, I develop a local sensitivity measure that\u0000estimates the change in the target parameter for a unit change in the fixed\u0000parameter. This measure is fast to compute as it does not require model\u0000re-estimation. Second, I propose a global sensitivity analysis procedure that\u0000uses model primitives to study the relationship between target parameters and\u0000fixed parameters. I show how to apply the sensitivity analysis procedures of\u0000this paper through two empirical applications.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marginal homogeneity tests with panel data
Federico Bugni, Jackson Bunting, Muyang Ren. arXiv:2408.15862 (2024-08-28)
A panel dataset satisfies marginal homogeneity if the time-specific marginal distributions are homogeneous or time-invariant. Marginal homogeneity is relevant in economic settings such as dynamic discrete games. In this paper, we propose several tests for the hypothesis of marginal homogeneity and investigate their properties. We consider an asymptotic framework in which the number of individuals n in the panel diverges, and the number of periods T is fixed. We implement our tests by comparing a studentized or non-studentized T-sample version of the Cramér-von Mises statistic with a suitable critical value. We propose three methods to construct the critical value: asymptotic approximations, the bootstrap, and time permutations. We show that the first two methods result in asymptotically exact hypothesis tests. The permutation test based on a non-studentized statistic is asymptotically exact when T=2, but is asymptotically invalid when T>2. In contrast, the permutation test based on a studentized statistic is always asymptotically exact. Finally, under a time-exchangeability assumption, the permutation test is exact in finite samples, both with and without studentization.
{"title":"Marginal homogeneity tests with panel data","authors":"Federico Bugni, Jackson Bunting, Muyang Ren","doi":"arxiv-2408.15862","DOIUrl":"https://doi.org/arxiv-2408.15862","url":null,"abstract":"A panel dataset satisfies marginal homogeneity if the time-specific marginal\u0000distributions are homogeneous or time-invariant. Marginal homogeneity is\u0000relevant in economic settings such as dynamic discrete games. In this paper, we\u0000propose several tests for the hypothesis of marginal homogeneity and\u0000investigate their properties. We consider an asymptotic framework in which the\u0000number of individuals n in the panel diverges, and the number of periods T is\u0000fixed. We implement our tests by comparing a studentized or non-studentized\u0000T-sample version of the Cramer-von Mises statistic with a suitable critical\u0000value. We propose three methods to construct the critical value: asymptotic\u0000approximations, the bootstrap, and time permutations. We show that the first\u0000two methods result in asymptotically exact hypothesis tests. The permutation\u0000test based on a non-studentized statistic is asymptotically exact when T=2, but\u0000is asymptotically invalid when T>2. In contrast, the permutation test based on\u0000a studentized statistic is always asymptotically exact. Finally, under a\u0000time-exchangeability assumption, the permutation test is exact in finite\u0000samples, both with and without studentization.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}