This paper introduces a novel two-stage estimation and inference procedure for generalized impulse responses (GIRs). GIRs encompass all coefficients in a multi-horizon linear projection model of future outcomes of y on lagged values (Dufour and Renault, 1998), which include the Sims impulse response. The conventional use of Least Squares (LS) with heteroskedasticity- and autocorrelation-consistent covariance estimation is less precise and often results in unreliable finite-sample tests, further complicated by the selection of bandwidth and kernel functions. Our two-stage method surpasses the LS approach in terms of estimation efficiency and inference robustness. The robustness stems from our proposed covariance matrix estimates, which eliminate the need to correct for serial correlation in the multi-horizon projection residuals. Our method accommodates non-stationary data and allows the projection horizon to grow with the sample size. Monte Carlo simulations demonstrate that our two-stage method outperforms the LS method. We apply the two-stage method to investigate GIRs, implement multi-horizon Granger causality tests, and find that economic uncertainty exerts both short-run (1-3 months) and long-run (30 months) effects on economic activity.
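To fix notation, here is a minimal sketch of the multi-horizon projection underlying GIRs; the exact specification, the lag length $p$, and the paper's two-stage construction are assumptions for illustration rather than details taken from the paper:

\[
y_{t+h} = \mu_h + \sum_{j=1}^{p} \beta_{h,j}\, y_{t-j+1} + u_{t+h,h}, \qquad h = 1, \dots, H,
\]

where the projection coefficients $\beta_{h,j}$ collected across horizons $h$ are the generalized impulse responses. Because the residual $u_{t+h,h}$ overlaps across horizons, it is serially correlated by construction, which is what HAC corrections in the LS approach are meant to handle and what the proposed covariance estimates avoid correcting for.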
{"title":"Simple robust two-stage estimation and inference for generalized impulse responses and multi-horizon causality","authors":"Jean-Marie Dufour, Endong Wang","doi":"arxiv-2409.10820","DOIUrl":"https://doi.org/arxiv-2409.10820","url":null,"abstract":"This paper introduces a novel two-stage estimation and inference procedure\u0000for generalized impulse responses (GIRs). GIRs encompass all coefficients in a\u0000multi-horizon linear projection model of future outcomes of y on lagged values\u0000(Dufour and Renault, 1998), which include the Sims' impulse response. The\u0000conventional use of Least Squares (LS) with heteroskedasticity- and\u0000autocorrelation-consistent covariance estimation is less precise and often\u0000results in unreliable finite sample tests, further complicated by the selection\u0000of bandwidth and kernel functions. Our two-stage method surpasses the LS\u0000approach in terms of estimation efficiency and inference robustness. The\u0000robustness stems from our proposed covariance matrix estimates, which eliminate\u0000the need to correct for serial correlation in the multi-horizon projection\u0000residuals. Our method accommodates non-stationary data and allows the\u0000projection horizon to grow with sample size. Monte Carlo simulations\u0000demonstrate our two-stage method outperforms the LS method. We apply the\u0000two-stage method to investigate the GIRs, implement multi-horizon Granger\u0000causality test, and find that economic uncertainty exerts both short-run (1-3\u0000months) and long-run (30 months) effects on economic activities.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142260943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inequalities may appear in many models. They can be as simple as assuming a parameter is nonnegative, possibly a regression coefficient or a treatment effect. This paper focuses on the case where there is only one inequality and proposes a confidence interval that is particularly attractive, called the inequality-imposed confidence interval (IICI). The IICI is simple. It does not require simulations or tuning parameters. The IICI is adaptive. It reduces to the usual confidence interval (calculated by adding and subtracting the standard error times the $1 - \alpha/2$ standard normal quantile) when the inequality is sufficiently slack. When the inequality is sufficiently violated, the IICI reduces to an equality-imposed confidence interval (the usual confidence interval for the submodel where the inequality holds with equality). Also, the IICI is uniformly valid and has (weakly) shorter length than the usual confidence interval; it is never longer. The first empirical application considers a linear regression when a coefficient is known to be nonpositive. A second empirical application considers an instrumental variables regression when the endogeneity of a regressor is known to be nonnegative.
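A minimal sketch of the adaptive switching idea in the simplest setting, where the parameter of interest is $\theta$ and a nuisance parameter $\delta$ is known to satisfy $\delta \ge 0$; the cutoff and the hard switching rule below are illustrative assumptions, not the paper's actual IICI construction (which is uniformly valid and never longer than the usual interval):

```python
from scipy.stats import norm

def adaptive_ci(theta_hat, se_theta, theta_hat_eq, se_theta_eq,
                delta_hat, se_delta, alpha=0.05, cutoff=1.0):
    """Illustrative adaptive CI for theta when a nuisance parameter delta >= 0.

    Hypothetical sketch only: `cutoff` and the hard switch are assumptions, not the IICI.
    - theta_hat, se_theta: unrestricted estimate of theta and its standard error
    - theta_hat_eq, se_theta_eq: estimate and SE from the submodel imposing delta = 0
    - delta_hat, se_delta: unrestricted estimate of the constrained nuisance parameter
    """
    z = norm.ppf(1 - alpha / 2)                      # the 1 - alpha/2 standard normal quantile
    if delta_hat / se_delta >= -cutoff:
        # inequality slack (or only mildly binding): usual Wald interval for theta
        return (theta_hat - z * se_theta, theta_hat + z * se_theta)
    # inequality strongly violated: equality-imposed interval from the delta = 0 submodel
    return (theta_hat_eq - z * se_theta_eq, theta_hat_eq + z * se_theta_eq)

# Example: the nuisance estimate is far below zero, so the submodel interval is used.
print(adaptive_ci(1.2, 0.4, 1.0, 0.3, delta_hat=-0.8, se_delta=0.2))
```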
{"title":"A Simple and Adaptive Confidence Interval when Nuisance Parameters Satisfy an Inequality","authors":"Gregory Fletcher Cox","doi":"arxiv-2409.09962","DOIUrl":"https://doi.org/arxiv-2409.09962","url":null,"abstract":"Inequalities may appear in many models. They can be as simple as assuming a\u0000parameter is nonnegative, possibly a regression coefficient or a treatment\u0000effect. This paper focuses on the case that there is only one inequality and\u0000proposes a confidence interval that is particularly attractive, called the\u0000inequality-imposed confidence interval (IICI). The IICI is simple. It does not\u0000require simulations or tuning parameters. The IICI is adaptive. It reduces to\u0000the usual confidence interval (calculated by adding and subtracting the\u0000standard error times the $1 - alpha/2$ standard normal quantile) when the\u0000inequality is sufficiently slack. When the inequality is sufficiently violated,\u0000the IICI reduces to an equality-imposed confidence interval (the usual\u0000confidence interval for the submodel where the inequality holds with equality).\u0000Also, the IICI is uniformly valid and has (weakly) shorter length than the\u0000usual confidence interval; it is never longer. The first empirical application\u0000considers a linear regression when a coefficient is known to be nonpositive. A\u0000second empirical application considers an instrumental variables regression\u0000when the endogeneity of a regressor is known to be nonnegative.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142260945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thiago Trafane Oliveira Santos (Central Bank of Brazil, Brasília, Brazil; Department of Economics, University of Brasilia, Brazil), Daniel Oliveira Cajueiro (Department of Economics, University of Brasilia, Brazil; National Institute of Science and Technology for Complex Systems)
Even though practitioners often estimate Pareto exponents by running OLS rank-size regressions, the usual recommendation is to use the Hill MLE with a small-sample correction instead, due to its unbiasedness and efficiency. In this paper, we advocate that you should also apply OLS in empirical applications. On the one hand, we demonstrate that, with a small-sample correction, the OLS estimator is also unbiased. On the other hand, we show that the MLE assigns significantly greater weight to smaller observations. This suggests that the OLS estimator may outperform the MLE in cases where the distribution is (i) strictly Pareto but only in the upper tail or (ii) regularly varying rather than strictly Pareto. We substantiate our theoretical findings with Monte Carlo simulations and real-world applications, demonstrating the practical relevance of the OLS method in estimating tail exponents.
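For reference, a minimal sketch of the two estimators being compared. The rank-minus-one-half shift below is the familiar Gabaix-Ibragimov small-sample correction for the OLS rank-size regression; the paper's own corrections may differ, and the Hill estimator is shown here without any small-sample adjustment:

```python
import numpy as np

def ols_rank_size(x):
    """OLS rank-size estimate of the Pareto tail exponent.

    Regress log(rank - 1/2) on log(size); minus the slope estimates the exponent
    (the 1/2 shift is the Gabaix-Ibragimov small-sample correction).
    """
    x = np.sort(np.asarray(x, dtype=float))[::-1]    # sizes in descending order
    ranks = np.arange(1, len(x) + 1) - 0.5
    slope = np.polyfit(np.log(x), np.log(ranks), 1)[0]
    return -slope

def hill_mle(x, x_min=None):
    """Hill maximum-likelihood estimate of the Pareto tail exponent."""
    x = np.asarray(x, dtype=float)
    x_min = x.min() if x_min is None else x_min
    tail = x[x >= x_min]
    return len(tail) / np.sum(np.log(tail / x_min))

# Simulated strictly Pareto sample with exponent 1.5 and scale x_min = 1
rng = np.random.default_rng(0)
sample = rng.pareto(1.5, size=5000) + 1.0
print(ols_rank_size(sample), hill_mle(sample, x_min=1.0))
```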
{"title":"Why you should also use OLS estimation of tail exponents","authors":"Thiago Trafane Oliveira SantosCentral Bank of Brazil, Brasília, Brazil. Department of %Economics, University of Brasilia, Brazil, Daniel Oliveira CajueiroDepartment of Economics, University of Brasilia, Brazil. National Institute of Science and Technology for Complex Systems","doi":"arxiv-2409.10448","DOIUrl":"https://doi.org/arxiv-2409.10448","url":null,"abstract":"Even though practitioners often estimate Pareto exponents running OLS\u0000rank-size regressions, the usual recommendation is to use the Hill MLE with a\u0000small-sample correction instead, due to its unbiasedness and efficiency. In\u0000this paper, we advocate that you should also apply OLS in empirical\u0000applications. On the one hand, we demonstrate that, with a small-sample\u0000correction, the OLS estimator is also unbiased. On the other hand, we show that\u0000the MLE assigns significantly greater weight to smaller observations. This\u0000suggests that the OLS estimator may outperform the MLE in cases where the\u0000distribution is (i) strictly Pareto but only in the upper tail or (ii)\u0000regularly varying rather than strictly Pareto. We substantiate our theoretical\u0000findings with Monte Carlo simulations and real-world applications,\u0000demonstrating the practical relevance of the OLS method in estimating tail\u0000exponents.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Scholastic Aptitude Test (SAT) is crucial for college admissions, but its effectiveness and relevance are increasingly questioned. This paper enhances Synthetic Control methods by introducing "Transformed Control", a novel method that employs Large Language Models (LLMs) powered by Artificial Intelligence to generate control groups. We utilize OpenAI's API to generate a control group where GPT-4, or ChatGPT, takes multiple SATs annually from 2008 to 2023. This control group helps analyze shifts in SAT math difficulty over time, starting from the baseline year of 2008. Using parallel trends, we calculate the Average Difference in Scores (ADS) to assess changes in high school students' math performance. Our results indicate a significant decrease in the difficulty of the SAT math section over time, alongside a decline in students' math performance. The analysis shows a 71-point drop in the rigor of SAT math from 2008 to 2023, with student performance decreasing by 36 points, resulting in a 107-point total divergence in average student math performance. We investigate possible mechanisms for this decline in math proficiency, such as changing university selection criteria, increased screen time, grade inflation, and worsening adolescent mental health. Disparities among demographic groups show a 104-point drop for White students, 84 points for Black students, and 53 points for Asian students. Male students saw a 117-point reduction, while female students had a 100-point decrease.
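One way to read the headline numbers, relative to the 2008 baseline (this aggregation is an interpretation, not the paper's formal ADS definition): the GPT-based control indicates the test became easier by 71 points while raw student scores nonetheless fell by 36 points, so difficulty-adjusted performance diverged by

\[
71 + 36 = 107 \ \text{points}.
\]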
{"title":"GPT takes the SAT: Tracing changes in Test Difficulty and Math Performance of Students","authors":"Vikram Krishnaveti, Saannidhya Rawat","doi":"arxiv-2409.10750","DOIUrl":"https://doi.org/arxiv-2409.10750","url":null,"abstract":"Scholastic Aptitude Test (SAT) is crucial for college admissions but its\u0000effectiveness and relevance are increasingly questioned. This paper enhances\u0000Synthetic Control methods by introducing \"Transformed Control\", a novel method\u0000that employs Large Language Models (LLMs) powered by Artificial Intelligence to\u0000generate control groups. We utilize OpenAI's API to generate a control group\u0000where GPT-4, or ChatGPT, takes multiple SATs annually from 2008 to 2023. This\u0000control group helps analyze shifts in SAT math difficulty over time, starting\u0000from the baseline year of 2008. Using parallel trends, we calculate the Average\u0000Difference in Scores (ADS) to assess changes in high school students' math\u0000performance. Our results indicate a significant decrease in the difficulty of\u0000the SAT math section over time, alongside a decline in students' math\u0000performance. The analysis shows a 71-point drop in the rigor of SAT math from\u00002008 to 2023, with student performance decreasing by 36 points, resulting in a\u0000107-point total divergence in average student math performance. We investigate\u0000possible mechanisms for this decline in math proficiency, such as changing\u0000university selection criteria, increased screen time, grade inflation, and\u0000worsening adolescent mental health. Disparities among demographic groups show a\u0000104-point drop for White students, 84 points for Black students, and 53 points\u0000for Asian students. Male students saw a 117-point reduction, while female\u0000students had a 100-point decrease.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"195 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142260944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LASSO introduces shrinkage bias into estimated coefficients, which can adversely affect the desirable asymptotic normality and invalidate the standard inferential procedure based on the $t$-statistic. The desparsified LASSO has emerged as a well-known remedy for this issue. In the context of high-dimensional predictive regression, the desparsified LASSO faces an additional challenge: the Stambaugh bias arising from nonstationary regressors. To restore the standard inferential procedure, we propose a novel estimator called IVX-desparsified LASSO (XDlasso). XDlasso eliminates the shrinkage bias and the Stambaugh bias simultaneously and does not require prior knowledge about the identities of nonstationary and stationary regressors. We establish the asymptotic properties of XDlasso for hypothesis testing, and our theoretical findings are supported by Monte Carlo simulations. Applying our method to real-world applications from the FRED-MD database -- which includes a rich set of control variables -- we investigate two important empirical questions: (i) the predictability of U.S. stock returns based on the earnings-price ratio, and (ii) the predictability of U.S. inflation using the unemployment rate.
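For context, a minimal sketch of the standard desparsified (debiased) LASSO correction that XDlasso builds on; the notation is generic and the IVX instrumentation that removes the Stambaugh bias is not shown:

\[
\widehat{\beta}^{\,\mathrm{debiased}} = \widehat{\beta}^{\,\mathrm{lasso}} + \frac{1}{n}\,\widehat{\Theta}\, X'\big(y - X\widehat{\beta}^{\,\mathrm{lasso}}\big),
\]

where $\widehat{\Theta}$ is an approximate inverse of the Gram matrix $X'X/n$ (for example, from nodewise LASSO regressions). The correction term removes the first-order shrinkage bias so that each coordinate is asymptotically normal and the usual $t$-statistic applies; with persistent predictors, the remaining Stambaugh bias is what the IVX component of XDlasso is designed to eliminate.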
{"title":"On LASSO Inference for High Dimensional Predictive Regression","authors":"Zhan Gao, Ji Hyung Lee, Ziwei Mei, Zhentao Shi","doi":"arxiv-2409.10030","DOIUrl":"https://doi.org/arxiv-2409.10030","url":null,"abstract":"LASSO introduces shrinkage bias into estimated coefficients, which can\u0000adversely affect the desirable asymptotic normality and invalidate the standard\u0000inferential procedure based on the $t$-statistic. The desparsified LASSO has\u0000emerged as a well-known remedy for this issue. In the context of high\u0000dimensional predictive regression, the desparsified LASSO faces an additional\u0000challenge: the Stambaugh bias arising from nonstationary regressors. To restore\u0000the standard inferential procedure, we propose a novel estimator called\u0000IVX-desparsified LASSO (XDlasso). XDlasso eliminates the shrinkage bias and the\u0000Stambaugh bias simultaneously and does not require prior knowledge about the\u0000identities of nonstationary and stationary regressors. We establish the\u0000asymptotic properties of XDlasso for hypothesis testing, and our theoretical\u0000findings are supported by Monte Carlo simulations. Applying our method to\u0000real-world applications from the FRED-MD database -- which includes a rich set\u0000of control variables -- we investigate two important empirical questions: (i)\u0000the predictability of the U.S. stock returns based on the earnings-price ratio,\u0000and (ii) the predictability of the U.S. inflation using the unemployment rate.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One thread of empirical work in social science focuses on decomposing group differences in outcomes into unexplained components and components explained by observable factors. In this paper, we study gender wage decompositions, which require estimating the portion of the gender wage gap explained by the career histories of workers. Classical methods for decomposing the wage gap employ simple predictive models of wages which condition on a small set of simple summaries of labor history. The problem is that these predictive models cannot take advantage of the full complexity of a worker's history, and the resulting decompositions thus suffer from omitted variable bias (OVB), where covariates that are correlated with both gender and wages are not included in the model. Here we explore an alternative methodology for wage gap decomposition that employs powerful foundation models, such as large language models, as the predictive engine. Foundation models excel at making accurate predictions from complex, high-dimensional inputs. We use a custom-built foundation model, designed to predict wages from full labor histories, to decompose the gender wage gap. We prove that the way such models are usually trained might still lead to OVB, but develop fine-tuning algorithms that empirically mitigate this issue. Our model captures a richer representation of career history than simple models and predicts wages more accurately. In detail, we first provide a novel set of conditions under which an estimator of the wage gap based on a fine-tuned foundation model is $\sqrt{n}$-consistent. Building on the theory, we then propose methods for fine-tuning foundation models that minimize OVB. Using data from the Panel Study of Income Dynamics, we find that history explains more of the gender wage gap than standard econometric models can measure, and we identify elements of history that are important for reducing OVB.
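As background, a minimal sketch of the classical decomposition that the predictive model enters into (a generic Kitagawa-Oaxaca-Blinder-style split; the paper's exact estimand and notation are not reproduced here):

\[
\underbrace{\mathbb{E}[w \mid \mathrm{male}] - \mathbb{E}[w \mid \mathrm{female}]}_{\text{raw gap}}
= \underbrace{\mathbb{E}\big[\widehat{m}(h) \mid \mathrm{male}\big] - \mathbb{E}\big[\widehat{m}(h) \mid \mathrm{female}\big]}_{\text{explained by history } h}
+ \ \text{unexplained},
\]

where $\widehat{m}(h)$ is a predictive model of wages given labor history $h$. Replacing a linear model on a few history summaries with a foundation model over full histories changes only $\widehat{m}$, while the decomposition logic stays the same; any history-related covariate omitted from $\widehat{m}$ that correlates with both gender and wages shows up as OVB in the explained share.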
{"title":"Estimating Wage Disparities Using Foundation Models","authors":"Keyon Vafa, Susan Athey, David M. Blei","doi":"arxiv-2409.09894","DOIUrl":"https://doi.org/arxiv-2409.09894","url":null,"abstract":"One thread of empirical work in social science focuses on decomposing group\u0000differences in outcomes into unexplained components and components explained by\u0000observable factors. In this paper, we study gender wage decompositions, which\u0000require estimating the portion of the gender wage gap explained by career\u0000histories of workers. Classical methods for decomposing the wage gap employ\u0000simple predictive models of wages which condition on a small set of simple\u0000summaries of labor history. The problem is that these predictive models cannot\u0000take advantage of the full complexity of a worker's history, and the resulting\u0000decompositions thus suffer from omitted variable bias (OVB), where covariates\u0000that are correlated with both gender and wages are not included in the model.\u0000Here we explore an alternative methodology for wage gap decomposition that\u0000employs powerful foundation models, such as large language models, as the\u0000predictive engine. Foundation models excel at making accurate predictions from\u0000complex, high-dimensional inputs. We use a custom-built foundation model,\u0000designed to predict wages from full labor histories, to decompose the gender\u0000wage gap. We prove that the way such models are usually trained might still\u0000lead to OVB, but develop fine-tuning algorithms that empirically mitigate this\u0000issue. Our model captures a richer representation of career history than simple\u0000models and predicts wages more accurately. In detail, we first provide a novel\u0000set of conditions under which an estimator of the wage gap based on a\u0000fine-tuned foundation model is $sqrt{n}$-consistent. Building on the theory,\u0000we then propose methods for fine-tuning foundation models that minimize OVB.\u0000Using data from the Panel Study of Income Dynamics, we find that history\u0000explains more of the gender wage gap than standard econometric models can\u0000measure, and we identify elements of history that are important for reducing\u0000OVB.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"49 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a structural model-free methodology to analyze two types of macroeconomic counterfactuals related to policy path deviation: hypothetical trajectory and policy intervention. Our model-free approach is built on a structural vector moving-average (SVMA) model that relies solely on the identification of policy shocks, thereby eliminating the need to specify an entire structural model. Analytical solutions are derived for the counterfactual parameters, and statistical inference for these parameter estimates is provided using the Delta method. By utilizing external instruments, we introduce a projection-based method for the identification, estimation, and inference of these parameters. This approach connects our counterfactual analysis with the Local Projection literature. A simulation-based approach with a nonlinear model is provided to aid in addressing Lucas' critique. The innovative model-free methodology is applied in three counterfactual studies on U.S. monetary policy: (1) a historical scenario analysis for a hypothetical interest rate path in the post-pandemic era, (2) a future scenario analysis under either hawkish or dovish interest rate policy, and (3) an evaluation of the policy intervention effect of an oil price shock by zeroing out the systematic responses of the interest rate.
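For orientation, a minimal sketch of the SVMA representation on which such counterfactuals are built (the notation is assumed here; the paper's specific counterfactual parameters are not reproduced):

\[
y_t = \mu + \sum_{j=0}^{\infty} \Theta_j \,\varepsilon_{t-j},
\]

where $\varepsilon_t$ stacks the structural shocks and the columns of $\Theta_j$ are their impulse responses at horizon $j$. Only the column corresponding to the identified policy shock is needed: a hypothetical policy path or an intervention can be expressed as an alternative sequence of policy shocks fed through these responses, without specifying the rest of the structural model.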
{"title":"Structural counterfactual analysis in macroeconomics: theory and inference","authors":"Endong Wang","doi":"arxiv-2409.09577","DOIUrl":"https://doi.org/arxiv-2409.09577","url":null,"abstract":"We propose a structural model-free methodology to analyze two types of\u0000macroeconomic counterfactuals related to policy path deviation: hypothetical\u0000trajectory and policy intervention. Our model-free approach is built on a\u0000structural vector moving-average (SVMA) model that relies solely on the\u0000identification of policy shocks, thereby eliminating the need to specify an\u0000entire structural model. Analytical solutions are derived for the\u0000counterfactual parameters, and statistical inference for these parameter\u0000estimates is provided using the Delta method. By utilizing external\u0000instruments, we introduce a projection-based method for the identification,\u0000estimation, and inference of these parameters. This approach connects our\u0000counterfactual analysis with the Local Projection literature. A\u0000simulation-based approach with nonlinear model is provided to add in addressing\u0000Lucas' critique. The innovative model-free methodology is applied in three\u0000counterfactual studies on the U.S. monetary policy: (1) a historical scenario\u0000analysis for a hypothetical interest rate path in the post-pandemic era, (2) a\u0000future scenario analysis under either hawkish or dovish interest rate policy,\u0000and (3) an evaluation of the policy intervention effect of an oil price shock\u0000by zeroing out the systematic responses of the interest rate.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In social networks or spatial experiments, one unit's outcome often depends on another's treatment, a phenomenon called interference. Researchers are interested in not only the presence and magnitude of interference but also its pattern based on factors like distance, neighboring units, and connection strength. However, the non-random nature of these factors and complex correlations across units pose challenges for inference. This paper introduces the partial null randomization tests (PNRT) framework to address these issues. The proposed method is finite-sample valid and applicable with minimal network structure assumptions, utilizing randomization testing and pairwise comparisons. Unlike existing conditional randomization tests, PNRT avoids the need for conditioning events, making it more straightforward to implement. Simulations demonstrate the method's desirable power properties and its applicability to general interference scenarios.
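To make the randomization-testing ingredient concrete, here is a minimal sketch of a Fisher-style randomization test of the sharp null that treatment has no effect on any unit's outcome (neither direct nor spillover), using a spillover-sensitive statistic. This generic version is illustrative only: it is not the PNRT construction, which targets partial nulls (for example, spillovers only) via pairwise comparisons.

```python
import numpy as np

def randomization_test(y, z, adjacency, n_draws=2000, seed=0):
    """Generic Fisher randomization test with a spillover-sensitive statistic.

    Illustrative sketch only (not PNRT): under the sharp null of no treatment
    effects at all, outcomes are fixed, so re-randomizing z gives the exact
    reference distribution of the statistic.
    """
    rng = np.random.default_rng(seed)
    y, z = np.asarray(y, float), np.asarray(z, int)

    def stat(z_vec):
        treated_neighbors = adjacency @ z_vec            # exposure: number of treated neighbors
        return abs(np.cov(y, treated_neighbors)[0, 1])

    observed = stat(z)
    draws = np.array([stat(rng.permutation(z)) for _ in range(n_draws)])
    return (1 + np.sum(draws >= observed)) / (1 + n_draws)   # randomization p-value

# Toy example: a ring network of 20 units, half treated completely at random.
n = 20
A = np.zeros((n, n), int)
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1
rng = np.random.default_rng(1)
z = rng.permutation([1] * (n // 2) + [0] * (n // 2))
y = 0.5 * (A @ z) + rng.normal(size=n)                   # outcomes with a genuine spillover
print(randomization_test(y, z, A))
```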
{"title":"Unconditional Randomization Tests for Interference","authors":"Liang Zhong","doi":"arxiv-2409.09243","DOIUrl":"https://doi.org/arxiv-2409.09243","url":null,"abstract":"In social networks or spatial experiments, one unit's outcome often depends\u0000on another's treatment, a phenomenon called interference. Researchers are\u0000interested in not only the presence and magnitude of interference but also its\u0000pattern based on factors like distance, neighboring units, and connection\u0000strength. However, the non-random nature of these factors and complex\u0000correlations across units pose challenges for inference. This paper introduces\u0000the partial null randomization tests (PNRT) framework to address these issues.\u0000The proposed method is finite-sample valid and applicable with minimal network\u0000structure assumptions, utilizing randomization testing and pairwise\u0000comparisons. Unlike existing conditional randomization tests, PNRT avoids the\u0000need for conditioning events, making it more straightforward to implement.\u0000Simulations demonstrate the method's desirable power properties and its\u0000applicability to general interference scenarios.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"36 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cerqua Augusto, Di Stefano Roberta, Mattera Raffaele
Many treatments are non-randomly assigned, continuous in nature, and exhibit heterogeneous effects even at identical treatment intensities. Taken together, these characteristics pose significant challenges for identifying causal effects, as no existing estimator can provide an unbiased estimate of the average causal dose-response function. To address this gap, we introduce the Clustered Dose-Response Function (Cl-DRF), a novel estimator designed to discern the continuous causal relationships between treatment intensity and the dependent variable across different subgroups. This approach leverages both theoretical and data-driven sources of heterogeneity and operates under relaxed versions of the conditional independence and positivity assumptions, which are required to be met only within each identified subgroup. To demonstrate the capabilities of the Cl-DRF estimator, we present both simulation evidence and an empirical application examining the impact of European Cohesion funds on economic growth.
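As a rough illustration of the cluster-then-estimate idea only (a hypothetical sketch; the actual Cl-DRF estimator, its subgroup identification, and its causal adjustments are not reproduced here), one could split units into covariate-based subgroups and fit a separate dose-response curve within each:

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_dose_response(y, dose, covariates, n_clusters=3, degree=2, grid_size=25):
    """Hypothetical sketch: covariate-based subgroups, then a separate polynomial
    dose-response fit within each subgroup. This is NOT the Cl-DRF estimator; it
    only illustrates estimating dose-response curves subgroup by subgroup."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(covariates)
    grid = np.linspace(dose.min(), dose.max(), grid_size)
    curves = {}
    for k in range(n_clusters):
        mask = labels == k
        coefs = np.polyfit(dose[mask], y[mask], deg=degree)   # within-cluster fit
        curves[k] = np.polyval(coefs, grid)                   # estimated response on a dose grid
    return grid, curves

# Toy data: two covariates, a continuous dose, and subgroup-specific dose effects.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 2))
d = rng.uniform(0, 10, size=600)
y = (1 + (X[:, 0] > 0)) * d + rng.normal(size=600)            # heterogeneous slope
grid, curves = clustered_dose_response(y, d, X)
print({k: round(float(c[-1]), 1) for k, c in curves.items()})  # response at the top of the grid
```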
{"title":"The Clustered Dose-Response Function Estimator for continuous treatment with heterogeneous treatment effects","authors":"Cerqua Augusto, Di Stefano Roberta, Mattera Raffaele","doi":"arxiv-2409.08773","DOIUrl":"https://doi.org/arxiv-2409.08773","url":null,"abstract":"Many treatments are non-randomly assigned, continuous in nature, and exhibit\u0000heterogeneous effects even at identical treatment intensities. Taken together,\u0000these characteristics pose significant challenges for identifying causal\u0000effects, as no existing estimator can provide an unbiased estimate of the\u0000average causal dose-response function. To address this gap, we introduce the\u0000Clustered Dose-Response Function (Cl-DRF), a novel estimator designed to\u0000discern the continuous causal relationships between treatment intensity and the\u0000dependent variable across different subgroups. This approach leverages both\u0000theoretical and data-driven sources of heterogeneity and operates under relaxed\u0000versions of the conditional independence and positivity assumptions, which are\u0000required to be met only within each identified subgroup. To demonstrate the\u0000capabilities of the Cl-DRF estimator, we present both simulation evidence and\u0000an empirical application examining the impact of European Cohesion funds on\u0000economic growth.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"85 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An updated and extended meta-analysis confirms that the central estimate of the social cost of carbon is around $200/tC with a large, right-skewed uncertainty and trending up. The pure rate of time preference and the inverse of the elasticity of intertemporal substitution are key assumptions, the total impact of 2.5K warming less so. The social cost of carbon is much higher if climate change is assumed to affect economic growth rather than the level of output and welfare. The literature is dominated by a relatively small network of authors, based in a few countries. Publication and citation bias have pushed the social cost of carbon up.
{"title":"Trends and biases in the social cost of carbon","authors":"Richard S. J. Tol","doi":"arxiv-2409.08158","DOIUrl":"https://doi.org/arxiv-2409.08158","url":null,"abstract":"An updated and extended meta-analysis confirms that the central estimate of\u0000the social cost of carbon is around $200/tC with a large, right-skewed\u0000uncertainty and trending up. The pure rate of time preference and the inverse\u0000of the elasticity of intertemporal substitution are key assumptions, the total\u0000impact of 2.5K warming less so. The social cost of carbon is much higher if\u0000climate change is assumed to affect economic growth rather than the level of\u0000output and welfare. The literature is dominated by a relatively small network\u0000of authors, based in a few countries. Publication and citation bias have pushed\u0000the social cost of carbon up.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"1566 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}