Recent years have seen a resurgence of interest in marketing mix models (MMMs), which are aggregate-level models of marketing effectiveness. Often these models incorporate nonlinear effects, and either implicitly or explicitly assume that marketing effectiveness varies over time. In this paper, we show that nonlinear and time-varying effects are often not identifiable from standard marketing mix data: while certain data patterns may be suggestive of nonlinear effects, such patterns may also emerge under simpler models that incorporate dynamics in marketing effectiveness. This lack of identification is problematic because nonlinearities and dynamics suggest fundamentally different optimal marketing allocations. We examine this identification issue through theory and simulations, wherein we explore the exact conditions under which conflation between the two types of models is likely to occur. In doing so, we introduce a flexible Bayesian nonparametric model that allows us to both flexibly simulate and estimate different data-generating processes. We show that conflating the two types of effects is especially likely in the presence of autocorrelated marketing variables, which are common in practice, especially given the widespread use of stock variables to capture long-run effects of advertising. We illustrate these ideas through numerous empirical applications to real-world marketing mix data, showing the prevalence of the conflation issue in practice. Finally, we show how marketers can avoid this conflation by designing experiments that strategically manipulate spending in ways that pin down model form.
{"title":"Your MMM is Broken: Identification of Nonlinear and Time-varying Effects in Marketing Mix Models","authors":"Ryan Dew, Nicolas Padilla, Anya Shchetkina","doi":"arxiv-2408.07678","DOIUrl":"https://doi.org/arxiv-2408.07678","url":null,"abstract":"Recent years have seen a resurgence in interest in marketing mix models\u0000(MMMs), which are aggregate-level models of marketing effectiveness. Often\u0000these models incorporate nonlinear effects, and either implicitly or explicitly\u0000assume that marketing effectiveness varies over time. In this paper, we show\u0000that nonlinear and time-varying effects are often not identifiable from\u0000standard marketing mix data: while certain data patterns may be suggestive of\u0000nonlinear effects, such patterns may also emerge under simpler models that\u0000incorporate dynamics in marketing effectiveness. This lack of identification is\u0000problematic because nonlinearities and dynamics suggest fundamentally different\u0000optimal marketing allocations. We examine this identification issue through\u0000theory and simulations, wherein we explore the exact conditions under which\u0000conflation between the two types of models is likely to occur. In doing so, we\u0000introduce a flexible Bayesian nonparametric model that allows us to both\u0000flexibly simulate and estimate different data-generating processes. We show\u0000that conflating the two types of effects is especially likely in the presence\u0000of autocorrelated marketing variables, which are common in practice, especially\u0000given the widespread use of stock variables to capture long-run effects of\u0000advertising. We illustrate these ideas through numerous empirical applications\u0000to real-world marketing mix data, showing the prevalence of the conflation\u0000issue in practice. Finally, we show how marketers can avoid this conflation, by\u0000designing experiments that strategically manipulate spending in ways that pin\u0000down model form.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantile and Distribution Treatment Effects on the Treated (QTT/DTT) for non-continuous outcomes are either not identified or inference thereon is infeasible using existing methods. By introducing functional index parallel trends and no anticipation assumptions, this paper identifies and provides uniform inference procedures for QTT/DTT. The inference procedure applies under both the canonical two-group and staggered treatment designs with balanced panels, unbalanced panels, or repeated cross-sections. Monte Carlo experiments demonstrate the proposed method's robust and competitive performance, while an empirical application illustrates its practical utility.
{"title":"Quantile and Distribution Treatment Effects on the Treated with Possibly Non-Continuous Outcomes","authors":"Nelly K. Djuazon, Emmanuel Selorm Tsyawo","doi":"arxiv-2408.07842","DOIUrl":"https://doi.org/arxiv-2408.07842","url":null,"abstract":"Quantile and Distribution Treatment effects on the Treated (QTT/DTT) for\u0000non-continuous outcomes are either not identified or inference thereon is\u0000infeasible using existing methods. By introducing functional index parallel\u0000trends and no anticipation assumptions, this paper identifies and provides\u0000uniform inference procedures for QTT/DTT. The inference procedure applies under\u0000both the canonical two-group and staggered treatment designs with balanced\u0000panels, unbalanced panels, or repeated cross-sections. Monte Carlo experiments\u0000demonstrate the proposed method's robust and competitive performance, while an\u0000empirical application illustrates its practical utility.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wind and solar electricity generation account for 14% of total electricity generation in the United States and are expected to continue to grow in the next decades. In low-carbon systems, generation from renewable energy sources displaces conventional fossil fuel power plants, resulting in lower system-level emissions and emissions intensity. However, we find that intermittent generation from renewables changes the way conventional thermal power plants operate, and that the displacement of generation is not one-to-one as expected. Our work provides a method that allows policy and decision makers to continue to track the effect of additional renewable capacity and the resulting thermal power plant operational responses.
{"title":"What are the real implications for $CO_2$ as generation from renewables increases?","authors":"Dhruv Suri, Jacques de Chalendar, Ines Azevedo","doi":"arxiv-2408.05209","DOIUrl":"https://doi.org/arxiv-2408.05209","url":null,"abstract":"Wind and solar electricity generation account for 14% of total electricity\u0000generation in the United States and are expected to continue to grow in the\u0000next decades. In low carbon systems, generation from renewable energy sources\u0000displaces conventional fossil fuel power plants resulting in lower system-level\u0000emissions and emissions intensity. However, we find that intermittent\u0000generation from renewables changes the way conventional thermal power plants\u0000operate, and that the displacement of generation is not 1 to 1 as expected. Our\u0000work provides a method that allows policy and decision makers to continue to\u0000track the effect of additional renewable capacity and the resulting thermal\u0000power plant operational responses.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141947516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Difference-in-differences (DiD) is the most popular observational causal inference method in health policy, employed to evaluate the real-world impact of policies and programs. To estimate treatment effects, DiD relies on the "parallel trends assumption", that on average treatment and comparison groups would have had parallel trajectories in the absence of an intervention. Historically, DiD has been considered broadly applicable and straightforward to implement, but recent years have seen rapid advancements in DiD methods. This paper reviews and synthesizes these innovations for medical and health policy researchers. We focus on four topics: (1) assessing the parallel trends assumption in health policy contexts; (2) relaxing the parallel trends assumption when appropriate; (3) employing estimators to account for staggered treatment timing; and (4) conducting robust inference for analyses in which normal-based clustered standard errors are inappropriate. For each, we explain challenges and common pitfalls in traditional DiD and modern methods available to address these issues.
{"title":"Difference-in-Differences for Health Policy and Practice: A Review of Modern Methods","authors":"Shuo Feng, Ishani Ganguli, Youjin Lee, John Poe, Andrew Ryan, Alyssa Bilinski","doi":"arxiv-2408.04617","DOIUrl":"https://doi.org/arxiv-2408.04617","url":null,"abstract":"Difference-in-differences (DiD) is the most popular observational causal\u0000inference method in health policy, employed to evaluate the real-world impact\u0000of policies and programs. To estimate treatment effects, DiD relies on the\u0000\"parallel trends assumption\", that on average treatment and comparison groups\u0000would have had parallel trajectories in the absence of an intervention.\u0000Historically, DiD has been considered broadly applicable and straightforward to\u0000implement, but recent years have seen rapid advancements in DiD methods. This\u0000paper reviews and synthesizes these innovations for medical and health policy\u0000researchers. We focus on four topics: (1) assessing the parallel trends\u0000assumption in health policy contexts; (2) relaxing the parallel trends\u0000assumption when appropriate; (3) employing estimators to account for staggered\u0000treatment timing; and (4) conducting robust inference for analyses in which\u0000normal-based clustered standard errors are inappropriate. For each, we explain\u0000challenges and common pitfalls in traditional DiD and modern methods available\u0000to address these issues.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141947554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The UN Office of Outer Space Affairs identifies synergy of space development activities and international cooperation through data and infrastructure sharing in their Sustainable Development Goal 17 (SDG17). Current multilateral space exploration paradigms, however, are divided between the Artemis and the Roscosmos-CNSA programs to return to the moon and establish permanent human settlements. As space agencies work to expand human presence in space, economic resource consolidation in pursuit of technologically ambitious space expeditions is the most sensible path to accomplish SDG17. This paper compiles a budget dataset for the top five federally-funded space agencies: CNSA, ESA, JAXA, NASA, and Roscosmos. Using time-series econometric analysis methods in STATA, this work analyzes each agency's economic contributions toward space exploration. The dataset results are used to propose a multinational space mission, Vela, for the development of an orbiting space station around Mars in the late 2030s. Distribution of economic resources and technological capabilities by the respective space programs are proposed to ensure programmatic redundancy and increase the odds of success on the given timeline.
{"title":"Vela: A Data-Driven Proposal for Joint Collaboration in Space Exploration","authors":"Holly M. Dinkel, Jason K. Cornelius","doi":"arxiv-2408.04730","DOIUrl":"https://doi.org/arxiv-2408.04730","url":null,"abstract":"The UN Office of Outer Space Affairs identifies synergy of space development\u0000activities and international cooperation through data and infrastructure\u0000sharing in their Sustainable Development Goal 17 (SDG17). Current multilateral\u0000space exploration paradigms, however, are divided between the Artemis and the\u0000Roscosmos-CNSA programs to return to the moon and establish permanent human\u0000settlements. As space agencies work to expand human presence in space, economic\u0000resource consolidation in pursuit of technologically ambitious space\u0000expeditions is the most sensible path to accomplish SDG17. This paper compiles\u0000a budget dataset for the top five federally-funded space agencies: CNSA, ESA,\u0000JAXA, NASA, and Roscosmos. Using time-series econometric anslysis methods in\u0000STATA, this work analyzes each agency's economic contributions toward space\u0000exploration. The dataset results are used to propose a multinational space\u0000mission, Vela, for the development of an orbiting space station around Mars in\u0000the late 2030s. Distribution of economic resources and technological\u0000capabilities by the respective space programs are proposed to ensure\u0000programmatic redundancy and increase the odds of success on the given timeline.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141947518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dyadic network formation models have wide applicability in economic research, yet are difficult to estimate in the presence of individual specific effects and in the absence of distributional assumptions regarding the model noise component. The availability of (continuously distributed) individual or link characteristics generally facilitates estimation. Yet, while data on social networks has recently become more abundant, the characteristics of the entities involved in the link may not be measured. Adapting the procedure of \citet{KS}, I propose to use network data alone in a semiparametric estimation of the individual fixed effect coefficients, which carry the interpretation of the individual relative popularity. This makes it possible to anticipate how a newly arriving individual will connect in a pre-existing group. The estimator, chosen for its fast convergence, fails to implement the monotonicity assumption regarding the model noise component, thereby potentially reversing the order of the fixed effect coefficients. This and other numerical issues can be conveniently tackled by my novel, data-driven way of normalising the fixed effects, which proves to outperform a conventional standardisation in many cases. I demonstrate that the normalised coefficients converge both at the same rate and to the same limiting distribution as if the true error distribution were known. The cost of semiparametric estimation is thus purely computational, while the potential benefits are large whenever the errors have a strongly convex or strongly concave distribution.
{"title":"Semiparametric Estimation of Individual Coefficients in a Dyadic Link Formation Model Lacking Observable Characteristics","authors":"L. Sanna Stephan","doi":"arxiv-2408.04552","DOIUrl":"https://doi.org/arxiv-2408.04552","url":null,"abstract":"Dyadic network formation models have wide applicability in economic research,\u0000yet are difficult to estimate in the presence of individual specific effects\u0000and in the absence of distributional assumptions regarding the model noise\u0000component. The availability of (continuously distributed) individual or link\u0000characteristics generally facilitates estimation. Yet, while data on social\u0000networks has recently become more abundant, the characteristics of the entities\u0000involved in the link may not be measured. Adapting the procedure of citet{KS},\u0000I propose to use network data alone in a semiparametric estimation of the\u0000individual fixed effect coefficients, which carry the interpretation of the\u0000individual relative popularity. This entails the possibility to anticipate how\u0000a new-coming individual will connect in a pre-existing group. The estimator,\u0000needed for its fast convergence, fails to implement the monotonicity assumption\u0000regarding the model noise component, thereby potentially reversing the order if\u0000the fixed effect coefficients. This and other numerical issues can be\u0000conveniently tackled by my novel, data-driven way of normalising the fixed\u0000effects, which proves to outperform a conventional standardisation in many\u0000cases. I demonstrate that the normalised coefficients converge both at the same\u0000rate and to the same limiting distribution as if the true error distribution\u0000was known. The cost of semiparametric estimation is thus purely computational,\u0000while the potential benefits are large whenever the errors have a strongly\u0000convex or strongly concave distribution.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"63 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141947519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper addresses the robust estimation of linear regression models in the presence of potentially endogenous outliers. Through Monte Carlo simulations, we demonstrate that existing $L_1$-regularized estimation methods, including the Huber estimator and the least absolute deviation (LAD) estimator, exhibit significant bias when outliers are endogenous. Motivated by this finding, we investigate $L_0$-regularized estimation methods. We propose systematic heuristic algorithms, notably an iterative hard-thresholding algorithm and a local combinatorial search refinement, to solve the combinatorial optimization problem of the $L_0$-regularized estimation efficiently. Our Monte Carlo simulations yield two key results: (i) The local combinatorial search algorithm substantially improves solution quality compared to the initial projection-based hard-thresholding algorithm while offering greater computational efficiency than directly solving the mixed integer optimization problem. (ii) The $L_0$-regularized estimator demonstrates superior performance in terms of bias reduction, estimation accuracy, and out-of-sample prediction errors compared to $L_1$-regularized alternatives. We illustrate the practical value of our method through an empirical application to stock return forecasting.
{"title":"Robust Estimation of Regression Models with Potentially Endogenous Outliers via a Modern Optimization Lens","authors":"Zhan Gao, Hyungsik Roger Moon","doi":"arxiv-2408.03930","DOIUrl":"https://doi.org/arxiv-2408.03930","url":null,"abstract":"This paper addresses the robust estimation of linear regression models in the\u0000presence of potentially endogenous outliers. Through Monte Carlo simulations,\u0000we demonstrate that existing $L_1$-regularized estimation methods, including\u0000the Huber estimator and the least absolute deviation (LAD) estimator, exhibit\u0000significant bias when outliers are endogenous. Motivated by this finding, we\u0000investigate $L_0$-regularized estimation methods. We propose systematic\u0000heuristic algorithms, notably an iterative hard-thresholding algorithm and a\u0000local combinatorial search refinement, to solve the combinatorial optimization\u0000problem of the (L_0)-regularized estimation efficiently. Our Monte Carlo\u0000simulations yield two key results: (i) The local combinatorial search algorithm\u0000substantially improves solution quality compared to the initial\u0000projection-based hard-thresholding algorithm while offering greater\u0000computational efficiency than directly solving the mixed integer optimization\u0000problem. (ii) The $L_0$-regularized estimator demonstrates superior performance\u0000in terms of bias reduction, estimation accuracy, and out-of-sample prediction\u0000errors compared to $L_1$-regularized alternatives. We illustrate the practical\u0000value of our method through an empirical application to stock return\u0000forecasting.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141947556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper considers a robust identification of causal parameters in a randomized experiment setting with noncompliance where the standard local average treatment effect assumptions could be violated. Following Li, Kédagni, and Mourifié (2024), we propose a misspecification robust bound for a real-valued vector of various causal parameters. We discuss identification under two sets of weaker assumptions: random assignment and exclusion restriction (without monotonicity), and random assignment and monotonicity (without exclusion restriction). We introduce two causal parameters: the local average treatment-controlled direct effect (LATCDE), and the local average instrument-controlled direct effect (LAICDE). Under the random assignment and monotonicity assumptions, we derive sharp bounds on the local average treatment-controlled direct effects for the always-takers and never-takers, respectively, and the total average controlled direct effect for the compliers. Additionally, we show that the intent-to-treat effect can be expressed as a convex weighted average of these three effects. Finally, we apply our method to the proximity-to-college instrument and find that growing up near a four-year college increases the wage of never-takers (who represent more than 70% of the population) by a range of 4.15% to 27.07%.
{"title":"Robust Identification in Randomized Experiments with Noncompliance","authors":"Yi Cui, Désiré Kédagni, Huan Wu","doi":"arxiv-2408.03530","DOIUrl":"https://doi.org/arxiv-2408.03530","url":null,"abstract":"This paper considers a robust identification of causal parameters in a\u0000randomized experiment setting with noncompliance where the standard local\u0000average treatment effect assumptions could be violated. Following Li,\u0000K'edagni, and Mourifi'e (2024), we propose a misspecification robust bound\u0000for a real-valued vector of various causal parameters. We discuss\u0000identification under two sets of weaker assumptions: random assignment and\u0000exclusion restriction (without monotonicity), and random assignment and\u0000monotonicity (without exclusion restriction). We introduce two causal\u0000parameters: the local average treatment-controlled direct effect (LATCDE), and\u0000the local average instrument-controlled direct effect (LAICDE). Under the\u0000random assignment and monotonicity assumptions, we derive sharp bounds on the\u0000local average treatment-controlled direct effects for the always-takers and\u0000never-takers, respectively, and the total average controlled direct effect for\u0000the compliers. Additionally, we show that the intent-to-treat effect can be\u0000expressed as a convex weighted average of these three effects. Finally, we\u0000apply our method on the proximity to college instrument and find that growing\u0000up near a four-year college increases the wage of never-takers (who represent\u0000more than 70% of the population) by a range of 4.15% to 27.07%.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141947555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asymmetric causality tests are increasingly gaining popularity in different scientific fields. This approach corresponds better to reality since logical reasons behind asymmetric behavior exist and need to be considered in empirical investigations. Hatemi-J (2012) introduced the asymmetric causality tests via partial cumulative sums for positive and negative components of the variables operating within the vector autoregressive (VAR) model. However, since the residuals across the equations in the VAR model are not independent, the ordinary least squares method for estimating the parameters is not efficient. Additionally, asymmetric causality tests mean having different causal parameters (i.e., for positive or negative components); thus, it is crucial to assess not only whether these causal parameters are individually statistically significant, but also whether their difference is statistically significant. Consequently, tests of difference between estimated causal parameters should explicitly be conducted, which are neglected in the existing literature. The purpose of the current paper is to deal with these issues explicitly. An application is provided, and ten different hypotheses pertinent to the asymmetric causal interaction between the two largest financial markets worldwide are efficiently tested within a multivariate setting.
{"title":"Efficient Asymmetric Causality Tests","authors":"Abdulnasser Hatemi-J","doi":"arxiv-2408.03137","DOIUrl":"https://doi.org/arxiv-2408.03137","url":null,"abstract":"Asymmetric causality tests are increasingly gaining popularity in different\u0000scientific fields. This approach corresponds better to reality since logical\u0000reasons behind asymmetric behavior exist and need to be considered in empirical\u0000investigations. Hatemi-J (2012) introduced the asymmetric causality tests via\u0000partial cumulative sums for positive and negative components of the variables\u0000operating within the vector autoregressive (VAR) model. However, since the the\u0000residuals across the equations in the VAR model are not independent, the\u0000ordinary least squares method for estimating the parameters is not efficient.\u0000Additionally, asymmetric causality tests mean having different causal\u0000parameters (i.e., for positive or negative components), thus, it is crucial to\u0000assess not only if these causal parameters are individually statistically\u0000significant, but also if their difference is statistically significant.\u0000Consequently, tests of difference between estimated causal parameters should\u0000explicitly be conducted, which are neglected in the existing literature. The\u0000purpose of the current paper is to deal with these issues explicitly. An\u0000application is provided, and ten different hypotheses pertinent to the\u0000asymmetric causal interaction between two largest financial markets worldwide\u0000are efficiently tested within a multivariate setting.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"90 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141947517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The association between log-price increments of exchange-traded equities, as measured by their spot correlation estimated from high-frequency data, exhibits a pronounced upward-sloping and almost piecewise linear relationship at the intraday horizon. Correlation is notably lower, on average less positive, in the morning than in the afternoon. We develop a nonparametric testing procedure to detect such deterministic variation in a correlation process. The test statistic has a known distribution under the null hypothesis, whereas it diverges under the alternative. It is robust against stochastic correlation. We run a Monte Carlo simulation to assess the finite sample properties of the test statistic, which are close to the large sample predictions, even for small sample sizes and realistic levels of diurnal variation. In an application, we implement the test on a monthly basis for a high-frequency dataset covering the stock market over an extended period. The test leads to rejection of the null most of the time. This suggests diurnal variation in the correlation process is a nontrivial effect in practice.
{"title":"A nonparametric test for diurnal variation in spot correlation processes","authors":"Kim Christensen, Ulrich Hounyo, Zhi Liu","doi":"arxiv-2408.02757","DOIUrl":"https://doi.org/arxiv-2408.02757","url":null,"abstract":"The association between log-price increments of exchange-traded equities, as\u0000measured by their spot correlation estimated from high-frequency data, exhibits\u0000a pronounced upward-sloping and almost piecewise linear relationship at the\u0000intraday horizon. There is notably lower-on average less positive-correlation\u0000in the morning than in the afternoon. We develop a nonparametric testing\u0000procedure to detect such deterministic variation in a correlation process. The\u0000test statistic has a known distribution under the null hypothesis, whereas it\u0000diverges under the alternative. It is robust against stochastic correlation. We\u0000run a Monte Carlo simulation to discover the finite sample properties of the\u0000test statistic, which are close to the large sample predictions, even for small\u0000sample sizes and realistic levels of diurnal variation. In an application, we\u0000implement the test on a monthly basis for a high-frequency dataset covering the\u0000stock market over an extended period. The test leads to rejection of the null\u0000most of the time. This suggests diurnal variation in the correlation process is\u0000a nontrivial effect in practice.","PeriodicalId":501293,"journal":{"name":"arXiv - ECON - Econometrics","volume":"90 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141947448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}