Reply to Healy et al.'s Discussion of Auxiliary Tests
Anthony Fowler and Pablo Montagnes. SSRN working paper, October 19, 2015. https://doi.org/10.2139/ssrn.2681144

In a widely cited study, Healy, Malhotra, and Mo (henceforth HMM) report that college football games influence elections. We reassess this surprising finding and conclude that it is a false positive that arose by chance, despite the sound research design from which it emerged. HMM responded to our study and raised further objections in an online document. We addressed their primary objections in our own reply, which was limited to 500 words. In this document, we elaborate on our previous points and respond to the specific criticisms in HMM's online reply.

Bayesian Model Comparison for Time-Varying Parameter VARs with Stochastic Volatility
J. Chan and Eric Eisenstat. SSRN working paper, August 10, 2015. https://doi.org/10.2139/ssrn.2642091

We develop importance sampling methods for computing two popular Bayesian model comparison criteria, the marginal likelihood and the deviance information criterion (DIC), for TVP-VARs with stochastic volatility. The proposed estimators are based on the integrated likelihood and are substantially more reliable than alternatives. Specifically, the integrated likelihood is evaluated by integrating out the time-varying parameters analytically, while the log-volatilities are integrated out numerically via importance sampling. Using US and Australian data, we find overwhelming support for the TVP-VAR with stochastic volatility over a conventional constant-coefficients VAR with homoscedastic innovations. Most of the gains, however, appear to come from allowing for stochastic volatility rather than from time variation in the VAR coefficients or contemporaneous relationships. Indeed, according to both criteria, a constant-coefficients VAR with stochastic volatility receives support similar to that of the more general model with time-varying parameters.

Globalized Robust Optimization for Nonlinear Uncertain Inequalities
A. Ben-Tal, R. Brekelmans, D. Hertog, and J. Vial. SSRN working paper, June 15, 2015. https://doi.org/10.2139/ssrn.2618429

Robust optimization is a methodology that can be applied to problems affected by uncertainty in the problem's parameters. The classical robust counterpart (RC) of the problem requires the solution to be feasible for all uncertain parameter values in a so-called uncertainty set, and offers no guarantees for parameter values outside this set. The globalized robust counterpart (GRC) extends this idea by allowing controlled constraint violations in a larger uncertainty set, where the violations are controlled by the distance of the parameter to the original uncertainty set. We derive tractable GRCs that extend the initial GRCs in the literature: our GRC is applicable to nonlinear constraints instead of only linear or conic constraints, and it is more flexible with respect to both the uncertainty set and the distance measure used to control the constraint violations. In addition, we present a GRC approach that provides an extended trade-off overview between the objective value and several robustness measures.

Optimal Bonus-Malus Systems Using Generalized Additive Models for Location, Scale and Shape
G. Tzougas, Spyridon D. Vrontos, and Nikolaos Fragos. SSRN working paper, April 30, 2015. https://doi.org/10.2139/ssrn.2612640

This paper presents the design of optimal Bonus-Malus Systems (BMS) using generalized additive models for location, scale and shape (GAMLSS), extending the work of Tzougas, Frangos and Vrontos (2014). Specifically, for the frequency component we employ Negative Binomial Type I, Poisson-Inverse Gaussian, Sichel, and finite Poisson mixture GAMLSS models, while for the severity component we employ Pareto and finite Exponential mixture GAMLSS models. To achieve actuarial relevance, we take the Bayesian view and calculate premiums by updating the posterior mean and posterior probability of the policyholders' risk classes. Our analysis shows that more advanced models can provide a measure of uncertainty regarding the credibility updates of the claim frequency/severity of each specific risk class, and that the difference in the premiums they imply can act as a cushion against adverse experience. Finally, these "tailor-made" premiums are compared to those from the 'univariate' models without regression components.

Political Connections and Firm Value: Evidence from Close Gubernatorial Elections
Quoc-Anh Do, Y. Lee, and B. Nguyen. SSRN working paper, March 1, 2015. https://doi.org/10.2139/ssrn.2023191

Using a regression discontinuity design based on close gubernatorial elections in the U.S., we identify a significant and positive impact of the social networks connecting corporate directors and politicians on firm value. Firms connected to elected governors increase their value by 3.89%. Political connections are more valuable for firms connected to winning challengers; for smaller and financially dependent firms; in more corrupt states; in the states of connected firms' headquarters and operations; and in closer, smaller, and more active networks. Post-election, firms connected to the winner receive significantly more state procurement contracts and invest more than firms connected to the loser.

Stata, Fast and Slow: Why Running Many Small Regressions in a Large Dataset Takes So Long; and What to Do About It
P. Geertsema. SSRN working paper, April 11, 2014. https://doi.org/10.2139/ssrn.2423171

Stata is fast, often very fast. However, when performing regressions on small sub-samples within a large host dataset (more than 1 million observations), performance can deteriorate by many orders of magnitude. For example, an OLS regression on a sub-sample of 100 consecutive observations takes 3.6 seconds in a host dataset with 1 billion observations, but only 3.8 milliseconds in a host dataset with 1,000 observations. The difference is due to the mechanism regress uses to mark estimation samples. This deterioration has practical implications in finance research, where many variables of interest are themselves estimated via millions of individual OLS regressions within large panel datasets. I suggest an approach that circumvents the issue using a simple Mata implementation of regress, which I call fastreg. As a test, I estimate daily Fama and French 3-factor betas for individual stocks in the CRSP database from 1923 to 2013 using a 250-day rolling window. In this setting fastreg is approximately 367 times faster than regress. The code for the fastreg ado-file is included in the Appendix and is open-source, licensed under the GNU GPL.

Sieve Wald and QLR Inferences on Semi/Nonparametric Conditional Moment Models
Xiaohong Chen and Demian Pouzo. SSRN working paper, April 3, 2014. https://doi.org/10.2139/ssrn.2518456

This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) models as special cases. These models are often ill-posed, and hence it is difficult to verify whether a (possibly nonlinear) functional is root-n estimable. We provide computationally simple, unified inference procedures that are asymptotically valid whether or not a functional is root-n estimable. We establish the following new results: (1) the asymptotic normality of a plug-in penalized sieve minimum distance (PSMD) estimator of a (possibly nonlinear) functional; (2) the consistency of simple sieve variance estimators for the plug-in PSMD estimator, and hence the asymptotic chi-square distribution of the sieve Wald statistic; (3) the asymptotic chi-square distribution of an optimally weighted sieve quasi likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotically tight distribution of a non-optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) asymptotic properties of sieve Wald and QLR statistics for functionals of increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented.

Stochastic Calculus of Standard Deviations: An Introduction
A. Amin. SSRN working paper, October 9, 2013. https://doi.org/10.2139/ssrn.2337982

Every density produced by an SDE that employs normal random variables for its simulation is either a linear or a non-linear transformation of those normal random variables. We find this transformation for a general SDE by taking into account how the variance evolves in that particular SDE. We map the domain of the normal distribution onto the domain of the SDE using the algorithm given in the paper, which is based on how the variance grows in the SDE. We find the Jacobian of this transformation with respect to the normal density and employ the change-of-variables formula for densities to obtain the density of the simulated SDE. Briefly, in our method the domain of the normal distribution is divided into equal subdivisions called standard deviation (SD) fractions, which expand or contract as the variance increases or decreases so that the probability mass within each SD fraction remains constant. Usually 300-500 SD fractions are enough for the desired accuracy. Within each SD fraction, stochastic integrals are evolved/mapped from the normal distribution to the distribution of the SDE based on the change of local variance, independently of the other SD fractions. The work for each step is roughly the same as that of one Monte Carlo step, but since there are only a few hundred SD fractions and they are independent of each other, this technique is much faster than Monte Carlo simulation. Because it is so fast, we are confident it will become the method of choice for evolving the distributions of SDEs, compared with Monte Carlo simulation and partial differential equations.

Semi-Parametric Inference in Dynamic Binary Choice Models
Andriy Norets and Xun Tang. SSRN working paper, October 7, 2013. https://doi.org/10.2139/ssrn.2340003

We introduce an approach to semi-parametric inference in dynamic binary choice models that does not impose distributional assumptions on the state variables unobserved by the econometrician. The proposed framework combines Bayesian inference with partial identification results. The method is applicable to models with a finite space of observed states. We demonstrate the method on Rust's model of bus engine replacement. The estimation experiments show that parametric assumptions about the distribution of the unobserved states can have a considerable effect on the estimates of per-period payoffs. At the same time, the effect of these assumptions on counterfactual conditional choice probabilities can be small for most observed states.