Stable Group Activity Selection from Ordinal Preferences
Andreas Darmann. PSN: Econometrics, 2016-12-01. doi:10.2139/ssrn.2906360

In several situations agents need to be assigned to activities on the basis of their preferences, where each agent can take part in at most one activity. Often, an agent's preferences depend not only on the activity itself but also on the number of participants in that activity. In the setting we consider, the agents hence have preferences over pairs (activity, group size), including the possibility of doing nothing; in this work, these preferences are assumed to be strict orders. The task is to find stable assignments of agents to activities for different concepts of stability, such as Nash or core stability, and Pareto optimal assignments, respectively. Particular focus is placed on two natural special cases of agents' preferences inherent in the considered model.
A One-Covariate at a Time, Multiple Testing Approach to Variable Selection in High-Dimensional Linear Regression Models
A. Chudik, G. Kapetanios, M. Pesaran. PSN: Econometrics, 2016-11-01. doi:10.24149/GWP290

Model specification and selection are recurring themes in econometric analysis. Both topics become considerably more complicated with large-dimensional data sets, where the set of specification possibilities can become quite large. In the context of linear regression models, penalised regression has become the de facto benchmark technique for trading off parsimony and fit when the number of possible covariates is large, often much larger than the number of available observations. However, issues such as the choice of penalty function and of the tuning parameters associated with penalised regressions remain contentious. In this paper, we provide an alternative approach that considers the statistical significance of the individual covariates one at a time, whilst taking full account of the multiple testing nature of the inferential problem involved. We refer to the proposed method as the One Covariate at a Time Multiple Testing (OCMT) procedure. OCMT provides an alternative to penalised regression methods: it is based on statistical inference and is therefore easier to interpret and to relate to classical statistical analysis; it allows working under more general assumptions; it is faster; and it performs well in small samples for almost all of the sets of experiments considered in this paper. We provide extensive theoretical and Monte Carlo results in support of adding the proposed OCMT model selection procedure to the toolbox of applied researchers. The usefulness of OCMT is also illustrated by an empirical application to forecasting U.S. output growth and inflation.
Exclusion Bias in the Estimation of Peer Effects
Bet Caeyers, M. Fafchamps. PSN: Econometrics, 2016-08-01. doi:10.3386/W22565 (NBER Working Paper 22565)

We formalize a source of bias in peer effect estimation that has been noted (Guryan et al., 2009) but not explored, arising because people cannot be their own peer. For linear-in-means models with non-overlapping peer groups, we derive an exact formula for the bias in a test of random peer assignment. We demonstrate that, when estimating endogenous peer effects, the negative exclusion bias dominates the positive reflection bias when the true peer effect is small. We discuss conditions under which exclusion bias is aggravated by adding cluster fixed effects. By imposing restrictions on the error term, we show how to consistently estimate, without the need for instruments, all the structural parameters of an endogenous peer effect model with an arbitrary peer-group or network structure. We show that, under certain conditions, 2SLS does not suffer from exclusion bias. This may explain the counter-intuitive observation that OLS estimates of peer effects are often larger than their 2SLS counterparts.
Mediation Analysis in Partial Least Squares Path Modeling: Helping Researchers Discuss More Sophisticated Models
Christian Nitzl, J. Roldán, Gabriel Cepeda. PSN: Econometrics, 2016-06-03. doi:10.1108/IMDS-07-2015-0302

Indirect or mediated effects constitute a type of relationship between constructs that often occurs in partial least squares (PLS) path modeling. Over the past few years, the methods for testing mediation have become more sophisticated. However, many researchers continue to use outdated methods to test mediating effects in PLS, which can lead to erroneous results. One reason for the use of outdated methods, or for their not being used at all, is that no systematic tutorials on PLS exist that draw on the newest statistical findings. This paper aims to address these issues. The study illustrates the state-of-the-art use of mediation analysis in the context of PLS structural equation modeling (PLS-SEM). It facilitates the adoption of modern procedures in PLS-SEM by challenging the conventional approach to mediation analysis and providing more accurate alternatives. In addition, the authors propose a decision tree and a classification of mediation effects. The recommended approach offers a wide range of testing options (e.g. multiple mediators) that go beyond simple mediation analysis, helping researchers discuss their studies in a more accurate way.
Do Criminal Politicians Affect Firm Investment and Value? Evidence from a Regression Discontinuity Approach
Vikram Nanda, Ankur Pareek. PSN: Econometrics, 2016-05-11. doi:10.2139/ssrn.2782580

We provide evidence on the effects of criminal or corrupt politicians on firm value and investment. Using a regression discontinuity approach, we focus on close elections to establish a causal link between the election of criminal politicians and firms' value and investment decisions. We utilize unique datasets on the criminal background of Indian politicians and details on investment projects in their districts. The election of criminal politicians leads to lower election-period and project-announcement stock market returns for local private-sector firms. There is a sharp decline in total investment by private-sector firms in criminal-politician districts; interestingly, the decline in private-sector investment is offset by a roughly equivalent increase in investment by state-owned firms. Corrupt politicians are less destructive when overall corruption in the state is lower and when they belong to a political party that is in power at the state or national level.
T-Test with Likert Scale Variables
P. Vieira. PSN: Econometrics, 2016-04-25. doi:10.2139/ssrn.2770035

Although the Likert scale is numeric, it is intrinsically ordinal (1 = strongly disagree to 5 = strongly agree). Even so, for convenience it is common to use a t-test to evaluate whether two groups differ significantly (a test of population means with unknown variance). In this paper I investigate whether, for a survey that uses a Likert scale, it is adequate to use a t-test. I use bootstrapping, first "imposing" that the population satisfies the null hypothesis. I conclude that the t-test is valid for comparing groups even when the variable is measured on a Likert scale and the populations do not have a normal distribution.
Estimation of Spatial Sample Selection Models: A Partial Maximum Likelihood Approach
R. Rabovic, P. Čížek. PSN: Econometrics, 2016-03-31. doi:10.2139/ssrn.2756508

To analyze data obtained by non-random sampling in the presence of cross-sectional dependence, we consider estimation of a sample selection model with a spatial lag of a latent dependent variable, or a spatial error, in both the selection and outcome equations. Since there is no estimation framework for the spatial lag model, and the existing estimators for the spatial error model are either computationally demanding or have poor small-sample properties, we suggest estimating these models by the partial maximum likelihood estimator, following the framework of Wang et al. (2013) for a spatial-error probit model. We show that the estimator is consistent and asymptotically normally distributed. To facilitate easy and precise estimation of the variance matrix without requiring spatial stationarity of the errors, we propose a parametric bootstrap method. Monte Carlo simulations demonstrate the advantages of the estimators.
Methods for Nonparametric and Semiparametric Regressions with Endogeneity: A Gentle Guide
Xiaohong Chen, Y. Qiu. PSN: Econometrics, 2016-03-29. doi:10.2139/ssrn.2756199

This article reviews recent advances in estimation and inference for nonparametric and semiparametric models with endogeneity. It first describes methods of sieves and penalization for estimating unknown functions identified via conditional moment restrictions. Examples include nonparametric instrumental variables (NPIV) regression, nonparametric quantile IV regression, and many more semi/nonparametric structural models. Asymptotic properties of the sieve estimators and the sieve Wald and quasi-likelihood ratio hypothesis tests of functionals with nonparametric endogeneity are presented. For sieve NPIV estimation, the rate-adaptive data-driven choices of sieve regularization parameters and the sieve score bootstrap uniform confidence bands are described. Finally, simple sieve variance estimation and overidentification tests for the semiparametric two-step generalized method of moments are reviewed. Monte Carlo examples are also included.
Non-Stationary Dynamic Factor Models for Large Datasets
M. Barigozzi, Marco Lippi, Matteo Luciani. PSN: Econometrics, 2016-02-07. doi:10.2139/ssrn.2741739

We study a large-dimensional non-stationary dynamic factor model where (1) the factors F_t are I(1) and singular, that is, F_t has dimension r and is driven by q dynamic shocks with q less than r, and (2) the idiosyncratic components are either I(0) or I(1). Under these assumptions the factors F_t are cointegrated and modeled by a singular error correction model (ECM). We provide conditions for consistent estimation, as both the cross-sectional size n and the time dimension T go to infinity, of the factors, the loadings, the shocks, the ECM coefficients, and therefore the impulse response functions. Finally, the numerical properties of our estimator are explored by means of a Monte Carlo exercise and a real-data application, in which we study the effects of monetary policy and supply shocks on the US economy.
Li & Racine (2004) have proposed a nonparametric kernel-based method for smoothing in the presence of categorical predictors as an alternative to the classical nonparametric approach that splits the data into subsets (‘cells’) defined by the unique combinations of the categorical predictors. Li, Simar & Zelenyuk (2014) present an alternative to Li & Racine’s (2004) method that they claim possesses lower mean square error and generalizes and improves upon the existing approaches. However, these claims do not appear to withstand scrutiny. A number of points need to be brought to the attention of practitioners, and two in particular stand out; a) Li et al.’s (2014) own simulation results reveal that their estimator performs worse than the existing classical ‘split’ estimator and appears to be inadmissible, and b) the claim that Li et al.’s (2014) estimator dominates that of Li & Racine (2004) on mean square error grounds does not appear to be the case. The classical split estimator and that of Li & Racine (2004) are both consistent, and it will be seen that Li & Racine’s (2004) estimator remains the best all around performer. And, as a practical matter, Li et al.’s (2014) estimator is not a feasible alternative in typical settings involving multinomial and multiple categorical predictors.
Li和Racine(2004)提出了一种基于非参数核的方法,用于在存在分类预测因子的情况下进行平滑,作为经典非参数方法的替代方法,该方法将数据分成由分类预测因子的唯一组合定义的子集(“单元格”)。Li, Simar和Zelenyuk(2014)提出了Li和Racine(2004)方法的替代方法,他们声称该方法具有更低的均方误差,并对现有方法进行了推广和改进。然而,这些说法似乎经不起推敲。有几点需要引起实践者的注意,其中有两点特别突出;a) Li et al.(2014)自己的模拟结果显示,他们的估计器比现有的经典“分裂”估计器性能更差,似乎是不可接受的,b) Li et al.(2014)的估计器在均方误差基础上优于Li & Racine(2004)的估计器的说法似乎并非如此。经典的分裂估计器和Li & Racine(2004)的估计器都是一致的,并且可以看到Li & Racine(2004)的估计器仍然是最好的全面执行器。而且,作为一个实际问题,Li等人(2014)的估计器在涉及多项和多个分类预测器的典型设置中并不是一个可行的选择。
{"title":"A Correction to 'Generalized Nonparametric Smoothing With Mixed Discrete and Continuous Data' by Li, Simar & Zelenyuk","authors":"J. Racine","doi":"10.2139/ssrn.2824510","DOIUrl":"https://doi.org/10.2139/ssrn.2824510","url":null,"abstract":"Li & Racine (2004) have proposed a nonparametric kernel-based method for smoothing in the presence of categorical predictors as an alternative to the classical nonparametric approach that splits the data into subsets (‘cells’) defined by the unique combinations of the categorical predictors. Li, Simar & Zelenyuk (2014) present an alternative to Li & Racine’s (2004) method that they claim possesses lower mean square error and generalizes and improves upon the existing approaches. However, these claims do not appear to withstand scrutiny. A number of points need to be brought to the attention of practitioners, and two in particular stand out; a) Li et al.’s (2014) own simulation results reveal that their estimator performs worse than the existing classical ‘split’ estimator and appears to be inadmissible, and b) the claim that Li et al.’s (2014) estimator dominates that of Li & Racine (2004) on mean square error grounds does not appear to be the case. The classical split estimator and that of Li & Racine (2004) are both consistent, and it will be seen that Li & Racine’s (2004) estimator remains the best all around performer. And, as a practical matter, Li et al.’s (2014) estimator is not a feasible alternative in typical settings involving multinomial and multiple categorical predictors.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115326409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}