Pub Date: 2022-09-01 | DOI: 10.1177/1536867X221124567
Yuya Sasaki, Yi Xin
We introduce a new command, xtusreg, that estimates parameters of fixed-effects dynamic panel regression models under unequal time spacing. After reviewing the method, we examine the finite-sample performance of the command using simulated data. We also illustrate the command with the National Longitudinal Survey Original Cohorts: Older Men, whose personal interviews took place in the unequally spaced years of 1966, 1967, 1969, 1971, 1976, 1981, and 1990. The methods underlying xtusreg are those discussed by Sasaki and Xin (2017, Journal of Econometrics 196: 320–330).
Title: xtusreg: Software for dynamic panel regression under irregular time spacing
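The complication the abstract points to can be seen in a short simulation: for an AR(1) process, the slope of a regression of the current value on the most recent *observation* equals rho raised to the power of the gap between observation times, so estimators that ignore spacing mix different powers of rho across waves. A minimal Python sketch of the spacing problem (an illustration only, not the Sasaki–Xin estimator):

```python
import numpy as np

# Simulate a simple AR(1) process (no fixed effects, for clarity):
# y_t = rho * y_{t-1} + e_t.  If the series is observed only every
# `gap` periods, the slope on the previous observation is rho**gap,
# so unequal spacing mixes different powers of rho across waves.
rng = np.random.default_rng(0)
rho, n, T = 0.6, 100_000, 40
y = np.zeros((n, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + rng.normal(size=n)

slopes = {}
for gap in (1, 2, 5):
    x, z = y[:, 20], y[:, 20 + gap]
    slopes[gap] = np.cov(x, z)[0, 1] / np.var(x)
    print(f"gap={gap}: slope={slopes[gap]:.3f}, rho**gap={rho**gap:.3f}")
```

With unequally spaced waves (as in the 1966–1990 interview years above), each pair of adjacent observations identifies a different power of rho, which is the estimation problem the command addresses.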
Pub Date: 2022-09-01 | DOI: 10.1177/1536867X221124516
J. Huismans, Jan Willem Nijenhuis, A. Sirchenko
Ordinal responses can be generated, in a cross-sectional context, by different unobserved classes of the population or, in a time-series context, by different latent regimes. We introduce a new command, swopit, that fits a mixture of ordered probit models with exogenous or endogenous switching between two latent classes (regimes). Switching is endogenous if unobservables in the class-assignment model are correlated with unobservables in the outcome models. We provide a battery of postestimation commands; assess via Monte Carlo experiments the finite-sample performance of the maximum likelihood estimator of the parameters, probabilities, and their standard errors (both asymptotic and bootstrap); and apply the new command to model monetary policy interest rates.
Title: A mixture of ordered probit models with endogenous switching between two latent classes
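The outcome side of such a model can be sketched in a few lines: ordered probit category probabilities are differences of normal CDFs at the cutpoints, and a two-class mixture averages them with the class-membership probability. The function names below are mine, and only the *exogenous*-switching case is shown; endogenous switching (the article's focus) additionally correlates the unobservables of the class-assignment and outcome equations.

```python
import numpy as np
from scipy.stats import norm

def ordered_probit_probs(xb, cuts):
    """P(y = j) for an ordered probit with latent index xb and
    strictly increasing cutpoints (J categories need J-1 cutpoints)."""
    edges = np.concatenate(([-np.inf], cuts, [np.inf]))
    return np.diff(norm.cdf(edges - xb))

def mixture_probs(xb0, xb1, cuts0, cuts1, p_class1):
    """Two-class mixture with exogenous switching: mix the two
    classes' outcome probabilities with the membership probability."""
    return ((1.0 - p_class1) * ordered_probit_probs(xb0, cuts0)
            + p_class1 * ordered_probit_probs(xb1, cuts1))

# hypothetical indices and cutpoints for a 3-category outcome
probs = mixture_probs(0.3, -0.5, np.array([-1.0, 1.0]),
                      np.array([-0.5, 0.8]), p_class1=0.4)
print(probs)
```

Because each class's probabilities sum to one, so does the mixture, for any membership probability.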
Pub Date: 2022-09-01 | DOI: 10.1177/1536867X221124541
Pengyu Chen, Yiannis Karavias, Elias Tzavalis
In this article, we introduce a new community-contributed command called xtbunitroot, which implements the panel-data unit-root tests developed by Karavias and Tzavalis (2014, Computational Statistics and Data Analysis 76: 391–407). These tests allow for one or two structural breaks in deterministic components of the series and can be seen as panel-data counterparts of the tests by Zivot and Andrews (1992, Journal of Business and Economic Statistics 10: 251–270) and Lumsdaine and Papell (1997, Review of Economics and Statistics 79: 212–218). The dates of the breaks can be known or unknown. The tests allow for intercepts and linear trends, nonnormal errors, and cross-section heteroskedasticity and dependence. They have power against homogeneous and heterogeneous alternatives and can be applied to panels with small or large time-series dimensions.
Title: Panel unit-root tests with structural breaks
Pub Date: 2022-09-01 | DOI: 10.1177/1536867X221124471
Daniel Guinea-Martin, Ricardo Mora
Eight multigroup segregation indices are decomposable into a between and a within term: two versions each of 1) the mutual information index, 2) the symmetric Atkinson index, 3) the relative diversity index, and 4) Theil's H index. In this article, we present the command dseg, which obtains all of them. It contributes to the stock of segregation commands in Stata by 1) implementing the decomposition in a single call, 2) providing the weights and local indices used in the computation of the within term, 3) facilitating the deployment of the decomposability properties of the eight indices in complex scenarios that demand tailor-made solutions, and 4) leveraging sample data with bootstrapping and approximate randomization tests. We analyze 2017 census data on public schools in the United States to illustrate the use of dseg; the substantive topic is school racial segregation.
Title: Computing decomposable multigroup indices of segregation
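The decomposition itself is easy to verify numerically for the mutual information index M: since clusters (say, districts) are a function of units (schools), the chain rule for mutual information gives M_total = M_between + a cluster-share-weighted sum of within-cluster indices. A small sketch with hypothetical counts (not the dseg implementation):

```python
import numpy as np

def m_index(counts):
    """Mutual information segregation index from a units-by-groups
    count matrix."""
    p = counts / counts.sum()
    pu = p.sum(axis=1, keepdims=True)   # unit marginals
    pg = p.sum(axis=0, keepdims=True)   # group marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p / (pu * pg)), 0.0)
    return terms.sum()

# toy data: 4 schools in 2 districts, 2 racial groups (made-up counts)
counts = np.array([[40, 10], [30, 20], [5, 45], [15, 35]], dtype=float)
district = np.array([0, 0, 1, 1])

total = m_index(counts)
# between term: collapse schools into districts
between = m_index(np.array([counts[district == d].sum(axis=0)
                            for d in (0, 1)]))
# within term: district-level indices weighted by district pupil shares
weights = np.array([counts[district == d].sum()
                    for d in (0, 1)]) / counts.sum()
within = sum(w * m_index(counts[district == d])
             for d, w in zip((0, 1), weights))
print(total, between + within)   # the two quantities coincide
```

The weights and local (within-district) indices printed by dseg correspond to `weights` and the per-district `m_index` values in this sketch.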
Pub Date: 2022-09-01 | DOI: 10.1177/1536867X221124539
M. Karakaplan
In this article, I introduce xtsfkk, a new command for fitting panel stochastic frontier models with endogeneity. The advantage of xtsfkk is that it can control for endogenous variables in the frontier and in the inefficiency term in a longitudinal setting. Hence, xtsfkk performs better than standard panel frontier estimators such as xtfrontier that overlook endogeneity by design. Moreover, xtsfkk uses Mata's moptimize() functions for substantially faster execution. I also present a set of Monte Carlo simulations and examples demonstrating the performance and usage of xtsfkk.
Title: Panel stochastic frontier models with endogeneity
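The baseline these estimators build on is the classic normal/half-normal composed-error log-likelihood (Aigner, Lovell, and Schmidt). A sketch of that exogenous baseline only; xtsfkk's contribution, handling endogeneity in panels, is not reproduced here, and the parameterization (log standard deviations) is my own choice for unconstrained optimization.

```python
import numpy as np
from scipy.stats import norm

def frontier_loglik(params, y, X):
    """Normal/half-normal stochastic frontier log-likelihood for
    y = X @ b + v - u, with v ~ N(0, s_v^2) and u ~ |N(0, s_u^2)|.
    params = (b, log s_v, log s_u)."""
    k = X.shape[1]
    b = params[:k]
    s_v, s_u = np.exp(params[k]), np.exp(params[k + 1])
    sigma = np.hypot(s_v, s_u)          # sqrt(s_v^2 + s_u^2)
    lam = s_u / s_v                     # signal-to-noise ratio
    eps = y - X @ b
    return np.sum(np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
                  + norm.logcdf(-eps * lam / sigma))

# evaluate at (hypothetical) true values on simulated data
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
v = rng.normal(scale=0.3, size=n)
u = np.abs(rng.normal(scale=0.5, size=n))
y = X @ np.array([1.0, 0.7]) + v - u
ll = frontier_loglik(np.array([1.0, 0.7, np.log(0.3), np.log(0.5)]), y, X)
print(ll)
```

In practice this would be maximized with `scipy.optimize.minimize` over `params`; Stata's Mata `moptimize()` plays the analogous role for the command.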
Pub Date: 2022-09-01 | DOI: 10.1177/1536867X221124538
A. Dallakyan
In modern multivariate statistics, where high-dimensional datasets are ubiquitous, learning large (inverse-) covariance matrices is imperative for data analysis. A popular approach to estimating a large inverse-covariance matrix is to regularize the Gaussian log-likelihood function by imposing a convex penalty function. In a seminal article, Friedman, Hastie, and Tibshirani (2008, Biostatistics 9: 432–441) proposed a graphical lasso (Glasso) algorithm to efficiently estimate sparse inverse-covariance matrices from the convex regularized log-likelihood function. In this article, I first explore the Glasso algorithm and then introduce a new graphiclasso command for large inverse-covariance matrix estimation. Moreover, I provide a useful command for tuning-parameter selection in the Glasso algorithm using the extended Bayesian information criterion, the Akaike information criterion, and cross-validation. I demonstrate the use of Glasso using simulation results and real-world data analysis.
Title: graphiclasso: Graphical lasso for learning sparse inverse-covariance matrices
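The same estimator is available in Python via scikit-learn's `GraphicalLasso`, which makes for a quick analogue of what the Stata command does (this is not the graphiclasso command itself): simulate data from a sparse precision matrix and recover its sparsity pattern from the l1-penalized Gaussian log-likelihood.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p = 5
# true precision: a chain graph, so entries with |i - j| >= 2 are zero
theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
sigma = np.linalg.inv(theta)
X = rng.multivariate_normal(np.zeros(p), sigma, size=5000)

# l1-penalized (Glasso) estimate of the inverse-covariance matrix
gl = GraphicalLasso(alpha=0.05).fit(X)
prec = gl.precision_
print(np.round(prec, 2))
```

scikit-learn's `GraphicalLassoCV` selects the penalty `alpha` by cross-validation, which parallels the tuning-parameter selection command described in the abstract (which additionally offers EBIC and AIC).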
Pub Date: 2022-09-01 | DOI: 10.1177/1536867X221124552
H. Bower, T. Andersson, M. Crowther, P. Lambert
In this article, we describe methodology that allows for multiple timescales using flexible parametric survival models without the need for time splitting. When one fits flexible parametric survival models on the log-hazard scale, numerical integration is required in the log likelihood to fit the model. The use of numerical integration allows incorporation of arbitrary functions of time into the model and hence lends itself to the inclusion of multiple timescales in an appealing way. We describe and exemplify these methods and show how to use the command stmt, which implements these methods, alongside its postestimation commands.
Title: Flexible parametric survival analysis with multiple timescales: Estimation and implementation using stmt
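The abstract's central device, numerical integration of the hazard inside the log likelihood, can be sketched for a right-censored sample: log L = sum_i [d_i * log h(t_i) - H(t_i)], with the cumulative hazard H obtained by Gauss-Legendre quadrature so that h may depend on several timescales (here, follow-up time and attained age). The coefficients and functional form below are hypothetical; this is not stmt's spline parameterization.

```python
import numpy as np
from scipy.integrate import fixed_quad   # Gauss-Legendre quadrature

def log_hazard(t, age0, beta):
    """Hypothetical log hazard on two timescales: follow-up time t
    and attained age (age at entry + t)."""
    return beta[0] + beta[1] * np.log(t + 1e-8) + beta[2] * (age0 + t)

def neg_loglik(beta, t_obs, d, age0):
    """-log L = -sum_i [d_i * log h(t_i) - H(t_i)], with each
    cumulative hazard H(t_i) computed by quadrature on [0, t_i]."""
    ll = 0.0
    for ti, di, ai in zip(t_obs, d, age0):
        H, _ = fixed_quad(lambda u: np.exp(log_hazard(u, ai, beta)),
                          0.0, ti, n=30)
        ll += di * log_hazard(ti, ai, beta) - H
    return -ll

t_obs = np.array([2.0, 3.0])    # observed follow-up times
d = np.array([1, 0])            # event indicator (0 = censored)
age0 = np.array([50.0, 60.0])   # age at entry
beta = np.array([np.log(0.1), 0.0, 0.0])   # constant hazard h = 0.1
print(neg_loglik(beta, t_obs, d, age0))
```

Because attained age enters h directly as age0 + t, no time splitting is needed; the quadrature absorbs the extra timescale, which is the appeal the abstract describes.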
Pub Date: 2022-09-01 | DOI: 10.1177/1536867x221124568
Patching your computer is one of the most important ways to protect yourself online. Windows users should turn on Windows Update and set it to download and install patches automatically. Additionally, make sure Microsoft Update has been activated so that Office and other Microsoft products receive updates. On Mac OS X, patches are installed via the App Store, and the settings can be checked under System Preferences > App Store.
Title: Software Updates
Pub Date: 2022-06-01 | DOI: 10.1177/1536867X221106371
Alifyah Y Kagalwala
Commonly used unit-root tests in time-series analysis—such as the Dickey–Fuller and Phillips–Perron tests—use a null hypothesis that the series contains a unit root. Such tests have low power against the alternative—when a time series is near integrated or highly autoregressive—implying that they do poorly in distinguishing such a series from having a unit root. Kwiatkowski et al. (1992, Journal of Econometrics 54: 159–178) introduced the Kwiatkowski, Phillips, Schmidt, and Shin test, in which the null hypothesis is that the series is stationary, to deal with this problem. One shortcoming of the presently available Kwiatkowski, Phillips, Schmidt, and Shin test in Stata is that it uses asymptotic critical values regardless of the sample size. This poses a problem in that researchers—especially social scientists—are often presented with short time series. I introduce kpsstest, a command that extends the previous implementation by including an option for a zero-mean-stationary null hypothesis, generating sample and test-specific critical values, and reporting appropriate p-values.
Title: kpsstest: A command that implements the Kwiatkowski, Phillips, Schmidt, and Shin test with sample-specific critical values and reports p-values
Pub Date: 2022-06-01 | DOI: 10.1177/1536867X221106368
Bartosz Kondratek
In this article, I introduce the uirt command, which allows one to estimate parameters of a variety of unidimensional item response theory models (two-parameter logistic model, three-parameter logistic model, graded response model, partial credit model, and generalized partial credit model). uirt has extended item-fit analysis capabilities, features multigroup modeling, allows testing for differential item functioning, and provides tools for generating plausible values with a latent regression conditioning model. I provide examples to illustrate cases where uirt can be especially useful in conducting analyses within the item response theory approach.
Title: uirt: A command for unidimensional IRT modeling
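For readers unfamiliar with the models listed, the two-parameter logistic (2PL) model, the simplest of them, is worth writing out: the probability of a correct response rises with ability theta, steered by the item's discrimination a and difficulty b, and the item's Fisher information peaks at theta = b. The function names below are mine; this is the standard 2PL formula, not uirt's implementation.

```python
import numpy as np

def irf_2pl(theta, a, b):
    """Two-parameter logistic (2PL) item response function:
    probability of a correct answer at ability theta, with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information_2pl(theta, a, b):
    """Fisher information the item contributes at ability theta;
    it peaks at theta = b with value a**2 / 4."""
    p = irf_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 7)
print(irf_2pl(theta, a=1.2, b=0.5))
```

The three-parameter logistic model adds a guessing floor to this curve, and the graded-response and (generalized) partial credit models extend it to polytomous items.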