We provide a comprehensive account of the fundamental properties of a truncated discrete Zipf distribution, complementing the results available in the literature. In particular, we obtain results on the existence and uniqueness of maximum likelihood parameter estimators and propose new testing methodology for the shape parameter. We also include data examples illustrating the applicability of this stochastic model.
"A discrete truncated Zipf distribution", by Kwame Boamah-Addo, T. Kozubowski, and A. Panorska. Statistica Neerlandica, published 2022-09-26. DOI: 10.1111/stan.12280.
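As a rough illustration of the model's ingredients, here is a minimal sketch of the truncated Zipf pmf together with a grid-search maximum likelihood estimate of the shape parameter. The function names, truncation point, and grid are illustrative choices, not the authors' implementation.

```python
import math
import random

def truncated_zipf_pmf(s, N):
    """PMF of a Zipf law truncated at N: P(X = k) proportional to k**(-s), k = 1..N."""
    weights = [k ** (-s) for k in range(1, N + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def zipf_mle(data, N, grid=None):
    """Maximum-likelihood estimate of the shape s via a simple grid search."""
    grid = grid or [0.01 * i for i in range(1, 501)]  # candidate shapes in (0, 5]
    best_s, best_ll = None, -math.inf
    for s in grid:
        pmf = truncated_zipf_pmf(s, N)
        ll = sum(math.log(pmf[k - 1]) for k in data)  # log-likelihood of the sample
        if ll > best_ll:
            best_s, best_ll = s, ll
    return best_s
```

A grid search avoids derivative bookkeeping; the paper's existence and uniqueness results concern the actual likelihood equations, which a production implementation would solve numerically instead.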
Pub Date: 2022-08-01. Epub Date: 2022-01-12. DOI: 10.1111/stan.12261
Sally Hunsberger, Lori Long, Sarah E Reese, Gloria H Hong, Ian A Myles, Christa S Zerbe, Pleonchan Chetchotisakd, Joanna H Shih
This paper develops methods to test for associations between two variables with clustered data, using a U-statistic approach with a second-order approximation to the variance of the parameter estimate for the test statistic. The tests presented are clustered versions of Pearson's χ2 test, the Spearman rank correlation, and Kendall's τ for continuous or ordinal data, as well as alternative measures of Kendall's τ that allow for ties in the data. Shih and Fay use the U-statistic approach but consider only a first-order approximation, which has an inflated significance level in scenarios with small sample sizes. We derive the test statistics using second-order approximations, aiming to improve the type I error rates. The method applies to data where clusters have the same number of measurements for each variable, or where one variable is measured once per cluster while the other is measured multiple times. We evaluate the performance of the test statistics through simulation with small sample sizes. The methods are all available in the R package cluscor.
"Rank correlation inferences for clustered data with small sample size". Statistica Neerlandica. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9355045/pdf/nihms-1774814.pdf
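For reference, Kendall's τ in its U-statistic form, the pairwise sign-concordance average that the clustered tests build on, can be sketched as follows. This plain version ignores clustering entirely, which is exactly the inference problem the cluscor methods address.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau as a U-statistic: the average sign of concordance
    over all pairs of observations; tied pairs contribute zero."""
    n = len(x)
    concordance = 0
    for i, j in combinations(range(n), 2):
        d = (x[i] - x[j]) * (y[i] - y[j])
        concordance += (d > 0) - (d < 0)  # +1 concordant, -1 discordant, 0 tied
    return 2 * concordance / (n * (n - 1))
```

With clustered data the pairs are not independent, so the variance of this U-statistic needs the first- or second-order approximations discussed in the abstract.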
The comparability of the scores obtained on different forms of a test is an essential requirement. This paper proposes a statistical test for the detection of noncomparable scores based on item response theory (IRT) methods. When an IRT model is fit separately to different forms of a test, the item parameter estimates are expressed on different measurement scales. The first step toward comparable scores is to convert the item parameters to a common metric using two constants, called equating coefficients. The equating coefficients can be estimated directly for two forms with common items, or derived through a chain of forms. This paper proposes a statistical test to verify whether the scale conversions provided by the equating coefficients are as expected when the assumptions of the model are satisfied, hence leading to comparable scores. The method is illustrated through simulation studies and a real-data example.
"Testing for differences in chain equating", by Michela Battauz. Statistica Neerlandica, published 2022-07-22. DOI: 10.1111/stan.12277.
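As a sketch of the common-metric step, the classical mean-sigma method estimates the two equating coefficients (A, B) from the difficulties of items shared by two forms. This is one standard IRT linking method, shown for illustration; it is not necessarily the estimator used in the paper.

```python
from statistics import mean, stdev

def mean_sigma_coefficients(b_source, b_target):
    """Mean-sigma estimates of the equating coefficients (A, B) that map
    common-item difficulties from the source scale onto the target scale,
    via b_converted = A * b_source + B."""
    A = stdev(b_target) / stdev(b_source)
    B = mean(b_target) - A * mean(b_source)
    return A, B
```

Chained equating composes such (A, B) pairs along a sequence of linked forms; the paper's test asks whether different chains yield consistent conversions.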
Longxiang Fang, N. Balakrishnan, Wenyu Huang, Shuai Zhang
In this paper, we discuss stochastic comparison of the largest order statistics arising from two sets of dependent distribution-free random variables with respect to multivariate chain majorization, where the dependence structure is defined by Archimedean copulas. When the matrix of parameters of a distribution-free model with possibly two parameter vectors changes to another matrix of parameters in a certain mathematical sense, we show that, under certain conditions, the first sample maximum is larger than the second sample maximum with respect to the usual stochastic order. Applications of our results to the scale proportional reverse hazards model, the exponentiated gamma distribution, the Gompertz–Makeham distribution, and the location-scale model are also given, and two numerical examples illustrate the results established here.
"Usual stochastic ordering of the sample maxima from dependent distribution-free random variables". Statistica Neerlandica, published 2022-07-21. DOI: 10.1111/stan.12275.
The stratified logrank test can be used to compare the survival distributions of several groups of patients, while adjusting for the effect of a discrete variable that may be predictive of the survival outcome. In practice, this discrete variable can be missing for some patients. An inverse-probability-weighted version of the stratified logrank statistic is introduced to tackle this issue. Its asymptotic distribution is derived under the null hypothesis of equality of the survival distributions. A simulation study is conducted to assess the behavior of the proposed test statistic in finite samples. An analysis of a medical dataset illustrates the methodology.
"Inverse-probability-weighted logrank test for stratified survival data with missing measurements", by Rim Ben Elouefi and Foued Saâdaoui. Statistica Neerlandica, published 2022-07-21. DOI: 10.1111/stan.12276.
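The inverse-probability-weighting idea, in its simplest complete-case form (a weighted mean under the assumption that missingness is completely at random), can be sketched as follows. The paper applies the same weighting principle inside the stratified logrank statistic; this toy function only illustrates the reweighting step.

```python
def ipw_mean(values, observed):
    """Inverse-probability-weighted mean of `values`, where observed[i] flags
    whether the i-th measurement was actually recorded. Each complete case is
    weighted by 1 / (estimated probability of being observed); unobserved
    entries are ignored (in practice they would be unknown)."""
    p_hat = sum(observed) / len(observed)  # estimated P(observed) under MCAR
    return sum(v / p_hat for v, o in zip(values, observed) if o) / len(values)
```

When every case is observed, the weights collapse to 1 and the estimator reduces to the ordinary mean, mirroring how the weighted logrank statistic reduces to the standard stratified one with complete data.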
We study a statistical framework for replicability based on a recently proposed quantitative measure of replication success, the sceptical p-value. A recalibration is proposed to obtain exact overall Type-I error control if the effect is null in both studies, together with additional bounds on the partial and conditional Type-I error rates, which represent the case where only one study has a null effect. The approach avoids the double dichotomization for significance of the two-trials rule and has larger project power to detect existing effects over both studies in combination. It can also be used for power calculations, and for already convincing original studies it requires a smaller replication sample size than the two-trials rule. We illustrate the performance of the proposed methodology in an application to data from the Experimental Economics Replication Project.
"Assessing replicability with the sceptical p-value: Type-I error control and sample size planning", by Charlotte Micheloud, F. Balabdaoui, and L. Held. Statistica Neerlandica, published 2022-07-01. DOI: 10.1111/stan.12312.
Hypothesis testing is challenging when the test statistic is based on a regularized estimator in high dimensions, due to the statistic's complicated asymptotic distribution. We propose a robust testing framework for ℓ1-regularized M-estimators to cope with non-Gaussian regression errors, using the robust approximate message passing algorithm. The proposed framework enjoys an automatically built-in bias correction and is applicable with general convex nondifferentiable loss functions, which also allows inference when the focus is a conditional quantile instead of the mean of the response. With the least squares loss function, the estimator compares numerically well with the debiased and desparsified approaches. The use of the Huber loss function demonstrates that the proposed construction provides stable confidence intervals under different regression error distributions.
"Automatic bias correction for testing in high-dimensional linear models", by Jing Zhou and G. Claeskens. Statistica Neerlandica, published 2022-07-01. DOI: 10.1111/stan.12274.
It is a matter of common observation that investors value substantial gains but are averse to heavy losses. Obvious as it may sound, this translates into a preference for right-skewed return distributions, whose right tails are heavier than their left tails. Skewness is thus not only a way to describe the shape of a distribution, but also a tool for risk measurement. We review the statistical literature on skewness and provide a comprehensive framework for its assessment. Then, we present a new measure of skewness, based on the decomposition of variance into its upward and downward components. We argue that this measure fills a gap in the literature and show in a simulation study that it strikes a good balance between robustness and sensitivity.
"Assessing skewness in financial markets", by Giovanni Campisi, L. La Rocca, and S. Muzzioli. Statistica Neerlandica, published 2022-05-30. DOI: 10.1111/stan.12273.
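One natural index built from such a decomposition compares the variance contributed by observations above the mean with the variance contributed by those below it. This sketch is an illustrative guess at the construction, not necessarily the authors' exact definition.

```python
def updown_skewness(x):
    """Skewness index (upward variance - downward variance) / total variance.
    Lies in [-1, 1]; positive when the right tail dominates the dispersion."""
    m = sum(x) / len(x)
    up = sum((v - m) ** 2 for v in x if v > m) / len(x)    # upward component
    down = sum((v - m) ** 2 for v in x if v < m) / len(x)  # downward component
    return (up - down) / (up + down)
```

Because it is a ratio of variance components rather than a third moment, an index of this form is naturally bounded, which is one route to the robustness-sensitivity balance the abstract mentions.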
Zero inflation is a common nuisance when monitoring disease progression over time. This article proposes a new observation-driven model for zero-inflated and over-dispersed count time series. The counts, given the past history of the process and available information on covariates, are assumed to follow a mixture of a Poisson distribution and a distribution degenerate at zero, with a time-dependent mixing probability πt. Since count data usually suffer from overdispersion, a Gamma distribution is used to model the excess variation, resulting in a zero-inflated negative binomial regression model with mean parameter λt. Linear predictors with autoregressive and moving average (ARMA) type terms, covariates, seasonality, and trend are fitted to λt and πt through canonical-link generalized linear models. Estimation is done by maximum likelihood, aided by iterative algorithms such as Newton-Raphson and Expectation-Maximization. Theoretical results on the consistency and asymptotic normality of the estimators are given. The proposed model is illustrated using in-depth simulation studies and two disease datasets.
"Autoregressive and moving average models for zero-inflated count time series", by Vurukonda Sathish, S. Mukhopadhyay, and R. Tiwari. Statistica Neerlandica, published 2022-05-01. DOI: 10.1111/stan.12255.
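The mixture at the heart of the model can be sketched as a zero-inflated negative binomial pmf. The sketch below uses the standard (r, p) parameterization with fixed parameters, rather than the paper's time-varying mean parameterization λt with mixing probability πt driven by ARMA-type linear predictors.

```python
import math

def nb_pmf(k, r, p):
    """Negative binomial pmf: P(K = k) = C(k + r - 1, k) * p**r * (1 - p)**k."""
    log_coef = math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
    return math.exp(log_coef + r * math.log(p) + k * math.log(1 - p))

def zinb_pmf(k, pi, r, p):
    """Zero-inflated NB: mix a point mass at zero (weight pi) with NB(r, p),
    so zeros arise both from the point mass and from the count component."""
    base = (1 - pi) * nb_pmf(k, r, p)
    return pi + base if k == 0 else base
```

The mean of this mixture is (1 - pi) times the NB mean r(1 - p)/p, which is why zero inflation deflates the mean relative to the count component alone.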
This paper considers a continuous three-phase polynomial regression model with two threshold points for dependent data with heteroscedasticity. We assume the model is polynomial of order zero in the middle regime, and is polynomial of higher orders elsewhere. We denote this model by ℳ2, which includes models with one or no threshold points, denoted by ℳ1 and ℳ0, respectively, as special cases. We provide an ordered iterative least squares (OiLS) method for estimating ℳ2 and establish the consistency of the OiLS estimators under mild conditions. When the underlying model is ℳ1 and is (d0−1)th-order differentiable but not d0th-order differentiable at the threshold point, we further show the Op(N^(−1/(d0+2))) convergence rate of the OiLS estimators, which can be faster than the Op(N^(−1/(2d0))) convergence rate given in Feder when d0 ≥ 3. We also apply a model-selection procedure for selecting ℳκ, κ = 0, 1, 2. When the underlying model exists, we establish the selection consistency under the aforementioned conditions. Finally, we conduct simulation experiments to demonstrate the finite-sample performance of our asymptotic results.
"Threshold estimation for continuous three-phase polynomial regression models with constant mean in the middle regime", by Chih‐Hao Chang, Kam-Fai Wong, and Wei‐Yee Lim. Statistica Neerlandica, published 2022-04-21. DOI: 10.1111/stan.12268.