Pub Date: 2024-11-19 | DOI: 10.1016/j.jspi.2024.106250
Abbas Khalili, Archer Yi Yang, Xiaonan Da
Mixture-of-experts models provide a flexible statistical framework for a wide range of regression (supervised learning) problems. In many modern applications a large number of covariates (features) are available, yet only a small subset of them is useful in explaining a response variable of interest. This calls for a feature selection device. In this paper, we present new group-feature selection and estimation methods for sparse mixture-of-experts models when the number of features can be nearly comparable to the sample size. We prove the consistency of the methods in both parameter estimation and feature selection. We implement the methods using a modified EM algorithm combined with a proximal gradient method, which results in a convenient closed-form parameter update in the M-step of the algorithm. We examine the finite-sample performance of the methods through simulations, and demonstrate their application in a real data example exploring relationships in body measurements.
{"title":"Estimation and group-feature selection in sparse mixture-of-experts with diverging number of parameters","authors":"Abbas Khalili , Archer Yi Yang , Xiaonan Da","doi":"10.1016/j.jspi.2024.106250","DOIUrl":"10.1016/j.jspi.2024.106250","url":null,"abstract":"<div><div>Mixture-of-experts provide flexible statistical models for a wide range of regression (supervised learning) problems. Often a large number of covariates (features) are available in many modern applications yet only a small subset of them is useful in explaining a response variable of interest. This calls for a feature selection device. In this paper, we present new group-feature selection and estimation methods for sparse mixture-of-experts models when the number of features can be nearly comparable to the sample size. We prove the consistency of the methods in both parameter estimation and feature selection. We implement the methods using a modified EM algorithm combined with proximal gradient method which results in a convenient closed-form parameter update in the M-step of the algorithm. We examine the finite-sample performance of the methods through simulations, and demonstrate their applications in a real data example on exploring relationships in body measurements.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"237 ","pages":"Article 106250"},"PeriodicalIF":0.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142705363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-15 | DOI: 10.1016/j.jspi.2024.106248
Yao Kang, Xiaojing Fan, Jie Zhang, Ying Tang
Count time series with bounded support frequently exhibit binomial overdispersion, zero inflation and right-endpoint inflation in practical scenarios. Numerous models have been proposed for the analysis of bounded count time series with binomial overdispersion and zero inflation, yet right-endpoint inflation has received comparatively less attention. To better capture these features, this article introduces three versions of extended first-order binomial autoregressive (BAR(1)) models with endpoint inflation. Corresponding stochastic properties of the new models are investigated and model parameters are estimated by the conditional maximum likelihood and quasi-maximum likelihood methods. A binomial right-endpoint inflation index is also constructed and further used to test whether a data set exhibits endpoint inflation relative to a BAR(1) process. Finally, the proposed models are applied to two real data examples. First, we illustrate the usefulness of the proposed models through an application to the voting data on supporting interest rate changes during consecutive monthly meetings of the Monetary Policy Council at the National Bank of Poland. Then, we apply the proposed models to the number of police stations that received at least one drunk driving report per month. The results of the two real data examples indicate that the new models have significant advantages in fitting performance for bounded count time series with endpoint inflation.
{"title":"Modeling and testing for endpoint-inflated count time series with bounded support","authors":"Yao Kang , Xiaojing Fan , Jie Zhang , Ying Tang","doi":"10.1016/j.jspi.2024.106248","DOIUrl":"10.1016/j.jspi.2024.106248","url":null,"abstract":"<div><div>Count time series with bounded support frequently exhibit binomial overdispersion, zero inflation and right-endpoint inflation in practical scenarios. Numerous models have been proposed for the analysis of bounded count time series with binomial overdispersion and zero inflation, yet right-endpoint inflation has received comparatively less attention. To better capture these features, this article introduces three versions of extended first-order binomial autoregressive (BAR(1)) models with endpoint inflation. Corresponding stochastic properties of the new models are investigated and model parameters are estimated by the conditional maximum likelihood and quasi-maximum likelihood methods. A binomial right-endpoint inflation index is also constructed and further used to test whether the data set has endpoint-inflated characteristic with respect to a BAR(1) process. Finally, the proposed models are applied to two real data examples. Firstly, we illustrate the usefulness of the proposed models through an application to the voting data on supporting interest rate changes during consecutive monthly meetings of the Monetary Policy Council at the National Bank of Poland. Then, we apply the proposed models to the number of police stations that received at least one drunk driving report per month. The results of the two real data examples indicate that the new models have significant advantages in terms of fitting performance for the bounded count time series with endpoint inflation.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"237 ","pages":"Article 106248"},"PeriodicalIF":0.8,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142759599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-14 | DOI: 10.1016/j.jspi.2024.106249
Li Xun, Xin Guan, Yong Zhou
Exploring quantile differences between two populations at various probability levels offers valuable insights into their distinctions, which are essential for practical applications such as assessing treatment effects. However, estimating these differences can be challenging due to the complex data often encountered in clinical trials. This paper assumes that right-censored data and length-biased right-censored data originate from two populations of interest. We propose an adjusted smoothed empirical likelihood (EL) method for inferring quantile differences and establish the asymptotic properties of the proposed estimators. Under mild conditions, we demonstrate that the adjusted log-EL ratio statistics asymptotically follow the standard chi-squared distribution. We construct confidence intervals for the quantile differences using both normal and chi-squared approximations and develop a likelihood ratio test for these differences. The performance of our proposed methods is illustrated through simulation studies. Finally, we present a case study utilizing Oscar award nomination data to demonstrate the application of our method.
{"title":"Semi-parametric empirical likelihood inference on quantile difference between two samples with length-biased and right-censored data","authors":"Li Xun , Xin Guan , Yong Zhou","doi":"10.1016/j.jspi.2024.106249","DOIUrl":"10.1016/j.jspi.2024.106249","url":null,"abstract":"<div><div>Exploring quantile differences between two populations at various probability levels offers valuable insights into their distinctions, which are essential for practical applications such as assessing treatment effects. However, estimating these differences can be challenging due to the complex data often encountered in clinical trials. This paper assumes that right-censored data and length-biased right-censored data originate from two populations of interest. We propose an adjusted smoothed empirical likelihood (EL) method for inferring quantile differences and establish the asymptotic properties of the proposed estimators. Under mild conditions, we demonstrate that the adjusted log-EL ratio statistics asymptotically follow the standard chi-squared distribution. We construct confidence intervals for the quantile differences using both normal and chi-squared approximations and develop a likelihood ratio test for these differences. The performance of our proposed methods is illustrated through simulation studies. Finally, we present a case study utilizing Oscar award nomination data to demonstrate the application of our method.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"237 ","pages":"Article 106249"},"PeriodicalIF":0.8,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142705362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-12 | DOI: 10.1016/j.jspi.2024.106247
Xiaoyang Li, Zhi-Sheng Ye, Xingqiu Zhao
Panel count data are gathered when subjects are examined at discrete times during a study, and only the number of recurrent events occurring before each examination time is recorded. We consider a semiparametric accelerated mean model for panel count data in which the effect of the covariates is to transform the time scale of the baseline mean function. Semiparametric inference for the model is inherently challenging because the finite-dimensional regression parameters appear in the argument of the (infinite-dimensional) functional parameter, i.e., the baseline mean function, leading to the phenomenon of bundled parameters. We propose sieve pseudolikelihood and likelihood methods to construct the random criterion function for estimating the model parameters. An inexact block coordinate ascent algorithm is used to obtain these estimators. We establish the consistency and rate of convergence of the proposed estimators, as well as the asymptotic normality of the estimators of the regression parameters. Novel consistent estimators of the asymptotic covariances of the estimated regression parameters are derived by leveraging the counting process associated with the examination times. Comprehensive simulation studies demonstrate that the optimization algorithm is much less sensitive to the initial values than the Newton–Raphson method. The proposed estimators perform well for practical sample sizes, and are more efficient than existing methods. An example based on real data shows that due to this efficiency gain, the proposed method is better able to detect the significance of practically meaningful covariates than an existing method.
{"title":"Sieve estimation of the accelerated mean model based on panel count data","authors":"Xiaoyang Li , Zhi-Sheng Ye , Xingqiu Zhao","doi":"10.1016/j.jspi.2024.106247","DOIUrl":"10.1016/j.jspi.2024.106247","url":null,"abstract":"<div><div>Panel count data are gathered when subjects are examined at discrete times during a study, and only the number of recurrent events occurring before each examination time is recorded. We consider a semiparametric accelerated mean model for panel count data in which the effect of the covariates is to transform the time scale of the baseline mean function. Semiparametric inference for the model is inherently challenging because the finite-dimensional regression parameters appear in the argument of the (infinite-dimensional) functional parameter, i.e., the baseline mean function, leading to the phenomenon of bundled parameters. We propose sieve pseudolikelihood and likelihood methods to construct the random criterion function for estimating the model parameters. An inexact block coordinate ascent algorithm is used to obtain these estimators. We establish the consistency and rate of convergence of the proposed estimators, as well as the asymptotic normality of the estimators of the regression parameters. Novel consistent estimators of the asymptotic covariances of the estimated regression parameters are derived by leveraging the counting process associated with the examination times. Comprehensive simulation studies demonstrate that the optimization algorithm is much less sensitive to the initial values than the Newton–Raphson method. The proposed estimators perform well for practical sample sizes, and are more efficient than existing methods. An example based on real data shows that due to this efficiency gain, the proposed method is better able to detect the significance of practically meaningful covariates than an existing method.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"237 ","pages":"Article 106247"},"PeriodicalIF":0.8,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142660066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-28 | DOI: 10.1016/j.jspi.2024.106245
Jessie Li
We demonstrate how to conduct uniformly asymptotically valid inference for √n-consistent estimators defined as the solution to a constrained optimization problem with a possibly nonsmooth or nonconvex sample objective function and a possibly nonconvex constraint set. We allow for the solution to the problem to be on the boundary of the constraint set or to drift towards the boundary of the constraint set as the sample size goes to infinity. We construct a confidence set by benchmarking a test statistic against critical values that can be obtained from a simple unconstrained quadratic programming problem. Monte Carlo simulations illustrate the uniformly correct coverage of our method in a boundary constrained maximum likelihood model, a boundary constrained nonsmooth GMM model, and a conditional logit model with capacity constraints.
{"title":"The proximal bootstrap for constrained estimators","authors":"Jessie Li","doi":"10.1016/j.jspi.2024.106245","DOIUrl":"10.1016/j.jspi.2024.106245","url":null,"abstract":"<div><div>We demonstrate how to conduct uniformly asymptotically valid inference for <span><math><msqrt><mrow><mi>n</mi></mrow></msqrt></math></span>-consistent estimators defined as the solution to a constrained optimization problem with a possibly nonsmooth or nonconvex sample objective function and a possibly nonconvex constraint set. We allow for the solution to the problem to be on the boundary of the constraint set or to drift towards the boundary of the constraint set as the sample size goes to infinity. We construct a confidence set by benchmarking a test statistic against critical values that can be obtained from a simple unconstrained quadratic programming problem. Monte Carlo simulations illustrate the uniformly correct coverage of our method in a boundary constrained maximum likelihood model, a boundary constrained nonsmooth GMM model, and a conditional logit model with capacity constraints.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106245"},"PeriodicalIF":0.8,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142571397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-25 | DOI: 10.1016/j.jspi.2024.106246
Tianxuan Ding, Zhimei Li, Yaowu Zhang
Comparing and testing for the homogeneity of two independent random samples is a fundamental statistical problem with many applications across various fields. However, existing methods may not be effective when the data is complex or high-dimensional. We propose a new method that integrates the maximum mean discrepancy (MMD) with a Gaussian kernel over all one-dimensional projections of the data. We derive the closed-form expression of the integrated MMD and prove its validity as a distributional similarity metric. We estimate the integrated MMD with U-statistic theory and study its asymptotic behaviors under the null and two kinds of alternative hypotheses. We demonstrate that our method has the benefits of the MMD, and outperforms existing methods on both synthetic and real datasets, especially when the data is complex and high-dimensional.
{"title":"Testing the equality of distributions using integrated maximum mean discrepancy","authors":"Tianxuan Ding , Zhimei Li , Yaowu Zhang","doi":"10.1016/j.jspi.2024.106246","DOIUrl":"10.1016/j.jspi.2024.106246","url":null,"abstract":"<div><div>Comparing and testing for the homogeneity of two independent random samples is a fundamental statistical problem with many applications across various fields. However, existing methods may not be effective when the data is complex or high-dimensional. We propose a new method that integrates the maximum mean discrepancy (MMD) with a Gaussian kernel over all one-dimensional projections of the data. We derive the closed-form expression of the integrated MMD and prove its validity as a distributional similarity metric. We estimate the integrated MMD with the <span><math><mi>U</mi></math></span>-statistic theory and study its asymptotic behaviors under the null and two kinds of alternative hypotheses. We demonstrate that our method has the benefits of the MMD, and outperforms existing methods on both synthetic and real datasets, especially when the data is complex and high-dimensional.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106246"},"PeriodicalIF":0.8,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142553626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-05 | DOI: 10.1016/j.jspi.2024.106244
Yan-Yong Zhao, Ling-Ling Ge, Kong-Sheng Zhang
In this paper, we consider the estimation of functional coefficient panel data models with cross-sectional dependence. Borrowing the principal component structure, the functional coefficient panel data models can be transformed into a semiparametric panel data model. Combining the local linear dummy variable technique and profile least squares method, we develop a semiparametric profile method to estimate the coefficient functions. A gradient-descent iterative algorithm is employed to enhance computation speed and estimation accuracy. The main results show that the resulting parameter estimator enjoys asymptotic normality with a √(NT) convergence rate and the nonparametric estimator is asymptotically normal with a nonparametric convergence rate √(NTh) when both the number of cross-sectional units N and the length of time series T go to infinity, under some regularity conditions. Monte Carlo simulations are carried out to evaluate the proposed methods, and an application to cigarette demand is investigated for illustration.
{"title":"Semiparametric estimation of a principal functional coefficient panel data model with cross-sectional dependence and its application to cigarette demand","authors":"Yan-Yong Zhao , Ling-Ling Ge , Kong-Sheng Zhang","doi":"10.1016/j.jspi.2024.106244","DOIUrl":"10.1016/j.jspi.2024.106244","url":null,"abstract":"<div><div>In this paper, we consider the estimation of functional coefficient panel data models with cross-sectional dependence. Borrowing the principal component structure, the functional coefficient panel data models can be transformed into a semiparametric panel data model. Combining the local linear dummy variable technique and profile least squares method, we develop a semiparametric profile method to estimate the coefficient functions. A gradient-descent iterative algorithm is employed to enhance computation speed and estimation accuracy. The main results show that the resulting parameter estimator enjoys asymptotic normality with a <span><math><msqrt><mrow><mi>N</mi><mi>T</mi></mrow></msqrt></math></span> convergence rate and the nonparametric estimator is asymptotically normal with a nonparametric convergence rate <span><math><msqrt><mrow><mi>N</mi><mi>T</mi><mi>h</mi></mrow></msqrt></math></span> when both the number of cross-sectional units <span><math><mi>N</mi></math></span> and the length of time series <span><math><mi>T</mi></math></span> go to infinity, under some regularity conditions. Monte Carlo simulations are carried out to evaluate the proposed methods, and an application to cigarette demand is investigated for illustration.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106244"},"PeriodicalIF":0.8,"publicationDate":"2024-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142416590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-01 | DOI: 10.1016/j.jspi.2024.106243
David J. Hessen
In this paper, a family of maximum-entropy distributions with general discrete support is derived. Members of the family are distinguished by the number of specified non-central moments. In addition, a subfamily of discrete symmetric distributions is defined. Attention is paid to maximum likelihood estimation of the parameters of any member of the general family. It is shown that the parameters of any special case with infinite support can be estimated using a conditional distribution given a finite subset of the total support. In an empirical data example, the procedures proposed are demonstrated.
{"title":"A family of discrete maximum-entropy distributions","authors":"David J. Hessen","doi":"10.1016/j.jspi.2024.106243","DOIUrl":"10.1016/j.jspi.2024.106243","url":null,"abstract":"<div><div>In this paper, a family of maximum-entropy distributions with general discrete support is derived. Members of the family are distinguished by the number of specified non-central moments. In addition, a subfamily of discrete symmetric distributions is defined. Attention is paid to maximum likelihood estimation of the parameters of any member of the general family. It is shown that the parameters of any special case with infinite support can be estimated using a conditional distribution given a finite subset of the total support. In an empirical data example, the procedures proposed are demonstrated.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106243"},"PeriodicalIF":0.8,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142416588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-29 | DOI: 10.1016/j.jspi.2024.106241
Ejub Talovic, Yves Tillé
For both experimental and sampling designs, the efficiency or balance of designs has been extensively studied. There are many ways to incorporate auxiliary information into designs. However, when we use balanced designs to decrease the variance due to an auxiliary variable, the variance may increase due to an effect which we define as lack of robustness. This robustness can be written as the largest eigenvalue of the variance operator of a sampling or experimental design. If this eigenvalue is large, then it might induce a large variance in the Horvitz–Thompson estimator of the total. We calculate or estimate the largest eigenvalue of the most common designs. We determine lower and upper bounds and approximations of this eigenvalue for different designs. We then compare these results with simulations that show the trade-off between efficiency and robustness. These results can be used to determine the proper choice of designs for experiments such as clinical trials or surveys. We also propose a new and simple method for mixing two sampling designs, which allows the use of a tuning parameter between the two designs. This method is then compared to the Gram–Schmidt walk design, which also governs the trade-off between robustness and efficiency. A set of simulation studies shows that our method of mixture gives results similar to the Gram–Schmidt walk design while having an interpretable variance matrix.
{"title":"Risk minimization using robust experimental or sampling designs and mixture of designs","authors":"Ejub Talovic, Yves Tillé","doi":"10.1016/j.jspi.2024.106241","DOIUrl":"10.1016/j.jspi.2024.106241","url":null,"abstract":"<div><div>For both experimental and sampling designs, the efficiency or balance of designs has been extensively studied. There are many ways to incorporate auxiliary information into designs. However, when we use balanced designs to decrease the variance due to an auxiliary variable, the variance may increase due to an effect which we define as lack of robustness. This robustness can be written as the largest eigenvalue of the variance operator of a sampling or experimental design. If this eigenvalue is large, then it might induce a large variance in the Horvitz–Thompson estimator of the total. We calculate or estimate the largest eigenvalue of the most common designs. We determine lower, upper bounds and approximations of this eigenvalue for different designs. Then, we compare these results with simulations that show the trade-off between efficiency and robustness. Those results can be used to determine the proper choice of designs for experiments such as clinical trials or surveys. We also propose a new and simple method for mixing two sampling designs, which allows to use a tuning parameter between two sampling designs. This method is then compared to the Gram–Schmidt walk design, which also governs the trade-off between robustness and efficiency. A set of simulation studies shows that our method of mixture gives similar results to the Gram–Schmidt walk design while having an interpretable variance matrix.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106241"},"PeriodicalIF":0.8,"publicationDate":"2024-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142416589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-27 | DOI: 10.1016/j.jspi.2024.106242
Zhaohui Yan, Shengli Zhao
In this paper, we explore the minimum aberration criterion for s-level designs under baseline parameterization, called BP-MA. We give a complete search method and an incomplete search method to obtain the BP-MA (or nearly BP-MA) designs. The methodology has no restriction on s, the number of levels of the factors. Catalogues of (nearly) BP-MA designs with s = 2, 3, 4, 5 levels are provided.
{"title":"Optimal s-level fractional factorial designs under baseline parameterization","authors":"Zhaohui Yan, Shengli Zhao","doi":"10.1016/j.jspi.2024.106242","DOIUrl":"10.1016/j.jspi.2024.106242","url":null,"abstract":"<div><div>In this paper, we explore the minimum aberration criterion for <span><math><mi>s</mi></math></span>-level designs under baseline parameterization, called BP-MA. We give a complete search method and an incomplete search method to obtain the BP-MA (or nearly BP-MA) designs. The methodology has no restriction on <span><math><mi>s</mi></math></span>, the levels of the factors. The catalogues of (nearly) BP-MA designs with <span><math><mrow><mi>s</mi><mo>=</mo><mn>2</mn><mo>,</mo><mn>3</mn><mo>,</mo><mn>4</mn><mo>,</mo><mn>5</mn></mrow></math></span> levels are provided.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106242"},"PeriodicalIF":0.8,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142357419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}