
Journal of Statistical Planning and Inference: Latest Publications

Oracle-efficient estimation and global inferences for variance function of functional data
IF 0.8 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-07-04 · DOI: 10.1016/j.jspi.2024.106210
Li Cai, Suojin Wang

A new two-step reconstruction-based moment estimator and an asymptotically correct smooth simultaneous confidence band are proposed as a global inference tool for the heteroscedastic variance function of dense functional data. Step one applies spline smoothing to reconstruct each individual trajectory, and step two applies kernel regression to the individual squared residuals to estimate the variability of each trajectory. An estimator for the variance function of the functional data is then constructed by the method of moments. The procedure is innovative in synthesizing spline smoothing and kernel regression: it exploits the fast computation of spline regression while retaining the flexible local estimation and the extreme value theory of kernel smoothing. The resulting variance function estimator is shown to be oracle-efficient in the sense that it is uniformly as efficient as the ideal estimator that would be available if all trajectories were known by an "oracle". As a result, an asymptotically correct simultaneous confidence band for the variance function is established. Simulation results support the asymptotic theory and confirm fast computation. As an illustration, the proposed method is applied to two real data sets, leading to a number of discoveries.
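As a rough illustration of the two-step idea described in the abstract (not the authors' exact estimator), the sketch below spline-smooths each simulated trajectory, forms squared residuals, and kernel-smooths the pooled residuals over time to estimate the variance function; the data-generating process, smoothing level and bandwidth are all illustrative assumptions, and the paper's method applies the kernel step per trajectory before taking moments.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
n_subj, n_obs = 50, 100
t = np.linspace(0, 1, n_obs)                       # common dense time grid
sigma = lambda s: 0.2 + 0.3 * s                    # assumed heteroscedastic error SD (demo only)
Y = np.array([np.sin(2 * np.pi * t) + rng.normal(0, 1) * np.cos(np.pi * t)
              + sigma(t) * rng.standard_normal(n_obs) for _ in range(n_subj)])

# Step 1: spline-smooth each trajectory to reconstruct it.
resid_sq = np.empty_like(Y)
for i in range(n_subj):
    fit = UnivariateSpline(t, Y[i], s=n_obs * 0.05)   # smoothing level is an arbitrary choice
    resid_sq[i] = (Y[i] - fit(t)) ** 2                # squared residuals carry the variance signal

# Step 2: Nadaraya-Watson kernel regression of the pooled squared residuals on time.
def kernel_variance(t_grid, t_obs, r2, h=0.05):
    K = np.exp(-0.5 * ((t_grid[:, None] - t_obs[None, :]) / h) ** 2)
    return (K * r2[None, :]).sum(axis=1) / K.sum(axis=1)

var_hat = kernel_variance(t, np.tile(t, n_subj), resid_sq.ravel())
print(np.c_[t[::25], var_hat[::25], sigma(t[::25]) ** 2])   # estimate vs. truth at a few points
```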

Citations: 0
Column expanded Latin hypercube designs
IF 0.8 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-06-27 · DOI: 10.1016/j.jspi.2024.106208
Qiao Wei, Jian-Feng Yang, Min-Qian Liu

Maximin distance designs and orthogonal designs are extensively applied in computer experiments, but the construction of such designs is challenging, especially under the maximin distance criterion. In this paper, by adding columns to a fold-over optimal maximin L2-distance Latin hypercube design (LHD), we construct a class of LHDs, called column expanded LHDs, which are nearly optimal under both the maximin L2-distance and orthogonality criteria. The advantage of the proposed method is that the resulting designs have flexible numbers of factors without computer search. Detailed comparisons with existing LHDs show that the constructed LHDs have larger minimum distances between design points and smaller correlation coefficients between distinct columns.
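Both criteria named in the abstract are straightforward to evaluate for any candidate design. The sketch below is only an illustration of those criteria (it does not reproduce the paper's column-expansion construction): it builds a random LHD and its fold-over and reports the minimum inter-point L2 distance and the largest absolute correlation between distinct columns.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

def random_lhd(n, k):
    """Random Latin hypercube design with levels 0..n-1 in each column."""
    return np.array([rng.permutation(n) for _ in range(k)]).T

def criteria(D):
    d_min = pdist(D, metric="euclidean").min()          # maximin L2-distance criterion
    C = np.corrcoef(D, rowvar=False)
    rho_max = np.abs(C - np.eye(D.shape[1])).max()      # orthogonality criterion
    return d_min, rho_max

D = random_lhd(16, 4)
D_fold = np.vstack([D, (16 - 1) - D])                   # fold-over: reflect the levels 0..n-1
print("LHD:       min dist %.3f, max |corr| %.3f" % criteria(D))
print("fold-over: min dist %.3f, max |corr| %.3f" % criteria(D_fold))
```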

Citations: 0
The impact of misclassification on covariate-adaptive randomized clinical trials with generalized linear models
IF 0.8 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-06-27 · DOI: 10.1016/j.jspi.2024.106209
Tong Wang, Wei Ma

Covariate-adaptive randomization (CAR) is a randomization method that uses covariate information to enhance the comparability of treatment groups. Under such randomization, the covariate is usually well balanced, i.e., the imbalance between the treatment and placebo groups is controlled. In practice, however, the covariate is sometimes misclassified, and this misclassification affects both the randomization itself and the statistical inferences that follow it. In this paper, we examine the impact of covariate misclassification on CAR from two aspects. First, we study the balancing properties of CAR with unequal allocation in the presence of covariate misclassification; we derive the convergence rate of the imbalance and compare it with the rate under the true covariate. Second, we study hypothesis testing under CAR with misclassified covariates in a generalized linear model (GLM) framework, considering both unadjusted and adjusted models. To illustrate the theoretical results, we discuss the validity of the test procedures for three commonly used GLMs: logistic regression, Poisson regression and the exponential model. In particular, we show that the adjusted model is often invalid when the misclassified covariates are adjusted for. In this case, we provide a simple correction for the inflated Type-I error. The correction is useful and easy to implement because it requires neither specification of the misclassification mechanism nor estimation of the misclassification rate. Our study enriches the literature on the impact of covariate misclassification on CAR and provides a practical approach for handling misclassification.
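To make the first question (balance under misclassification) concrete, the toy simulation below randomizes within strata defined by a misclassified binary covariate using simple stratified permuted-block randomization, which is only a stand-in for the CAR procedures analysed in the paper, and then reports the treatment imbalance within the true strata; the sample size and misclassification probability are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p_mis = 400, 0.15                        # sample size and misclassification probability (assumed)

true_z = rng.integers(0, 2, n)              # true binary covariate
obs_z = np.where(rng.random(n) < p_mis, 1 - true_z, true_z)   # observed, possibly flipped

# Stratified permuted-block randomization (block size 4) on the *observed* covariate.
trt = np.empty(n, dtype=int)
for z in (0, 1):
    idx = np.flatnonzero(obs_z == z)
    blocks = [rng.permutation([1, 1, 0, 0]) for _ in range(int(np.ceil(len(idx) / 4)))]
    trt[idx] = np.concatenate(blocks)[:len(idx)]

# Imbalance (treated minus control counts) within the *true* strata.
for z in (0, 1):
    mask = true_z == z
    imb = trt[mask].sum() - (1 - trt[mask]).sum()
    print(f"true stratum {z}: n = {mask.sum()}, imbalance = {imb}")
```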

Citations: 0
A zero-estimator approach for estimating the signal level in a high-dimensional model-free setting
IF 0.8 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-06-22 · DOI: 10.1016/j.jspi.2024.106207
Ilan Livne, David Azriel, Yair Goldberg

We study a high-dimensional regression setting under the assumption of a known covariate distribution. We aim to estimate the amount of variation in the response that is explained by the best linear function of the covariates (the signal level). In our setting, we assume neither sparsity of the coefficient vector, nor normality of the covariates, nor linearity of the conditional expectation. We present an unbiased and consistent estimator and then improve it by a zero-estimator approach, where a zero-estimator is a statistic whose expected value is zero. More generally, we present an algorithm based on the zero-estimator approach that in principle can improve any given estimator. We study some asymptotic properties of the proposed estimators and demonstrate their finite-sample performance in a simulation study.
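The zero-estimator idea works like a control variate: if Z has expectation zero (here because the covariate distribution is known), then subtracting c times Z from an estimator leaves it unbiased for every c, and c can be chosen to minimize variance. The numpy sketch below demonstrates only this adjustment step with an assumed toy estimator and zero-estimator, not the paper's signal-level estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

def adjust_with_zero_estimator(theta_reps, z_reps):
    """Given replicates of an estimator and of a mean-zero statistic Z, return
    replicates of the adjusted estimator theta - c * Z, where
    c = Cov(theta, Z) / Var(Z) is the variance-minimizing coefficient."""
    cv = np.cov(theta_reps, z_reps)
    c = cv[0, 1] / cv[1, 1]
    return theta_reps - c * z_reps

# Toy replication study: X ~ N(0, 1) with known distribution, Y = 2 X + noise.
reps_theta, reps_z = [], []
for _ in range(2000):
    x = rng.standard_normal(200)
    y = 2 * x + rng.standard_normal(200)
    reps_theta.append(np.mean(x * y))       # simple estimator of E[XY] (illustrative target)
    reps_z.append(np.mean(x ** 2 - 1))      # zero-estimator: E[X^2] = 1 is known exactly
reps_theta, reps_z = np.array(reps_theta), np.array(reps_z)

adj = adjust_with_zero_estimator(reps_theta, reps_z)
print("variance before:", reps_theta.var(), " after:", adj.var())
```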

Citations: 0
Layer sparsity in neural networks
IF 0.9 · CAS Tier 4 (Mathematics) · Q2 Mathematics · Pub Date: 2024-06-09 · DOI: 10.1016/j.jspi.2024.106195
Mohamed Hebiri, Johannes Lederer, Mahsa Taheri

Sparsity has become popular in machine learning because it can save computational resources, facilitate interpretations, and prevent overfitting. This paper discusses sparsity in the framework of neural networks. In particular, we formulate a new notion of sparsity, called layer sparsity, that concerns the networks’ layers and, therefore, aligns particularly well with the current trend toward deep networks. We then introduce corresponding regularization and refitting schemes that can complement standard deep-learning pipelines to generate more compact and accurate networks.
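As a rough sketch of how a layer-wise sparsity penalty can be attached to a training objective (the regularizer actually proposed in the paper may differ), the function below adds a group-type penalty, the sum of the layers' Frobenius norms, so that entire layers can be driven toward zero.

```python
import numpy as np

def layer_sparse_objective(data_loss, weights, lam=1e-2):
    """data_loss: scalar empirical loss already computed for the network.
    weights: list of weight matrices, one per layer.
    Adds an unsquared group penalty per layer, which encourages whole
    layers to shrink to zero, i.e. layer sparsity."""
    penalty = sum(np.linalg.norm(W, ord="fro") for W in weights)
    return data_loss + lam * penalty

# Example with two random layers (illustrative shapes and loss value).
rng = np.random.default_rng(4)
W1, W2 = rng.standard_normal((8, 4)), rng.standard_normal((4, 1))
print(layer_sparse_objective(data_loss=0.37, weights=[W1, W2]))
```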

Citations: 0
High dimensional discriminant rules with shrinkage estimators of the covariance matrix and mean vector
IF 0.9 · CAS Tier 4 (Mathematics) · Q2 Mathematics · Pub Date: 2024-06-08 · DOI: 10.1016/j.jspi.2024.106199
Jaehoan Kim, Junyong Park, Hoyoung Park

Linear discriminant analysis (LDA) is a typical method for classification problems with large dimensions and small samples. There are various types of LDA methods that are based on different types of estimators for the covariance matrices and mean vectors. In this paper, we consider shrinkage methods based on a non-parametric approach. For the precision matrix, methods based on the sparsity structure or data splitting are examined. Regarding the estimation of mean vectors, Non-parametric Empirical Bayes (NPEB) methods and Non-parametric Maximum Likelihood Estimation (NPMLE) methods, also known as f-modeling and g-modeling, respectively, are adopted. The performance of linear discriminant rules based on combined estimation strategies for the covariance matrix and mean vectors is analyzed in this study. In particular, the study presents a theoretical result on the performance of the NPEB method and compares it with previous studies. Simulation studies with various covariance matrices and mean vector structures are conducted to evaluate the methods discussed in this paper. Furthermore, real data examples such as gene expressions and EEG data are also presented.
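A bare-bones shrinkage discriminant rule is sketched below: the pooled covariance is shrunk toward a scaled identity and the class means toward the overall mean, with fixed shrinkage weights. This is only a simple stand-in for the NPEB/NPMLE mean estimators and the precision-matrix estimators studied in the paper.

```python
import numpy as np

def shrinkage_lda_fit(X, y, alpha=0.5, beta=0.5):
    """X: (n, p) data, y: binary labels in {0, 1}.
    alpha: shrinkage of the pooled covariance toward a scaled identity.
    beta: shrinkage of each class mean toward the overall mean.
    Both weights are fixed here for simplicity; data-driven choices are standard."""
    p = X.shape[1]
    mu_all = X.mean(axis=0)
    mus, priors, resid = [], [], []
    for k in (0, 1):
        Xk = X[y == k]
        mus.append(beta * mu_all + (1 - beta) * Xk.mean(axis=0))
        priors.append(len(Xk) / len(X))
        resid.append(Xk - Xk.mean(axis=0))
    S = np.cov(np.vstack(resid), rowvar=False)
    Sigma = (1 - alpha) * S + alpha * (np.trace(S) / p) * np.eye(p)
    return mus, priors, np.linalg.inv(Sigma)

def shrinkage_lda_predict(x, mus, priors, Sigma_inv):
    scores = [x @ Sigma_inv @ m - 0.5 * m @ Sigma_inv @ m + np.log(pi)
              for m, pi in zip(mus, priors)]
    return int(np.argmax(scores))

rng = np.random.default_rng(5)
X = np.vstack([rng.standard_normal((30, 50)), rng.standard_normal((30, 50)) + 0.3])
y = np.r_[np.zeros(30, int), np.ones(30, int)]
model = shrinkage_lda_fit(X, y)
print(shrinkage_lda_predict(X[0], *model))
```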

Citations: 0
Mixed-integer linear programming for computing optimal experimental designs
IF 0.9 · CAS Tier 4 (Mathematics) · Q2 Mathematics · Pub Date: 2024-06-06 · DOI: 10.1016/j.jspi.2024.106200
Radoslav Harman, Samuel Rosa

The problem of computing an exact experimental design that is optimal for the least-squares estimation of the parameters of a regression model is considered. We show that this problem can be solved via mixed-integer linear programming (MILP) for a wide class of optimality criteria, including the criteria of A-, I-, G- and MV-optimality. This approach improves upon the current state-of-the-art mathematical programming formulation, which uses mixed-integer second-order cone programming. The key idea underlying the MILP formulation is McCormick relaxation, which critically depends on finite interval bounds for the elements of the covariance matrix of the least-squares estimator corresponding to an optimal exact design. We provide both analytic and algorithmic methods for constructing these bounds. We also demonstrate the unique advantages of the MILP approach, such as the possibility of incorporating multiple design constraints into the optimization problem, including constraints on the variances and covariances of the least-squares estimator.
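The criterion being optimized is easy to state even without the MILP machinery. The brute-force sketch below is a naive baseline, not the authors' MILP formulation: it enumerates exact designs of size N over a small candidate grid for an assumed quadratic regression model and picks the design minimizing the A-criterion, the trace of the inverse information matrix.

```python
import numpy as np
from itertools import combinations_with_replacement

def a_criterion(X):
    """A-optimality criterion: trace of the inverse information matrix."""
    M = X.T @ X
    if np.linalg.matrix_rank(M) < M.shape[0]:
        return np.inf                      # singular designs are inadmissible
    return np.trace(np.linalg.inv(M))

# Candidate design points on [-1, 1] and a quadratic regression model (illustrative).
grid = np.linspace(-1, 1, 5)
model = lambda x: np.array([1.0, x, x ** 2])
N = 6                                      # exact design size

best = (np.inf, None)
for pts in combinations_with_replacement(grid, N):   # exact designs are multisets of size N
    X = np.array([model(x) for x in pts])
    val = a_criterion(X)
    if val < best[0]:
        best = (val, pts)

print("A-criterion %.4f at design points %s" % (best[0], best[1]))
```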

Citations: 0
Some results for stochastic orders and aging properties related to the Laplace transform
IF 0.9 · CAS Tier 4 (Mathematics) · Q2 Mathematics · Pub Date: 2024-06-05 · DOI: 10.1016/j.jspi.2024.106197
Lazaros Kanellopoulos, Konstadinos Politis

We study some properties and relations for stochastic orders and aging classes related to the Laplace transform. In particular, we show that the NBU_Lt class of distributions is closed under convolution. We also obtain results for the ratio of the derivatives of the Laplace transforms of two distributions.
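For reference, the Laplace transform order underlying the NBU_Lt class is commonly stated as follows (the paper's exact formulation of the aging class may differ in detail):

\[
X \le_{\mathrm{Lt}} Y \iff E\left[e^{-sX}\right] \ge E\left[e^{-sY}\right] \quad \text{for all } s > 0,
\]

and a nonnegative random variable $X$ is said to belong to the NBU_Lt class when its residual life $X_t = (X - t \mid X > t)$ satisfies $X_t \le_{\mathrm{Lt}} X$ for every $t \ge 0$.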

Citations: 0
Statistical theory for image classification using deep convolutional neural network with cross-entropy loss under the hierarchical max-pooling model
IF 0.9 · CAS Tier 4 (Mathematics) · Q2 Mathematics · Pub Date: 2024-06-05 · DOI: 10.1016/j.jspi.2024.106188
Michael Kohler, Sophie Langer

Convolutional neural networks (CNNs) trained with cross-entropy loss have proven extremely successful in classifying images. In recent years, much work has been done to improve the theoretical understanding of neural networks. Nevertheless, this understanding remains limited when the networks are trained with cross-entropy loss, mainly because of the unboundedness of the target function. In this paper, we aim to fill this gap by analysing the rate of the excess risk of a CNN classifier trained with cross-entropy loss. Under suitable assumptions on the smoothness and structure of the a posteriori probability, we show that these classifiers achieve a rate of convergence that is independent of the dimension of the image. These rates are in line with practical observations about CNNs.
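For orientation, the cross-entropy objective and the excess risk of the resulting plug-in classifier can be written as follows (standard definitions; the paper's precise setting, e.g. the hierarchical max-pooling structure, is not reflected here):

\[
\hat f = \arg\min_{f} \; -\frac{1}{n}\sum_{i=1}^{n} \log f_{y_i}(x_i),
\qquad
\mathcal{E}(\hat C) = P\left(Y \ne \hat C(X)\right) - P\left(Y \ne C^{*}(X)\right),
\]

where $f_k(x)$ is the network's estimated probability of class $k$, $\hat C(x) = \arg\max_k f_k(x)$ is the plug-in classifier and $C^{*}$ is the Bayes classifier.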

Citations: 0
Construction on large four-level designs via quaternary codes
IF 0.9 · CAS Tier 4 (Mathematics) · Q2 Mathematics · Pub Date: 2024-06-05 · DOI: 10.1016/j.jspi.2024.106198
Xiangyu Fang, Hongyi Li, Zujun Ou

In this paper, two simple and effective construction methods are proposed for constructing large four-level designs via quaternary codes from small two-level initial designs. Under popular criteria for selecting optimal designs, such as generalized minimum aberration, minimum moment aberration and uniformity measured by the average Lee discrepancy, the close relationships between a constructed four-level design and its initial design are investigated, which provide guidance for choosing a suitable initial design. Moreover, lower bounds on the average Lee discrepancy of the constructed four-level designs are obtained, which can serve as a benchmark for evaluating their uniformity. Numerical examples show that large four-level designs can be constructed with high efficiency.
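A standard bridge between two-level and four-level factors in quaternary-code constructions is the Gray map, which merges two two-level columns into one four-level column. The sketch below shows only this generic pairing step, not the specific construction or the Lee-discrepancy bounds of the paper.

```python
import numpy as np

# Gray map between quaternary symbols and pairs of binary symbols:
# 0 <-> (0,0), 1 <-> (0,1), 2 <-> (1,1), 3 <-> (1,0).
GRAY_INVERSE = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def pair_to_four_level(D2):
    """D2: (n, 2k) array with entries in {0, 1}; consecutive column pairs
    are merged into k four-level columns via the inverse Gray map."""
    n, m = D2.shape
    assert m % 2 == 0, "need an even number of two-level columns"
    out = np.empty((n, m // 2), dtype=int)
    for j in range(m // 2):
        for i in range(n):
            out[i, j] = GRAY_INVERSE[(D2[i, 2 * j], D2[i, 2 * j + 1])]
    return out

# Tiny example: a 4-run, 4-column two-level design.
D2 = np.array([[0, 0, 0, 1],
               [0, 1, 1, 0],
               [1, 0, 1, 1],
               [1, 1, 0, 0]])
print(pair_to_four_level(D2))
```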

Citations: 0