Pub Date : 2024-04-20 DOI: 10.1007/s00362-024-01552-2
Zhou Yu, Niloufar Dousti Mousavi, Jie Yang
Negative binomial related distributions have been widely used in practice. The calculation of the corresponding Fisher information matrices involves the expectation of trigamma function values, which can only be computed numerically and approximately. In this paper, we propose a trigamma-free approach to approximate the expectations involving the trigamma function, along with theoretical upper bounds for the approximation errors. We show by numerical studies that our approach is highly efficient and much more accurate than previous methods. We also apply our approach to compute the Fisher information matrices of zero-inflated negative binomial (ZINB) and zero-inflated beta negative binomial (ZIBNB) probabilistic models, as well as ZIBNB regression models.
Title: A trigamma-free approach for computing information matrices related to trigamma function (Statistical Papers)
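As a hedged illustration of the quantity at stake (not the paper's trigamma-free construction, which is not reproduced here), the expectation E[ψ′(r+Y)] for Y ~ NB(r, p) can be approximated by summing the series against the pmf and cross-checked by Monte Carlo; the parameter values and tolerance below are arbitrary choices:

```python
import numpy as np
from scipy.special import polygamma
from scipy.stats import nbinom

def expected_trigamma(r, p, tol=1e-12, max_y=100_000):
    """Approximate E[psi'(r + Y)] for Y ~ NB(r, p) by summing the series
    against the pmf until the remaining tail mass drops below tol."""
    total, mass = 0.0, 0.0
    for y in range(max_y):
        pm = nbinom.pmf(y, r, p)
        total += pm * polygamma(1, r + y)
        mass += pm
        if 1.0 - mass < tol:
            break
    return total

r, p = 3.0, 0.4
series = expected_trigamma(r, p)

# Monte Carlo cross-check of the same expectation
rng = np.random.default_rng(0)
ys = nbinom.rvs(r, p, size=200_000, random_state=rng)
mc = polygamma(1, r + ys).mean()
```

Since ψ′ is decreasing and Y ≥ 0, the truncated value must lie strictly below ψ′(r), which gives a quick sanity check on the approximation.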
Pub Date : 2024-04-17 DOI: 10.1007/s00362-024-01554-0
Asma Ben Saber, Abderrazek Karoui
In this work, we develop two stable estimators for solving linear functional regression (LFR) problems. It is well known that such a problem is an ill-posed stochastic inverse problem; hence, special attention must be devoted to the stability of any estimator designed to solve it. Our proposed estimators are based on combining a stable least-squares technique with a random projection of the slope function \(\beta_0(\cdot)\in L^2(J)\), where J is a compact interval. Moreover, these estimators have the advantage of a fairly good convergence rate with reasonable computational load, since the involved random projections are generally performed over a fairly small-dimensional subspace of \(L^2(J)\). More precisely, the first estimator is given as a least-squares solution of a regularized minimization problem over a finite-dimensional subspace of \(L^2(J)\). In particular, we give an upper bound for the empirical risk error as well as the convergence rate of this estimator. The second proposed stable LFR estimator is based on combining the least-squares technique with a dyadic decomposition of the i.i.d. samples of the stochastic process associated with the LFR model. In particular, we provide an \(L^2\)-risk error bound for this second LFR estimator. Finally, we provide numerical simulations on synthetic as well as real data that illustrate the results of this work. These results indicate that our proposed estimators are competitive with some existing and popular LFR estimators.
Title: On some stable linear functional regression estimators based on random projections
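A minimal sketch of the general idea, under assumptions of my own (a plain Gaussian random projection and a ridge-type regularizer, which need not match the paper's construction): discretize the functional covariates on a grid, project onto a small random subspace, and solve a regularized least-squares problem there.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 300, 100, 8            # samples, grid points on J=[0,1], projection dim
t = np.linspace(0.0, 1.0, m)
beta0 = np.sin(2.0 * np.pi * t)  # hypothetical smooth slope function

# rough (Brownian-like) functional covariates and scalar responses
X = rng.standard_normal((n, m)).cumsum(axis=1) / np.sqrt(m)
y = X @ beta0 / m + 0.1 * rng.standard_normal(n)   # Riemann sum of the integral + noise

# random projection of the slope onto a small subspace of L^2(J),
# then a regularized least-squares fit within that subspace
B = rng.standard_normal((m, d))          # d random "basis" functions on the grid
Z = X @ B / m                            # n x d projected design
lam = 1e-3
coef = np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ y)
beta_hat = B @ coef                      # slope estimate on the grid
fit_mse = np.mean((y - Z @ coef) ** 2)
```

Because the fit is computed in a d-dimensional subspace rather than the full m-dimensional grid, the linear solve costs O(d³) instead of O(m³), which is the computational advantage the abstract alludes to.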
Pub Date : 2024-04-17 DOI: 10.1007/s00362-024-01549-x
Andrea Ongaro, Sonia Migliorati, Roberto Ascari, Enrico Ripamonti
Traditionally, common testing problems are formalized in terms of a precise null hypothesis representing an idealized situation such as the absence of a certain “treatment effect”. However, in most applications the real purpose of the analysis is to assess evidence in favor of a practically relevant effect, rather than simply determining its presence or absence. This discrepancy leads to erroneous inferential conclusions, especially in the case of moderate or large sample sizes. In particular, statistical significance, as commonly evaluated on the basis of a low p value under a precise hypothesis, bears little or no information on practical significance. This paper presents an innovative approach to the problem of testing the practical relevance of effects. It relies on a general method for modifying standard tests so that they can handle appropriate interval null hypotheses containing all practically irrelevant effect sizes. In addition, when it is difficult to specify exactly which effect sizes are irrelevant, we provide the researcher with a benchmark value: acceptance or rejection can be established purely by deciding on the (ir)relevance of this value. We illustrate our proposal in the context of many important testing setups, and we apply the proposed methods to two case studies in clinical medicine. First, we consider data on the evaluation of systolic blood pressure in a sample of adult participants at risk for nutritional deficit. Second, we focus on a study of the effects of remdesivir on patients hospitalized with COVID-19.
Title: Testing practical relevance of treatment effects
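To make the interval-null idea concrete, here is a generic sketch (not the paper's modified-test construction): test H0: |μ| ≤ δ against H1: |μ| > δ with a conservative z-test, where δ encodes the largest practically irrelevant effect size; the δ and simulated effect sizes below are arbitrary.

```python
import numpy as np
from math import erf, sqrt

def norm_sf(z):
    """Standard normal survival function."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def relevance_z_test(x, delta, alpha=0.05):
    """Conservative z-test of the interval null H0: |mu| <= delta
    against H1: |mu| > delta; the worst case under H0 is |mu| = delta."""
    n = len(x)
    xbar = float(np.mean(x))
    se = float(np.std(x, ddof=1)) / sqrt(n)
    z = (abs(xbar) - delta) / se
    pval = norm_sf(z)
    return xbar, pval, pval < alpha

rng = np.random.default_rng(2)
x_small = rng.normal(0.05, 1.0, size=5000)   # tiny, practically irrelevant effect
x_large = rng.normal(1.00, 1.0, size=5000)   # clearly relevant effect
_, p_small, rej_small = relevance_z_test(x_small, delta=0.2)
_, p_large, rej_large = relevance_z_test(x_large, delta=0.2)
```

Note that a precise-null z-test would reject for `x_small` at this sample size even though the effect is negligible, whereas the interval null does not; this is exactly the discrepancy the abstract describes.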
Pub Date : 2024-04-16 DOI: 10.1007/s00362-023-01505-1
Guochang Wang, Zengyao Wen, Shanming Jia, Shanshan Liang
The functional time series model has been the subject of much research in recent years, and since functional data are infinite dimensional, dimension reduction is essential for functional time series. However, the majority of the existing dimension reduction methods, such as functional principal components and fixed basis expansions, are unsupervised and typically result in information loss. The functional time series model therefore has an urgent need for a supervised dimension reduction method. Functional sufficient dimension reduction is a supervised technique that adequately exploits the regression structure information, resulting in minimal information loss. Functional sliced inverse regression (FSIR) is the most popular functional sufficient dimension reduction method, but it cannot be applied directly to functional time series models. In this paper, we examine a functional time series model in which the response is a scalar time series and the explanatory variable is a functional time series. We propose a novel supervised dimension reduction technique for this regression model by combining FSIR with blind source separation methods. Furthermore, we propose innovative strategies for selecting the dimensionality of the dimension reduction space and the lags of the functional time series. Numerical studies, including simulation studies and a real data analysis, show the effectiveness of the proposed methods.
Title: Supervised dimension reduction for functional time series
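For readers unfamiliar with sliced inverse regression, the multivariate version underlying FSIR can be sketched in a few lines (this is plain SIR on vector data, not the functional or time-series extension proposed in the paper): slice on the response, average the whitened predictors within slices, and eigen-decompose the between-slice covariance of those means.

```python
import numpy as np

def sir(X, y, n_slices=10, n_components=1):
    """Basic sliced inverse regression for vector predictors."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(Xc.T @ Xc / n)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T     # whitening matrix
    Z = Xc @ W
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        mz = Z[idx].mean(axis=0)                     # slice mean of whitened X
        M += (len(idx) / n) * np.outer(mz, mz)       # between-slice covariance
    _, vecs = np.linalg.eigh(M)
    return W @ vecs[:, ::-1][:, :n_components]       # directions on the X scale

rng = np.random.default_rng(3)
n, p = 2000, 10
X = rng.standard_normal((n, p))
y = X[:, 0] ** 3 + 0.1 * rng.standard_normal(n)      # single-index truth: e_1
d = sir(X, y)[:, 0]
d /= np.linalg.norm(d)
```

With a monotone link as above, the leading SIR direction aligns closely with the true index vector e₁ even though the link is never modeled.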
Pub Date : 2024-04-16 DOI: 10.1007/s00362-024-01553-1
Sima Sharghi, Kevin Stoll, Wei Ning
In this paper, we advance the application of empirical likelihood (EL) to missing response problems. Inspired by remedies for the shortcomings of EL in parameter hypothesis testing, we modify the EL approach used for statistical inference on the mean response when the response is subject to missingness. We propose consistent mean estimators and associated confidence intervals. We extend the approach to estimate the average treatment effect in causal inference settings. We detail the analogous estimators for the average treatment effect, prove their consistency, and illustrate their use in estimating the average effect of smoking on the renal function of patients with atherosclerotic renal-artery stenosis and elevated blood pressure, chronic kidney disease, or both. Our proposed estimators outperform the historical mean estimators under missing responses and causal inference settings in terms of simulated relative RMSE and average coverage probability.
Title: Statistical inferences for missing response problems based on modified empirical likelihood
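As background, the plain (unmodified) empirical likelihood ratio for a mean can be computed by solving for a single Lagrange multiplier; the sketch below is Owen-style EL, not the modified version studied in the paper, and the simulated data are arbitrary.

```python
import numpy as np

def el_log_ratio(x, mu, tol=1e-10, max_iter=200):
    """-2 log empirical likelihood ratio for the mean, solving
    sum_i (x_i - mu) / (1 + lam*(x_i - mu)) = 0 for the Lagrange
    multiplier lam by a safeguarded Newton iteration."""
    z = np.asarray(x, dtype=float) - mu
    lam = 0.0
    for _ in range(max_iter):
        denom = 1.0 + lam * z
        g = np.sum(z / denom)                 # estimating equation in lam
        if abs(g) < tol:
            break
        gp = -np.sum(z ** 2 / denom ** 2)
        step = g / gp
        new_lam = lam - step
        # halve the step until every implied weight stays positive
        while np.any(1.0 + new_lam * z <= 0.0):
            step /= 2.0
            new_lam = lam - step
        lam = new_lam
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(4)
x = rng.normal(1.0, 2.0, size=200)
stat_true = el_log_ratio(x, 1.0)   # approx. chi^2_1 under the true mean
stat_far = el_log_ratio(x, 3.0)    # large when mu is far from the truth
```

Calibrating `stat_true` against a χ²₁ quantile yields the usual EL confidence interval for the mean; the paper's modification targets the known undercoverage of this plain version.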
Pub Date : 2024-04-13 DOI: 10.1007/s00362-024-01546-0
Hyung Park, Thaddeus Tarpey, Eva Petkova, R. Todd Ogden
This paper explores a methodology for dimension reduction in regression models for a treatment outcome, specifically to capture covariates’ moderating impact on the treatment-outcome association. The motivation behind this stems from the field of precision medicine, where a comprehensive understanding of the interactions between a treatment variable and pretreatment covariates is essential for developing individualized treatment regimes (ITRs). We provide a review of sufficient dimension reduction methods suitable for capturing treatment-covariate interactions and establish connections with linear model-based approaches for the proposed model. Within the framework of single-index regression models, we introduce a sparse estimation method for a dimension reduction vector to tackle the challenges posed by high-dimensional covariate data. Our methods offer insights into dimension reduction techniques specifically for interaction analysis, by providing a semiparametric framework for approximating the minimally sufficient subspace for interactions.
Title: A high-dimensional single-index regression for interactions between treatment and covariates
Pub Date : 2024-04-06 DOI: 10.1007/s00362-024-01547-z
Juan Baz, Diego García-Zamora, Irene Díaz, Susana Montes, Luis Martínez
Estimating the mean of a population is a recurrent topic in statistics because of its multiple applications. If previous data is available, or the distribution of the deviation between the measurements and the mean is known, it is possible to perform such estimation by using L-statistics, whose optimal linear coefficients, typically referred to as weights, are derived from a minimization of the mean squared error. However, such optimal weights can only manage sample sizes equal to the one used to derive them, while in real-world scenarios this size might slightly change. Therefore, this paper proposes a method to overcome such a limitation and derive approximations of flexible-dimensional optimal weights. To do so, a parametric family of functions based on extreme value reductions and amplifications is proposed to be adjusted to the cumulative optimal weights of a given sample from a symmetric distribution. Then, the application of Yager’s method to derive weights for ordered weighted average (OWA) operators allows computing the approximate optimal weights for sample sizes close to the original one. This method is justified from the theoretical point of view by proving a convergence result regarding the cumulative weights obtained for different sample sizes. Finally, the practical performance of the theoretical results is shown for several classical symmetric distributions.
Title: Flexible-dimensional L-statistic for mean estimation of symmetric distributions
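Yager's quantifier method, which the abstract invokes, derives weights for any sample size from a single cumulative-weight function Q; a minimal sketch, using a hypothetical smoothstep quantifier in place of the paper's fitted extreme-value family:

```python
import numpy as np

def owa_weights(Q, n):
    """Yager's quantifier method: w_i = Q(i/n) - Q((i-1)/n), i = 1..n,
    so the same quantifier Q yields weights for any sample size n."""
    return np.diff(Q(np.arange(n + 1) / n))

def l_statistic(x, Q):
    """L-statistic: inner product of the ordered sample with OWA weights."""
    return np.sort(x) @ owa_weights(Q, len(x))

# a hypothetical symmetric quantifier (smoothstep) standing in for the
# fitted cumulative optimal weights of a symmetric distribution
Q = lambda p: 3.0 * p ** 2 - 2.0 * p ** 3

w5 = owa_weights(Q, 5)
w7 = owa_weights(Q, 7)          # same Q, different sample size
est = l_statistic(np.array([2.0, -1.0, 0.5, 1.5, -0.5]), Q)
```

The weights telescope to Q(1) − Q(0) = 1 for every n, and a quantifier satisfying Q(p) + Q(1−p) = 1 produces the symmetric weight profiles appropriate for symmetric distributions.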
Pub Date : 2024-04-06 DOI: 10.1007/s00362-024-01540-6
Tianqi Sun, Weiyu Li, Lu Lin
Matrix-variate generalized linear models (mvGLMs) have been investigated successfully under the framework of tensor generalized linear models, because matrix-form data can be regarded as a specific (2-dimensional) tensor. However, few works focus on matrix-form data with measurement error (ME), since a tensor in conjunction with ME is relatively complex in structure. In this paper we introduce a mvGLM to explore the influence of ME in models with matrix-form data. We calculate the asymptotic bias of the error-prone mvGLM, and then develop bias-correction methods to tackle the effect of ME. Statistical properties of all methods are established, and their practical performance is further evaluated in analyses of synthetic and real data sets.
Title: Matrix-variate generalized linear model with measurement error
Pub Date : 2024-04-05 DOI: 10.1007/s00362-024-01543-3
Dagmara Dudek, Anna Kuczmaszewska
The paper contains a comparative analysis of the efficiency of different quantile estimators for various distributions. Additionally, we show strong consistency of the different quantile estimators and study the Bahadur representation of each of them when the sample is taken from an NA, \(\varphi\)-, \(\rho^*\)-, or \(\rho\)-mixing population.
Title: Some practical and theoretical issues related to the quantile estimators
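The kind of efficiency comparison the abstract describes can be sketched with the standard sample-quantile estimators exposed through `numpy.quantile`'s `method` argument (for i.i.d. data only; the mixing-dependence setting of the paper is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)
methods = ["inverted_cdf", "linear", "median_unbiased", "normal_unbiased"]
true_q = -np.log(0.1)            # 0.9 quantile of Exp(1)

# Monte Carlo MSE of several sample-quantile estimators at n = 50
n, reps = 50, 4000
mse = {m: 0.0 for m in methods}
for _ in range(reps):
    x = rng.exponential(size=n)
    for m in methods:
        mse[m] += (np.quantile(x, 0.9, method=m) - true_q) ** 2 / reps
```

Each `method` corresponds to a different classical definition of the sample quantile, and their simulated MSEs differ noticeably at small n, which is precisely the kind of gap such a comparative study quantifies.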
Pub Date : 2024-04-03 DOI: 10.1007/s00362-024-01544-2
Vasileios Alevizakos, Arpita Chatterjee, Kashinath Chatterjee, Christos Koukouvinos
Memory-type control charts are widely used for monitoring small to moderate shifts in the process parameter(s). In the present article, we present an exponentiated exponentially weighted moving average (Exp-EWMA) control chart that weights the past observations of a process using an exponentiated function. We evaluate the run-length characteristics of the Exp-EWMA chart via Monte Carlo simulations. A comparison study against the CUSUM, EWMA and extended EWMA (EEWMA) charts under similar in-control (IC) run-length properties demonstrates that the Exp-EWMA chart is more effective for detecting small and, under certain circumstances, moderate shifts in both the zero-state and steady-state cases. Moreover, the Exp-EWMA chart has better zero-state out-of-control (OOC) performance than an EWMA chart whose smoothing parameter equals the limit at infinity of the exponentiated function, while the two charts perform similarly in the steady-state case. Finally, it is shown that the Exp-EWMA chart is more IC robust than its competitors under several non-normal distributions. Two examples are provided to explain the implementation of the proposed chart.
Title: The exponentiated exponentially weighted moving average control chart
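For contrast with the proposal, the classical EWMA chart that the Exp-EWMA generalizes can be sketched as follows (constant smoothing parameter λ; the paper's exponentiated weighting function is not reproduced, and λ, L, and the shift size below are illustrative choices):

```python
import numpy as np

def ewma_chart(x, lam=0.1, L=2.7, mu0=0.0, sigma=1.0):
    """Classical EWMA chart: Z_t = lam*X_t + (1-lam)*Z_{t-1}, flagged
    against the time-varying limits
    mu0 +- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2t)))."""
    z, out = mu0, []
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1.0 - lam) * z
        half_width = L * sigma * np.sqrt(
            lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t))
        )
        out.append((z, abs(z - mu0) > half_width))   # (statistic, signal?)
    return out

rng = np.random.default_rng(6)
ic = rng.standard_normal(50)            # in-control run
ooc = rng.standard_normal(50) + 1.0     # sustained 1-sigma mean shift
res = ewma_chart(np.concatenate([ic, ooc]))
first_alarm = next((i for i, (_, sig) in enumerate(res) if sig), None)
```

Small λ gives the chart its long memory for small shifts; the Exp-EWMA chart replaces the constant λ with an exponentiated weighting of past observations to sharpen exactly this regime.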