Pub Date: 2026-09-01 | Epub Date: 2026-02-07 | DOI: 10.1016/j.jspi.2026.106387
Rami El Haddad , Diala Wehbe , Nicolas Wicker , Matthias Hwai Yong Tan
This paper aims to construct Latin Hypercube Samples (LHSs) with enhanced repulsion between their points. Our main contribution is a novel Markov chain Monte Carlo algorithm designed to generate an LHS with a large Euclidean distance between each pair of points, thus promoting better spread. The proposed algorithm is a Metropolis–Hastings algorithm that defines a Markov chain whose stationary distribution favors samples with greater separation between points. The convergence of the algorithm is rigorously established. Moreover, numerical experiments demonstrate that the LHSs produced by our algorithm exhibit improved point distribution, leading to more uniform coverage of the sampling space compared to standard Latin Hypercube designs. In addition, the method yields a diverse collection of designs with well-spread points (reflected in small values of a scattering criterion), highlighting the value of variety among well-performing designs.
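The general idea of a Metropolis–Hastings chain over Latin Hypercube designs can be sketched in a few lines: propose a swap of two entries within one column (which preserves the Latin Hypercube structure) and accept it under a stationary density proportional to exp(beta times the minimum pairwise distance). This is an illustrative sketch under my own assumptions, not the paper's algorithm; the temperature `beta`, the swap proposal, and the minimum-distance criterion are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def min_pairwise_dist(X):
    # smallest Euclidean distance between any two design points
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return d[np.triu_indices(len(X), k=1)].min()

def mh_lhs(n=10, k=2, beta=50.0, iters=2000):
    # start from a random LHS: each column is a permutation of cell midpoints
    X = np.stack([(rng.permutation(n) + 0.5) / n for _ in range(k)], axis=1)
    score = min_pairwise_dist(X)
    for _ in range(iters):
        j = rng.integers(k)                       # column to perturb
        a, b = rng.choice(n, size=2, replace=False)
        X[[a, b], j] = X[[b, a], j]               # swap keeps the LHS structure
        new = min_pairwise_dist(X)
        # stationary density proportional to exp(beta * min distance)
        if np.log(rng.random()) < beta * (new - score):
            score = new                           # accept
        else:
            X[[a, b], j] = X[[b, a], j]           # reject: undo the swap
    return X, score
```

Because the proposal is symmetric, the acceptance ratio reduces to the density ratio, and larger `beta` concentrates the chain on well-separated designs.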
"A Markov chain Monte Carlo construction of space-filling Latin Hypercube Samples". Journal of Statistical Planning and Inference 244 (2026), Article 106387.
Pub Date: 2026-09-01 | Epub Date: 2026-02-06 | DOI: 10.1016/j.jspi.2026.106380
Xietao Zhou, Steven G. Gilmour
There has been recent interest in the baseline parameterization for two-level factorial designs. The association matrix that expresses the estimator of effects under the baseline parameterization is obtained in an equivalent form as a linear function of estimators of effects under the traditional centered parameterization. This allows the QB criterion, which evaluates designs under model uncertainty in the traditional centered parameterization, to be generalized to the baseline parameterization. Some optimal designs under the baseline parameterization from the previous literature are evaluated, and it is shown that, at a given prior probability of a main effect being in the best fitted model from the experimental data, the designs in the literature converge to being QB-optimal as the probability of an interaction being in that model converges to 0 from above. QB-optimal designs for two setups of factors and run sizes at various priors are found by an extended coordinate exchange algorithm, and their performance is discussed. Comparisons are made with optimal designs restricted to be level-balanced and orthogonal.
"QB-optimal two-level designs for the baseline parameterization". Journal of Statistical Planning and Inference 244 (2026), Article 106380.
Pub Date: 2026-07-01 | Epub Date: 2026-01-19 | DOI: 10.1016/j.jspi.2026.106376
Yoann Potiron
This paper derives an explicit formula for the probability that a continuous local martingale crosses a one- or two-sided random constant boundary over a finite time interval. The boundary crossing probability of a continuous local martingale to a constant boundary equals the boundary crossing probability, to the same constant boundary, of a standard Wiener process time-changed by the martingale's quadratic variation. This paper also derives an explicit solution to the inverse first passage time problem of quadratic variation. These results are obtained by an application of the Dambis–Dubins–Schwarz theorem. The main elementary idea of the proof is the scale-invariance of the time-changed Wiener process and thus of the first passage time, which is due to the constancy of the boundary.
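The reduction above can be checked numerically. The sketch below is my own illustration (martingale dM_t = sqrt(2t) dW_t, whose quadratic variation at T = 1 equals 1, and boundary b = 1; these choices are assumptions, not the paper's): the Monte Carlo crossing frequencies of both processes should agree with the reflection-principle formula P(sup_{t<=T} W_t >= b) = 2(1 - Phi(b / sqrt(T))), up to discretization and sampling error.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def crossing_prob(increment_sd, b, n_paths=10000):
    # Monte Carlo estimate of P(max over the grid >= b) for a
    # martingale built from independent Gaussian increments
    steps = rng.standard_normal((n_paths, increment_sd.size)) * increment_sd
    return float(np.mean(np.cumsum(steps, axis=1).max(axis=1) >= b))

b, T, n = 1.0, 1.0, 500
dt = T / n
t = np.arange(n) * dt

# standard Wiener process: quadratic variation t
p_w = crossing_prob(np.full(n, sqrt(dt)), b)

# local martingale dM = sqrt(2 t) dW: quadratic variation t^2, so <M>_T = T here
p_m = crossing_prob(np.sqrt(2 * t * dt), b)

# reflection principle: P(sup_{t<=T} W_t >= b) = 2 * (1 - Phi(b / sqrt(T)))
phi = 0.5 * (1 + erf(b / sqrt(T) / sqrt(2)))
p_exact = 2 * (1 - phi)
```

Both estimates should sit near p_exact (about 0.317 here), slightly below it because a discrete grid misses some excursions above the boundary.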
"First passage time and inverse problem for continuous local martingales". Journal of Statistical Planning and Inference 243 (2026), Article 106376.
Pub Date: 2026-07-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.jspi.2025.106372
Zuohang Kang , Zujun Ou
With the increasing complexity of experimental scenarios, mixed-level designs of large size are urgently needed. A class of mixed-level designs is constructed through amplification, which enlarges both the run size and the number of factors of an initial design. The space-filling properties of amplified designs are discussed under the generalized minimum aberration criterion, the wordlength enumerator and the maximin L2-distance criterion; an attainable upper bound on the maximin L2-distance and a lower bound on the wordlength enumerator for amplified designs are obtained. Numerical examples demonstrate that the construction method is simple and effective, and it is recommended for application in high-dimensional statistics or large-scale experiments.
"A class of mixed-level amplified designs and their space-filling properties". Journal of Statistical Planning and Inference 243 (2026), Article 106372.
Pub Date: 2026-07-01 | Epub Date: 2026-01-06 | DOI: 10.1016/j.jspi.2026.106373
Steven Abrams , Paul Janssen , Noël Veraverbeke
In medical research, interest is often in studying either the association between an event time T1 and a continuous covariate T2, or between two event times T1 and T2, where event times are typically subject to (right) censoring. Although the strength of dependence between such random variables can be expressed in terms of global and local association measures, it is interesting to study alternative quantities such as percentiles of the residual lifetime distribution of T1, conditional on T2 taking values in a given interval. In this paper, we extend existing methods for estimating quantiles of the conditional residual lifetime distribution to encompass a more flexible classification of subjects into subgroups based on their T2-values. More specifically, we propose two estimators, under one-component and under univariate censoring respectively, and provide a detailed study of their finite-sample performance. We demonstrate the use of these estimators on two medical datasets: (1) monoclonal gammopathy of undetermined significance, and (2) overall mortality in Danish twins.
"Nonparametric estimation of the quantiles of the conditional residual lifetime distribution". Journal of Statistical Planning and Inference 243 (2026), Article 106373.
Pub Date: 2026-07-01 | Epub Date: 2026-01-14 | DOI: 10.1016/j.jspi.2026.106375
Jan Beran , Jeremy Näscher , Franziska Farquharson , Jan Graßhoff , Stephan Walterspacher
Sample paths of physiological measurements often exhibit periodically similar patterns. The shapes of observed curves can be complicated, and between-subject variability is typically high. Modeling and prediction therefore need to be done at a patient-specific level. We consider models based on stationary warping of subject-specific template functions. The proposed models can be understood as state space processes or functional time series, with warping functions and vertical deviations characterized by real-valued latent processes. Estimation, asymptotic results and prediction regions are derived. The methodology is motivated by a study of mechanical ventilation where the aim is to design automated noninvasive procedures for neurally derived regulation of mechanical ventilation, applying surface electromyography of the respiratory muscles.
"Template based functional prediction with applications to noninvasive mechanical ventilation and surface EMG techniques". Journal of Statistical Planning and Inference 243 (2026), Article 106375.
Pub Date: 2026-07-01 | Epub Date: 2026-01-23 | DOI: 10.1016/j.jspi.2026.106377
Na Zou, Yao Xiao
This paper investigates the projection uniformity of experimental designs based on absolute discrepancy, a newly proposed uniformity criterion particularly suitable for modern experiments with discrete experimental domains. We first establish a general framework for the uniformity pattern defined over the discrete experimental domain via the theory of reproducing kernels. Then, we highlight the uniformity pattern based on absolute discrepancy and propose a new minimum projection uniformity criterion to evaluate and compare designs more effectively. Theoretical results show the consistency of the proposed minimum projection uniformity criterion with other classical design screening criteria under level permutations, while several examples demonstrate the superior performance of the new criterion. We also present the uniform projection design under absolute discrepancy, which exhibits superior projection uniformity across all dimensions. Numerical simulations show that uniform projection designs outperform uniform designs under absolute discrepancy in prediction performance, especially in experiments with inert factors.
"Uniformity pattern and uniform projection designs based on absolute discrepancy". Journal of Statistical Planning and Inference 243 (2026), Article 106377.
Pub Date: 2026-07-01 | Epub Date: 2025-11-26 | DOI: 10.1016/j.jspi.2025.106368
Sadegh Chegini, Mahmoud Zarepour
The Liouville distribution, a generalization of the Dirichlet distribution, serves as a well-known conjugate prior for the multinomial distribution. Just as the Dirichlet process is derived from the finite-dimensional Dirichlet distribution, it is natural and important to introduce and derive a Liouville process in a similar manner. We introduce a discrete random probability measure constructed from a random vector following a Liouville distribution and subsequently derive its weak limit to define our proposed Liouville process. The resulting process is a spike-and-slab process, where the Dirichlet process serves as the slab and a single point from its mean acts as the spike. These two components are linearly combined using a random weight generated from the Liouville distribution. By using the Liouville process as a prior on the space of probability measures, we derive the corresponding posterior process as well as the predictive distribution.
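The spike-and-slab structure described above can be sketched with a truncated stick-breaking Dirichlet process as the slab and a single point drawn from the base measure H (the mean of the DP) as the spike. This is a sketch under my own assumptions: the truncation level, the standard normal base measure, and the Beta weight below are illustrative stand-ins for the weight that the paper derives from the Liouville distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

def stick_breaking_dp(alpha, base_sampler, trunc=200):
    # truncated stick-breaking representation of DP(alpha, H)
    betas = rng.beta(1, alpha, size=trunc)
    w = betas * np.concatenate(([1.0], np.cumprod(1 - betas)[:-1]))
    atoms = base_sampler(trunc)
    return atoms, w / w.sum()          # renormalize the truncated weights

def spike_and_slab_measure(alpha=5.0,
                           base_sampler=lambda m: rng.standard_normal(m)):
    # slab: a Dirichlet process draw; spike: one point from its mean H
    atoms, weights = stick_breaking_dp(alpha, base_sampler)
    spike = base_sampler(1)[0]
    v = rng.beta(2.0, 3.0)             # illustrative stand-in weight
    atoms = np.concatenate(([spike], atoms))
    weights = np.concatenate(([v], (1 - v) * weights))
    return atoms, weights              # a discrete random probability measure
```

A draw is the discrete measure v * delta_spike + (1 - v) * DP draw, matching the linear combination described in the abstract.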
"On deriving Liouville process from Liouville distribution and its application in nonparametric Bayesian inference". Journal of Statistical Planning and Inference 243 (2026), Article 106368.
Pub Date: 2026-07-01 | Epub Date: 2026-01-24 | DOI: 10.1016/j.jspi.2026.106378
Zhenglong Zhang, Houlin Zhou, Xuejun Wang
Transfer learning enhances statistical modeling by utilizing source-task information, but its effectiveness can be compromised when the common assumption of error-free covariates is violated, as measurement error often leads to biased estimates and invalid inference. To address this critical issue, we propose a novel transfer learning framework for generalized linear errors-in-variables models (GLEVMs), which account for classical additive measurement error in covariates. We introduce a functional similarity structure linking source and target parameters, and develop the errors-in-variables transfer learning likelihood (ev-TLL) method based on weighted likelihood. Under mild regularity conditions, we establish the asymptotic normality of the proposed estimator and demonstrate that it achieves faster convergence rates than traditional methods without transfer learning. Extensive simulations under both linear and nonlinear GLEVMs confirm the superior estimation accuracy of our approach. Finally, a real data application to the Maryland Biological Stream Survey highlights the practical benefits of ev-TLL over models using only target-domain data.
"Robust transfer learning under generalized linear errors-in-variables models". Journal of Statistical Planning and Inference 243 (2026), Article 106378.
Pub Date: 2026-07-01 | Epub Date: 2026-01-06 | DOI: 10.1016/j.jspi.2026.106374
Faouzi Hakimi
Latin Hypercube Sampling (LHS) is a widely used stratified sampling method in computer experiments. In this work, we extend existing convergence results for the sample mean under LHS to the broader class of Z-estimators, i.e., estimators defined as the zeros of a sample mean function. We derive the asymptotic variance of these estimators and demonstrate that it is smaller when using LHS compared to traditional independent and identically distributed sampling. Furthermore, we establish a Central Limit Theorem for Z-estimators under LHS, providing a theoretical foundation for its improved efficiency.
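The variance reduction can be demonstrated on a toy Z-estimator of my own choosing (not one from the paper): theta solving the empirical equation mean(tanh(U_i - theta)) = 0 for uniform inputs, whose population solution is 0.5 by symmetry. One-dimensional LHS places one point in each of n equal strata, and the resulting Z-estimates vary far less across replications than under iid sampling.

```python
import numpy as np

rng = np.random.default_rng(3)

def lhs_uniform(n):
    # one-dimensional LHS: one uniform point per stratum, in random order
    return (rng.permutation(n) + rng.random(n)) / n

def z_estimate(u):
    # Z-estimator: theta solving (1/n) sum_i tanh(u_i - theta) = 0;
    # the left-hand side is decreasing in theta, so bisection applies
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if np.mean(np.tanh(u - mid)) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def compare(n=64, reps=400):
    # variance of the Z-estimate across replications, iid vs LHS inputs
    est_iid = [z_estimate(rng.random(n)) for _ in range(reps)]
    est_lhs = [z_estimate(lhs_uniform(n)) for _ in range(reps)]
    return np.var(est_iid), np.var(est_lhs)
```

In this smooth one-dimensional setting the LHS variance is orders of magnitude below the iid variance, consistent with the efficiency gain the paper formalizes.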
"Robust estimation with Latin Hypercube Sampling: A Central Limit Theorem for Z-estimators". Journal of Statistical Planning and Inference 243 (2026), Article 106374.