Homogeneity testing under finite mixtures of multivariate Poisson distributions
Guanfu Liu, Yuejiao Fu
Pub Date: 2025-11-27 | DOI: 10.1016/j.jspi.2025.106369
The finite mixtures of multivariate Poisson (FMMP) distributions have wide applications in the real world. Testing for homogeneity under FMMP models is important; however, to our knowledge there is no generic solution to this problem. In this paper, we propose an EM-test for homogeneity under FMMP models to fill this gap. We establish the strong consistency of the maximum likelihood estimator of the mixing distribution by relaxing two conditions required in the existing literature. We study the null limiting distribution of the proposed test and, based on it, construct a resampling procedure to approximate the p-value of the test. The loss of strong identifiability for the multivariate Poisson distribution poses a significant challenge in deriving the null limiting distribution. Finally, simulation studies and a real-data analysis demonstrate the good performance of the proposed test.
{"title":"Homogeneity testing under finite mixtures of multivariate Poisson distributions","authors":"Guanfu Liu , Yuejiao Fu","doi":"10.1016/j.jspi.2025.106369","DOIUrl":"10.1016/j.jspi.2025.106369","url":null,"abstract":"<div><div>The finite mixtures of multivariate Poisson (FMMP) distributions have wide applications in the real world. Testing for homogeneity under the FMMP models is important, however, there is no generic solution to this problem as far as we know. In this paper, we propose an EM-test for homogeneity under the FMMP models to fulfill the gap. We establish the strong consistency of the maximum likelihood estimator for the mixing distribution by relaxing two conditions required in existing literature. The null limiting distribution of the proposed test is studied, and based on the limiting distribution, a resampling procedure is constructed to approximate the <span><math><mi>p</mi></math></span>-value of the test. The loss of the strong identifiability for the multivariate Poisson distribution poses a significant challenge in deriving the null limiting distribution. Finally, simulation studies and real-data analysis demonstrate the good performance of the proposed test.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"243 ","pages":"Article 106369"},"PeriodicalIF":0.8,"publicationDate":"2025-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145610584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On deriving Liouville process from Liouville distribution and its application in nonparametric Bayesian inference
Sadegh Chegini, Mahmoud Zarepour
Pub Date: 2025-11-26 | DOI: 10.1016/j.jspi.2025.106368
The Liouville distribution, a generalization of the Dirichlet distribution, serves as a well-known conjugate prior for the multinomial distribution. Just as the Dirichlet process is derived from the finite-dimensional Dirichlet distribution, it is natural and important to introduce and derive a Liouville process in a similar manner. We introduce a discrete random probability measure constructed from a random vector following a Liouville distribution and subsequently derive its weak limit to define our proposed Liouville process. The resulting process is a spike-and-slab process, where the Dirichlet process serves as the slab and a single point from its mean acts as the spike. These two components are linearly combined using a random weight generated from the Liouville distribution. By using the Liouville process as a prior on the space of probability measures, we derive the corresponding posterior process as well as the predictive distribution.
{"title":"On deriving Liouville process from Liouville distribution and its application in nonparametric Bayesian inference","authors":"Sadegh Chegini, Mahmoud Zarepour","doi":"10.1016/j.jspi.2025.106368","DOIUrl":"10.1016/j.jspi.2025.106368","url":null,"abstract":"<div><div>The Liouville distribution, a generalization of the Dirichlet distribution, serves as a well-known conjugate prior for the multinomial distribution. Just as the Dirichlet process is derived from the finite-dimensional Dirichlet distribution, it is natural and important to introduce and derive a Liouville process in a similar manner. We introduce a discrete random probability measure constructed from a random vector following a Liouville distribution and subsequently derive its weak limit to define our proposed Liouville process. The resulting process is a spike-and-slab process, where the Dirichlet process serves as the slab and a single point from its mean acts as the spike. These two components are linearly combined using a random weight generated from the Liouville distribution. By using the Liouville process as a prior on the space of probability measures, we derive the corresponding posterior process as well as the predictive distribution.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"243 ","pages":"Article 106368"},"PeriodicalIF":0.8,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145618331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semiparametric tests for Lorenz dominance based on density ratio model
Weiwei Zhuang, Weiqi Yang, Wenchen Liao, Yukun Liu
Pub Date: 2025-11-12 | DOI: 10.1016/j.jspi.2025.106361
Lorenz dominance is a fundamental tool for assessing whether wealth or income disparity is greater in one population than another. Based on the well-established density ratio model, we propose a new semiparametric test for Lorenz dominance. We show that the limiting distribution of the proposed test statistic is the supremum of a Gaussian process. To facilitate practical application, we devise a bootstrap procedure to calculate the p-value and establish its theoretical validity. Our simulation studies demonstrate that the proposed test correctly controls the Type I error and outperforms its competitors in terms of statistical power. Finally, we apply the test to compare salary distributions among higher education employees in Ohio from 2011 to 2015.
{"title":"Semiparametric tests for Lorenz dominance based on density ratio model","authors":"Weiwei Zhuang , Weiqi Yang , Wenchen Liao , Yukun Liu","doi":"10.1016/j.jspi.2025.106361","DOIUrl":"10.1016/j.jspi.2025.106361","url":null,"abstract":"<div><div>Lorenz dominance is a fundamental tool for assessing whether wealth or income disparity is greater in one population than another. Based on the well-established density ratio model, we propose a new semiparametric test for Lorenz dominance. We show that the limiting distribution of the proposed test statistic is the supremum of a Gaussian process. To facilitate practical application, we devise a bootstrap procedure to calculate the <span><math><mi>p</mi></math></span>-value and establish its theoretical validity. Our simulation studies demonstrate that the proposed test correctly controls the Type I error and outperforms its competitors in terms of statistical power. Finally, we apply the test to compare salary distributions among higher education employees in Ohio from 2011 to 2015.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"242 ","pages":"Article 106361"},"PeriodicalIF":0.8,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145527966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-weighted estimation for nonstationary processes with infinite variance GARCH errors
Yuze Yuan, Shuyu Liu, Rongmao Zhang
Pub Date: 2025-11-08 | DOI: 10.1016/j.jspi.2025.106360
Zhang and Chan (2021) considered the augmented Dickey–Fuller (ADF) test for a unit root process with linear noise driven by generalized autoregressive conditional heteroskedasticity (GARCH), and showed that the ADF test may perform even worse than the Dickey–Fuller test. The main reason is that the parameters of the lag terms in the ADF regression cannot be estimated consistently for infinite-variance GARCH noise by least squares estimation (LSE). In this paper, we propose a self-weighted least squares estimation (SWLSE) procedure to solve this problem, together with a new SWLSE-based test for the unit root. We show that the SWLSE is consistent and that the proposed test converges in distribution to a functional of a stable process and a Brownian motion, and it performs well in terms of size and power. A simulation study is conducted to evaluate the performance of our procedure, and a real-world illustrative example is provided.
{"title":"Self-weighted estimation for nonstationary processes with infinite variance GARCH errors","authors":"Yuze Yuan , Shuyu Liu , Rongmao Zhang","doi":"10.1016/j.jspi.2025.106360","DOIUrl":"10.1016/j.jspi.2025.106360","url":null,"abstract":"<div><div>Zhang and Chan (2021) considered the augmented Dickey–Fuller (ADF) test for an unit root process with linear noise driven by generalized autoregressive conditional heteroskedasticity (GARCH), and showed that the ADF test may perform even worse than the Dickey–Fuller test. The main reason is that the parameters of the lag terms in the ADF regression cannot be estimated consistently for infinite variance GARCH noises based on least square estimation (LSE). In this paper, we propose a self-weighted least square estimation (SWLSE) procedure to solve this problem. Consequently, a new test based on SWLSE for the unit-root is also proposed. It is shown that the SWLSE are consistent, and the proposed test converges to a functional of a stable process and a Brownian motion and performs well in term of size and power. Simulation study is conducted to evaluate the performance of our procedure, and a real-world illustrative example is provided.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"242 ","pages":"Article 106360"},"PeriodicalIF":0.8,"publicationDate":"2025-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145527965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixed latent graphical models with mixed measurement error and misclassification in variables
Yu Shi, Grace Y. Yi
Pub Date: 2025-11-01 | DOI: 10.1016/j.jspi.2025.106359
Graphical models are powerful tools for characterizing conditional dependence structures among variables with complex relationships. Although many methods have been developed under the graphical modeling framework, their validity often hinges on the quality of the data. A fundamental assumption in most existing approaches is that all variables are measured precisely, an assumption frequently violated in practice. In many applications, mismeasurement of mixed discrete and continuous variables is a common challenge. In this paper, we address error-contaminated data involving both continuous and discrete variables by proposing a mixed latent Gaussian copula graphical measurement error model. To perform inference, we develop a simulation-based expectation–maximization procedure that explicitly accounts for mismeasurement effects. We further introduce a computationally efficient refinement to reduce the computational burden. Asymptotic properties of the proposed estimator are established, and its finite-sample performance is evaluated through numerical studies.
{"title":"Mixed latent graphical models with mixed measurement error and misclassification in variables","authors":"Yu Shi , Grace Y. Yi","doi":"10.1016/j.jspi.2025.106359","DOIUrl":"10.1016/j.jspi.2025.106359","url":null,"abstract":"<div><div>Graphical models are powerful tools for characterizing conditional dependence structures among variables with complex relationships. Although many methods have been developed under the graphical modeling framework, their validity often hinges on the quality of the data. A fundamental assumption in most existing approaches is that all variables are measured precisely, an assumption frequently violated in practice. In many applications, mismeasurement of mixed discrete and continuous variables is a common challenge. In this paper, we address error-contaminated data involving both continuous and discrete variables by proposing a mixed latent Gaussian copula graphical measurement error model. To perform inference, we develop a simulation-based expectation–maximization procedure that explicitly accounts for mismeasurement effects. We further introduce a computationally efficient refinement to reduce the computational burden. Asymptotic properties of the proposed estimator are established, and its finite-sample performance is evaluated through numerical studies.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"242 ","pages":"Article 106359"},"PeriodicalIF":0.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145465635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
General sliced minimum aberration designs for multi-platform experiments
Yuliang Zhou, Qianqian Zhao, Shengli Zhao
Pub Date: 2025-10-30 | DOI: 10.1016/j.jspi.2025.106357
Sliced designs are widely used in multi-platform experiments. A sliced design consists of several sub-designs divided by the sliced factor, with each sub-design assigned to its own platform. In some experimental scenarios, it is necessary to consider the optimality of both the sub-designs and the complete sliced design; such designs are referred to as general sliced (GS) designs. To construct optimal GS designs for such scenarios, we propose the general sliced effect hierarchy principle (GSEHP). Based on the GSEHP, we introduce the general sliced minimum aberration (GSMA) criterion and choose GSMA designs as optimal GS designs when the sliced factor and the design factors are equally important. Some GSMA designs with 32 and 64 runs are tabulated. Additionally, we present a practical example illustrating the application of GSMA designs in guiding webpage-configuration strategies on two platforms.
{"title":"General sliced minimum aberration designs for multi-platform experiments","authors":"Yuliang Zhou, Qianqian Zhao, Shengli Zhao","doi":"10.1016/j.jspi.2025.106357","DOIUrl":"10.1016/j.jspi.2025.106357","url":null,"abstract":"<div><div>Sliced designs are widely used in multi-platform experiments. A sliced design contains several sub-designs divided by the sliced factor, and each sub-design is assigned to a platform, respectively. In some experimental scenarios, it is necessary to consider the optimality of both the sub-designs and the complete sliced designs, such sliced designs are referred to as general sliced (GS) designs. To construct the optimal GS designs for such scenarios, we propose the general sliced effect hierarchy principle (GSEHP). Based on the GSEHP, we introduce the general sliced minimum aberration (GSMA) criterion and choose the GSMA designs as optimal GS designs when the sliced factor and design factors are equally important. Some GSMA designs with 32 and 64 runs are tabulated. Additionally, we present a practical example to illustrate the application of GSMA designs in guiding strategies of webpage setting on two platforms.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"242 ","pages":"Article 106357"},"PeriodicalIF":0.8,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145415646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust and consistent model evaluation criteria in high-dimensional regression
Sumito Kurata, Kei Hirose
Pub Date: 2025-10-28 | DOI: 10.1016/j.jspi.2025.106358
Most regularization methods, such as the LASSO, involve one or more regularization parameters, and selecting the value of a regularization parameter is essentially equivalent to selecting a model. Thus, to obtain a model suited to the data and the phenomenon, we need to determine an adequate value of the regularization parameter. For choosing the regularization parameter in the linear regression model, information criteria such as the AIC and BIC are often applied; however, these criteria are known to be sensitive to outliers and tend not to perform well in high-dimensional settings. Outliers generally have a negative effect not only on estimation but also on model selection; consequently, it is important to employ a selection method that is robust against outliers. In addition, when the number of explanatory variables is very large, most conventional criteria are prone to select unnecessary explanatory variables. In this paper, applying a quasi-Bayesian procedure, we propose model evaluation criteria based on statistical divergences that are robust in both parameter estimation and model selection. Owing to a precise approximation, the proposed criteria achieve selection consistency even in high-dimensional settings while retaining robustness. We also investigate the conditions under which robustness and consistency hold, and provide an appropriate example of a divergence and penalty term that achieves the desired properties. Finally, we report numerical examples verifying that the proposed criteria perform robust and consistent variable selection compared with conventional selection methods.
{"title":"Robust and consistent model evaluation criteria in high-dimensional regression","authors":"Sumito Kurata, Kei Hirose","doi":"10.1016/j.jspi.2025.106358","DOIUrl":"10.1016/j.jspi.2025.106358","url":null,"abstract":"<div><div>Most of the regularization methods such as the LASSO have one (or more) regularization parameter(s), and to select the value of the regularization parameter is essentially equal to select a model. Thus, to obtain a model suitable for the data and phenomenon, we need to determine an adequate value of the regularization parameter. Regarding the determination of the regularization parameter in the linear regression model, we often apply the information criteria like the AIC and BIC, however, it has been pointed out that these criteria are sensitive to outliers and tend not to perform well in high-dimensional settings. Outliers generally have a negative effect on not only estimation but also model selection, consequently, it is important to employ a selection method with robustness against outliers. In addition, when the number of explanatory variables is quite large, most conventional criteria are prone to select unnecessary explanatory variables. In this paper, we propose model evaluation criteria based on the statistical divergence with excellence in robustness in both of parametric estimation and model selection, by applying the quasi-Bayesian procedure. Our proposed criteria achieve the selection consistency even in high-dimensional settings due to precise approximation, simultaneously with robustness. We also investigate the conditions for establishing robustness and consistency, and provide an appropriate example of the divergence and penalty term that can achieve the desirable properties. We finally report the results of some numerical examples to verify that the proposed criteria perform robust and consistent variable selection compared with the conventional selection methods.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"242 ","pages":"Article 106358"},"PeriodicalIF":0.8,"publicationDate":"2025-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145415645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tuning differential evolution algorithm for constructing uniform projection designs
Samuel Onyambu, Hongquan Xu
Pub Date: 2025-10-21 | DOI: 10.1016/j.jspi.2025.106356
Space-filling designs are extensively used in computer experiments to analyze complex systems. Among these, uniform projection designs stand out for their desirable low-dimensional projection properties and robustness against other criteria. However, no efficient algorithm currently exists for generating such designs. This study explores the construction of uniform projection designs using a differential evolution (DE) algorithm. DE, an evolutionary algorithm, is known for its simplicity, robustness, and effectiveness in solving complex optimization problems, though its performance is highly sensitive to several hyperparameters. Our goal is to investigate the structure of the hyperparameter space, evaluate the contribution of each hyperparameter, and provide guidelines for optimal hyperparameter settings across various scenarios. To achieve this, we conduct a comprehensive comparison of different experimental designs and surrogate models.
{"title":"Tuning differential evolution algorithm for constructing uniform projection designs","authors":"Samuel Onyambu, Hongquan Xu","doi":"10.1016/j.jspi.2025.106356","DOIUrl":"10.1016/j.jspi.2025.106356","url":null,"abstract":"<div><div>Space-filling designs are extensively used in computer experiments to analyze complex systems. Among these, uniform projection designs stand out for their desirable low-dimensional projection properties and robustness against other criteria. However, no efficient algorithm currently exists for generating such designs. This study explores the construction of uniform projection designs using a differential evolution (DE) algorithm. DE, an evolutionary algorithm, is known for its simplicity, robustness, and effectiveness in solving complex optimization problems, though its performance is highly sensitive to several hyperparameters. Our goal is to investigate the structure of the hyperparameter space, evaluate the contribution of each hyperparameter, and provide guidelines for optimal hyperparameter settings across various scenarios. To achieve this, we conduct a comprehensive comparison of different experimental designs and surrogate models.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"242 ","pages":"Article 106356"},"PeriodicalIF":0.8,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145362408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Variable selection in high-dimensional varying coefficient panel data models with fixed effects
Yiping Yang, Peixin Zhao
Pub Date: 2025-09-30 | DOI: 10.1016/j.jspi.2025.106355
To address the challenges of variable selection in panel data models with fixed effects and varying coefficients, we introduce a novel method that combines basis function approximations with group nonconcave penalty functions. By utilizing a forward orthogonal deviation transformation, we eliminate fixed effects, allowing us to select significant variables and estimate non-zero coefficient functions. Under certain regularity conditions, we demonstrate that our method consistently identifies the true model structure, and the resulting estimators exhibit oracle properties. For computational efficiency, we have developed a group gradient descent algorithm that incorporates a transformation of the penalty terms. Simulation studies reveal that nonconvex penalties (SCAD/MCP) outperform the Lasso across various performance metrics. Furthermore, compared to existing methods, our approach significantly reduces false positives (FPs). To demonstrate the practical applicability and effectiveness of our method, we present an analysis of a real dataset.
{"title":"Variable selection in high-dimensional varying coefficient panel data models with fixed effects","authors":"Yiping Yang , Peixin Zhao","doi":"10.1016/j.jspi.2025.106355","DOIUrl":"10.1016/j.jspi.2025.106355","url":null,"abstract":"<div><div>To address the challenges of variable selection in panel data models with fixed effects and varying coefficients, we introduce a novel method that combines basis function approximations with group nonconcave penalty functions. By utilizing a forward orthogonal deviation transformation, we eliminate fixed effects, allowing us to select significant variables and estimate non-zero coefficient functions. Under certain regularity conditions, we demonstrate that our method consistently identifies the true model structure, and the resulting estimators exhibit oracle properties. For computational efficiency, we have developed a group gradient descent algorithm that incorporates a transformation of the penalty terms. Simulation studies reveal that nonconvex penalties (SCAD/MCP) outperform the Lasso across various performance metrics. Furthermore, compared to existing methods, our approach significantly reduces false positives (FPs). To demonstrate the practical applicability and effectiveness of our method, we present an analysis of a real dataset.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"242 ","pages":"Article 106355"},"PeriodicalIF":0.8,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145220649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Causal inference in early phase clinical trials: Variance decomposition and order of patient inclusion
Matthieu Clertant, Meliha Akouba, Alexia Iasonos, John O’Quigley
Pub Date: 2025-09-29 | DOI: 10.1016/j.jspi.2025.106352
Causal inference tools, in particular those of variance decomposition, hierarchical data structures and counterfactuals, are applied to the study of the methodology of dose-finding studies in oncology. A detailed variance decomposition brings into a much sharper focus the relative performance of different designs. We develop and present new results on the role played by the order of patient inclusions into a sequential dose-finding study. These results make it clear why, previously, authors could easily be misled into a conclusion that different designs enjoy similar performances. This is not so and we show how to avoid making that mistake. We highlight our findings via both theoretical and numerical studies.
{"title":"Causal inference in early phase clinical trials: Variance decomposition and order of patient inclusion","authors":"Matthieu Clertant , Meliha Akouba , Alexia Iasonos , John O’Quigley","doi":"10.1016/j.jspi.2025.106352","DOIUrl":"10.1016/j.jspi.2025.106352","url":null,"abstract":"<div><div>Causal inference tools, in particular those of variance decomposition, hierarchical data structures and counterfactuals, are applied to the study of the methodology of dose-finding studies in oncology. A detailed variance decomposition brings into a much sharper focus the relative performance of different designs. We develop and present new results on the role played by the order of patient inclusions into a sequential dose-finding study. These results make it clear why, previously, authors could easily be misled into a conclusion that different designs enjoy similar performances. This is not so and we show how to avoid making that mistake. We highlight our findings via both theoretical and numerical studies.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"242 ","pages":"Article 106352"},"PeriodicalIF":0.8,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145267234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}