Robust Detection of Watermarks for Large Language Models Under Human Edits
Xiang Li, Feng Ruan, Huiyuan Wang, Qi Long, Weijie J Su
Pub Date: 2025-09-22. DOI: 10.1093/jrsssb/qkaf056
Watermarking has offered an effective approach to distinguishing text generated by large language models (LLMs) from human-written text. However, the pervasive presence of human edits to LLM-generated text dilutes watermark signals, thereby significantly degrading the detection performance of existing methods. In this paper, by modeling human edits through mixture model detection, we introduce a new method in the form of a truncated goodness-of-fit test for detecting watermarked text under human edits, which we refer to as Tr-GoF. We prove that the Tr-GoF test achieves optimality in robust detection of the Gumbel-max watermark in a certain asymptotic regime of substantial text modifications and vanishing watermark signals. Importantly, Tr-GoF achieves this optimality adaptively, as it does not require precise knowledge of human edit levels or probabilistic specifications of the LLMs, in contrast to the optimal but impractical (Neyman-Pearson) likelihood ratio test. Moreover, we establish that the Tr-GoF test attains the highest detection efficiency rate in a certain regime of moderate text modifications. In stark contrast, we show that sum-based detection rules, as employed by existing methods, fail to achieve optimal robustness in both regimes, because the additive nature of their statistics is less resilient to edit-induced noise. Finally, we demonstrate the competitive and sometimes superior empirical performance of the Tr-GoF test on both synthetic data and open-source LLMs in the OPT and LLaMA families.
Covariate-assisted bounds on causal effects with instrumental variables
Pub Date: 2025-05-27. eCollection Date: 2025-11-01. DOI: 10.1093/jrsssb/qkaf028
Alexander W Levis, Matteo Bonvini, Zhenghao Zeng, Luke Keele, Edward H Kennedy
When an exposure of interest is confounded by unmeasured factors, an instrumental variable (IV) can be used to identify and estimate certain causal contrasts. Identification of the marginal average treatment effect (ATE) from IVs relies on strong untestable structural assumptions. When one is unwilling to assert such structure, IVs can nonetheless be used to construct bounds on the ATE. Famously, Alexander Balke and Judea Pearl proved tight bounds on the ATE for a binary outcome, in a randomized trial with noncompliance and no covariate information. We demonstrate how these bounds remain useful in observational settings with baseline confounders of the IV, as well as randomized trials with measured baseline covariates. The resulting bounds on the ATE are nonsmooth functionals, and thus standard nonparametric efficiency theory is not immediately applicable. To remedy this, we propose (1) under a novel margin condition, influence function-based estimators of the bounds that can attain parametric convergence rates when the nuisance functions are modelled flexibly, and (2) estimators of smooth approximations of these bounds. We propose extensions to continuous outcomes, explore finite sample properties in simulations, and illustrate the proposed estimators in an observational study targeting the effect of higher education on wages.
{"title":"Covariate-assisted bounds on causal effects with instrumental variables.","authors":"Alexander W Levis, Matteo Bonvini, Zhenghao Zeng, Luke Keele, Edward H Kennedy","doi":"10.1093/jrsssb/qkaf028","DOIUrl":"10.1093/jrsssb/qkaf028","url":null,"abstract":"<p><p>When an exposure of interest is confounded by unmeasured factors, an instrumental variable (IV) can be used to identify and estimate certain causal contrasts. Identification of the marginal average treatment effect (ATE) from IVs relies on strong untestable structural assumptions. When one is unwilling to assert such structure, IVs can nonetheless be used to construct bounds on the ATE. Famously, Alexander Balke and Judea Pearl proved tight bounds on the ATE for a binary outcome, in a randomized trial with noncompliance and no covariate information. We demonstrate how these bounds remain useful in observational settings with baseline confounders of the IV, as well as randomized trials with measured baseline covariates. The resulting bounds on the ATE are nonsmooth functionals, and thus standard nonparametric efficiency theory is not immediately applicable. To remedy this, we propose (1) under a novel margin condition, influence function-based estimators of the bounds that can attain parametric convergence rates when the nuisance functions are modelled flexibly, and (2) estimators of smooth approximations of these bounds. We propose extensions to continuous outcomes, explore finite sample properties in simulations, and illustrate the proposed estimators in an observational study targeting the effect of higher education on wages.</p>","PeriodicalId":49982,"journal":{"name":"Journal of the Royal Statistical Society Series B-Statistical Methodology","volume":"87 5","pages":"1508-1527"},"PeriodicalIF":3.6,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12602419/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145507800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A statistical view of column subset selection
Anav Sood, Trevor Hastie
Pub Date: 2025-05-16. DOI: 10.1093/jrsssb/qkaf023
We consider the problem of selecting a small subset of representative variables from a large dataset. In the computer science literature, this dimensionality reduction problem is typically formalized as column subset selection (CSS). Meanwhile, the typical statistical formalization is to find an information-maximizing set of principal variables. This paper shows that these two approaches are equivalent and, moreover, that both can be viewed as maximum-likelihood estimation within a certain semi-parametric model. Within this model, we establish suitable conditions under which the CSS estimate is consistent in high dimensions, specifically in the proportional asymptotic regime where the ratio of the number of variables to the sample size converges to a constant. Using these connections, we show how to efficiently (1) perform CSS using only summary statistics from the original dataset; (2) perform CSS in the presence of missing and/or censored data; and (3) select the subset size for CSS in a hypothesis testing framework.
Product Centred Dirichlet Processes for Bayesian Multiview Clustering
Alexander Dombowsky, David B Dunson
Pub Date: 2025-04-30. DOI: 10.1093/jrsssb/qkaf021
While there is an immense literature on Bayesian methods for clustering, the multiview case has received little attention. This problem focuses on obtaining distinct but statistically dependent clusterings of a common set of entities for different data types; an example is clustering patients into subgroups, with subgroup membership varying according to the domain of the patient variables. A challenge is how to model the across-view dependence between the partitions of patients into subgroups. The complexities of the partition space make standard methods of modelling dependence, such as correlation, infeasible. In this article, we propose CLustering with Independence Centring (CLIC), a clustering prior that uses a single parameter to explicitly model dependence between clusterings across views. CLIC is induced by the product centred Dirichlet process (PCDP), a novel hierarchical prior that bridges between independent and equivalent partitions. We show appealing theoretical properties, provide a finite approximation and prove its accuracy, present a marginal Gibbs sampler for posterior computation, and derive closed-form expressions for the marginal and joint partition distributions of the CLIC model. On synthetic data and in an application to epidemiology, CLIC accurately characterizes view-specific partitions while providing inference on the dependence level.
Two-phase rejective sampling and its asymptotic properties
Shu Yang, Peng Ding
Pub Date: 2025-02-10. DOI: 10.1093/jrsssb/qkaf002
Rejective sampling improves the design and estimation efficiency of single-phase sampling when auxiliary information about the finite population is available. When such auxiliary information is unavailable, we propose to use two-phase rejective sampling (TPRS), which involves measuring auxiliary variables for the sample of units in the first phase, followed by the implementation of rejective sampling for the outcome in the second phase. We explore the asymptotic design properties of double-expansion and regression estimators under TPRS. We show that TPRS enhances the efficiency of the double-expansion estimator, rendering it comparable to a regression estimator. We further refine the design to accommodate the varying importance of covariates and extend it to multi-phase sampling. We start with the theory for the population mean and then extend it to parameters defined by general estimating equations. Our asymptotic results for TPRS immediately cover existing single-phase rejective sampling, for which the asymptotic theory had not been fully established.
Probabilistic Richardson extrapolation
Pub Date: 2024-12-26. eCollection Date: 2025-04-01. DOI: 10.1093/jrsssb/qkae098
Chris J Oates, Toni Karvonen, Aretha L Teckentrup, Marina Strocchi, Steven A Niederer
For over a century, extrapolation methods have provided a powerful tool to improve the convergence order of a numerical method. However, these tools are not well-suited to modern computer codes, where multiple continua are discretized and convergence orders are not easily analysed. To address this challenge, we present a probabilistic perspective on Richardson extrapolation, a point of view that unifies classical extrapolation methods with modern multi-fidelity modelling, and handles uncertain convergence orders by allowing these to be statistically estimated. The approach is developed using Gaussian processes, leading to Gauss-Richardson Extrapolation. Conditions are established under which extrapolation using the conditional mean achieves a polynomial (or even an exponential) speed-up compared to the original numerical method. Further, the probabilistic formulation unlocks the possibility of experimental design, casting the selection of fidelities as a continuous optimization problem, which can then be (approximately) solved. A case study involving a computational cardiac model demonstrates that practical gains in accuracy can be achieved using the GRE method.
{"title":"Probabilistic Richardson extrapolation.","authors":"Chris J Oates, Toni Karvonen, Aretha L Teckentrup, Marina Strocchi, Steven A Niederer","doi":"10.1093/jrsssb/qkae098","DOIUrl":"https://doi.org/10.1093/jrsssb/qkae098","url":null,"abstract":"<p><p>For over a century, extrapolation methods have provided a powerful tool to improve the convergence order of a numerical method. However, these tools are not well-suited to modern computer codes, where multiple continua are discretized and convergence orders are not easily analysed. To address this challenge, we present a probabilistic perspective on Richardson extrapolation, a point of view that unifies classical extrapolation methods with modern multi-fidelity modelling, and handles uncertain convergence orders by allowing these to be statistically estimated. The approach is developed using Gaussian processes, leading to <i>Gauss-Richardson Extrapolation</i>. Conditions are established under which extrapolation using the conditional mean achieves a polynomial (or even an exponential) speed-up compared to the original numerical method. Further, the probabilistic formulation unlocks the possibility of experimental design, casting the selection of fidelities as a continuous optimization problem, which can then be (approximately) solved. A case study involving a computational cardiac model demonstrates that practical gains in accuracy can be achieved using the GRE method.</p>","PeriodicalId":49982,"journal":{"name":"Journal of the Royal Statistical Society Series B-Statistical Methodology","volume":"87 2","pages":"457-479"},"PeriodicalIF":3.1,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11985099/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144058583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust evaluation of longitudinal surrogate markers with censored data
Pub Date: 2024-12-26. eCollection Date: 2025-07-01. DOI: 10.1093/jrsssb/qkae119
Denis Agniel, Layla Parast
The development of statistical methods to evaluate surrogate markers is an active area of research. In many clinical settings, the surrogate marker is not simply a single measurement but is instead a longitudinal trajectory of measurements over time, e.g. fasting plasma glucose measured every 6 months for 3 years. In general, available methods developed for the single-surrogate setting cannot accommodate a longitudinal surrogate marker. Furthermore, many of the methods have not been developed for use with primary outcomes that are time-to-event outcomes and/or subject to censoring. In this paper, we propose robust methods to evaluate a longitudinal surrogate marker in a censored time-to-event outcome setting. Specifically, we propose a method to define and estimate the proportion of the treatment effect on a censored primary outcome that is explained by the treatment effect on a longitudinal surrogate marker measured up to time t_0. We accommodate potential censoring of both the primary outcome and the surrogate marker. A simulation study demonstrates good finite-sample performance of our proposed methods. We illustrate our procedures by examining repeated measures of fasting plasma glucose, a surrogate marker for diabetes diagnosis, using data from the diabetes prevention programme.
{"title":"Robust evaluation of longitudinal surrogate markers with censored data.","authors":"Denis Agniel, Layla Parast","doi":"10.1093/jrsssb/qkae119","DOIUrl":"10.1093/jrsssb/qkae119","url":null,"abstract":"<p><p>The development of statistical methods to evaluate surrogate markers is an active area of research. In many clinical settings, the surrogate marker is not simply a single measurement but is instead a longitudinal trajectory of measurements over time, e.g. fasting plasma glucose measured every 6 months for 3 years. In general, available methods developed for the single-surrogate setting cannot accommodate a longitudinal surrogate marker. Furthermore, many of the methods have not been developed for use with primary outcomes that are time-to-event outcomes and/or subject to censoring. In this paper, we propose robust methods to evaluate a longitudinal surrogate marker in a censored time-to-event outcome setting. Specifically, we propose a method to define and estimate the proportion of the treatment effect on a censored primary outcome that is explained by the treatment effect on a longitudinal surrogate marker measured up to time <math><msub><mi>t</mi> <mn>0</mn></msub> </math> . We accommodate both potential censoring of the primary outcome and of the surrogate marker. A simulation study demonstrates a good finite-sample performance of our proposed methods. We illustrate our procedures by examining repeated measures of fasting plasma glucose, a surrogate marker for diabetes diagnosis, using data from the diabetes prevention programme.</p>","PeriodicalId":49982,"journal":{"name":"Journal of the Royal Statistical Society Series B-Statistical Methodology","volume":"87 3","pages":"891-907"},"PeriodicalIF":3.6,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12256123/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144638525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust angle-based transfer learning in high dimensions
Pub Date: 2024-12-03. eCollection Date: 2025-07-01. DOI: 10.1093/jrsssb/qkae111
Tian Gu, Yi Han, Rui Duan
Transfer learning improves target model performance by leveraging data from related source populations, especially when target data are scarce. This study addresses the challenge of training high-dimensional regression models with limited target data in the presence of heterogeneous source populations. We focus on a practical setting where only parameter estimates of pretrained source models are available, rather than individual-level source data. For a single source model, we propose a novel angle-based transfer learning (angleTL) method that leverages concordance between source and target model parameters. AngleTL adapts to the signal strength of the target model, unifies several benchmark methods, and mitigates negative transfer when between-population heterogeneity is large. We extend angleTL to incorporate multiple source models, accounting for varying levels of relevance among them. Our high-dimensional asymptotic analysis provides insights into when a source model benefits the target model and demonstrates the superiority of angleTL over other methods. Extensive simulations validate these findings and highlight the feasibility of applying angleTL to transfer genetic risk prediction models across multiple biobanks.
{"title":"Robust angle-based transfer learning in high dimensions.","authors":"Tian Gu, Yi Han, Rui Duan","doi":"10.1093/jrsssb/qkae111","DOIUrl":"10.1093/jrsssb/qkae111","url":null,"abstract":"<p><p>Transfer learning improves target model performance by leveraging data from related source populations, especially when target data are scarce. This study addresses the challenge of training high-dimensional regression models with limited target data in the presence of heterogeneous source populations. We focus on a practical setting where only parameter estimates of pretrained source models are available, rather than individual-level source data. For a single source model, we propose a novel angle-based transfer learning (angleTL) method that leverages concordance between source and target model parameters. AngleTL adapts to the signal strength of the target model, unifies several benchmark methods, and mitigates negative transfer when between-population heterogeneity is large. We extend angleTL to incorporate multiple source models, accounting for varying levels of relevance among them. Our high-dimensional asymptotic analysis provides insights into when a source model benefits the target model and demonstrates the superiority of angleTL over other methods. Extensive simulations validate these findings and highlight the feasibility of applying angleTL to transfer genetic risk prediction models across multiple biobanks.</p>","PeriodicalId":49982,"journal":{"name":"Journal of the Royal Statistical Society Series B-Statistical Methodology","volume":"87 3","pages":"723-745"},"PeriodicalIF":3.6,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12256125/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144638524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Causal mediation analysis: selection with asymptotically valid inference
Pub Date: 2024-11-28. eCollection Date: 2025-07-01. DOI: 10.1093/jrsssb/qkae109
Jeremiah Jones, Ashkan Ertefaie, Robert L Strawderman
Researchers are often interested in learning not only the effect of treatments on outcomes, but also the mechanisms that transmit these effects. A mediator is a variable that is affected by treatment and subsequently affects the outcome. Existing methods for penalized mediation analysis may ignore important mediators, and they either assume that finite-dimensional linear models are sufficient to remove confounding bias or perform no confounding control at all. In practice, these assumptions may not hold. We propose a method that treats the confounding functions as nuisance parameters to be estimated using data-adaptive methods. We then apply a novel regularization method to the resulting objective function to identify a set of important mediators. We consider natural direct and indirect effects as our target parameters. We derive the asymptotic properties of our estimators and establish the oracle property under specific assumptions. Asymptotic results are also presented in a local setting, which contrasts the proposal with the standard adaptive lasso. We also propose a perturbation bootstrap technique to provide asymptotically valid post-selection inference for the mediated effects of interest. The performance of these methods is demonstrated through simulation studies.
{"title":"Causal mediation analysis: selection with asymptotically valid inference.","authors":"Jeremiah Jones, Ashkan Ertefaie, Robert L Strawderman","doi":"10.1093/jrsssb/qkae109","DOIUrl":"10.1093/jrsssb/qkae109","url":null,"abstract":"<p><p>Researchers are often interested in learning not only the effect of treatments on outcomes, but also the mechanisms that transmit these effects. A mediator is a variable that is affected by treatment and subsequently affects outcome. Existing methods for penalized mediation analyses may lead to ignoring important mediators and either assume that finite-dimensional linear models are sufficient to remove confounding bias, or perform no confounding control at all. In practice, these assumptions may not hold. We propose a method that considers the confounding functions as nuisance parameters to be estimated using data-adaptive methods. We then use a novel regularization method applied to this objective function to identify a set of important mediators. We consider natural direct and indirect effects as our target parameters. We then proceed to derive the asymptotic properties of our estimators and establish the oracle property under specific assumptions. Asymptotic results are also presented in a local setting, which contrast the proposal with the standard adaptive lasso. We also propose a perturbation bootstrap technique to provide asymptotically valid postselection inference for the mediated effects of interest. The performance of these methods will be discussed and demonstrated through simulation studies.</p>","PeriodicalId":49982,"journal":{"name":"Journal of the Royal Statistical Society Series B-Statistical Methodology","volume":"87 3","pages":"678-700"},"PeriodicalIF":3.6,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12256126/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144638523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A focusing framework for testing bi-directional causal effects in Mendelian randomization
Pub Date: 2024-11-21. eCollection Date: 2025-04-01. DOI: 10.1093/jrsssb/qkae101
Sai Li, Ting Ye
Mendelian randomization (MR) is a powerful method that uses genetic variants as instrumental variables to infer the causal effect of a modifiable exposure on an outcome. We study inference for bi-directional causal relationships and causal directions with possibly pleiotropic genetic variants. We show that the assumptions underlying common MR methods are often impossible or too stringent given the potential bi-directional relationships. We propose a new focusing framework for testing bi-directional causal effects, which can be coupled with many state-of-the-art MR methods. We provide theoretical guarantees for our proposal and demonstrate its performance using several simulated and real datasets.
{"title":"A focusing framework for testing bi-directional causal effects in Mendelian randomization.","authors":"Sai Li, Ting Ye","doi":"10.1093/jrsssb/qkae101","DOIUrl":"10.1093/jrsssb/qkae101","url":null,"abstract":"<p><p>Mendelian randomization (MR) is a powerful method that uses genetic variants as instrumental variables to infer the causal effect of a modifiable exposure on an outcome. We study inference for bi-directional causal relationships and causal directions with possibly pleiotropic genetic variants. We show that assumptions for common MR methods are often impossible or too stringent given the potential bi-directional relationships. We propose a new focusing framework for testing bi-directional causal effects and it can be coupled with many state-of-the-art MR methods. We provide theoretical guarantees for our proposal and demonstrate its performance using several simulated and real datasets.</p>","PeriodicalId":49982,"journal":{"name":"Journal of the Royal Statistical Society Series B-Statistical Methodology","volume":"87 2","pages":"529-548"},"PeriodicalIF":3.6,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11985100/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144047748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}