Pub Date: 2023-03-01. Epub Date: 2022-04-05. DOI: 10.1093/biomet/asac021
Jinsong Chen, Quefeng Li, Hua Yun Chen
Generalized linear models often involve a high-dimensional nuisance parameter, as in applications such as testing gene-environment or gene-gene interactions. In these scenarios, it is essential to test the significance of a high-dimensional sub-vector of the model's coefficients. Although some existing methods can tackle this problem, they often rely on the bootstrap to approximate the asymptotic distribution of the test statistic and are thus computationally expensive. Here, we propose a computationally efficient test with a closed-form limiting distribution, which allows the parameter being tested to be either sparse or dense. We show that under certain regularity conditions the type I error of the proposed method is asymptotically correct, and we establish its power under high-dimensional alternatives. Extensive simulations demonstrate the good performance of the proposed test and its robustness when certain sparsity assumptions are violated. We also apply the proposed method to Chinese famine sample data to show its performance when testing the significance of gene-environment interactions.
Testing generalized linear models with high-dimensional nuisance parameter. Jinsong Chen, Quefeng Li, Hua Yun Chen. Biometrika, 2023-03-01. DOI: 10.1093/biomet/asac021. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9933885/pdf/
Summary: This paper is concerned with nonparametric estimation of the intensity function of a point process on a Riemannian manifold. It provides a first-order asymptotic analysis of the proposed kernel estimator for Poisson processes, supplemented by empirical work to probe the behaviour in finite samples and under other generative regimes. The investigation highlights the scope for finite-sample improvements by allowing the bandwidth to adapt to local curvature.
Nonparametric estimation of the intensity function of a spatial point process on a Riemannian manifold. S Ward, H S Battey, E A K Cohen. Biometrika, 2023-02-28. DOI: 10.1093/biomet/asad012
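As a rough illustration of kernel intensity estimation on a curved space (not the estimator analysed in the paper), the sketch below evaluates a Gaussian kernel of the geodesic distance for a simulated Poisson process on the unit sphere. The flat-space normalising constant 2πh² and the fixed bandwidth h = 0.3 are simplifying assumptions; the paper's point about curvature-adaptive bandwidths is exactly what this toy omits.

```python
import numpy as np

def geodesic_dist(x, y):
    # Great-circle distance between unit vectors on the sphere S^2.
    return np.arccos(np.clip(x @ y, -1.0, 1.0))

def kernel_intensity(query, events, h):
    # Gaussian kernel of the geodesic distance; the flat-space normalising
    # constant 2*pi*h^2 is a small-bandwidth approximation that ignores
    # curvature corrections.
    d = np.array([geodesic_dist(query, e) for e in events])
    return np.sum(np.exp(-0.5 * (d / h) ** 2)) / (2 * np.pi * h ** 2)

rng = np.random.default_rng(0)
# Homogeneous Poisson process on the sphere: Poisson count, uniform locations.
n = rng.poisson(500)
pts = rng.normal(size=(n, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
lam_hat = kernel_intensity(np.array([0.0, 0.0, 1.0]), pts, h=0.3)
# True intensity is 500 / (4 * pi), roughly 39.8 per unit area.
```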
The Fréchet mean generalizes the concept of a mean to a metric space setting. In this work we consider equivariant estimation of Fréchet means for parametric models on metric spaces that are Riemannian manifolds. The geometry and symmetry of such a space are partially encoded by its isometry group of distance-preserving transformations. Estimators that are equivariant under the isometry group take into account the symmetry of the metric space. For some models there exists an optimal equivariant estimator, which will necessarily perform as well as or better than other common equivariant estimators, such as the maximum likelihood estimator or the sample Fréchet mean. We derive the general form of this minimum risk equivariant estimator and in a few cases provide explicit expressions for it. A result for finding the Fréchet mean for distributions with radially decreasing densities is presented and used to find expressions for the minimum risk equivariant estimator. In some models the isometry group is not large enough relative to the parametric family of distributions for there to exist a minimum risk equivariant estimator. In such cases, we introduce an adaptive equivariant estimator that uses the data to select a submodel for which there is a minimum risk equivariant estimator. Simulation results show that the adaptive equivariant estimator performs favourably relative to alternative estimators.
Equivariant Estimation of Fréchet Means. A. Mccormack, P. Hoff. Biometrika, 2023-02-24. DOI: 10.1093/biomet/asad014
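The sample Fréchet mean, one of the baseline estimators mentioned above, can be computed by Riemannian gradient descent. Below is a minimal sketch on the unit sphere using the standard log/exp maps; the minimum risk equivariant and adaptive estimators from the paper are not reproduced here, and the step size and iteration count are illustrative choices.

```python
import numpy as np

def log_map(p, q):
    # Log map on the unit sphere: tangent vector at p pointing towards q,
    # with length equal to the geodesic distance.
    v = q - (p @ q) * p
    nv = np.linalg.norm(v)
    theta = np.arccos(np.clip(p @ q, -1.0, 1.0))
    return np.zeros_like(p) if nv < 1e-12 else theta * v / nv

def exp_map(p, v):
    # Exp map on the unit sphere.
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

def frechet_mean(points, iters=100):
    # Riemannian gradient descent: average the log-mapped points in the
    # tangent space at the current iterate, then exponentiate back.
    mu = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        mu = exp_map(mu, np.mean([log_map(mu, q) for q in points], axis=0))
    return mu

rng = np.random.default_rng(1)
pts = rng.normal(loc=[0.0, 0.0, 1.0], scale=0.1, size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
mu = frechet_mean(pts)   # should sit very close to the north pole
```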
Summary: Modern longitudinal data, for example from wearable devices, may consist of measurements of biological signals on a fixed set of participants at a diverging number of time-points. Traditional statistical methods are not equipped to handle the computational burden of repeatedly analysing the cumulatively growing dataset each time new data are collected. We propose a new estimation and inference framework for dynamic updating of point estimates and their standard errors along sequentially collected datasets with dependence, both within and between the datasets. The key technique is a decomposition of the extended inference function vector of the quadratic inference function constructed over the cumulative longitudinal data into a sum of summary statistics over data batches. We show how this sum can be recursively updated without the need to access the whole dataset, resulting in a computationally efficient streaming procedure with minimal loss of statistical efficiency. We prove consistency and asymptotic normality of our streaming estimator as the number of data batches diverges, even as the number of independent participants remains fixed. Simulations demonstrate the advantages of our approach over traditional statistical methods that assume independence between data batches. Finally, we investigate the relationship between physical activity and several diseases through analysis of accelerometry data from the National Health and Nutrition Examination Survey.
Statistical inference for streamed longitudinal data. Lan Luo, Jingshen Wang, Emily C Hector. Biometrika, 2023-02-20. DOI: 10.1093/biomet/asad010
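The batch-decomposition idea can be illustrated with a far simpler model: streaming least squares, where the estimator depends on the data only through running cross-products that are updated batch by batch. This is an analogue for intuition only, not the paper's quadratic inference function procedure, and it ignores the between-batch dependence the paper handles.

```python
import numpy as np

class StreamingOLS:
    # The estimator depends on the data only through the running sums
    # X'X and X'y, so each batch updates two fixed-size summaries and
    # can then be discarded.
    def __init__(self, p):
        self.xtx = np.zeros((p, p))
        self.xty = np.zeros(p)

    def update(self, x, y):
        self.xtx += x.T @ x
        self.xty += x.T @ y

    def coef(self):
        return np.linalg.solve(self.xtx, self.xty)

rng = np.random.default_rng(2)
beta = np.array([1.0, -2.0, 0.5])
model = StreamingOLS(3)
for _ in range(50):                      # 50 sequentially arriving batches
    x = rng.normal(size=(20, 3))
    y = x @ beta + rng.normal(size=20)
    model.update(x, y)                   # constant memory per batch
beta_hat = model.coef()
```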
Summary: The single-regression Granger–Geweke causality estimator has previously been shown to solve known problems associated with the more conventional likelihood ratio estimator; however, its sampling distribution has remained unknown. We show that, under the null hypothesis of vanishing Granger causality, the single-regression estimator converges to a generalized χ2 distribution, which is well approximated by a Γ distribution. We show that this holds too for Geweke’s spectral causality averaged over a given frequency band, and derive explicit expressions for the generalized χ2 and Γ-approximation parameters in both cases. We present a Neyman–Pearson test based on the single-regression estimators, and discuss how it may be deployed in empirical scenarios. We outline how our analysis may be extended to the conditional case, point-frequency spectral Granger causality and the important case of state-space Granger causality.
Sampling distribution for single-regression Granger causality estimators. A J Gutknecht, L Barnett. Biometrika, 2023-02-14. DOI: 10.1093/biomet/asad009
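For intuition on the Γ-approximation idea, the sketch below simulates the null distribution of the classical two-regression Granger causality estimator (the paper's single-regression estimator avoids the restricted fit and has a different, generalized χ2 limit) and moment-matches a Γ distribution to the simulated statistics. Lag order, series length and replication count are illustrative choices.

```python
import numpy as np

def granger_stat(x, y, p=1):
    # Classical two-regression estimator: log ratio of restricted to full
    # residual sums of squares when regressing y on its own lags, with and
    # without lags of x.
    n = len(y)
    Y = y[p:]
    lag_y = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    lag_x = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
        r = Y - design @ beta
        return r @ r

    full = np.column_stack([np.ones(n - p), lag_y, lag_x])
    restricted = np.column_stack([np.ones(n - p), lag_y])
    return np.log(rss(restricted) / rss(full))

rng = np.random.default_rng(3)
stats = np.array([granger_stat(rng.normal(size=300), rng.normal(size=300))
                  for _ in range(500)])   # x truly has no effect on y
# Moment-matched Gamma approximation to the simulated null distribution.
m, v = stats.mean(), stats.var()
shape, scale = m * m / v, v / m
```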
Summary: Bootstrapping spectral mean statistics has been a notoriously difficult problem over the past 25 years. Many frequency-domain bootstraps are valid only for certain time series structures, e.g., linear processes, or for special types of statistics, e.g., ratio statistics, because such bootstraps fail to capture the limiting variance of spectral statistics in general settings. We address this issue with a different form of resampling, namely subsampling. Although not considered previously, subsampling provides consistent variance estimation under much weaker conditions than any existing bootstrap in the frequency domain. Mixing conditions are not used, as is otherwise standard with subsampling. Rather, subsampling can be justified under the same conditions needed for the spectral mean statistics to have distributional limits in the first place. This result also has implications for other bootstrap methods: subsampling can be used to extend the validity of recent state-of-the-art bootstraps in the frequency domain. We nontrivially link subsampling to such bootstraps, which broadens their range, as the moment and block assumptions they need are cut by more than half. Essentially, state-of-the-art bootstraps then require no assumptions more stringent than those needed for the target limit distribution to exist, which is unusual in the bootstrap world. We also close a gap in the theory of subsampling for time series by providing distributional approximations, in addition to variance estimation, for frequency-domain statistics.
A subsampling perspective for extending the validity of state-of-the-art bootstraps in the frequency domain. Haihan Yu, Mark S Kaiser, Daniel J Nordman. Biometrika, 2023-01-30. DOI: 10.1093/biomet/asad006
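The basic subsampling variance estimator can be sketched for the simplest dependent-data statistic, the sample mean of a stationary series: compute the statistic on every overlapping block of length b and rescale the dispersion of the block statistics by b. The AR(1) model, φ = 0.5 and b = 200 below are illustrative choices, not from the paper, which treats spectral mean statistics.

```python
import numpy as np

def subsample_var(x, b):
    # Statistic (here, the mean) on every overlapping block of length b;
    # the dispersion of the block statistics, rescaled by b, estimates the
    # limiting variance of sqrt(n) times the full-sample mean.
    blocks = np.lib.stride_tricks.sliding_window_view(x, b)
    block_means = blocks.mean(axis=1)
    return b * np.mean((block_means - x.mean()) ** 2)

rng = np.random.default_rng(4)
n, phi = 20000, 0.5
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]     # AR(1) series
v_hat = subsample_var(x, b=200)
# The long-run variance for the mean of this AR(1) is 1 / (1 - phi)^2 = 4.
```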
In this paper, we develop a systematic theory for high-dimensional analysis of variance in multivariate linear regression, where the dimension and the number of coefficients can both grow with the sample size. We propose a new U-type test statistic for testing linear hypotheses and establish a high-dimensional Gaussian approximation result under fairly mild moment assumptions. Our general framework and theory can be applied to the classical one-way multivariate analysis of variance and to its nonparametric counterpart in high dimensions. To implement the test procedure, we introduce a sample-splitting-based estimator of the second moment of the error covariance and discuss its properties. A simulation study shows that our proposed test outperforms some existing tests in various settings.
High Dimensional Analysis of Variance in Multivariate Linear Regression. Zhipeng Lou, Xianyang Zhang, Weichi Wu. Biometrika, 2023-01-10. DOI: 10.1093/biomet/asad001
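A split-sample estimator of a covariance second moment can be sketched as follows: covariance estimates from two independent halves of the errors are each unbiased for Σ, so by independence tr(S1 S2) is unbiased for tr(Σ2). This is a simplified stand-in for illustration, not the estimator constructed in the paper.

```python
import numpy as np

def tr_sigma_sq_split(E):
    # Sample covariances from two independent halves are each unbiased for
    # Sigma, so by independence E[tr(S1 @ S2)] = tr(Sigma^2).
    n = E.shape[0]
    S1 = np.cov(E[: n // 2], rowvar=False)
    S2 = np.cov(E[n // 2:], rowvar=False)
    return np.trace(S1 @ S2)

rng = np.random.default_rng(5)
p = 50
# AR(1)-type covariance: Sigma_ij = 0.5^|i-j|.
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
L = np.linalg.cholesky(Sigma)
E = rng.normal(size=(2000, p)) @ L.T      # errors with covariance Sigma
est = tr_sigma_sq_split(E)
truth = np.trace(Sigma @ Sigma)
```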
Point processes are probabilistic tools for modelling event data. While there exists a fast-growing literature studying the relationships between point processes, it remains unexplored how such relationships connect to causal effects. In the presence of unmeasured confounders, parameters from point process models do not necessarily have causal interpretations. We propose an instrumental variable method for causal inference with point process treatment and outcome. We define causal quantities based on potential outcomes and establish nonparametric identification results with a binary instrumental variable. We extend the traditional Wald estimation to deal with point process treatment and outcome, showing that it should be performed after a Fourier transform of the intention-to-treat effects on the treatment and outcome and thus takes the form of deconvolution. We term this generalized Wald estimation and propose an estimation strategy based on well-established deconvolution methods.
An instrumental variable method for point processes: generalized Wald estimation based on deconvolution. Zhichao Jiang, Shizhe Chen, Peng Ding. Biometrika, 2023-01-09. DOI: 10.1093/biomet/asad005
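The deconvolution step can be illustrated in a noiseless toy: if h is the circular convolution of a known kernel f with an unknown g, dividing Fourier transforms recovers g exactly. The paper's generalized Wald estimation divides estimated intention-to-treat effects and relies on proper deconvolution methods to handle estimation noise; none of that machinery appears in this sketch, and the exponential kernel is an illustrative choice.

```python
import numpy as np

def deconvolve(h, f):
    # If h = f * g (circular convolution), then H = F G pointwise in the
    # Fourier domain, so g is recovered by dividing transforms.
    return np.real(np.fft.ifft(np.fft.fft(h) / np.fft.fft(f)))

rng = np.random.default_rng(6)
n = 64
g = rng.normal(size=n)                    # unknown sequence to recover
f = np.exp(-np.arange(n) / 5.0)           # well-conditioned known kernel
h = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))  # observed h = f * g
g_hat = deconvolve(h, f)                  # exact up to rounding error
```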
Pub Date: 2022-12-01. Epub Date: 2022-11-18. DOI: 10.1093/biomet/asab059
Ian W McKeague, Xin Zhang
We consider the problem of testing for the presence of linear relationships between large sets of random variables based on a post-selection inference approach to canonical correlation analysis. The challenge is to adjust for the selection of subsets of variables having linear combinations with maximal sample correlation. To this end, we construct a stabilized one-step estimator of the Euclidean norm of the canonical correlations maximized over subsets of variables of pre-specified cardinality. This estimator is shown to be consistent for its target parameter and asymptotically normal, provided the dimensions of the variables do not grow too quickly with sample size. We also develop a greedy search algorithm to accurately compute the estimator, leading to a computationally tractable omnibus test for the global null hypothesis that there are no linear relationships between any subsets of variables having the pre-specified cardinality. We further develop a confidence interval that takes the variable selection into account.
Significance testing for canonical correlation analysis in high dimensions. Ian W McKeague, Xin Zhang. Biometrika, 2022-12-01. DOI: 10.1093/biomet/asab059. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9857302/pdf/nihms-1771870.pdf
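A greedy search over variable subsets can be sketched using the QR identity that the singular values of Qx'Qy are the sample canonical correlations. The greedy rule below (add the best column pair at each step) is a hypothetical simplification for illustration, not the authors' algorithm, and the stabilized one-step estimator and post-selection adjustment are omitted entirely.

```python
import numpy as np

def max_cancorr(X, Y):
    # QR identity: the singular values of Qx' Qy are the canonical
    # correlations of (X, Y) after centring.
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def greedy_subsets(X, Y, k):
    # Greedy rule (a simplification): at each step, add the column pair
    # (one from X, one from Y) giving the largest leading canonical
    # correlation of the enlarged subsets.
    sel_x, sel_y = [], []
    best = -1.0
    for _ in range(k):
        best, pair = -1.0, None
        for i in range(X.shape[1]):
            if i in sel_x:
                continue
            for j in range(Y.shape[1]):
                if j in sel_y:
                    continue
                r = max_cancorr(X[:, sel_x + [i]], Y[:, sel_y + [j]])
                if r > best:
                    best, pair = r, (i, j)
        sel_x.append(pair[0])
        sel_y.append(pair[1])
    return sel_x, sel_y, best

rng = np.random.default_rng(7)
z = rng.normal(size=500)
X = rng.normal(size=(500, 6))
Y = rng.normal(size=(500, 6))
X[:, 2] += 2 * z                 # columns 2 of X and 4 of Y share a signal;
Y[:, 4] += 2 * z                 # their population correlation is 4/5 = 0.8
sel_x, sel_y, r = greedy_subsets(X, Y, k=2)
```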