Models and Rating Systems for Head-to-Head Competition
Pub Date: 2024-11-20 | DOI: 10.1146/annurev-statistics-040722-061813
Mark E. Glickman, Albyn C. Jones
One of the most important tasks in sports analytics is the development of binary response models for head-to-head game outcomes to estimate team and player strength. We discuss commonly used probability models for game outcomes, including the Bradley–Terry and Thurstone–Mosteller models, as well as extensions that allow ties as a third outcome and incorporate a home-field advantage. We consider dynamic extensions to these models to account for the evolution of competitor strengths over time. Full likelihood-based analyses of these time-varying models can be simplified into rating systems, such as the Elo and Glicko rating systems. We present other modern rating systems, including popular methods for online gaming, and novel systems that have been implemented for online chess and Go. The discussion of the analytic methods is accompanied by examples of how these approaches have been implemented by various gaming organizations, as well as a detailed application to National Basketball Association game outcomes.
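To make the flavor of these models concrete, the sketch below implements the Bradley–Terry win probability (logistic in the difference of log-strengths) and a single Elo rating update; the K-factor of 32 and the 400-point logistic scale are conventional choices, not values taken from the article.

```python
import math

def bradley_terry_win_prob(theta_a, theta_b):
    """P(A beats B) under the Bradley-Terry model, with theta the log-strengths."""
    return 1.0 / (1.0 + math.exp(-(theta_a - theta_b)))

def elo_update(rating_a, rating_b, score_a, k=32):
    """One Elo update for player A; score_a is 1 (win), 0.5 (draw), or 0 (loss)."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))
    return rating_a + k * (score_a - expected_a)

print(bradley_terry_win_prob(0.4, 0.0))   # about 0.60
print(elo_update(1600, 1500, 1.0))        # the winner gains a bit over 11 points
```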
A Review of Reinforcement Learning in Financial Applications
Pub Date: 2024-11-15 | DOI: 10.1146/annurev-statistics-112723-034423
Yahui Bai, Yuhe Gao, Runzhe Wan, Sheng Zhang, Rui Song
In recent years, there has been a growing trend of applying reinforcement learning (RL) in financial applications. This approach has shown great potential for decision-making tasks in finance. In this review, we present a comprehensive study of the applications of RL in finance and conduct a series of meta-analyses to investigate the common themes in the literature, such as the factors that most significantly affect RL's performance compared with traditional methods. Moreover, we identify challenges, including explainability, Markov decision process modeling, and robustness, that hinder the broader utilization of RL in the financial industry and discuss recent advancements in overcoming these challenges. Finally, we propose future research directions, such as benchmarking, contextual RL, multi-agent RL, and model-based RL to address these challenges and to further enhance the implementation of RL in finance.
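As a purely illustrative sketch of the MDP framing discussed in the review, the code below casts a toy trading problem (synthetic prices, with the action being a flat or long position) as a Markov decision process and applies tabular Q-learning; the environment, state definition, and hyperparameters are invented for this example and are not drawn from the article.

```python
import math
import random
from collections import defaultdict

# Toy trading MDP, purely illustrative: synthetic prices, action = target position
# (0 = flat, 1 = long one unit), reward = next-period P&L of that position.
random.seed(0)
prices = [100 + 5 * math.sin(t / 5.0) + random.gauss(0, 0.5) for t in range(300)]

actions = [0, 1]
Q = defaultdict(float)                 # Q[(state, action)] values
alpha, gamma, eps = 0.1, 0.95, 0.1     # learning rate, discount, exploration rate

def state(t, pos):
    # Crude Markov state: did the price just go up, and what position do we hold?
    return (int(prices[t] > prices[t - 1]), pos)

pos = 0
for t in range(1, len(prices) - 1):
    s = state(t, pos)
    a = random.choice(actions) if random.random() < eps else max(actions, key=lambda x: Q[(s, x)])
    reward = a * (prices[t + 1] - prices[t])
    s_next = state(t + 1, a)
    best_next = max(Q[(s_next, x)] for x in actions)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])   # Q-learning update
    pos = a

print({k: round(v, 2) for k, v in Q.items()})
```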
Joint Modeling of Longitudinal and Survival Data
Pub Date: 2024-11-14 | DOI: 10.1146/annurev-statistics-112723-034334
Jane-Ling Wang, Qixian Zhong
In medical studies, time-to-event outcomes such as time to death or relapse of a disease are routinely recorded along with longitudinal data that are observed intermittently during the follow-up period. For various reasons, marginal approaches that model the event time and the longitudinal process separately tend to induce bias and lose efficiency. Instead, a joint modeling approach that brings the two types of data together can reduce or eliminate the bias and yield a more efficient estimation procedure. A well-established avenue for joint modeling is the joint likelihood approach that often produces semiparametric efficient estimators for the finite-dimensional parameter vectors in both models. Through a transformation survival model with an unspecified baseline hazard function, this review introduces joint modeling that accommodates both baseline covariates and time-varying covariates. The focus is on the major challenges faced by joint modeling and how they can be overcome. A review of available software implementations and a brief discussion of future directions of the field are also included.
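As a point of reference for the model structure being reviewed, the display below writes out the widely used shared-random-effects joint model with a proportional-hazards survival submodel; the article's transformation survival model generalizes this hazard specification, and the notation here is generic rather than taken from the paper.

```latex
\begin{aligned}
y_i(t) &= m_i(t) + \varepsilon_i(t), \qquad
m_i(t) = \mathbf{x}_i(t)^\top \boldsymbol{\beta} + \mathbf{z}_i(t)^\top \mathbf{b}_i, \qquad
\mathbf{b}_i \sim N(\mathbf{0}, \mathbf{D}), \\
h_i(t) &= h_0(t)\,\exp\{\mathbf{w}_i^\top \boldsymbol{\gamma} + \alpha\, m_i(t)\},
\end{aligned}
```

Here $m_i(t)$ is the subject's underlying longitudinal trajectory, $h_0(t)$ is the unspecified baseline hazard, and the association parameter $\alpha$ links the two submodels; $\alpha = 0$ recovers separate (marginal) analyses.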
Neural Methods for Amortized Inference
Pub Date: 2024-11-12 | DOI: 10.1146/annurev-statistics-112723-034123
Andrew Zammit-Mangion, Matthew Sainsbury-Dale, Raphaël Huser
Simulation-based methods for statistical inference have evolved dramatically over the past 50 years, keeping pace with technological advancements. The field is undergoing a new revolution as it embraces the representational capacity of neural networks, optimization libraries, and graphics processing units for learning complex mappings between data and inferential targets. The resulting tools are amortized, in the sense that, after an initial setup cost, they allow rapid inference through fast feed-forward operations. In this article we review recent progress in the context of point estimation, approximate Bayesian inference, summary-statistic construction, and likelihood approximation. We also cover software and include a simple illustration to showcase the wide array of tools available for amortized inference and the benefits they offer over Markov chain Monte Carlo methods. The article concludes with an overview of relevant topics and an outlook on future research directions.
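The sketch below illustrates the amortization idea on a deliberately trivial problem: a small network is trained once on simulated (parameter, data) pairs to output a point estimate of a normal mean, after which inference on new data is a single forward pass. The simulator, architecture, and training settings are invented for this example (it assumes PyTorch is available) and are not the tools or models used in the article.

```python
import torch
import torch.nn as nn

# Amortized point estimation on a toy problem: learn a map from a sample of
# size 30 drawn from N(mu, 1) to an estimate of mu. All sizes, the prior,
# and the architecture are invented for this illustration.
n_train, n_obs = 20000, 30
mu = torch.empty(n_train, 1).uniform_(-3, 3)     # parameters drawn from a uniform "prior"
y = mu + torch.randn(n_train, n_obs)             # simulated data sets, one row per parameter

net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(200):                              # one-off ("amortized") training cost
    opt.zero_grad()
    loss = ((net(y) - mu) ** 2).mean()            # squared error targets the posterior mean
    loss.backward()
    opt.step()

y_new = 1.5 + torch.randn(1, n_obs)               # inference on new data is one forward pass
print(net(y_new).item())                          # should land near 1.5
```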
Infectious Disease Modeling
Pub Date: 2024-11-12 | DOI: 10.1146/annurev-statistics-112723-034351
Jing Huang, Jeffrey S. Morris
Infectious diseases pose a persistent challenge to public health worldwide. Recent global health crises, such as the COVID-19 pandemic and Ebola outbreaks, have underscored the vital role of infectious disease modeling in guiding public health policy and response. Infectious disease modeling is a critical tool for society, informing risk mitigation measures, prompting timely interventions, and aiding preparedness for healthcare delivery systems. This article synthesizes the current landscape of infectious disease modeling, emphasizing the integration of statistical methods in understanding and predicting the spread of infectious diseases. We begin by examining the historical context and the foundational models that have shaped the field, such as the SIR (susceptible, infectious, recovered) and SEIR (susceptible, exposed, infectious, recovered) models. Subsequently, we delve into the methodological innovations that have arisen, including stochastic modeling, network-based approaches, and the use of big data analytics. We also explore the integration of machine learning techniques in enhancing model accuracy and responsiveness. The review identifies the challenges of parameter estimation, model validation, and the incorporation of real-time data streams. Moreover, we discuss the ethical implications of modeling, such as privacy concerns and the communication of risk. The article concludes by discussing future directions for research, highlighting the need for data integration and interdisciplinary collaboration for advancing infectious disease modeling.
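For readers new to the compartmental models mentioned above, the sketch below integrates a deterministic SIR model with Euler steps; the transmission and recovery rates (beta = 0.3, gamma = 0.1, so R0 = 3) are illustrative values, not estimates from any outbreak discussed in the article.

```python
def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=0.1):
    """Deterministic SIR model integrated with simple Euler steps (illustrative values)."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    daily = []
    for step in range(int(days / dt)):
        ds = -beta * s * i              # new infections leave S
        di = beta * s * i - gamma * i   # and enter I, which recoveries deplete
        dr = gamma * i                  # recoveries enter R
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        if step % int(1 / dt) == 0:
            daily.append((s, i, r))
    return daily

trajectory = simulate_sir()
peak_day, (s, i, r) = max(enumerate(trajectory), key=lambda day: day[1][1])
print(f"Infectious fraction peaks near day {peak_day} at {i:.1%}")
```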
Tensors in High-Dimensional Data Analysis: Methodological Opportunities and Theoretical Challenges
Pub Date: 2024-11-12 | DOI: 10.1146/annurev-statistics-112723-034548
Arnab Auddy, Dong Xia, Ming Yuan
Large amounts of multidimensional data represented by multiway arrays or tensors are prevalent in modern applications across various fields such as chemometrics, genomics, physics, psychology, and signal processing. The structural complexity of such data provides vast new opportunities for modeling and analysis, but efficiently extracting information content from them, both statistically and computationally, presents unique and fundamental challenges. Addressing these challenges requires an interdisciplinary approach that brings together tools and insights from statistics, optimization, and numerical linear algebra, among other fields. Despite these hurdles, significant progress has been made in the past decade. This review seeks to examine some of the key advancements and identify common threads among them, under a number of different statistical settings.
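As one concrete instance of the methods surveyed, the sketch below implements a basic CP (canonical polyadic) decomposition of a three-way array by alternating least squares in NumPy; the rank, dimensions, and iteration count are illustrative choices, and production work would use a dedicated tensor library.

```python
import numpy as np

def khatri_rao(U, V):
    # Column-wise Kronecker product: (I*J) x R for U of shape (I, R) and V of shape (J, R).
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def cp_als(X, rank, n_iter=100, seed=0):
    # Rank-R CP decomposition of a 3-way array by alternating least squares.
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C), X.reshape(I, J * K).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), np.transpose(X, (1, 0, 2)).reshape(J, I * K).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), np.transpose(X, (2, 0, 1)).reshape(K, I * J).T, rcond=None)[0].T
    return A, B, C

# Recover the factors of a noiseless rank-2 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (6, 5, 4))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)))  # reconstruction error, should be small
```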
Excess Mortality Estimation
Pub Date: 2024-11-12 | DOI: 10.1146/annurev-statistics-112723-034236
Jon Wakefield, Victoria Knutson
Estimating the mortality associated with a specific mortality crisis event (for example, a pandemic, natural disaster, or conflict) is clearly an important public health undertaking. In many situations, deaths may be directly or indirectly attributable to the mortality crisis event, and both contributions may be of interest. The totality of the mortality impact on the population (direct and indirect deaths) includes the knock-on effects of the event, such as a breakdown of the health care system, or increased mortality due to shortages of resources. Unfortunately, estimating the deaths directly attributable to the event is frequently problematic. Hence, the excess mortality, defined as the difference between the observed mortality and that which would have occurred in the absence of the crisis event, is an estimation target. If the region of interest contains a functioning vital registration system, so that the mortality is fully observed and reliable, then the only modeling required is to produce the expected death counts, but this is a nontrivial exercise. In low- and middle-income countries it is common for mortality data to be incomplete (or nonexistent), and one must then use additional data and/or modeling, including predicting mortality using auxiliary variables. We describe and review each of these aspects, give examples of excess mortality studies, and provide a case study on excess mortality across states of the United States during the COVID-19 pandemic.
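A minimal numerical illustration of the definition above: expected deaths for the crisis years are projected from a linear trend fit to pre-crisis years, and excess mortality is the observed-minus-expected difference. All counts here are hypothetical, and real analyses model seasonality, age structure, and reporting delays rather than a simple trend.

```python
import numpy as np

# Hypothetical counts: expected deaths for the crisis years are projected from a
# linear trend fit to five pre-crisis years; excess = observed - expected.
baseline_years = np.arange(2015, 2020)
baseline_deaths = np.array([51200, 51900, 52300, 53100, 53600])
slope, intercept = np.polyfit(baseline_years, baseline_deaths, 1)

observed = {2020: 61800, 2021: 59400}
for year, obs in observed.items():
    expected = intercept + slope * year
    print(f"{year}: observed {obs}, expected {expected:,.0f}, excess {obs - expected:,.0f}")
```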
Empirical Likelihood in Functional Data Analysis
Pub Date: 2024-11-12 | DOI: 10.1146/annurev-statistics-112723-034225
Hsin-wen Chang, Ian W. McKeague
Functional data analysis (FDA) studies data that include infinite-dimensional functions or objects, generalizing traditional univariate or multivariate observations from each study unit. Among inferential approaches without parametric assumptions, empirical likelihood (EL) offers a principled method in that it extends the framework of parametric likelihood ratio–based inference via the nonparametric likelihood. There has been increasing use of EL in FDA due to its many favorable properties, including self-normalization and the data-driven shape of confidence regions. This article presents a review of EL approaches in FDA, starting with finite-dimensional features, then covering infinite-dimensional features. We contrast smooth and nonsmooth frameworks in FDA and show how EL has been incorporated into both of them. The article concludes with a discussion of some future research directions, including the possibility of applying EL to conformal inference.
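To fix ideas before the functional setting, the sketch below computes the classical empirical likelihood ratio statistic for a scalar mean, profiling out the Lagrange multiplier numerically; under the null it is asymptotically chi-square with one degree of freedom. The simulated data and the use of SciPy's root finder are assumptions of this illustration, not part of the article.

```python
import numpy as np
from scipy.optimize import brentq

def el_statistic(x, mu):
    """-2 log empirical likelihood ratio for the mean (asymptotically chi-square, 1 df).

    Requires min(x) < mu < max(x). The EL weights are p_i = 1 / (n (1 + lam * (x_i - mu))),
    with lam solving sum((x_i - mu) / (1 + lam * (x_i - mu))) = 0.
    """
    z = np.asarray(x) - mu
    lo = -1.0 / z.max() + 1e-8          # keep every 1 + lam * z_i strictly positive
    hi = -1.0 / z.min() - 1e-8
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log(1 + lam * z))

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100)
print(el_statistic(x, mu=2.0))   # usually below 3.84, the 95% chi-square(1) cutoff
```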
Causal Mediation Analysis for Integrating Exposure, Genomic, and Phenotype Data
Pub Date: 2024-10-30 | DOI: 10.1146/annurev-statistics-040622-031653
Haoyu Yang, Zhonghua Liu, Ruoyu Wang, En-Yu Lai, Joel Schwartz, Andrea A. Baccarelli, Yen-Tsung Huang, Xihong Lin
Causal mediation analysis provides an attractive framework for integrating diverse types of exposure, genomic, and phenotype data. Recently, this field has seen a surge of interest, largely driven by the increasing need for causal mediation analyses in health and social sciences. This article aims to provide a review of recent developments in mediation analysis, encompassing mediation analysis of a single mediator and a large number of mediators, as well as mediation analysis with multiple exposures and mediators. Our review focuses on the recent advancements in statistical inference for causal mediation analysis, especially in the context of high-dimensional mediation analysis. We delve into the complexities of testing mediation effects, especially addressing the challenge of testing a large number of composite null hypotheses. Through extensive simulation studies, we compare the existing methods across a range of scenarios. We also include an analysis of data from the Normative Aging Study, which examines DNA methylation CpG sites as potential mediators of the effect of smoking status on lung function. We discuss the pros and cons of these methods and future research directions.
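The sketch below works through the single-mediator linear case using the product-of-coefficients logic: a mediator regression and an outcome regression are fitted by ordinary least squares, and the natural indirect effect is estimated as the product of the exposure-to-mediator and mediator-to-outcome coefficients. The simulated data, effect sizes, and the no-interaction linear specification are assumptions of this illustration, with the standard identification conditions for causal mediation taken as given.

```python
import numpy as np

# Single-mediator linear mediation with simulated data; all effect sizes are illustrative.
rng = np.random.default_rng(0)
n = 2000
x = rng.binomial(1, 0.5, n).astype(float)     # exposure (e.g., smoking: yes/no)
m = 0.8 * x + rng.normal(size=n)              # mediator (e.g., a methylation measure)
y = 0.5 * x + 1.2 * m + rng.normal(size=n)    # outcome (e.g., lung function)

def ols(design, response):
    return np.linalg.lstsq(design, response, rcond=None)[0]

ones = np.ones(n)
alpha = ols(np.column_stack([ones, x]), m)        # mediator model: m ~ x
beta = ols(np.column_stack([ones, x, m]), y)      # outcome model: y ~ x + m

nie = alpha[1] * beta[2]    # natural indirect effect, product of coefficients
nde = beta[1]               # natural direct effect
print(f"NIE ~ {nie:.2f} (true 0.96), NDE ~ {nde:.2f} (true 0.50)")
```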
Designs for Vaccine Studies
Pub Date: 2024-10-30 | DOI: 10.1146/annurev-statistics-033121-120121
M. Elizabeth Halloran
Due to dependent happenings, vaccines can have different effects in populations. In addition to direct protective effects in the vaccinated, vaccination in a population can have indirect effects in the unvaccinated individuals. Vaccination can also reduce person-to-person transmission to vaccinated individuals or from vaccinated individuals compared with unvaccinated individuals. Design of vaccine studies has a history extending back over a century. Emerging infectious diseases, such as the SARS-CoV-2 pandemic and the Ebola outbreak in West Africa, have stimulated new interest in vaccine studies. We focus on some recent developments, such as target trial emulation, test-negative design, and regression discontinuity design. Methods for evaluating durability of vaccine effects were developed in the context of both blinded and unblinded placebo crossover studies. The case-ascertained design is used to assess the transmission effects of vaccines. The novel ring vaccination trial design was first used in the Ebola outbreak in West Africa.
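As a small worked example of one of the designs mentioned, the snippet below estimates vaccine effectiveness from a test-negative design as one minus the (unadjusted) odds ratio of vaccination among test-positive versus test-negative patients; the counts are hypothetical, and real analyses adjust for calendar time, age, and other confounders.

```python
# Hypothetical 2x2 counts from a test-negative design: vaccine effectiveness is
# estimated as one minus the odds ratio of vaccination among test-positives
# (cases) versus test-negatives (controls).
vacc_pos, unvacc_pos = 120, 480    # test-positive patients
vacc_neg, unvacc_neg = 600, 800    # test-negative patients

odds_ratio = (vacc_pos / unvacc_pos) / (vacc_neg / unvacc_neg)
vaccine_effectiveness = 1 - odds_ratio
print(f"OR = {odds_ratio:.2f}, estimated VE = {vaccine_effectiveness:.0%}")
```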