Academic research productivity relies upon the contribution of statisticians, who are typically clustered in statistics and biostatistics departments, isolated from clinical researchers. Most academic health centres have created consultation hubs or research incubators to make statisticians available for individual collaboration to support the clinical research enterprise. Additionally, some clinical departments within academic health centres have recognized the value in colocating statisticians within their clinical departments to improve availability for collaboration with physicians/researchers. Embedded statisticians encounter the same challenges as isolated statisticians regarding professional support and networking, mentorship and clear role expectations. While it is important for all collaborative statisticians to communicate their value effectively to both collaborators and supervisors, this may be especially difficult for embedded statisticians in clinical departments, where their supervisors may not have backgrounds in research or statistics. Previous papers have reported valuable metrics for statisticians, particularly those associated with Biostatistics, Epidemiology and Research Design Cores. There is a knowledge gap regarding metrics tailored to meet the needs of the embedded statistician and clinical supervisors. This paper is a first step towards addressing this important need. In this paper, we explore (1) the critical role of collaborative statisticians and the benefits and challenges of the embedded statistician model, (2) the need for additional metrics specific to embedded statisticians that measure value and (3) how to design a value report. We offer a framework for evaluation of the contributions of the embedded statistician with the following domains: (1) collaboration, (2) research output/productivity, (3) mentoring and (4) education.
Metrics that are particularly specific to embedded statisticians and that are not routinely captured include time from project initiation to completion/outcome, time from initial statistical consultation to statistical outcome completion and summary of level of contribution for manuscripts and presentations in addition to author order. We conclude with thoughts on future directions for development of metrics and reporting systems for statisticians embedded in clinical departments.
"Documenting and communicating the contributions of embedded statisticians: Show me the value!" by Terrie Vasilopoulos, Amy Crisp, Gerard Garvan, Keith Howell, Gregory Janelle and Cynthia Garvan. Stat, 2024-05-14. DOI: https://doi.org/10.1002/sta4.691
Christina Maimone, Julia L. Sharp, Ofira Schwartz‐Soicher, Jeffrey C. Oliver, Lencia Beltran
Leading a data science or statistical consulting team in an academic environment can present many challenges, including those related to institutional infrastructure, funding and technical expertise. Even in the most challenging environment, however, leading such a team with inclusive practices can be rewarding for the leader, the team members and collaborators. We describe nine leadership and management practices that are especially relevant to the dynamics of data science or statistics consulting teams and an academic environment: ensuring people get credit, making tacit knowledge explicit, establishing clear performance review processes, championing career development, empowering team members to work autonomously, learning from diverse experiences, supporting team members in navigating power dynamics, having difficult conversations and developing foundational management skills. Active engagement in these areas will help those who lead data science or statistics consulting groups – whether faculty or staff, regardless of title – create and support inclusive teams.
"Do good: Strategies for leading an inclusive data science or statistics consulting team". Stat, 2024-05-13. DOI: https://doi.org/10.1002/sta4.687
Differential network analysis plays a crucial role in capturing nuanced changes in conditional correlations between two samples. In the high-dimensional setting, the differential network, that is, the difference between the two precision matrices, is usually stylized with sparse signals and some low-rank latent factors. Recognizing the distinctions inherent in the precision matrices of such networks, we introduce a novel approach, termed 'SR-Network', for the estimation of sparse and reduced-rank differential networks. This method directly assesses the differential network by formulating a convex empirical loss function with ℓ1-norm and nuclear norm penalties. The study establishes finite-sample error bounds for parameter estimation and highlights the superior performance of the proposed method through extensive simulations and real data studies. This research significantly contributes to the advancement of methodologies for accurate analysis of differential networks, particularly in the context of structures characterized by sparsity and low-rank features.
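The two penalties in this kind of estimator correspond to two standard proximal operators. As a minimal sketch (a generic proximal-gradient building block, not the paper's SR-Network implementation), entrywise soft-thresholding produces the sparse component while singular-value thresholding produces the low-rank component:

```python
import numpy as np

def soft_threshold(M, lam):
    """Proximal operator of the l1 penalty: entrywise soft-thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

def svd_threshold(M, lam):
    """Proximal operator of the nuclear-norm penalty: soft-threshold
    the singular values, which can reduce the rank exactly."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# Toy sparse component: entries below the threshold are zeroed out exactly.
S = np.array([[0.05, 1.0], [-2.0, 0.02]])
S_hat = soft_threshold(S, 0.1)

# Toy "low-rank plus noise" component: the small singular value is removed,
# so the thresholded matrix has rank 1.
L = np.outer([1.0, 2.0], [1.0, 1.0]) + 0.01 * np.eye(2)
L_hat = svd_threshold(L, 0.5)
```

A full solver would alternate such proximal steps with gradient steps on the empirical loss; the sketch only shows why the ℓ1 penalty yields sparsity and the nuclear-norm penalty yields reduced rank.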
"High-dimensional differential networks with sparsity and reduced-rank" by Yao Wang, Cheng Wang and Binyan Jiang. Stat, 2024-05-13. DOI: https://doi.org/10.1002/sta4.690
The latent position model (LPM) is a popular method used in network data analysis where nodes are assumed to be positioned in a low-dimensional latent space. The latent shrinkage position model (LSPM) is an extension of the LPM which automatically determines the number of effective dimensions of the latent space via a Bayesian nonparametric shrinkage prior. However, the LSPM's reliance on Markov chain Monte Carlo for inference, while rigorous, is computationally expensive, making it challenging to scale to networks with large numbers of nodes. We introduce a variational inference approach for the LSPM, aiming to reduce computational demands while retaining the model's ability to intrinsically determine the number of effective latent dimensions. The performance of the variational LSPM is illustrated through simulation studies and its application to real-world network data. To promote wider adoption and ease of implementation, we also provide open-source code.
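The shrinkage mechanism can be illustrated with a deterministic toy computation. The LSPM's actual prior is Bayesian nonparametric and its inference is variational or MCMC; the factor `a`, truncation level `H` and threshold below are hypothetical, and the cumulative products stand in for the prior means of a multiplicative shrinkage process:

```python
import numpy as np

# Each extra latent dimension multiplies the precision by a factor > 1, so its
# prior variance, and hence its contribution to the latent positions, decays
# geometrically across dimensions.
a = 3.0                                   # hypothetical per-dimension shrinkage factor
H = 8                                     # truncation level on latent dimensions
precisions = np.cumprod(np.full(H, a))    # 3, 9, 27, ...
variances = 1.0 / precisions              # 1/3, 1/9, 1/27, ...

# Dimensions whose prior variance is non-negligible count as "effective";
# here the cut-off 1e-2 leaves the first four dimensions.
effective = int(np.sum(variances > 1e-2))
```

The point of the model is that `effective` emerges from the posterior rather than being fixed in advance; the sketch only shows how a multiplicative prior concentrates signal in the leading dimensions.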
"Variational inference for the latent shrinkage position model" by Xian Yao Gwee, Isobel Claire Gormley and Michael Fop. Stat, 2024-05-09. DOI: https://doi.org/10.1002/sta4.685
Alyssa Platt, Tracy Truong, Mary Boulos, Nichole E. Carlson, Manisha Desai, Monica M. Elam, Emily Slade, Alexandra L. Hanlon, Jillian H. Hurst, Maren K. Olsen, Laila M. Poisson, Lacey Rende, Gina‐Maria Pomann
Data‐intensive research continues to expand with the goal of improving healthcare delivery, clinical decision‐making, and patient outcomes. Quantitative scientists, such as biostatisticians, epidemiologists, and informaticists, are tasked with turning data into health knowledge. In academic health centres, quantitative scientists are critical to the missions of biomedical discovery and improvement of health. Many academic health centres have developed centralized Quantitative Science Units that foster the dual goals of professional development of quantitative scientists and production of high-quality, reproducible domain research. Such units then develop teams of quantitative scientists who can collaborate with researchers. However, existing literature does not provide guidance on how such teams are formed or how to manage and sustain them. Leaders of Quantitative Science Units across six institutions formed a working group to examine common practices and tools that can serve as best practices for Quantitative Science Units that wish to achieve these dual goals through building long‐term partnerships with researchers. The results of this working group are presented to provide tools and guidance for Quantitative Science Units challenged with developing, managing, and evaluating Quantitative Science Teams. This guidance aims to help Quantitative Science Units effectively participate in and enhance the research that is conducted throughout the academic health centre, shaping their resources to fit evolving research needs.
"A guide to successful management of collaborative partnerships in quantitative research: An illustration of the science of team science". Stat, 2024-05-09. DOI: https://doi.org/10.1002/sta4.674
The 2×2 table with a structural zero represents a common scenario in clinical trials and epidemiology, characterized by a specific empty cell. In such cases, the risk ratio serves as a vital parameter for statistical inference. However, existing confidence intervals, such as those constructed through the score test and Bayesian methods, fail to achieve the prescribed nominal level. Our focus is on numerically constructing exact confidence intervals for the risk ratio. We achieve this by optimally combining the modified inferential model method and the ‐function method. The resulting interval is then compared with intervals generated by four existing methods: the score method, the exact score method, the Bayesian tail-based method and the inferential model method. This comparison is conducted based on the infimum coverage probability, average interval length and non-coverage probability criteria. Remarkably, our proposed interval outperforms other exact intervals, being notably shorter. To illustrate the effectiveness of our approach, we discuss two examples in detail.
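The infimum coverage probability criterion used in this comparison can be illustrated in the simpler one-sample binomial setting (a toy analogue with hypothetical numbers, not the paper's structural-zero computation): evaluate an interval's exact coverage at every true parameter value and take the worst case, which for an approximate method such as the Wald interval falls well below the nominal level.

```python
import math

def wald_interval(x, n, z=1.96):
    """Approximate 95% Wald interval for a binomial proportion (toy example)."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def coverage(p, n, z=1.96):
    """Exact coverage probability at true value p: sum the binomial pmf over
    all outcomes x whose interval contains p."""
    cov = 0.0
    for x in range(n + 1):
        lo, hi = wald_interval(x, n, z)
        if lo <= p <= hi:
            cov += math.comb(n, x) * p**x * (1 - p)**(n - x)
    return cov

# Infimum coverage probability over a grid of true values: near the boundary
# the Wald interval's coverage collapses, which is the defect exact methods
# are designed to avoid.
n = 20
grid = [i / 1000 for i in range(1, 1000)]
icp = min(coverage(p, n) for p in grid)
```

An exact interval is one whose infimum coverage probability is at least the nominal level; the paper's contribution is making such an interval as short as possible in the 2×2 structural-zero setting.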
"An optimal exact interval for the risk ratio in the 2×2 table with structural zero" by Weizhen Wang, Xingyun Cao and Tianfa Xie. Stat, 2024-05-08. DOI: https://doi.org/10.1002/sta4.681
Heterogeneity in response to treatment is a pervasive problem in medicine. Many researchers have proposed individualized treatment rule methods for this problem, which personalize treatment recommendations based on an individual's recorded covariates. A challenge with using these methods in practice is that they determine a treatment rule, rather than quantify treatment benefit. This can be problematic, as a recommended treatment could be burdensome and have negligible improvements in outcome for some individuals. With the aim of helping practitioners make informed modelling choices, we identify two families of loss functions to use with individualized treatment rule methods. Under the assumption of correct model specification, estimation with a loss function from one family ensures that the model's treatment recommendations can be interpreted in terms of the risk difference, while the other family of loss functions ensures that the model's treatment recommendations can be interpreted in terms of the risk ratio. We also derive two upper bounds for a model's error in risk difference and risk ratio estimation. Each upper bound can be calculated using observed data and can provide insight to practitioners regarding model error in estimating treatment effects. We illustrate our contributions with simulation studies as well as with data from the ACTG‐175 AIDS study.
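The difference between the two interpretations can be seen with toy numbers (hypothetical risks, not from the paper): two patient strata share the same risk ratio, yet the absolute benefit (the risk difference) is negligible in one of them, which is precisely the situation where a rule-only recommendation can mislead.

```python
def risk_difference(p_treat, p_ctrl):
    """Absolute change in event risk under treatment."""
    return p_treat - p_ctrl

def risk_ratio(p_treat, p_ctrl):
    """Relative change in event risk under treatment."""
    return p_treat / p_ctrl

# Two hypothetical strata, each as (risk if treated, risk if untreated).
high_risk = (0.30, 0.50)
low_risk = (0.012, 0.02)

rd_high = risk_difference(*high_risk)   # -0.20: large absolute benefit
rd_low = risk_difference(*low_risk)     # -0.008: negligible absolute benefit
rr_high = risk_ratio(*high_risk)        # 0.6
rr_low = risk_ratio(*low_risk)          # 0.6: identical relative benefit
```

A rule trained under either loss family would recommend treatment in both strata; only a model whose output is interpretable on the risk-difference scale reveals that the low-risk stratum gains almost nothing in absolute terms.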
"On quantifying heterogeneous treatment effects with regression-based individualized treatment rules: Loss function families and bounds on estimation error" by Michael T. Gorczyca and Chaeryon Kang. Stat, 2024-05-08. DOI: https://doi.org/10.1002/sta4.680
Sarah Peskoe, Emily Slade, Lacey Rende, Mary Boulos, Manisha Desai, Mihir Gandhi, Jonathan A. L. Gelfond, Shokoufeh Khalatbari, Phillip J. Schulte, Denise C. Snyder, Sandra L. Taylor, Jesse D. Troy, Roger Vaughan, Gina‐Maria Pomann
Collaborative quantitative scientists, including biostatisticians, epidemiologists, bioinformaticists, and data‐related professionals, play vital roles in research, from study design to data analysis and dissemination. It is imperative that academic health care centers (AHCs) establish an environment that provides opportunities for the quantitative scientists who are hired as staff to develop and advance their careers. With the rapid growth of clinical and translational research, AHCs are charged with establishing organizational methods, training tools, best practices, and guidelines to accelerate and support hiring, training, and retaining this staff workforce. This paper describes three essential elements for building and maintaining a successful unit of collaborative staff quantitative scientists in academic health care centers: (1) organizational infrastructure and management, (2) recruitment, and (3) career development and retention. Specific strategies are provided as examples of how AHCs can excel in these areas.
"Methods for building a staff workforce of quantitative scientists in academic health care". Stat, 2024-05-06. DOI: https://doi.org/10.1002/sta4.683
In operating an academic statistical consulting centre, it is essential to develop a strategy for covering the anticipated costs incurred, such as personnel, facilities, third‐party data, professional development and marketing, and for handling the revenues generated from sources such as university commitments, extramural grants, fees for service, internal memorandums of understanding and consulting courses. As such, this article describes each of these costs and revenue sources in turn, discusses how they vary over phases of a project and life cycles of a centre, provides a review of both historical and modern perspectives in the literature and includes illustrative examples of financial models from three different institutions. These points of consideration are meant to inform consulting groups who are interested in becoming either more or less centrally structured.
"Considerations in developing a financial model for an academic statistical consulting centre" by Christy Brown, Yanming Di and Stacey Slone. Stat, 2024-05-02. DOI: https://doi.org/10.1002/sta4.688
Sparse structure learning in high‐dimensional Gaussian graphical models is an important problem in multivariate statistical inference, since the sparsity pattern naturally encodes the conditional independence relationship among variables. However, maximum a posteriori (MAP) estimation is challenging under hierarchical prior models, and traditional numerical optimization routines or expectation–maximization algorithms are difficult to implement. To this end, our contribution is a novel local linear approximation scheme that circumvents this issue using a very simple computational algorithm. Most importantly, the condition under which our algorithm is guaranteed to converge to the MAP estimate is explicitly stated and is shown to cover a broad class of completely monotone priors, including the graphical horseshoe. Further, the resulting MAP estimate is shown to be sparse and consistent in the ‐norm. Numerical results validate the speed, scalability and statistical performance of the proposed method.
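The local linear approximation idea can be sketched in a toy orthogonal-design setting. This is illustrative only: it uses the classical LLA with a SCAD penalty on scalar coefficients, not the paper's completely monotone priors or precision-matrix estimation. Each iteration replaces the nonconvex penalty by its tangent line at the current estimate, so the step reduces to weighted soft-thresholding:

```python
import numpy as np

def soft_threshold(z, lam):
    """Entrywise soft-thresholding, the solution of the weighted l1 step."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lla_scad(z, lam, a=3.7, iters=20):
    """Local linear approximation for the SCAD penalty with orthogonal design:
    linearize the penalty at the current estimate, yielding per-coefficient
    weights, then apply weighted soft-thresholding; repeat to convergence."""
    def scad_deriv(t):
        t = np.abs(t)
        return np.where(t <= lam, lam,
                        np.maximum(a * lam - t, 0.0) / (a - 1))
    beta = soft_threshold(z, lam)            # lasso initializer
    for _ in range(iters):
        beta = soft_threshold(z, scad_deriv(beta))
    return beta

z = np.array([0.2, 1.0, 4.0])   # toy "observed" coefficients
beta_hat = lla_scad(z, lam=0.5)
```

The sketch shows the algorithmic pattern the paper exploits at scale: the small coefficient is set exactly to zero, the moderate one is shrunk, and the large one is left unbiased because the SCAD derivative vanishes there; the paper's contribution is showing when such iterations provably reach the MAP estimate for a broad class of priors, including the graphical horseshoe.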
"Maximum a posteriori estimation in graphical models using local linear approximation" by Ksheera Sagar, Jyotishka Datta, Sayantan Banerjee and Anindya Bhadra. Stat, 2024-05-01. DOI: https://doi.org/10.1002/sta4.682