Doubly robust proximal synthetic controls
Hongxiang Qiu, Xu Shi, Wang Miao, Edgar Dobriban, Eric Tchetgen Tchetgen
Biometrics, 2024-03-27. DOI: 10.1093/biomtc/ujae055

To infer the treatment effect for a single treated unit using panel data, synthetic control (SC) methods construct a linear combination of control units' outcomes that mimics the treated unit's pre-treatment outcome trajectory. This linear combination is subsequently used to impute the counterfactual outcomes of the treated unit had it not been treated in the post-treatment period, and used to estimate the treatment effect. Existing SC methods rely on correctly modeling certain aspects of the counterfactual outcome generating mechanism and may require near-perfect matching of the pre-treatment trajectory. Inspired by proximal causal inference, we obtain two novel nonparametric identifying formulas for the average treatment effect for the treated unit: one is based on weighting, and the other combines models for the counterfactual outcome and the weighting function. We introduce the concept of covariate shift to SCs to obtain these identification results conditional on the treatment assignment. We also develop two treatment effect estimators based on these two formulas and generalized method of moments. One new estimator is doubly robust: it is consistent and asymptotically normal if at least one of the outcome and weighting models is correctly specified. We demonstrate the performance of the methods via simulations and apply them to evaluate the effectiveness of a pneumococcal conjugate vaccine on the risk of all-cause pneumonia in Brazil.
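For contrast with the proximal approach described above, here is a minimal sketch of the classical constrained-regression synthetic control: nonnegative weights summing to one, fit to the pre-treatment trajectory. This is the baseline that the paper generalizes, not the proposed doubly robust estimator; the data are simulated for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(Y0_pre, y1_pre):
    """Classical synthetic-control weights: nonnegative, summing to one,
    chosen to match the treated unit's pre-treatment trajectory."""
    J = Y0_pre.shape[1]
    loss = lambda w: np.sum((Y0_pre @ w - y1_pre) ** 2)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * J
    res = minimize(loss, np.full(J, 1.0 / J), method="SLSQP",
                   bounds=bounds, constraints=cons)
    return res.x

# Toy example: the treated unit is an equal mix of controls 0 and 1,
# so the recovered weights should concentrate on those two columns.
rng = np.random.default_rng(0)
Y0 = rng.normal(size=(20, 4))        # 20 pre-periods, 4 control units
y1 = 0.5 * Y0[:, 0] + 0.5 * Y0[:, 1]  # treated unit's pre-treatment outcomes
w = sc_weights(Y0, y1)
```

The simplex constraint is what forces the synthetic unit to be an interpolation of controls; the proximal identification results above are precisely aimed at relaxing the need for this near-perfect pre-treatment match.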
High-dimensional multisubject time series transition matrix inference with application to brain connectivity analysis
Xiang Lyu, Jian Kang, Lexin Li
Biometrics, 2024-03-27. DOI: 10.1093/biomtc/ujae021

Brain-effective connectivity analysis quantifies the directed influence of one neural element or region over another, and it is of great scientific interest to understand how the effective connectivity pattern is affected by variations in subject conditions. Vector autoregression (VAR) is a useful tool for this type of problem. However, there is a paucity of solutions when there is measurement error, when there are multiple subjects, and when the focus is inference on the transition matrix. In this article, we study the problem of transition matrix inference under the high-dimensional VAR model with measurement error and multiple subjects. We propose a simultaneous testing procedure with three key components: a modified expectation-maximization (EM) algorithm, a test statistic based on the tensor regression of a bias-corrected estimator of the lagged auto-covariance given the covariates, and a properly thresholded simultaneous test. We establish uniform consistency for the estimators from our modified EM, and show that the subsequent test achieves consistent false discovery control while its power approaches one asymptotically. We demonstrate the efficacy of our method through both simulations and a brain connectivity study of task-evoked functional magnetic resonance imaging.
Discussion on "Bayesian meta-analysis of penetrance for cancer risk" by Thanthirige Lakshika M. Ruberu, Danielle Braun, Giovanni Parmigiani, and Swati Biswas
Gianluca Baio
Biometrics, 2024-03-27. DOI: 10.1093/biomtc/ujae041
Regression models for average hazard
Hajime Uno, Lu Tian, Miki Horiguchi, Satoshi Hattori, Kenneth L Kehl
Biometrics, 2024-03-27. DOI: 10.1093/biomtc/ujae037

Limitations of using the traditional Cox hazard ratio for summarizing the magnitude of the treatment effect on time-to-event outcomes have been widely discussed, and alternative measures that do not have such limitations are gaining attention. One recently proposed alternative, in a simple 2-sample comparison setting, uses the average hazard with survival weight (AH), which can be interpreted as the general censoring-free person-time incidence rate over a given time window. In this paper, we propose a new regression analysis approach for the AH with a truncation time τ. We investigate 3 versions of AH regression analysis, assuming (1) independent censoring, (2) group-specific censoring, and (3) covariate-dependent censoring. The proposed AH regression methods are closely related to robust Poisson regression. While the new approach requires a truncation time τ to be specified explicitly, it can be more robust than Poisson regression in the presence of censoring. With the AH regression approach, one can summarize the between-group treatment difference in both absolute and relative terms, adjusting for covariates associated with the outcome. This property will increase the likelihood that the treatment effect magnitude is correctly interpreted. The AH regression approach can be a useful alternative to the traditional Cox hazard ratio approach for estimating and reporting the magnitude of the treatment effect on time-to-event outcomes.
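The incidence-rate reading of AH(τ) can be made concrete in the simplest uncensored case: events by τ divided by person-time accumulated up to τ. A sketch on hypothetical, fully observed event times follows; the regression methods in the paper are what handle censoring.

```python
import numpy as np

def average_hazard(times, tau):
    """Average hazard with survival weight on [0, tau] from fully
    observed event times: P(T <= tau) / E[min(T, tau)], i.e., events
    by tau divided by person-time accrued up to tau."""
    times = np.asarray(times, dtype=float)
    events = np.mean(times <= tau)                  # P(T <= tau)
    person_time = np.mean(np.minimum(times, tau))   # E[min(T, tau)]
    return events / person_time

# Two hypothetical arms; summarize both absolute and relative contrasts,
# as the AH framework allows.
control = [1.0, 2.0, 3.0, 4.0, 6.0]
treated = [2.0, 4.0, 5.0, 6.0, 8.0]
tau = 5.0
ah_c = average_hazard(control, tau)   # 0.8 / 3.0
ah_t = average_hazard(treated, tau)   # 0.6 / 4.2
diff, ratio = ah_t - ah_c, ah_t / ah_c
```

Fewer early events in the treated arm both lower the numerator and raise the accrued person-time, so the treated-arm AH is smaller on both the difference and ratio scales.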
Discussion on "Bayesian meta-analysis of penetrance for cancer risk" by Thanthirige Lakshika M. Ruberu, Danielle Braun, Giovanni Parmigiani, and Swati Biswas
Paul Gustafson
Biometrics, 2024-03-27. DOI: 10.1093/biomtc/ujae044
Behavioral carry-over effect and power consideration in crossover trials
Danni Shi, Ting Ye
Biometrics, 2024-03-27. DOI: 10.1093/biomtc/ujae023

A crossover trial is an efficient trial design when there is no carry-over effect. To reduce the impact of the biological carry-over effect, a washout period is often designed. However, the carry-over effect remains an outstanding concern when a washout period is unethical or cannot sufficiently diminish its impact. The latter can occur in comparative effectiveness research, where the carry-over effect is often not biological but behavioral. In this paper, we investigate the crossover design under a potential outcomes framework with and without the carry-over effect. We find that when the carry-over effect exists and satisfies a sign condition, the basic estimator underestimates the treatment effect, which does not inflate the type I error of one-sided tests but negatively impacts the power. This leads to a power trade-off between the crossover design and the parallel-group design, and we derive the condition under which the crossover design does not lead to type I error inflation and is still more powerful than the parallel-group design. We also develop covariate adjustment methods for crossover trials. We evaluate the performance of the crossover design and covariate adjustment using data from the MTN-034/REACH study.
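The underestimation under a positive carry-over effect can be seen from the standard 2x2 (AB/BA) basic estimator. The following is a noiseless sketch under one illustrative parameterization (carry-over arising only from the active treatment, all numbers hypothetical), not the paper's formal potential-outcomes model: the estimate comes out to tau - lambda/2, biased toward zero when the carry-over shares the sign of the treatment effect.

```python
def basic_crossover_estimate(d_ab_mean, d_ba_mean):
    """Basic 2x2 crossover estimator: half the difference of the
    sequence-specific within-subject period differences."""
    return (d_ab_mean - d_ba_mean) / 2.0

tau, pi, lam = 2.0, 0.5, 0.6  # treatment, period, and carry-over effects

# Expected within-subject differences (period 1 minus period 2):
d_ab = tau - (pi + lam)  # active -> control: carry-over contaminates period 2
d_ba = -(pi + tau)       # control -> active: no carry-over from control

est = basic_crossover_estimate(d_ab, d_ba)  # equals tau - lam / 2
```

With lam > 0, the estimate understates tau by lam/2, which is conservative for one-sided type I error but costs power, matching the trade-off described above.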
Well-spread samples with dynamic sample sizes
Blair Robertson, Chris Price, Marco Reale
Biometrics, 2024-03-27. DOI: 10.1093/biomtc/ujae026

A spatial sampling design determines where sample locations are placed in a study area so that population parameters can be estimated with relatively high precision. If the response variable has spatial trends, spatially balanced or well-spread designs give precise results for commonly used estimators. This article proposes a new method that draws well-spread samples over arbitrary auxiliary spaces and can be used for master sampling applications. All we require is a measure of the distance between population units. Numerical results show that the method generates well-spread samples and compares favorably with existing designs. We provide an example application using several auxiliary variables to estimate total aboveground biomass over a large study area in Eastern Amazonia, Brazil. Multipurpose surveys are also considered, where the totals of aboveground biomass, primary production, and clay content (3 responses) are estimated from a single well-spread sample over the auxiliary space.
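Since the method above needs only pairwise distances between units, one simple way to illustrate "well-spread using distances alone" is greedy max-min (farthest-point) sampling. This is an illustration of the design goal, not the authors' algorithm, and the units below are made up.

```python
import numpy as np

def farthest_point_sample(dist, n, start=0):
    """Greedy max-min sampling: repeatedly add the unit farthest (per the
    supplied distance matrix) from everything already selected. Spreads
    the sample over the space using only pairwise distances."""
    selected = [start]
    mindist = dist[start].copy()  # distance of each unit to the sample
    for _ in range(n - 1):
        nxt = int(np.argmax(mindist))
        selected.append(nxt)
        mindist = np.minimum(mindist, dist[nxt])
    return selected

# Units in three clusters on a line; distances are coordinate differences.
coords = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 10.0])
dist = np.abs(coords[:, None] - coords[None, :])
sample = farthest_point_sample(dist, 3)  # picks one unit per cluster
```

Because the distance matrix is the only input, the same routine works over an auxiliary space (e.g., biomass covariates) rather than geographic coordinates, which is the setting the abstract targets.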
Efficient data integration under prior probability shift
Ming-Yueh Huang, Jing Qin, Chiung-Yu Huang
Biometrics, 2024-03-27. DOI: 10.1093/biomtc/ujae035

Conventional supervised learning usually operates under the premise that data are collected from the same underlying population. However, challenges may arise when integrating new data from different populations, resulting in a phenomenon known as dataset shift. This paper focuses on prior probability shift, where the distribution of the outcome varies across datasets but the conditional distribution of features given the outcome remains the same. To tackle the challenges posed by such a shift, we propose an estimation algorithm that can efficiently combine information from multiple sources. Unlike existing methods that are restricted to discrete outcomes, the proposed approach accommodates both discrete and continuous outcomes. It also handles high-dimensional covariate vectors through variable selection using an adaptive least absolute shrinkage and selection operator penalty, producing efficient estimates that possess the oracle property. Moreover, a novel semiparametric likelihood ratio test is proposed to check the validity of prior probability shift assumptions by embedding the null conditional density function into Neyman's smooth alternatives (Neyman, 1937) and testing study-specific parameters. We demonstrate the effectiveness of our proposed method through extensive simulations and a real data example. The proposed methods serve as a useful addition to the repertoire of tools for dealing with dataset shifts.
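The structure of prior probability shift, with p(x | y) fixed while p(y) changes, implies a standard Bayes-rule correction in the discrete-outcome case: reweight class posteriors by the ratio of new to old priors and renormalize. The sketch below (with made-up numbers) illustrates the shift itself, not the paper's semiparametric estimator or test.

```python
import numpy as np

def shift_posterior(post_old, prior_old, prior_new):
    """Adjust class posteriors for a change in outcome prevalence when
    p(x | y) is unchanged: multiply by the prior ratio, renormalize."""
    w = post_old * (prior_new / prior_old)
    return w / w.sum(axis=-1, keepdims=True)

# Hypothetical: a classifier trained where classes were balanced (50/50)
# is deployed where class 1 has prevalence 0.8.
post = np.array([0.7, 0.3])  # source-population posterior at some x
adjusted = shift_posterior(post, np.array([0.5, 0.5]), np.array([0.2, 0.8]))
```

Raising the prevalence of class 1 pulls its posterior up (here from 0.3 to 0.48/0.76), even though the feature-given-outcome model is untouched; the paper's contribution is doing this efficiently for general, including continuous, outcomes.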
Efficient testing of the biomarker positive and negative subgroups in a biomarker-stratified trial
Lang Li, Anastasia Ivanova
Biometrics, 2024-03-27. DOI: 10.1093/biomtc/ujae056

In many randomized placebo-controlled trials with a biomarker-defined subgroup, it is believed that this subgroup has the same or a higher treatment effect compared with its complement. These subgroups are often referred to as the biomarker positive and negative subgroups. Most biomarker-stratified pivotal trials are aimed at demonstrating a significant treatment effect either in the biomarker positive subgroup or in the overall population. A major shortcoming of this approach is that the treatment can be declared effective in the overall population even though it has no effect in the biomarker negative subgroup. We use the isotonic assumption about the treatment effects in the two subgroups to construct an efficient way to test for a treatment effect in both the biomarker positive and negative subgroups. A substantial reduction in the required sample size for such a trial compared with existing methods makes evaluating the treatment effect in both the biomarker positive and negative subgroups feasible in pivotal trials, especially when the prevalence of the biomarker positive subgroup is less than 0.5.
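A familiar baseline for testing both subgroups with familywise error control is fixed-sequence gatekeeping: spend all of alpha on the biomarker-positive subgroup first, and test the negative subgroup at the same level only upon rejection. The sketch below shows that simpler baseline, not the isotonic procedure proposed in the paper; the z statistics are hypothetical.

```python
from scipy.stats import norm

def fixed_sequence_test(z_pos, z_neg, alpha=0.025):
    """Fixed-sequence gatekeeping for two subgroup hypotheses: test the
    biomarker-positive subgroup first at level alpha; only if it rejects,
    pass the full alpha on to the negative subgroup. Controls the
    familywise error rate across the two one-sided claims."""
    crit = norm.ppf(1 - alpha)
    reject_pos = z_pos > crit
    reject_neg = bool(reject_pos and (z_neg > crit))
    return bool(reject_pos), reject_neg

decision = fixed_sequence_test(z_pos=2.8, z_neg=2.2)  # both subgroups reject
```

The ordering encodes the same qualitative belief as the isotonic assumption (the positive subgroup benefits at least as much); the paper's contribution is to exploit that ordering more efficiently than gatekeeping, reducing the required sample size.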
Discussion on "Bayesian meta-analysis of penetrance for cancer risk" by Thanthirige Lakshika M. Ruberu, Danielle Braun, Giovanni Parmigiani, and Swati Biswas
Moreno Ursino, Sarah Zohar
Biometrics, 2024-03-27. DOI: 10.1093/biomtc/ujae043

We congratulate the authors on the new meta-analysis model that accounts for different outcomes. We discuss the modeling choice and the Bayesian setting; specifically, we point out the connection between the Bayesian hierarchical model and a mixed-effect model formulation, and use it to discuss possible future extensions of the method.