Unanchored population-adjusted indirect comparisons (PAICs), such as matching-adjusted indirect comparison (MAIC) and simulated treatment comparison (STC), have attracted significant attention in the health technology assessment field in recent years. These methods allow indirect comparisons between single-arm studies by balancing patient characteristics when individual patient-level data are available for only one study. However, decision makers frequently question the validity of findings from unanchored MAIC/STC analyses, because these analyses assume that all potential prognostic factors and effect modifiers are accounted for. To address this critical concern, we introduce a sensitivity analysis algorithm for unanchored PAICs that extends quantitative bias analysis techniques traditionally used in epidemiology. The proposed sensitivity analysis simulates important covariates that were not reported by the comparator study when conducting unanchored STC, enabling formal, quantitative evaluation of the impact of unmeasured confounding without additional assumptions. We demonstrate the practical application of this method through a real-world case study of metastatic colorectal cancer, highlighting its utility in enhancing the robustness and credibility of unanchored PAIC results. Our findings emphasise the necessity of formal quantitative sensitivity analysis in interpreting unanchored PAIC results: it quantifies the robustness of conclusions to potential unmeasured confounders and supports more reliable and informative decision-making in healthcare.
Shijie Ren, Sa Ren, Nicky J Welton, Mark Strong. Quantitative bias analysis for unmeasured confounding in unanchored population-adjusted indirect comparisons. Research Synthesis Methods. 2025;16(3):509-527. doi:10.1017/rsm.2025.13. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527536/pdf/
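As a concrete illustration of the bias-analysis idea, the sketch below applies a closed-form shift for a hypothetical unmeasured binary covariate U. The paper's algorithm instead simulates such covariates within the STC outcome model; every number here (prevalences, effect of U) is an assumed input, not an estimate from the source.

```python
def bias_adjusted_difference(ipd_mean, comp_mean, u_prev_ipd, u_prev_comp, u_effect):
    """Shift the naive mean difference by the assumed effect of an
    unmeasured binary covariate U whose prevalence differs between the
    IPD and comparator populations (all inputs are hypothetical)."""
    naive = ipd_mean - comp_mean
    return naive - u_effect * (u_prev_ipd - u_prev_comp)

# Hypothetical numbers: naive difference 0.30; U raises the outcome by 0.5
# and is 40% prevalent in the IPD study but 20% in the comparator study.
adj = bias_adjusted_difference(0.65, 0.35, 0.40, 0.20, 0.5)
print(round(adj, 2))  # 0.2
```

Varying the assumed prevalence and effect of U over a plausible range shows how large the unmeasured confounding would have to be to overturn the conclusion.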
Beatrice C Downing, Nicky J Welton, Hugo Pedder, Ifigeneia Mavranezouli, Odette Megnin-Viggars, A E Ades
Several methods have been proposed for the synthesis of continuous outcomes reported on different scales, including the Standardised Mean Difference (SMD) and the Ratio of Means (RoM). SMDs can be formed by dividing the study mean treatment effect by either a study-specific (Study-SMD) or a scale-specific (Scale-SMD) standard deviation (SD). We compared the performance of RoM with the different standardisation methods, with and without meta-regression (MR) on baseline severity, in a Bayesian network meta-analysis (NMA) of 14 treatments for depression reported on five different scales. There was substantial between-study variation in the SDs reported on the same scale. Based on the Deviance Information Criterion, RoM was preferred, having better model fit than the SMD models. Model fit for the SMD models was not improved by meta-regression. Percentage shrinkage was used as a scale-independent measure of heterogeneity, with higher shrinkage indicating lower heterogeneity. Heterogeneity was lowest for RoM (20.5% shrinkage), then Scale-SMD (18.2% shrinkage), and highest for Study-SMD (16.7% shrinkage). Model choice affected which treatment was estimated to be most effective; however, all models picked out the same three highest-ranked treatments under the GRADE criteria. Alongside other indicators, the higher shrinkage of the RoM models suggests that treatments for depression act multiplicatively rather than additively. Further research is needed to determine whether these findings extend to Patient- and Clinician-Reported Outcomes used in other application areas. Where treatment effects are additive, we recommend using Scale-SMD for standardisation to avoid the additional heterogeneity introduced by Study-SMD.
Beatrice C Downing, Nicky J Welton, Hugo Pedder, Ifigeneia Mavranezouli, Odette Megnin-Viggars, A E Ades. Synthesis of depression outcomes reported on different scales: A comparison of methods for modelling mean differences. Research Synthesis Methods. 2025;16(3):460-478. doi:10.1017/rsm.2025.7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527519/pdf/
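The three effect measures compared above are simple functions of each study's summary statistics. A minimal sketch, using made-up summary values on a depression scale where lower scores mean less severe symptoms:

```python
import math

def study_smd(mean_trt, mean_ctl, sd_study):
    # Study-SMD: standardise by the study-specific SD
    return (mean_trt - mean_ctl) / sd_study

def scale_smd(mean_trt, mean_ctl, sd_scale):
    # Scale-SMD: standardise by one SD shared by all studies on that scale
    return (mean_trt - mean_ctl) / sd_scale

def log_rom(mean_trt, mean_ctl):
    # Ratio of Means, analysed on the log scale
    return math.log(mean_trt / mean_ctl)

# Hypothetical endpoint means: 12.0 (treatment) vs 16.0 (control)
print(study_smd(12.0, 16.0, 8.0))   # -0.5
print(scale_smd(12.0, 16.0, 10.0))  # -0.4
print(log_rom(12.0, 16.0))          # about -0.288
```

The example makes the paper's point visible: with the same means, Study-SMD and Scale-SMD differ whenever the study-specific SD differs from the scale-specific SD, which injects between-study SD variation into the effect estimates.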
Yue Wang, Jianhua Zhao, Fen Jiang, Lei Shi, Jianxin Pan
The random-effects meta-analysis model is an important tool for integrating results from multiple independent studies. However, the standard model assumes normal distributions for both the random effects and the within-study errors, making it susceptible to outlying studies. Although robust modeling using the t distribution is an appealing idea, existing work, which applies the t distribution only to the random effects, involves complicated numerical integration and numerical optimization. In this article, a novel robust meta-analysis model using the t distribution, tMeta, is proposed. The novelty is that the marginal distribution of the effect size in tMeta follows a t distribution, enabling tMeta to simultaneously accommodate and detect outlying studies in a simple and adaptive manner. A simple and fast EM-type algorithm is developed for maximum likelihood estimation. Owing to the mathematical tractability of the t distribution, tMeta avoids numerical integration and allows efficient optimization. Experiments on real data demonstrate that tMeta compares favorably with related competitors in situations involving mild outliers. Moreover, in the presence of gross outliers, related competitors may fail while tMeta continues to perform consistently and robustly.
Yue Wang, Jianhua Zhao, Fen Jiang, Lei Shi, Jianxin Pan. A novel robust meta-analysis model using the t distribution for outlier accommodation and detection. Research Synthesis Methods. 2025;16(3):442-459. doi:10.1017/rsm.2025.8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527545/pdf/
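A toy version of the heavy-tail intuition: if the marginal of each effect size is t rather than normal, the location estimate is far less sensitive to a gross outlier. The sketch below grid-maximises a t marginal likelihood with assumed per-study scales and degrees of freedom; it is not the tMeta model or its EM algorithm, just the qualitative idea.

```python
import math

def t_logpdf(x, df):
    # Log-density of the standard Student t distribution
    return (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
            - 0.5 * math.log(df * math.pi)
            - (df + 1) / 2 * math.log1p(x * x / df))

def marginal_loglik(mu, y, scale, df):
    """Log-likelihood when each effect size has a t-distributed marginal
    centred at mu (a simplified stand-in for the tMeta marginal)."""
    return sum(t_logpdf((yi - mu) / s, df) - math.log(s)
               for yi, s in zip(y, scale))

y = [0.10, 0.20, 0.15, 2.50]          # last (hypothetical) study is a gross outlier
scale = [0.2] * 4                     # assumed within-study scales
grid = [i / 100 for i in range(-100, 301)]
mu_t = max(grid, key=lambda m: marginal_loglik(m, y, scale, df=4))
mu_norm = sum(y) / len(y)             # normal-model MLE is the plain mean
print(mu_t, mu_norm)                  # heavy tails downweight the outlier
```

The t-based estimate stays near the cluster of consistent studies, while the normal-model mean is dragged toward the outlier.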
Pub Date: 2025-03-01. Epub Date: 2025-03-12. DOI: 10.1017/rsm.2025.5
Guanbo Wang, Sean McGrath, Yi Lian
Researchers often wish to leverage data from a collection of sources (e.g., meta-analyses of randomized trials, multi-center trials, pooled analyses of observational cohorts) to estimate causal effects in a target population of interest. However, because different data sources typically represent different underlying populations, traditional meta-analytic methods may not produce causally interpretable estimates that apply to any reasonable target population. In this article, we present the CausalMetaR R package, which implements robust and efficient methods for estimating causal effects in a given internal or external target population using multi-source data. The package includes estimators of average and subgroup treatment effects for the entire target population. To produce efficient and robust estimates, the package implements doubly robust and non-parametric efficient estimators and supports flexible data-adaptive methods (e.g., machine learning) and cross-fitting for estimating the nuisance models (e.g., the treatment model and the outcome model). We briefly review the methods, describe the key features of the package, and demonstrate its use through an example. The package aims to facilitate causal analyses in the context of meta-analysis.
Guanbo Wang, Sean McGrath, Yi Lian. CausalMetaR: An R package for performing causally interpretable meta-analyses. Research Synthesis Methods. 2025;16(2):425-440. doi:10.1017/rsm.2025.5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527535/pdf/
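To see why transporting an effect requires an explicit target population, here is a bare-bones g-computation (outcome-model) sketch on simulated data. CausalMetaR's actual estimators are doubly robust and far more general; all names and numbers below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pooled source data: covariate x, treatment a, outcome y,
# generated with a true treatment effect of 2.0.
n = 2000
x = rng.normal(size=n)
a = rng.integers(0, 2, size=n)
y = 1.0 + 0.5 * x + 2.0 * a + rng.normal(scale=0.1, size=n)

# Fit a linear outcome model by ordinary least squares.
X1 = np.column_stack([np.ones(n), x, a])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Plug-in (g-computation) ATE in a target population whose covariate
# distribution is shifted relative to the sources.
x_target = rng.normal(loc=1.0, size=1000)
mu1 = beta[0] + beta[1] * x_target + beta[2]   # predicted outcome under a=1
mu0 = beta[0] + beta[1] * x_target             # predicted outcome under a=0
ate_target = float(np.mean(mu1 - mu0))
print(round(ate_target, 2))  # close to the true effect 2.0
```

Averaging the model's predicted contrasts over the target covariate distribution, rather than over the pooled source data, is what makes the estimate interpretable for that target population.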
Pub Date: 2025-03-01. Epub Date: 2025-03-10. DOI: 10.1017/rsm.2025.6
Marianne A Jonker, Hassan Pazira, Anthony C C Coolen
To accurately estimate the parameters of a regression model, the sample size must be large enough relative to the number of possible predictors in the model. In practice, sufficient data are often lacking, which can lead to overfitting of the model and, as a consequence, unreliable predictions for new patients. Pooling data sets collected in different (medical) centers would alleviate this problem, but is often not feasible due to privacy regulations or logistical problems. An alternative is to analyze the local data in the centers separately and combine the statistical inference results with the Bayesian Federated Inference (BFI) methodology. The aim of this approach is to compute, from the inference results in the separate centers, what would have been found had the statistical analysis been performed on the combined data. We explain the methodology under homogeneity and heterogeneity across the populations in the separate centers, and give real-life examples for better understanding. The proposed methodology shows excellent performance. An R package that performs all the calculations has been developed and is illustrated in this article. The mathematical details are given in the Appendix.
Marianne A Jonker, Hassan Pazira, Anthony C C Coolen. Bayesian Federated Inference for regression models based on non-shared medical center data. Research Synthesis Methods. 2025;16(2):383-423. doi:10.1017/rsm.2025.6. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527543/pdf/
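The flavour of combining per-center results without pooling patient data can be conveyed by a much simpler fixed-effects analogue: inverse-variance pooling of local estimates. BFI's Bayesian combination is more general than this, and the inputs below are hypothetical.

```python
def combine_centers(estimates, variances):
    """Precision-weighted (inverse-variance) combination of per-center
    estimates: a simplified fixed-effects analogue of reconstructing the
    combined-data inference from the centers' local results."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    est = sum(w * e for w, e in zip(weights, estimates)) / total
    return est, 1.0 / total

# Hypothetical regression coefficient estimated separately in three centers
est, var = combine_centers([1.2, 0.8, 1.0], [0.04, 0.04, 0.02])
print(est, var)  # pooled estimate 1.0 with variance 0.01
```

Only summary statistics (estimates and variances) cross center boundaries, which is what makes this style of inference compatible with privacy constraints.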
Pub Date: 2025-03-01. Epub Date: 2025-03-07. DOI: 10.1017/rsm.2025.1
Gary C K Chan, Estrid He, Janni Leung, Karin Verspoor
When conducting a systematic review, screening the vast body of literature to identify the small set of relevant studies is a labour-intensive and error-prone process. Although there is an increasing number of fully automated screening tools, their performance is suboptimal and varies substantially across review topic areas. Many of these tools are trained only on small datasets, and most are not tested across a wide range of review topics. This study presents two systematic review datasets compiled from more than 8,600 systematic reviews and more than 540,000 abstracts covering 51 research topic areas in health and medical research. These datasets are the largest of their kind to date. We demonstrate their utility in training and evaluating language models for title and abstract screening. Our dataset includes detailed metadata for each review, including its title, background, objectives, and selection criteria. We demonstrate that a small language model trained on this dataset with the additional metadata performs strongly, with an average recall above 95% and specificity over 70% across a wide range of review topic areas. Future research can build on our dataset to further improve the performance of fully automated tools for systematic review title and abstract screening.
Gary C K Chan, Estrid He, Janni Leung, Karin Verspoor. A comprehensive systematic review dataset is a rich resource for training and evaluation of AI systems for title and abstract screening. Research Synthesis Methods. 2025;16(2):308-322. doi:10.1017/rsm.2025.1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527522/pdf/
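Recall and specificity, the two screening metrics reported above, can be computed directly from relevance labels and model predictions. A minimal sketch with toy labels:

```python
def recall_specificity(labels, preds):
    """Recall (sensitivity) over relevant records and specificity over
    irrelevant ones: the two figures quoted for screening models."""
    tp = sum(1 for l, p in zip(labels, preds) if l and p)
    fn = sum(1 for l, p in zip(labels, preds) if l and not p)
    tn = sum(1 for l, p in zip(labels, preds) if not l and not p)
    fp = sum(1 for l, p in zip(labels, preds) if not l and p)
    return tp / (tp + fn), tn / (tn + fp)

labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 1 = relevant to the review
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]   # screener's include decisions
rec, spec = recall_specificity(labels, preds)
print(rec, spec)  # 0.75 and 5/6
```

High recall matters most in screening, since a missed relevant study cannot be recovered later, while moderate specificity simply determines how many irrelevant abstracts must still be read.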
Pub Date: 2025-03-01. Epub Date: 2025-03-17. DOI: 10.1017/rsm.2024.12
Yu-Lun Liu, Bingyu Zhang, Haitao Chu, Yong Chen
Network meta-analysis (NMA), also known as mixed treatment comparison meta-analysis or multiple treatments meta-analysis, extends conventional pairwise meta-analysis by simultaneously synthesizing multiple interventions in a single integrated analysis. Despite the growing popularity of NMA within comparative effectiveness research, it comes with potential challenges. For example, within-study correlations among treatment comparisons are rarely reported in the published literature. Yet, these correlations are pivotal for valid statistical inference. As demonstrated in earlier studies, ignoring these correlations can inflate mean squared errors of the resulting point estimates and lead to inaccurate standard error estimates. This article introduces a composite likelihood-based approach that ensures accurate statistical inference without requiring knowledge of the within-study correlations. The proposed method is computationally robust and efficient, with substantially reduced computational time compared to the state-of-the-science methods implemented in R packages. The proposed method was evaluated through extensive simulations and applied to two important applications including an NMA comparing interventions for primary open-angle glaucoma, and another comparing treatments for chronic prostatitis and chronic pelvic pain syndrome.
Yu-Lun Liu, Bingyu Zhang, Haitao Chu, Yong Chen. Network meta-analysis made simple: A composite likelihood approach. Research Synthesis Methods. 2025;16(2):272-290. doi:10.1017/rsm.2024.12. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527528/pdf/
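A minimal sketch of the independence composite likelihood idea: treating each study contrast as independent removes the unknown within-study correlations from the likelihood entirely. The study values below are hypothetical, and the published method is considerably richer than this one-parameter illustration.

```python
import math

def norm_logpdf(x, mu, sd):
    # Log-density of a normal distribution
    return -0.5 * math.log(2 * math.pi * sd * sd) - (x - mu) ** 2 / (2 * sd * sd)

def composite_loglik(d, y, se):
    """Independence composite log-likelihood for a common effect d: each
    study contrast contributes its marginal normal log-density, so the
    within-study correlations never appear."""
    return sum(norm_logpdf(yi, d, s) for yi, s in zip(y, se))

# Hypothetical B-vs-A contrasts and standard errors from three studies
y = [0.30, 0.50, 0.40]
se = [0.10, 0.20, 0.10]
grid = [i / 1000 for i in range(0, 1001)]
d_hat = max(grid, key=lambda d: composite_loglik(d, y, se))
print(d_hat)  # the precision-weighted mean, 0.367
```

Because each marginal term is fully specified by the reported contrast and its standard error, the estimator is computable from published summaries alone; valid standard errors then come from a sandwich-type correction rather than the naive information matrix.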
Marianne A Jonker, Hassan Pazira, Anthony C C Coolen
Marianne A Jonker, Hassan Pazira, Anthony C C Coolen. Bayesian Federated Inference for regression models based on non-shared medical center data - ERRATUM. Research Synthesis Methods. 2025;16(2):424. doi:10.1017/rsm.2025.23. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527526/pdf/
Guanbo Wang, Sean McGrath, Yi Lian. CausalMetaR: An R package for performing causally interpretable meta-analyses - ERRATUM. Research Synthesis Methods. 2025;16(2):441. doi:10.1017/rsm.2025.22. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527490/pdf/
Pub Date: 2025-03-01. Epub Date: 2025-03-21. DOI: 10.1017/rsm.2024.4
Lu Li, Lifeng Lin, Joseph C Cappelleri, Haitao Chu, Yong Chen
Double-zero-event studies (DZS) pose a challenge for accurately estimating the overall treatment effect in meta-analysis (MA). Current approaches, such as continuity correction or omission of DZS, are commonly employed, yet these ad hoc methods can yield biased conclusions. Although the standard bivariate generalized linear mixed model (BGLMM) can accommodate DZS, it fails to address the potential systemic differences between DZS and other studies. In this article, we propose a zero-inflated bivariate generalized linear mixed model (ZIBGLMM) to tackle this issue. This two-component finite mixture model includes zero inflation for a subpopulation with negligible or extremely low risk. We develop both frequentist and Bayesian versions of ZIBGLMM and examine its performance in estimating risk ratios against the BGLMM and conventional two-stage MA that excludes DZS. Through extensive simulation studies and real-world MA case studies, we demonstrate that ZIBGLMM outperforms the BGLMM and conventional two-stage MA that excludes DZS in estimating the true effect size with substantially less bias and comparable coverage probability.
Lu Li, Lifeng Lin, Joseph C Cappelleri, Haitao Chu, Yong Chen. ZIBGLMM: Zero-inflated bivariate generalized linear mixed model for meta-analysis with double-zero-event studies. Research Synthesis Methods. 2025;16(2):251-271. doi:10.1017/rsm.2024.4. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527523/pdf/
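The zero-inflation component can be illustrated with a one-arm toy likelihood: allowing a point mass of (near-)zero-risk populations makes a double-zero study far less surprising. This is only a sketch of the mixture idea, not the bivariate ZIBGLMM; the mixing probability pi0 and event probability p are assumed values.

```python
import math

def zib_loglik(pi0, p, events, n):
    """Log-likelihood of one arm under a zero-inflated binomial: with
    probability pi0 the study draws from a (near-)zero-risk subpopulation,
    otherwise events ~ Binomial(n, p)."""
    binom = math.comb(n, events) * p ** events * (1 - p) ** (n - events)
    if events == 0:
        return math.log(pi0 + (1 - pi0) * binom)
    return math.log((1 - pi0) * binom)

# A zero-event arm of size 100 under the plain binomial vs the
# zero-inflated model with 30% of studies in the zero-risk class.
with_zi = zib_loglik(0.3, 0.05, 0, 100)
without = zib_loglik(0.0, 0.05, 0, 100)
print(with_zi > without)  # zero inflation makes zero events more plausible
```

Because the mixture assigns real probability to zero-event outcomes, such studies can enter the likelihood directly, with no continuity correction or exclusion needed.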