Pub Date: 2025-09-01 | Epub Date: 2025-06-18 | DOI: 10.1017/rsm.2025.10010
Cynthia Huber, Tim Friede
Model-based recursive partitioning (MOB) and its extension, metaMOB, are tools for identifying subgroups with differential treatment effects. When pooling data from various trials, the metaMOB approach uses random effects to model the heterogeneity of treatment effects. In situations where interventions offer only small overall benefits and require extensive, costly trials with large participant enrollment, leveraging individual participant data (IPD) from multiple trials can help identify individuals who are most likely to benefit from the intervention. We explore the application of MOB and metaMOB in the context of non-specific low back pain treatment, using synthetic data based on a subset of the individual participant data meta-analysis by Patel et al. [1] Our study underscores the need to explore heterogeneity in intercepts and treatment effects to identify subgroups with differential treatment effects in IPD meta-analyses.
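The split-selection idea behind MOB-style subgroup identification can be conveyed with a deliberately simplified sketch: scan candidate cutpoints on a partitioning covariate and keep the split where the estimated treatment effect (here, a plain difference in outcome means) differs most between the resulting subgroups. This is a toy stand-in only; the actual MOB algorithm uses parameter-instability tests and recursive partitioning (e.g., via the R package partykit), and all data and names below are hypothetical.

```python
def best_split_for_treatment_effect(x, treat, y):
    """Toy subgroup search: scan cutpoints on covariate x and pick the
    one where the treatment effect (difference in outcome means between
    treated and control) differs most between the two subgroups."""
    def effect(idx):
        t = [y[i] for i in idx if treat[i] == 1]
        c = [y[i] for i in idx if treat[i] == 0]
        if not t or not c:           # split leaves an arm empty: skip
            return None
        return sum(t) / len(t) - sum(c) / len(c)

    best_cut, best_gap = None, -1.0
    for cut in sorted(set(x))[:-1]:  # every cutpoint except the maximum
        left = [i for i in range(len(x)) if x[i] <= cut]
        right = [i for i in range(len(x)) if x[i] > cut]
        el, er = effect(left), effect(right)
        if el is None or er is None:
            continue
        gap = abs(el - er)
        if gap > best_gap:
            best_cut, best_gap = cut, gap
    return best_cut
```

For example, with a covariate taking values 1 to 4 and a treatment that only helps when x > 2, the search recovers the cutpoint 2.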
"Subgroup identification using individual participant data from multiple trials: An application in low back pain." Research Synthesis Methods 16(5): 813-822.
Pub Date: 2025-09-01 | Epub Date: 2025-06-10 | DOI: 10.1017/rsm.2025.10017
Shahab Jolani
Modern quantitative evidence synthesis methods often combine patient-level data from different sources, known as individual participant data (IPD) sets. A specific challenge in meta-analysis of IPD sets is the presence of systematically missing data, where certain variables are not measured at all in some studies, and sporadically missing data, where measurements of certain variables are incomplete within studies. Multiple imputation (MI) is among the preferred approaches for dealing with missing data. However, MI of hierarchical data, such as in IPD meta-analysis, requires advanced imputation routines that preserve the hierarchical data structure and accommodate the presence of both systematically and sporadically missing data. We have recently developed a new class of hierarchical imputation methods within the MICE framework tailored to continuous variables. This article discusses extensions of this methodology to categorical variables, accommodating the simultaneous presence of systematically and sporadically missing data in nested designs with arbitrary missing data patterns. To address the categorical nature of the data, we propose an accept-reject algorithm within the imputation process. Following theoretical discussions, we evaluate the performance of the new methodology through simulation studies and demonstrate its application using an IPD set from patients with kidney disease.
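The accept-reject step for a categorical draw can be sketched in miniature: given a vector of predicted category probabilities for one missing value, propose a category uniformly and accept it with probability proportional to its model probability. The function below is an illustrative toy, not the authors' algorithm, which embeds such a step inside MICE-style chained equations for hierarchical data.

```python
import random

def accept_reject_categorical(probs, rng):
    """Draw one category index from probability vector `probs` via
    accept-reject: propose uniformly over categories, then accept with
    probability probs[k] / max(probs). Illustrative toy only."""
    m = max(probs)
    while True:
        k = rng.randrange(len(probs))      # uniform proposal
        if rng.random() < probs[k] / m:    # accept step
            return k
```

In a full imputation routine, `probs` would come from a (multilevel) multinomial model refitted at each MICE iteration; here it is just a fixed vector.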
"Hierarchical imputation of categorical variables in the presence of systematically and sporadically missing data." Research Synthesis Methods 16(5): 729-757.
Pub Date: 2025-09-01 | Epub Date: 2025-06-16 | DOI: 10.1017/rsm.2025.25
Hanan Khalil, Vivian Welch, Matthew Grainger, Fiona Campbell
Mapping reviews are valuable tools for synthesizing and visualizing research evidence, providing a comprehensive overview of studies within a specific field. Their visual approach enhances accessibility, enabling researchers, policymakers, and practitioners to efficiently identify key findings, trends, and knowledge gaps. These reviews are particularly significant in guiding future research, informing funding decisions, and shaping evidence-based policymaking. In environmental science, similar to the health and social sciences, mapping reviews play a crucial role in identifying effective conservation strategies, tracking interventions, and supporting targeted programs.

Unlike systematic reviews, which assess intervention effectiveness, mapping reviews focus on broad research questions, aiming to chart the existing evidence on a given topic. They use structured methodologies to identify patterns, gaps, and trends, often employing visual tools to enhance data accessibility. A well-defined scope, guided by inclusion and exclusion criteria, ensures a transparent study selection process. Comprehensive search strategies, often spanning multiple databases, maximize evidence capture. Effective screening, combining automated and manual processes, ensures relevance, while data extraction emphasizes high-level categories such as study design and population demographics. Advanced software tools, including EPPI-Reviewer and MindMeister, support data extraction and visualization, with evidence gap maps highlighting robust areas and research voids.

Despite their advantages, mapping reviews present challenges. The categorization and coding of studies can introduce subjective biases, and the process demands substantial resources. Automation and artificial intelligence offer promising solutions, improving efficiency while addressing integration and multilingual limitations. As methodological advancements continue, interdisciplinary collaboration will be essential to fully realize the potential of mapping reviews across scientific disciplines.
"Methodology for mapping reviews, evidence maps, and gap maps." Research Synthesis Methods 16(5): 786-796.
Pub Date: 2025-09-01 | Epub Date: 2025-06-06 | DOI: 10.1017/rsm.2025.10012
A E Ades, Deborah M Caldwell, Sumayya Anwer, Sofia Dias
"Continuity corrections with Mantel-Haenszel estimators in Cochrane reviews." Research Synthesis Methods 16(5): 823-825. (No abstract available.)
Pub Date: 2025-09-01 | Epub Date: 2025-06-16 | DOI: 10.1017/rsm.2025.24
Zheng Wang, Thomas A Murray, Wenshan Han, Lifeng Lin, Lianne K Siegel, Haitao Chu
Network meta-analysis (NMA) enables simultaneous assessment of multiple treatments by combining both direct and indirect evidence. While NMAs are increasingly important in healthcare decision-making, challenges remain due to limited direct comparisons between treatments. This data sparsity complicates the accurate estimation of correlations among treatments in arm-based NMA (AB-NMA). To address these challenges, we introduce a novel sensitivity analysis tool tailored for AB-NMA. This study pioneers a tipping point analysis within a Bayesian framework, specifically targeting correlation parameters to assess their influence on the robustness of conclusions about relative treatment effects. The analysis explores changes in the conclusion based on whether the 95% credible interval includes the null value (referred to as the interval conclusion) and the magnitude of point estimates. Applying this approach to multiple NMA datasets, including 112 treatment pairs, we identified tipping points in 13 pairs (11.6%) for interval conclusion change and in 29 pairs (25.9%) for magnitude change with a threshold at 15%. These findings underscore potential commonality in tipping points and emphasize the importance of our proposed analysis, especially in networks with sparse direct comparisons or wide credible intervals for correlation estimates. A case study provides a visual illustration and interpretation of the tipping point analysis. We recommend integrating this tipping point analysis as a standard practice in AB-NMA.
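The tipping-point logic can be conveyed with a stripped-down sketch: treat two arm effects as approximately normal with an assumed between-arm correlation rho, and scan a grid of rho values for the point at which the 95% interval for their difference stops (or starts) including the null. This is a normal-approximation toy, not the Bayesian AB-NMA model of the paper; all inputs are hypothetical.

```python
import math

def interval_crosses_null(mu1, mu2, v1, v2, rho, z=1.96):
    """Does the approximate 95% interval for the arm difference
    mu2 - mu1 include 0, given an assumed between-arm correlation rho?"""
    d = mu2 - mu1
    sd = math.sqrt(v1 + v2 - 2 * rho * math.sqrt(v1 * v2))
    return (d - z * sd) <= 0.0 <= (d + z * sd)

def tipping_point(mu1, mu2, v1, v2):
    """Smallest rho on a grid from -0.99 to 0.99 (step 0.01) at which
    the interval conclusion flips relative to rho = 0; None if never."""
    base = interval_crosses_null(mu1, mu2, v1, v2, 0.0)
    for i in range(199):
        rho = round(-0.99 + 0.01 * i, 2)
        if interval_crosses_null(mu1, mu2, v1, v2, rho) != base:
            return rho
    return None
```

With hypothetical arm effects of 0 and 0.5 and arm variances of 0.09, the interval conclusion flips once the assumed correlation exceeds about 0.64: a higher correlation shrinks the variance of the difference until the interval no longer includes the null.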
"Tipping point analysis in network meta-analysis." Research Synthesis Methods 16(5): 797-812.
Pub Date: 2025-09-01 | Epub Date: 2025-06-18 | DOI: 10.1017/rsm.2025.26
Leonhard Held, Felix Hofmann, Samuel Pawel
P-value functions are modern statistical tools that unify effect estimation and hypothesis testing and can provide alternative point and interval estimates compared to standard meta-analysis methods, using any of the many p-value combination procedures available (Xie et al., 2011, JASA). We provide a systematic comparison of different combination procedures, both from a theoretical perspective and through simulation. We show that many prominent p-value combination methods (e.g. Fisher's method) are not invariant to the orientation of the underlying one-sided p-values. Only Edgington's method, a lesser-known combination method based on the sum of p-values, is orientation-invariant and still provides confidence intervals not restricted to be symmetric around the point estimate. Adjustments for heterogeneity can also be made and results from a simulation study indicate that Edgington's method can compete with more standard meta-analytic methods.
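Edgington's method combines n one-sided p-values through their sum: under the null, the sum of n independent U(0,1) variables follows the Irwin-Hall distribution, and its CDF evaluated at the observed sum gives the combined p-value. A minimal sketch, assuming independent p-values:

```python
from math import comb, factorial, floor

def edgington(pvals):
    """Combined p-value for Edgington's method: the Irwin-Hall CDF of
    the sum of n independent one-sided p-values, evaluated at that sum."""
    n = len(pvals)
    s = sum(pvals)
    # Irwin-Hall CDF: (1/n!) * sum_k (-1)^k C(n,k) (s-k)^n for k <= floor(s)
    return sum((-1) ** k * comb(n, k) * (s - k) ** n
               for k in range(floor(s) + 1)) / factorial(n)
```

Reversing the orientation of every p-value (replacing each p by 1 - p) maps the combined p-value q to 1 - q, which is the orientation-invariance property noted above.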
"A comparison of combined p-value functions for meta-analysis." Research Synthesis Methods 16(5): 758-785.
Pub Date: 2025-07-01 | Epub Date: 2025-06-04 | DOI: 10.1017/rsm.2025.10011
Will Robinson, Alex Sutton, Clareece Nevill, Nicola Cooper
Graphical displays are often utilised for high-quality reporting of meta-analyses. Previous work has presented augmentations to funnel plots that assess the impact that an additional trial would have on an existing meta-analysis. However, decision-makers, such as the National Institute for Health and Care Excellence in the United Kingdom, assess health technologies based on their cost-effectiveness, as opposed to efficacy alone. Motivated by this fact, this article outlines a novel approach, developed for augmenting funnel plots, based on the ability of an additional trial to change a decision regarding the optimal intervention. The approach is presented for a generalised class of economic decision models, where the clinical effectiveness of the health technology of interest is informed by a meta-analysis, and is illustrated with an example application. The 'decision contours' produced from the proposed methods have various potential uses not only for decision-makers and research funders but also for other researchers, such as meta-analysts and primary researchers designing new studies, as well as those developing health technologies, such as pharmaceutical companies. The relationship between the new approach and existing methods for determining sample size calculations for future trials is also considered.
"Exploring graphical approaches to assess the impact of an additional trial on a decision model via updated meta-analysis." Research Synthesis Methods 16(4): 672-687.
Pub Date: 2025-07-01 | Epub Date: 2025-03-24 | DOI: 10.1017/rsm.2025.15
Adriana López-Pineda, Rauf Nouni-García, Álvaro Carbonell-Soliva, Vicente F Gil-Guillén, Concepción Carratalá-Munuera, Fernando Borrás
With the increasing volume of scientific literature, there is a need to streamline the screening process for titles and abstracts in systematic reviews, reduce the workload for reviewers, and minimize errors. This study validated artificial intelligence (AI) tools, specifically Llama 3 70B via Groq's application programming interface (API) and ChatGPT-4o mini via OpenAI's API, for automating this process in biomedical research. It compared these AI tools with human reviewers using 1,081 articles after duplicate removal. Each AI model was tested in three configurations to assess sensitivity, specificity, predictive values, and likelihood ratios. The Llama 3 model's LLA_2 configuration achieved 77.5% sensitivity and 91.4% specificity, with 90.2% accuracy, a positive predictive value (PPV) of 44.3%, and a negative predictive value (NPV) of 97.9%. The ChatGPT-4o mini model's CHAT_2 configuration showed 56.2% sensitivity, 95.1% specificity, 92.0% accuracy, a PPV of 50.6%, and an NPV of 96.1%. Both models demonstrated strong specificity, with CHAT_2 having higher overall accuracy. Despite these promising results, manual validation remains necessary to address false positives and negatives, ensuring that no important studies are overlooked. This study suggests that AI can significantly enhance efficiency and accuracy in systematic reviews, potentially revolutionizing not only biomedical research but also other fields requiring extensive literature reviews.
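All of the screening metrics reported above derive from a single 2x2 confusion matrix of AI decisions against the human reference. A small helper makes the definitions explicit (any counts used with it here are made up for illustration, not those of the study):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values, and accuracy from a
    2x2 confusion matrix (tp: relevant records kept, tn: irrelevant
    records correctly excluded)."""
    return {
        "sensitivity": tp / (tp + fn),   # recall of relevant studies
        "specificity": tn / (tn + fp),   # correct exclusions
        "ppv": tp / (tp + fp),           # precision of inclusions
        "npv": tn / (tn + fn),           # reliability of exclusions
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

The pattern in the reported results (modest PPV despite high accuracy and NPV) is typical when relevant studies are rare: even a small false-positive rate on the large pool of irrelevant records dominates the set flagged for inclusion.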
"Validation of large language models (Llama 3 and ChatGPT-4o mini) for title and abstract screening in biomedical systematic reviews." Research Synthesis Methods 16(4): 620-630.
Pub Date: 2025-07-01 | Epub Date: 2025-05-15 | DOI: 10.1017/rsm.2025.21
Harlan Campbell, Dylan Maciel, Keith Chan, Jeroen P Jansen, Sven Klijn, Kevin Towle, Bill Malcolm, Shannon Cope
The importance of network meta-analysis (NMA) methods for time-to-event (TTE) outcomes that do not rely on the proportional hazards (PH) assumption is increasingly recognized in oncology, where clinical trials evaluating new interventions versus standard comparators often violate this assumption. However, existing NMA methods that allow for time-varying treatment effects do not directly leverage the individual event and censoring times that can be reconstructed from Kaplan-Meier curves, which may be more accurate than discrete hazards. They are also challenging to implement given reparameterizations that rely on discrete hazards. Additionally, two-step methods require assumptions regarding within-study normality and variance. We propose a one-step fully Bayesian parametric individual patient data (IPD)-NMA model that fits TTE data with the exact likelihood and allows for time-varying treatment effects. We define fixed or random effects with the following distributions: Weibull, Gompertz, log-normal, log-logistic, gamma, or generalized gamma. We apply the one-step model to a network of randomized controlled trials (RCTs) evaluating multiple interventions for advanced melanoma and compare results with those obtained with the two-step approach. Additionally, a simulation study was performed to compare the proposed one-step method to the two-step method. The one-step method allows for straightforward model selection among the "standard" distributions, now including gamma and generalized gamma, with treatment effects either on the scale parameter alone or as multivariate treatment effects. The generalized gamma offers the flexibility to model U-shaped hazards within a network of RCTs, with accessible interpretation of parameters, and simplifies to the exponential, Weibull, log-normal, or gamma in special cases.
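"Exact likelihood" here means each observed event contributes the density f(t) while each censored time contributes the survival function S(t), rather than working with discrete hazards or approximate normal likelihoods. A minimal sketch for the Weibull case (parameterization and names are illustrative, not the authors' implementation):

```python
import math

def weibull_loglik(times, events, shape, scale):
    """Exact log-likelihood for right-censored TTE data under a Weibull
    model: events contribute log f(t), censored times log S(t), where
    S(t) = exp(-(t/scale)**shape)."""
    ll = 0.0
    for t, d in zip(times, events):
        log_S = -((t / scale) ** shape)
        if d:  # observed event: log f(t) = log h(t) + log S(t)
            ll += (math.log(shape / scale)
                   + (shape - 1) * math.log(t / scale) + log_S)
        else:  # right-censored at t
            ll += log_S
    return ll
```

With shape = 1 this reduces to the exponential likelihood, one of the special cases mentioned above.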
"One-step parametric network meta-analysis models using the exact likelihood that allow for time-varying treatment effects." Harlan Campbell, Dylan Maciel, Keith Chan, Jeroen P Jansen, Sven Klijn, Kevin Towle, Bill Malcolm, Shannon Cope. Research Synthesis Methods 16(4): 650-671. DOI: 10.1017/rsm.2025.21. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527511/pdf/
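The abstract above distinguishes fitting reconstructed individual event and censoring times with the "exact likelihood" from working with discrete hazards. As a minimal sketch of that exact-likelihood idea (not the authors' Bayesian NMA implementation; the function name and toy data are illustrative only), a right-censored Weibull log-likelihood sums the log density for observed events and the log survival function for censored observations:

```python
import math

def weibull_loglik(times, events, shape, scale):
    """Exact log-likelihood for right-censored TTE data under a Weibull model.

    events[i] = 1 if an event was observed at times[i], 0 if censored.
    """
    ll = 0.0
    for t, d in zip(times, events):
        log_surv = -((t / scale) ** shape)  # log S(t) for the Weibull
        if d:
            # observed event contributes the log density log f(t)
            ll += math.log(shape / scale) + (shape - 1) * math.log(t / scale) + log_surv
        else:
            # censored observation contributes the log survival log S(t)
            ll += log_surv
    return ll

# toy reconstructed data: three events and one censored time
times = [2.0, 5.0, 7.5, 10.0]
events = [1, 1, 1, 0]
ll = weibull_loglik(times, events, shape=1.2, scale=6.0)
```

With shape fixed at 1 the Weibull reduces to the exponential, one of the special cases the abstract mentions for the generalized gamma family; the same event/censoring decomposition carries over to the other parametric distributions listed.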
Pub Date: 2025-07-01 · Epub Date: 2025-04-25 · DOI: 10.1017/rsm.2025.18
Amalia Karahalios, Ian R White, Simon L Turner, Georgia Salanti, G Peter Herbison, Areti Angeliki Veroniki, Adriani Nikolakopoulou, Joanne E McKenzie
Network meta-analysis allows the synthesis of relative effects across several treatments. Two broad approaches are available to synthesize the data, arm-synthesis and contrast-synthesis, with several models that can be fitted within each; evaluations comparing these approaches remain limited. We re-analyzed 118 networks of interventions with binary outcomes using three contrast-synthesis models (CSMs; one fitted in a frequentist framework and two in a Bayesian framework) and two arm-synthesis models (ASMs; both fitted in a Bayesian framework). We compared the estimated log odds ratios, their standard errors, ranking measures, and the between-trial heterogeneity across the different models, and investigated whether differences in the results were modified by network characteristics. In general, we observed good agreement between the two Bayesian CSMs with respect to the odds ratios, their standard errors, and the ranking metrics. However, differences were observed when comparing the frequentist CSM and the ASMs to each other and to the Bayesian CSMs. The network characteristics that we investigated, which represented the connectedness of the networks and the rareness of events, were associated with the differences observed between models, but no single factor was associated with the differences across all of the metrics. In conclusion, different models used to synthesize evidence in a network meta-analysis (NMA) can yield different estimates of odds ratios and standard errors, which can impact the final ranking of the treatment options compared.
"An investigation of the impact of using contrast- and arm-synthesis models for network meta-analysis." Amalia Karahalios, Ian R White, Simon L Turner, Georgia Salanti, G Peter Herbison, Areti Angeliki Veroniki, Adriani Nikolakopoulou, Joanne E McKenzie. Research Synthesis Methods 16(4): 631-649. DOI: 10.1017/rsm.2025.18. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527487/pdf/
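The contrast-synthesis approach in the abstract above builds on per-trial relative effects (log odds ratios and their standard errors), whereas arm-synthesis models the arm-level counts directly. As a minimal, hedged illustration of the contrast-level building block only (a two-treatment fixed-effect inverse-variance pooling with made-up data, not the hierarchical CSMs/ASMs evaluated in the paper):

```python
import math

# Hypothetical per-trial 2x2 data: (events_trt, n_trt, events_ctl, n_ctl)
trials = [(12, 50, 20, 50), (8, 40, 15, 42), (30, 100, 45, 98)]

def contrast_synthesis(trials):
    """Inverse-variance pooling of per-trial log odds ratios.

    Contrast-synthesis starts from these trial-level contrasts;
    arm-synthesis would instead model the four arm-level counts directly.
    """
    weight_sum = weighted_est = 0.0
    for e1, n1, e0, n0 in trials:
        lor = math.log(e1 * (n0 - e0)) - math.log(e0 * (n1 - e1))
        var = 1 / e1 + 1 / (n1 - e1) + 1 / e0 + 1 / (n0 - e0)  # Woolf variance
        w = 1.0 / var
        weight_sum += w
        weighted_est += w * lor
    return weighted_est / weight_sum, math.sqrt(1.0 / weight_sum)

pooled_lor, pooled_se = contrast_synthesis(trials)
```

This pairwise sketch extends to a network by expressing every contrast relative to a reference treatment under the consistency assumption; the rare-event sensitivity the paper reports arises because these log odds ratios and variances are undefined or unstable when cell counts approach zero.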