Pub Date: 2025-11-01 | Epub Date: 2025-08-07 | DOI: 10.1017/rsm.2025.10027
Ronny Scherer, Diego G Campos
To synthesize evidence on the relations among multiple constructs, measures, or concepts, meta-analyzing correlation matrices across primary studies has become a crucial analytic approach. Common meta-analytic approaches employ univariate or multivariate models to estimate a pooled correlation matrix, which is subjected to further analyses, such as structural equation modeling. In practice, meta-analysts often extract multiple correlation matrices per study from various samples, study sites, labs, or countries, thus introducing hierarchical effect size multiplicity into the meta-analytic data. However, this feature has largely been ignored when pooling correlation matrices for meta-analysis. To contribute to the methodological development in this area, we describe a multilevel, multivariate, and random-effects modeling approach, which pools correlation matrices meta-analytically and, at the same time, addresses hierarchical effect size multiplicity. Specifically, it allows meta-analysts to test various assumptions on the dependencies among random effects, aiding the selection of a meta-analytic baseline model. We describe this approach, present four working models within it, and illustrate them with an example and the corresponding R code.
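The article illustrates its models with R code; purely as a rough illustration of the underlying idea (not the authors' method, which fits all correlation cells jointly with structured random effects), the following Python sketch pools a single correlation cell in two steps: it first combines the multiple estimates reported within each study, then applies DerSimonian-Laird random-effects pooling across studies, all on the Fisher-z scale. The data layout is hypothetical.

```python
import numpy as np

def pool_one_correlation(rows):
    """rows: list of (study_id, r, n) for one correlation cell.
    Two-step pooling that crudely addresses hierarchical multiplicity:
    (1) fixed-effect average of the duplicate estimates within each study,
    (2) DerSimonian-Laird random-effects pooling across studies."""
    # Step 1: combine multiple estimates per study on the Fisher-z scale,
    # where var(z) ~ 1 / (n - 3)
    by_study = {}
    for sid, r, n in rows:
        by_study.setdefault(sid, []).append((np.arctanh(r), 1.0 / (n - 3)))
    z, v = [], []
    for ests in by_study.values():
        zs = np.array([e[0] for e in ests])
        ws = np.array([1.0 / e[1] for e in ests])
        z.append(float(np.sum(ws * zs) / np.sum(ws)))
        v.append(1.0 / float(np.sum(ws)))
    z, v = np.array(z), np.array(v)
    # Step 2: DerSimonian-Laird tau^2 and the random-effects mean
    w = 1.0 / v
    z_fe = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fe) ** 2)
    df = len(z) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c) if df > 0 else 0.0
    w_re = 1.0 / (v + tau2)
    z_re = np.sum(w_re * z) / np.sum(w_re)
    return float(np.tanh(z_re))  # back-transform to the correlation scale
```

This two-step shortcut discards information that the article's joint multilevel models retain (e.g., covariances between cells of the same matrix); it is meant only to make the notion of hierarchical effect size multiplicity concrete.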
"Meta-analyzing correlation matrices in the presence of hierarchical effect size multiplicity." Research Synthesis Methods 16(6): 828-858. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12657669/pdf/
Pub Date: 2025-11-01 | Epub Date: 2025-07-10 | DOI: 10.1017/rsm.2025.10021
Chengyang Gao, Anna Heath, Gianluca Baio
Background: Understanding the relative costs and effectiveness of all competing interventions is crucial to informing health resource allocation. However, to receive regulatory approval for efficacy, novel pharmaceuticals are typically compared only against placebo or standard of care. Estimating their relative efficacy against the best alternative intervention therefore relies on indirect comparisons of different interventions. When treatment effect modifiers are distributed differently across trials, population adjustment is necessary to ensure a fair comparison. Matching-Adjusted Indirect Comparisons (MAIC) is the most widely adopted weighting-based method for this purpose. Nevertheless, MAIC can exhibit instability under poor population overlap, and regression-based approaches to overcoming this issue depend heavily on parametric assumptions.
Methods: We introduce a novel method, 'G-MAIC,' which combines outcome regression and weighting adjustment to address these limitations. Inspired by Bayesian survey inference, G-MAIC employs the Bayesian bootstrap to propagate the uncertainty of population-adjusted estimates. We evaluate the performance of G-MAIC against standard non-adjusted methods, MAIC, and parametric G-computation in a simulation study encompassing 18 scenarios with varying trial sample sizes, population overlaps, and covariate structures.
Results: Under poor overlap and small sample sizes, MAIC can produce implausible variance estimates or larger bias than non-adjusted methods, depending on the covariate structures of the two trials being compared. G-MAIC mitigates this issue, achieving performance comparable to parametric G-computation with reduced reliance on parametric assumptions.
Conclusion: G-MAIC presents a robust alternative to the widely adopted MAIC for population-adjusted indirect comparisons. The underlying framework is flexible and can accommodate advanced nonparametric outcome models and alternative weighting schemes.
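The weighting step that both MAIC and G-MAIC build on can be sketched compactly. Assuming the standard method-of-moments formulation (weights of the form w_i = exp(x_i'a), with a chosen so that the weighted IPD covariate means match the aggregate trial's reported means), a minimal Python version using Newton's method on the convex objective Q(a) = Σ exp((x_i - x̄_target)'a) is:

```python
import numpy as np

def maic_weights(X_ipd, target_means, n_iter=50):
    """Method-of-moments MAIC-style weights.
    X_ipd: (n, p) individual-level covariates; target_means: (p,) aggregate
    means from the comparator trial. Minimizing Q(a) = sum_i exp(xc_i' a),
    with xc_i centered at the target means, forces the weighted covariate
    means to equal the target means exactly at the optimum."""
    Xc = X_ipd - target_means           # center at the target population
    a = np.zeros(Xc.shape[1])
    for _ in range(n_iter):             # Newton steps on the convex Q(a)
        e = np.exp(Xc @ a)
        grad = Xc.T @ e                 # gradient of Q
        hess = Xc.T @ (Xc * e[:, None]) # Hessian of Q (positive definite)
        a = a - np.linalg.solve(hess, grad)
    w = np.exp(Xc @ a)
    return w / w.sum()                  # normalize for convenience
```

With normalized weights, the effective sample size 1 / Σ w_i² is the usual diagnostic: the instability under poor overlap that the article describes shows up as a collapse of this quantity toward a handful of dominant weights.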
"Regression augmented weighting adjustment for indirect comparisons in health decision modelling." Research Synthesis Methods 16(6): 900-921. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12657667/pdf/
Pub Date: 2025-11-01 | Epub Date: 2025-08-07 | DOI: 10.1017/rsm.2025.10028
Danni Xia, Honghao Lai, Weilong Zhao, Jiajie Huang, Jiayi Liu, Ziying Ye, Jianing Liu, Mingyao Sun, Liangying Hou, Bei Pan, Long Ge
This study explores the feasibility and accuracy of using large language models (LLMs) to assess the risk of bias (ROB) in cohort studies. We conducted a pilot and feasibility study of 30 cohort studies randomly selected from the reference lists of published Cochrane reviews. We developed a structured prompt to guide ChatGPT-4o, Moonshot-v1-128k, and DeepSeek-V3 in assessing the ROB of each cohort study twice. We used ROB assessments by three evidence-based medicine experts as the gold standard and evaluated the accuracy of the LLMs by calculating the correct assessment rate, sensitivity, specificity, and F1 scores at the overall and item-specific levels. Consistency of the overall and item-specific assessments was evaluated using Cohen's kappa (κ) and prevalence-adjusted bias-adjusted kappa. Efficiency was estimated from the mean assessment time required. The three LLMs showed distinct performance across the eight assessment items. Overall accuracy was comparable (80.8%-83.3%). Moonshot-v1-128k showed superior sensitivity in population selection (0.92 versus ChatGPT-4o's 0.55, P < 0.001) and led in F1 score for that item (F1 = 0.80 versus ChatGPT-4o's 0.67, P = 0.004). ChatGPT-4o demonstrated the highest consistency (mean κ = 96.5%), with perfect agreement (100%) for outcome confidence. ChatGPT-4o was also 97.3% faster per article (32.8 seconds versus 20 minutes manually) and outperformed Moonshot-v1-128k and DeepSeek-V3 by 47%-50% in processing speed. The efficient and accurate assessment of ROB in cohort studies by ChatGPT-4o, Moonshot-v1-128k, and DeepSeek-V3 highlights the potential of LLMs to enhance the systematic review process.
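The accuracy metrics reported here are standard; as a self-contained reference (not the authors' code), this Python helper computes accuracy, sensitivity, specificity, F1, and Cohen's kappa for one binary item, comparing model judgments against the expert gold standard (1 = concern present, 0 = absent; the encoding is an assumption for illustration):

```python
def binary_agreement_metrics(gold, pred):
    """Accuracy statistics for one binary risk-of-bias item:
    gold-standard expert judgments vs. model judgments (0/1 lists)."""
    tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
    tn = sum(g == 0 and p == 0 for g, p in zip(gold, pred))
    fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
    fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))
    n = len(gold)
    acc = (tp + tn) / n
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    f1 = 2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn else float("nan")
    # Cohen's kappa: observed agreement corrected for chance agreement
    pe = ((tp + fp) / n) * ((tp + fn) / n) + ((tn + fn) / n) * ((tn + fp) / n)
    kappa = (acc - pe) / (1 - pe) if pe < 1 else 1.0
    return {"accuracy": acc, "sensitivity": sens,
            "specificity": spec, "f1": f1, "kappa": kappa}
```

The prevalence-adjusted bias-adjusted kappa mentioned in the abstract replaces the chance-agreement term with 0.5 for the binary case; it is omitted here for brevity.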
"Assessing risk of bias of cohort studies with large language models." Research Synthesis Methods 16(6): 990-1004. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12657654/pdf/
Pub Date: 2025-11-01 | Epub Date: 2025-07-10 | DOI: 10.1017/rsm.2025.10022
Yajie Duan, Thomas Mathew, Demissie Alemayehu, Ge Cheng
Random-effects meta-analyses with only a few studies often struggle to estimate between-study heterogeneity accurately, leading to biased effect estimates and confidence intervals with poor coverage. The problem is especially acute for rare diseases, where few studies are available. To address it for normally distributed outcomes, two new approaches are proposed for constructing confidence limits for the global mean: one based on fiducial inference, and the other on two modifications of the signed log-likelihood ratio test statistic designed to improve performance with small numbers of studies. The proposed methods were evaluated numerically and compared with the Hartung-Knapp-Sidik-Jonkman approach and its modification for handling small numbers of studies. Simulation results indicated that the proposed methods achieve coverage probabilities closer to the nominal level and produce shorter confidence intervals than existing methods. Two real examples illustrate the proposed methods.
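For context, the Hartung-Knapp-Sidik-Jonkman comparator rescales the random-effects variance using a weighted residual sum of squares and bases the interval on a t quantile with k-1 degrees of freedom. A minimal Python sketch (with DerSimonian-Laird tau², and the t critical value supplied by the caller rather than computed, to keep the example dependency-free) is:

```python
import numpy as np

def hksj_interval(y, v, t_crit):
    """Hartung-Knapp-Sidik-Jonkman CI for the random-effects mean.
    y: study effect estimates; v: within-study variances; t_crit: the
    t quantile with k-1 df (e.g., 2.776 for k = 5 and a 95% CI)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)        # DerSimonian-Laird estimate
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    # HKSJ variance: weighted residual scale over (k-1) * sum of weights
    se = np.sqrt(np.sum(w_re * (y - mu) ** 2) / ((k - 1) * np.sum(w_re)))
    return mu - t_crit * se, mu + t_crit * se
```

With very few studies this interval can become erratically wide or narrow, which is precisely the behavior the article's fiducial and modified signed log-likelihood ratio approaches aim to improve on.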
"Novel approaches for random-effects meta-analysis of a small number of studies under normality." Research Synthesis Methods 16(6): 922-936. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12657671/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-06-23 | DOI: 10.1017/rsm.2025.10019
Gena Nelson, Sarah Quinn, Sean Grant, Shaina D Trevino, Elizabeth Day, Maria Schweer-Collins, Hannah Carter, Peter Boedeker, Emily Tanner-Smith
Study coding is an essential component of the research synthesis process. Data extracted during study coding serve as a direct link between the included studies and the synthesis results, allowing reviewers to justify claims about the findings from a set of related studies. The purpose of this tutorial is to provide authors, particularly those new to research synthesis, with recommendations to develop study coding manuals and forms that result in efficient, high-quality data extraction. Each of the 10 easy-to-follow practices is supported with additional resources, examples, or non-examples to help authors develop high-quality study coding materials. With the increase in publication of meta-analyses in recent years across many disciplines, a primary goal of this article is to enhance the quality of study coding materials that authors develop.
"Ten practices for successful study coding in research syntheses: Developing coding manuals and coding forms." Research Synthesis Methods 16(5): 709-728. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527492/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-06-18 | DOI: 10.1017/rsm.2025.10010
Cynthia Huber, Tim Friede
Model-based recursive partitioning (MOB) and its extension, metaMOB, are tools for identifying subgroups with differential treatment effects. When pooling data from multiple trials, the metaMOB approach uses random effects to model heterogeneity in treatment effects. In situations where interventions offer only small overall benefits and require extensive, costly trials with large participant enrollment, leveraging individual participant data (IPD) from multiple trials can help identify the individuals most likely to benefit from the intervention. We explore the application of MOB and metaMOB to the treatment of non-specific low back pain, using synthetic data based on a subset of the IPD meta-analysis by Patel et al. Our study underscores the need to explore heterogeneity in both intercepts and treatment effects when identifying subgroups with differential treatment effects in IPD meta-analyses.
"Subgroup identification using individual participant data from multiple trials: An application in low back pain." Research Synthesis Methods 16(5): 813-822. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527538/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-06-10 | DOI: 10.1017/rsm.2025.10017
Shahab Jolani
Modern quantitative evidence synthesis methods often combine patient-level data from different sources, known as individual participant data (IPD) sets. A specific challenge in meta-analysis of IPD sets is the presence of systematically missing data, when certain variables are not measured at all in some studies, and sporadically missing data, when measurements of certain variables are incomplete across studies. Multiple imputation (MI) is among the most effective approaches for dealing with missing data. However, MI of hierarchical data, such as an IPD meta-analysis, requires advanced imputation routines that preserve the hierarchical data structure and accommodate both systematically and sporadically missing data. We have recently developed a new class of hierarchical imputation methods within the MICE framework tailored to continuous variables. This article extends that methodology to categorical variables, accommodating the simultaneous presence of systematically and sporadically missing data in nested designs with arbitrary missing data patterns. To address the categorical nature of the data, we propose an accept-reject algorithm within the imputation process. Following theoretical discussions, we evaluate the performance of the new methodology through simulation studies and demonstrate its application using an IPD set from patients with kidney disease.
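The article's accept-reject step is embedded inside a hierarchical MICE imputation routine, which is beyond a short sketch; but the generic accept-reject idea for drawing a category can be shown in a few lines of Python (uniform proposal with envelope constant M = k·max p; illustrative only, not the paper's algorithm):

```python
import random

def accept_reject_categorical(probs, rng=None):
    """Draw one category index from `probs` by accept-reject sampling.
    Proposal q is uniform over the k categories; a candidate c is accepted
    with probability p(c) / (M * q(c)), where M = k * max(probs) guarantees
    p(c) <= M * q(c) for every category."""
    rng = rng or random.Random()
    k = len(probs)
    m = max(probs) * k
    while True:
        c = rng.randrange(k)             # propose uniformly
        if rng.random() < probs[c] * k / m:
            return c                     # accept with prob p(c)/max(p)
```

In an imputation context, `probs` would come from a fitted categorical model for the missing cell; the accepted draw then fills the cell for that imputed data set.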
"Hierarchical imputation of categorical variables in the presence of systematically and sporadically missing data." Research Synthesis Methods 16(5): 729-757. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527547/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-06-16 | DOI: 10.1017/rsm.2025.25
Hanan Khalil, Vivian Welch, Matthew Grainger, Fiona Campbell
Mapping reviews are valuable tools for synthesizing and visualizing research evidence, providing a comprehensive overview of studies within a specific field. Their visual approach enhances accessibility, enabling researchers, policymakers, and practitioners to efficiently identify key findings, trends, and knowledge gaps. These reviews are particularly significant in guiding future research, informing funding decisions, and shaping evidence-based policymaking. In environmental science, as in the health and social sciences, mapping reviews play a crucial role in identifying effective conservation strategies, tracking interventions, and supporting targeted programs.
Unlike systematic reviews, which assess intervention effectiveness, mapping reviews address broad research questions, aiming to chart the existing evidence on a given topic. They use structured methodologies to identify patterns, gaps, and trends, often employing visual tools to enhance data accessibility. A well-defined scope, guided by inclusion and exclusion criteria, ensures a transparent study selection process. Comprehensive search strategies, often spanning multiple databases, maximize evidence capture. Effective screening, combining automated and manual processes, ensures relevance, while data extraction emphasizes high-level categories such as study design and population demographics. Advanced software tools, including EPPI-Reviewer and MindMeister, support data extraction and visualization, with evidence gap maps highlighting well-covered areas and research voids.
Despite their advantages, mapping reviews present challenges. The categorization and coding of studies can introduce subjective biases, and the process demands substantial resources. Automation and artificial intelligence offer promising solutions, improving efficiency while addressing integration and multilingual limitations. As methodological advancements continue, interdisciplinary collaboration will be essential to fully realize the potential of mapping reviews across scientific disciplines.
"Methodology for mapping reviews, evidence maps, and gap maps." Research Synthesis Methods 16(5): 786-796. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527509/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-06-06 | DOI: 10.1017/rsm.2025.10012
A E Ades, Deborah M Caldwell, Sumayya Anwer, Sofia Dias
"Continuity corrections with Mantel-Haenszel estimators in Cochrane reviews." Research Synthesis Methods 16(5): 823-825. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527532/pdf/