Protocol: The Effects of Communication Strategies on Upcycled Food Acceptance: A Systematic Review and Meta-Analysis
Shuai Ma, Zhihong Xu, Peng Lu, Jean Parrella, Ashlynn Kogut
Campbell Systematic Reviews, 21(4). DOI: 10.1002/cl2.70075. Published 2025-11-14. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cl2.70075

This is the protocol for a Campbell systematic review. The objectives are as follows. Our review will focus exclusively on quantitative evidence from experimental studies, identifying the factors that matter most and offering comprehensive, in-depth recommendations, with a primary emphasis on the communication and marketing strategies whose effectiveness has been evaluated. The findings on consumer acceptance will provide valuable insight for policymakers working to combat food waste. The research questions are as follows: RQ1: What are the key factors influencing consumer acceptance in experimental studies on upcycled foods? RQ2: What communication and marketing strategies have been used to increase consumer acceptance in experimental studies on upcycled foods?
Position Statement on Artificial Intelligence (AI) Use in Evidence Synthesis Across Cochrane, the Campbell Collaboration, JBI, and the Collaboration for Environmental Evidence 2025
Ella Flemyng, Anna Noel-Storr, Biljana Macura, Gerald Gartlehner, James Thomas, Joerg J. Meerpohl, Zoe Jordan, Jan Minx, Angelika Eisele-Metzger, Candyce Hamel, Paweł Jemioło, Kylie Porritt, Matthew Grainger
Campbell Systematic Reviews, 21(4). DOI: 10.1002/cl2.70074. Published 2025-11-10. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12603384/pdf/

Evidence syntheses, including systematic reviews, are a type of research that uses systematic, replicable methods to evaluate all available evidence on a specific question. They are built on the principles of research integrity, including rigor, transparency, and reproducibility. There is wide recognition that artificial intelligence (AI) and automation have the potential to transform the way we produce evidence syntheses, making the process significantly more efficient. However, this technology is potentially disruptive: it is characterized by opaque decision-making and black-box predictions, susceptible to overfitting, potentially embedded with algorithmic biases, and at risk of fabricated outputs and hallucinations. To safeguard evidence synthesis as the cornerstone of trusted, evidence-informed decision-making, Cochrane, the Campbell Collaboration, JBI, and the Collaboration for Environmental Evidence (CEE) have come together to collaborate on a responsible and pragmatic approach to AI use in evidence synthesis.

By AI, we mean different types of automation, as described in the Responsible use of AI in evidence SynthEsis (RAISE) recommendations (Thomas et al. 2025a): specifically, "advanced technologies that enable machines to do highly complex tasks effectively – which would require intelligence if a person were to perform them." This ranges from general automation applications, such as rule-based or trained machine-learning algorithms, to more recent large language models and generative AI approaches.

Incorporating AI in evidence synthesis comes with challenges as well as opportunities. While it is clear we need to make better use of AI if evidence synthesis is to become more timely, affordable, and sustainable, we must also acknowledge the environmental and social costs associated with some forms of AI, particularly large language models. There are risks that misuse could erode methodological standards by exacerbating existing biases and reducing reliability (Hanna et al. 2025; Siemens et al. 2025). These concerns are particularly relevant because current AI developments are often driven by commercial interests and, as such, are often opaque about their limitations and lack appropriate validation and evaluation. Overall, this undermines the reliability and replicability of AI-driven outputs.

To this end, Cochrane, the Campbell Collaboration, JBI, and the CEE have come together to form a joint AI Methods Group (AI Methods Group 2025). The group officially supports the aims of RAISE (Thomas et al. 2025a), which states that we need to work together to ensure AI does not compromise the principles of research integrity on which evidence synthesis was built. RAISE offers tailored recommendations for roles across the evidence synthesis ecosystem, from evidence synthesists to methodologists, from AI development teams to organizations or publishers involved in evidence…
Behavioral, Information, and Monetary Interventions to Reduce Energy Consumption in Households: A Living Systematic Review and Network Meta-Analysis
Tarun M. Khanna, Diana Danilenko, Qianyi Wang, Luke A. Smith, Bhumika T. V., Aditya Narayan Rai, Jorge Sánchez Canales, Tim Repke, Max Callaghan, Mark Andor, Julian H. Elliott, Jan C. Minx
Campbell Systematic Reviews, 21(4). DOI: 10.1002/cl2.70070. Published 2025-11-04. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cl2.70070

Policymakers have little time left to prevent the worst impacts of climate change and limit global warming to well below two degrees. However, an up-to-date systematic assessment of the available scientific evidence is not always available to understand what climate policies work, to what extent, in what context, why, and for whom. This is also true for demand-side policies, including those that use behavioral change to reduce energy demand and the related carbon emissions. There is an ever-growing literature on policy interventions that target behavioral change among households, with new insights and evidence of their efficacy in different contexts. This living systematic review (LSR) and network meta-analysis (NMA) synthesizes this evidence to provide timely, rigorous, and up-to-date insights on this topic. Our LSR and NMA integrate the evidence available from multiple disciplines to answer the following questions: (1) to what extent can information, behavioral (including feedback, social comparison, and motivation), and monetary interventions reduce household energy consumption; (2) what is the relative effectiveness of these interventions; and (3) how effective are combinations of different interventions? In doing so, we also pilot an LSR for climate policy solutions and share learnings with the community.

To fulfill these objectives, we searched the academic and gray literature for experimental and quasi-experimental studies that quantitatively assessed the impact of behavioral, monetary, or information interventions (or a combination of these) on the energy consumption (including electricity and heat) of households in residential buildings. We searched the relevant databases (Web of Science Core Collection Citation Indexes, Scopus, JSTOR, RePEc, Google Scholar, and the gray-literature repository Policy Commons), retrieved over 109,000 potentially relevant article abstracts, and applied machine-learning algorithms to identify the papers most likely to be relevant. This update, which covers literature published through the end of December 2024, adds roughly 53,000 potentially relevant documents to the pool identified in Khanna et al. (2021). A team of four reviewers screened the titles and abstracts of studies flagged as potentially relevant by the machine-learning algorithm, followed by full-text assessment and double-coded data collection for the included studies. The effect sizes reported by different studies were harmonized to Cohen's d for synthesis. We used a multilevel random-effects model and NMA to calculate average intervention effects, adjusting our estimates for possible small-study effects (publication bias). The NMA allows us to visualize the relative efficacy of the interventions through rankograms and cumulative ranking probability plots. Unlike previous meta-analyses in this field, this study also implements…
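The harmonization and pooling steps this abstract describes can be sketched in a few lines. The snippet below is an illustrative simplification, not the review's actual model: it computes Cohen's d from group summary statistics and pools independent effect sizes with a DerSimonian–Laird random-effects estimator, whereas the review itself uses a multilevel model with NMA and small-study corrections.

```python
import numpy as np

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference using the pooled standard deviation."""
    sp = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / sp

def dersimonian_laird(d, v):
    """Random-effects pooled estimate with DerSimonian-Laird tau^2.

    d : effect sizes (one independent estimate per study)
    v : their sampling variances
    """
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                        # fixed-effect (inverse-variance) weights
    d_fe = np.sum(w * d) / np.sum(w)   # fixed-effect pooled estimate
    q = np.sum(w * (d - d_fe) ** 2)    # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)   # between-study variance
    w_re = 1.0 / (v + tau2)            # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return d_re, se, tau2
```

In this simplified setting, `cohens_d` converts each study's reported group means and standard deviations to a common scale, and `dersimonian_laird` returns the pooled estimate, its standard error, and the estimated between-study variance.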
SciDaSynth: Interactive Structured Data Extraction From Scientific Literature With Large Language Model
Xingbo Wang, Samantha L. Huey, Rui Sheng, Saurabh Mehta, Fei Wang
Campbell Systematic Reviews, 21(4). DOI: 10.1002/cl2.70073. Published 2025-11-03. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12581027/pdf/

The explosion of scientific literature has made the efficient and accurate extraction of structured data a critical component for advancing scientific knowledge and supporting evidence-based decision-making. However, existing tools often struggle to extract and structure multimodal, varied, and inconsistent information across documents into standardized formats. We introduce SciDaSynth, a novel interactive system powered by large language models that automatically generates structured data tables according to users' queries by integrating information from diverse sources, including text, tables, and figures. Furthermore, SciDaSynth supports efficient table data validation and refinement, featuring multi-faceted visual summaries and semantic grouping capabilities to resolve cross-document data inconsistencies. A within-subjects study with nutrition and NLP researchers demonstrates SciDaSynth's effectiveness in producing high-quality structured data more efficiently than baseline methods. We discuss design implications for human–AI collaborative systems supporting data extraction tasks.
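The validation-and-refinement step described above can be illustrated with a minimal sketch. The names below (`ExtractedRecord`, `UNIT_ALIASES`, `normalize`) are hypothetical and are not part of SciDaSynth's API; the sketch only shows the general pattern of coercing raw LLM output into a typed schema and flagging failed rows for human review, whereas SciDaSynth itself adds interactive visual summaries and semantic grouping.

```python
from dataclasses import dataclass

@dataclass
class ExtractedRecord:
    study_id: str
    outcome: str
    value: float
    unit: str

# Hypothetical normalization table. SciDaSynth's semantic grouping is more
# sophisticated; a lookup of surface variants is the simplest stand-in.
UNIT_ALIASES = {"milligram": "mg", "mgs": "mg", "mg": "mg",
                "microgram": "ug", "µg": "ug", "ug": "ug"}

def normalize(raw_rows):
    """Coerce raw LLM output (a list of dicts) into validated records.

    Rows with missing keys, unparseable numbers, or unknown units raise
    KeyError/ValueError and are collected for human review instead of
    silently entering the data table.
    """
    records, flagged = [], []
    for row in raw_rows:
        try:
            records.append(ExtractedRecord(
                study_id=str(row["study_id"]),
                outcome=str(row["outcome"]).strip().lower(),
                value=float(row["value"]),
                unit=UNIT_ALIASES[str(row["unit"]).strip().lower()],
            ))
        except (KeyError, ValueError):
            flagged.append(row)
    return records, flagged
```

Routing failures to a `flagged` list, rather than dropping or guessing, mirrors the human-in-the-loop design the paper argues for: the system structures what it can and surfaces the rest for the researcher to resolve.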
PROTOCOL: The Effects of Land Management Policies on the Environment and People in Low- and Middle-Income Countries: A Systematic Review
Pierre Marion, Ingunn Storhaug, Sanghwa Lee, Claudia Romero, Constanza Gonzalez Parrao, Birte Snilstveit
Campbell Systematic Reviews, 21(4). DOI: 10.1002/cl2.70062. Published 2025-10-27. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12558594/pdf/

Addressing the climate change and biodiversity loss crises while ensuring livelihoods are not negatively affected is a matter that requires urgent action. A recently published Evidence Gap Map (EGM) identified no recent systematic reviews on land management interventions. Drawing from this EGM, the review aims to examine and synthesise the latest evidence on what works, how, and at what cost to improve environmental and human welfare outcomes in land management in low- and middle-income countries. We will address the following research questions: (1) What are the effects of protected areas, land rights, and decentralisation interventions on environmental and poverty outcomes? Do effects vary by population, location, or other factors? (2) What are the barriers and enablers that impact the effectiveness of these interventions? (3) What is the cost-effectiveness of these interventions? The set of interventions will be based on the studies identified in the EGM, and we will search, appraise, and synthesise additional evidence on influencing factors and cost data.
School-Based Interventions for Reducing Disciplinary School Exclusion. An Updated Systematic Review
Sara Valdebenito, Hannah Gaffney, Maria Jose Arosemena-Burbano, Sydney Hitchcock, Darrick Jolliffe, Alex Sutherland
Campbell Systematic Reviews, 21(4). DOI: 10.1002/cl2.70063. Published 2025-10-22. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12541690/pdf/

School exclusion—commonly referred to as suspension—is a disciplinary response employed by school authorities to address student misbehaviour. Typically, it involves temporary removal from regular teaching or, in more serious cases, complete removal from the school premises. A substantial body of research has associated exclusion with adverse developmental outcomes. In response, various school-based interventions have been developed to reduce exclusion rates. While some programmes have shown promising effects, the evidence on their effectiveness remains inconclusive. This mixed-methods systematic review and multilevel meta-analysis updates the previous review by Valdebenito et al. (2018), which included literature published between 1980 and 2015; the present update extends the evidence base to studies published up to 2022. The primary aim of this review was to assess the effectiveness of school-based interventions in reducing disciplinary exclusions, with secondary aims focused on related behavioural outcomes, including conduct problems, delinquency, and substance use. Systematic searches conducted between November and December 2022 yielded over 11,000 references for quantitative studies. Following title and abstract screening, 777 records were reviewed at full text by two independent coders. Thirty-two studies met the inclusion criteria for meta-analysis, comprising 2765 effect sizes from 67 primary evaluations (1980–2022) and representing approximately 394,242 students. Meta-analysis was conducted using a multilevel random-effects model with robust variance estimation to account for the nested structure of the data.

Quantitative impact evaluations were eligible if they used a randomised controlled or quasi-experimental design, included both a control group and pre/post-test data, and used statistical methods to minimise selection bias (e.g., propensity score matching or a matched cohort design). Studies were excluded if they exhibited substantial baseline differences between treatment and control groups. The qualitative synthesis explored implementation barriers and facilitators based on nine UK-based process evaluations, identified through searches completed in September 2023. Process evaluations were included if they focused on the perceptions of stakeholders—teachers, students, or school leadership—within UK schools. Data collection followed two stages: initial selection based on titles, abstracts, and keywords, followed by full-text review. Two independent coders applied the inclusion criteria, extracted data, and resolved discrepancies with the principal investigators. All steps were documented to inform the PRISMA flow chart. To evaluate interventions reducing school exclusions, we conducted a multilevel meta-analysis using robust variance estimation. We explored heterogeneity via meta-regression (e.g., gender, intervention type), conducted sensitivity analyses for outliers and correlation structures, and assessed study quality using the EPOC…
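A review with 2765 effect sizes from 67 evaluations has many effect sizes per study, which is why the authors use a multilevel model with robust variance estimation. As a rough sketch of one common simplification of that idea, the function below averages correlated effect sizes within each study before pooling, assuming a single within-study correlation `rho` (the 0.8 default is a conventional assumption for illustration, not a value taken from this review):

```python
import numpy as np
from collections import defaultdict

def aggregate_within_study(study_ids, d, v, rho=0.8):
    """Collapse each study's correlated effect sizes to one estimate.

    study_ids : study label for each effect size
    d, v      : effect sizes and their sampling variances
    rho       : assumed within-study correlation among effect sizes

    Returns one (effect, variance) pair per study, ready for a standard
    random-effects pooling step. This is a simplification of robust
    variance estimation, not the review's actual multilevel model.
    """
    groups = defaultdict(list)
    for s, di, vi in zip(study_ids, d, v):
        groups[s].append((di, vi))
    out_d, out_v = [], []
    for es in groups.values():
        ds = np.array([e[0] for e in es])
        vs = np.array([e[1] for e in es])
        k = len(ds)
        # Variance of the mean of k equicorrelated estimates; uses the
        # average sampling variance as an approximation when they differ.
        out_d.append(ds.mean())
        out_v.append((vs.mean() / k) * (1 + (k - 1) * rho))
    return np.array(out_d), np.array(out_v)
```

Averaging first avoids giving a study more weight merely because it reports more outcomes; the `rho` term keeps the aggregated variance honest, since correlated estimates carry less independent information than `k` separate studies would.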