{"title":"The Logic of Generalization From Systematic Reviews and Meta-Analyses of Impact Evaluations.","authors":"Julia H Littell","doi":"10.1177/0193841X241227481","DOIUrl":null,"url":null,"abstract":"<p><p>Systematic reviews and meta-analyses are viewed as potent tools for generalized causal inference. These reviews are routinely used to inform decision makers about expected effects of interventions. However, the logic of generalization from research reviews to diverse policy and practice contexts is not well developed. Building on sampling theory, concerns about epistemic uncertainty, and principles of generalized causal inference, this article presents a pragmatic approach to generalizability assessment for use with systematic reviews and meta-analyses. This approach is applied to two systematic reviews and meta-analyses of effects of \"evidence-based\" psychosocial interventions for youth and families. Evaluations included in systematic reviews are not necessarily representative of populations and treatments of interest. Generalizability of results is limited by high risks of bias, uncertain estimates, and insufficient descriptive data from impact evaluations. Systematic reviews and meta-analyses can be used to test generalizability claims, explore heterogeneity, and identify potential moderators of effects. These reviews can also produce pooled estimates that are not representative of any larger sets of studies, programs, or people. 
Further work is needed to improve the conduct and reporting of impact evaluations and systematic reviews, and to develop practical approaches to generalizability assessment and guide applications of interventions in diverse policy and practice contexts.</p>","PeriodicalId":47533,"journal":{"name":"Evaluation Review","volume":" ","pages":"427-460"},"PeriodicalIF":3.0000,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evaluation Review","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1177/0193841X241227481","RegionNum":4,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/23 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
Systematic reviews and meta-analyses are viewed as potent tools for generalized causal inference. These reviews are routinely used to inform decision makers about expected effects of interventions. However, the logic of generalization from research reviews to diverse policy and practice contexts is not well developed. Building on sampling theory, concerns about epistemic uncertainty, and principles of generalized causal inference, this article presents a pragmatic approach to generalizability assessment for use with systematic reviews and meta-analyses. This approach is applied to two systematic reviews and meta-analyses of effects of "evidence-based" psychosocial interventions for youth and families. Evaluations included in systematic reviews are not necessarily representative of populations and treatments of interest. Generalizability of results is limited by high risks of bias, uncertain estimates, and insufficient descriptive data from impact evaluations. Systematic reviews and meta-analyses can be used to test generalizability claims, explore heterogeneity, and identify potential moderators of effects. These reviews can also produce pooled estimates that are not representative of any larger sets of studies, programs, or people. Further work is needed to improve the conduct and reporting of impact evaluations and systematic reviews, and to develop practical approaches to generalizability assessment and guide applications of interventions in diverse policy and practice contexts.
Journal description:
Evaluation Review is the forum for researchers, planners, and policy makers engaged in the development, implementation, and utilization of studies aimed at the betterment of the human condition. The Editors invite submission of papers reporting the findings of evaluation studies in such fields as child development, health, education, income security, manpower, mental health, criminal justice, and the physical and social environments. In addition, Evaluation Review contains articles on methodological developments, discussions of the state of the art, and commentaries on issues related to the application of research results. Special features include periodic review essays, "research briefs", and "craft reports".