{"title":"Making Sense of Effect Sizes: Systematic Differences in Intervention Effect Sizes by Outcome Measure Type","authors":"Betsy Wolf, Erica Harbatkin","doi":"10.1080/19345747.2022.2071364","DOIUrl":null,"url":null,"abstract":"Abstract One challenge in understanding “what works” in education is that effect sizes may not be comparable across studies, raising questions for practitioners and policymakers using research to select interventions. One factor that consistently relates to the magnitude of effect sizes is the type of outcome measure. This article uses study data from the What Works Clearinghouse to determine average effect sizes by outcome measure type. Outcome measures were categorized by whether the group who developed the measure potentially had a stake in the intervention (non-independent) or not (independent). Using meta-analysis and controlling for study quality and intervention characteristics, we find larger average effect sizes for non-independent measures than for independent measures. Results suggest that larger effect sizes for non-independent measures are not due to differences in implementation fidelity, study quality, or intervention or sample characteristics. Instead, non-independent and independent measures appear to represent partially but minimally overlapping latent constructs. Findings call into question whether policymakers and practitioners should make decisions based on non-independent measures when they are ultimately responsible for improving outcomes on independent measures.","PeriodicalId":47260,"journal":{"name":"Journal of Research on Educational Effectiveness","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Research on Educational Effectiveness","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1080/19345747.2022.2071364","RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 9
Abstract
One challenge in understanding “what works” in education is that effect sizes may not be comparable across studies, raising questions for practitioners and policymakers using research to select interventions. One factor that consistently relates to the magnitude of effect sizes is the type of outcome measure. This article uses study data from the What Works Clearinghouse to determine average effect sizes by outcome measure type. Outcome measures were categorized by whether the group who developed the measure potentially had a stake in the intervention (non-independent) or not (independent). Using meta-analysis and controlling for study quality and intervention characteristics, we find larger average effect sizes for non-independent measures than for independent measures. Results suggest that larger effect sizes for non-independent measures are not due to differences in implementation fidelity, study quality, or intervention or sample characteristics. Instead, non-independent and independent measures appear to represent partially but minimally overlapping latent constructs. Findings call into question whether policymakers and practitioners should make decisions based on non-independent measures when they are ultimately responsible for improving outcomes on independent measures.
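The abstract's central comparison, average effect sizes for non-independent versus independent measures, can be illustrated with a meta-regression in which measure type enters as a moderator. The sketch below is a minimal fixed-effect version using inverse-variance weighted least squares; the effect sizes, variances, and flags are fabricated for illustration only, and this is not the authors' actual model, which controls for study quality and intervention characteristics within a full meta-analysis.

    import numpy as np

    # Hypothetical data: per-study standardized effect sizes (e.g., Hedges' g),
    # their sampling variances, and a 0/1 flag for non-independent outcome measures.
    g = np.array([0.45, 0.60, 0.10, 0.15, 0.52, 0.08])
    v = np.array([0.020, 0.030, 0.015, 0.025, 0.040, 0.018])
    non_independent = np.array([1, 1, 0, 0, 1, 0])

    # Fixed-effect meta-regression: g_i = b0 + b1 * non_independent_i + e_i,
    # estimated by weighted least squares with weights w_i = 1 / v_i.
    X = np.column_stack([np.ones_like(g), non_independent])
    W = np.diag(1.0 / v)
    XtWX = X.T @ W @ X
    beta = np.linalg.solve(XtWX, X.T @ W @ g)
    se = np.sqrt(np.diag(np.linalg.inv(XtWX)))

    print(f"avg ES, independent measures:   {beta[0]:.3f} (SE {se[0]:.3f})")
    print(f"difference for non-independent: {beta[1]:.3f} (SE {se[1]:.3f})")

The coefficient on the non-independent flag estimates the gap in average effect size between measure types; a random-effects version would add an estimate of between-study variance (e.g., DerSimonian–Laird tau-squared) to each study's weight before fitting.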
About the Journal:
As the flagship publication for the Society for Research on Educational Effectiveness, the Journal of Research on Educational Effectiveness (JREE) publishes original articles from the multidisciplinary community of researchers who are committed to applying principles of scientific inquiry to the study of educational problems. Articles published in JREE should advance our knowledge of factors important for educational success and/or improve our ability to conduct further disciplined studies of pressing educational problems. JREE welcomes manuscripts that fit into one of the following categories: (1) intervention, evaluation, and policy studies; (2) theory, contexts, and mechanisms; and (3) methodological studies. The first category includes studies that focus on process and implementation and seek to demonstrate causal claims in educational research. The second category includes meta-analyses and syntheses, descriptive studies that illuminate educational conditions and contexts, and studies that rigorously investigate educational processes and mechanisms. The third category includes studies that advance our understanding of theoretical and technical features of measurement and research design and describe advances in data analysis and data modeling. To establish a stronger connection between scientific evidence and educational practice, studies submitted to JREE should focus on pressing problems found in classrooms and schools. Studies that help advance our understanding and demonstrate effectiveness related to challenges in reading, mathematics education, and science education are especially welcome, as are studies related to cognitive functions, social processes, organizational factors, and cultural features that mediate and/or moderate critical educational outcomes. On occasion, invited responses to JREE articles and rejoinders to those responses will be included in an issue.