Empirical Benchmarks for Planning and Interpreting Causal Effects of Community College Interventions

Michael Weiss, Marie-Andrée Somers, Colin Hill
Journal of Postsecondary Student Success, published 2023-10-11. DOI: 10.33009/fsop_jpss132759 (https://doi.org/10.33009/fsop_jpss132759)

Abstract

Randomized controlled trials (RCTs) are an increasingly common research design for evaluating the effectiveness of community college (CC) interventions. However, when planning an RCT evaluation of a CC intervention, there is limited empirical information about the size of effects an intervention might reasonably achieve, which can lead to under- or over-powered studies. Relatedly, when interpreting results from an evaluation of a CC intervention, there is limited empirical information for contextualizing the magnitude of an effect estimate relative to the effects observed in past evaluations. We provide empirical benchmarks to help with the planning and interpretation of community college evaluations. To do so, we present findings across well-executed RCTs of 39 CC interventions that are part of a unique dataset known as The Higher Education Randomized Controlled Trials (THE-RCT). The analyses include 21,163–65,604 students (depending on outcome and semester) enrolled at 44 institutions. Outcomes include enrollment, credits earned, and credential attainment. Effect size distributions are presented by outcome and semester. For example, across the interventions examined, the mean effect on cumulative credits earned after three semesters is 1.14 credits; effects of around 0.16 credits fall at the 25th percentile of the distribution, and effects of around 1.69 credits fall at the 75th percentile. This work begins to provide empirical benchmarks for planning and interpreting effects in CC evaluations. A public database of effect sizes is available to researchers (https://www.mdrc.org/the-rct-empirical-benchmarks).
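The benchmarking logic described in the abstract — summarizing a distribution of effect estimates by its mean and quartiles, then locating a new estimate within that distribution — can be sketched in a few lines of Python. The effect values below are purely illustrative placeholders, not figures from the THE-RCT database.

```python
from statistics import mean, quantiles

# Hypothetical effect estimates (cumulative credits earned after
# three semesters) from a set of evaluations. Illustrative values
# only -- not data from THE-RCT.
effects = [0.0, 0.1, 0.2, 0.5, 0.9, 1.1, 1.4, 1.7, 2.0, 3.5]

# Quartiles of the effect-size distribution serve as empirical
# benchmarks; "inclusive" treats the list as the full population
# of observed estimates.
p25, p50, p75 = quantiles(effects, n=4, method="inclusive")

print(f"mean = {mean(effects):.2f}")
print(f"p25 = {p25:.2f}, median = {p50:.2f}, p75 = {p75:.2f}")

# An evaluator could then situate a new effect estimate relative
# to these benchmarks when interpreting its magnitude.
new_estimate = 1.2
position = "above" if new_estimate > p50 else "at or below"
print(f"A new estimate of {new_estimate} credits is {position} the median.")
```

A power analysis for a planned RCT would use the same distribution in reverse: choosing a target effect near, say, the median of past estimates rather than an optimistic value, to avoid the under-powered designs the abstract warns about.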