Emily R. Fyfe, J. D. Leeuw, Paulo F. Carvalho, Robert L. Goldstone, Janelle Sherman, D. Admiraal, Laura K. Alford, Alison Bonner, C. Brassil, Christopher A. Brooks, Tracey Carbonetto, Sau Hou Chang, Laura Cruz, Melina T. Czymoniewicz-Klippel, F. Daniel, M. Driessen, Noel Habashy, Carrie Hanson-Bradley, E. Hirt, Virginia Hojas Carbonell, Daniel K. Jackson, Shay Jones, Jennifer L. Keagy, Brandi Keith, Sarah J. Malmquist, B. McQuarrie, K. Metzger, Maung Min, S. Patil, Ryan S. Patrick, Etienne Pelaprat, Maureen L. Petrunich-Rutherford, Meghan R. Porter, Kristina K. Prescott, Cathrine Reck, Terri Renner, E. Robbins, Adam R. Smith, P. Stuczynski, J. Thompson, N. Tsotakos, J. Turk, Kyle Unruh, Jennifer Webb, S. Whitehead, E. Wisniewski, Ke Anne Zhang, Benjamin A. Motz
{"title":"许多课程1:评估即时反馈与延迟反馈在许多大学课程中的推广效果","authors":"Emily R. Fyfe, J. D. Leeuw, Paulo F. Carvalho, Robert L. Goldstone, Janelle Sherman, D. Admiraal, Laura K. Alford, Alison Bonner, C. Brassil, Christopher A. Brooks, Tracey Carbonetto, Sau Hou Chang, Laura Cruz, Melina T. Czymoniewicz-Klippel, F. Daniel, M. Driessen, Noel Habashy, Carrie Hanson-Bradley, E. Hirt, Virginia Hojas Carbonell, Daniel K. Jackson, Shay Jones, Jennifer L. Keagy, Brandi Keith, Sarah J. Malmquist, B. McQuarrie, K. Metzger, Maung Min, S. Patil, Ryan S. Patrick, Etienne Pelaprat, Maureen L. Petrunich-Rutherford, Meghan R. Porter, Kristina K. Prescott, Cathrine Reck, Terri Renner, E. Robbins, Adam R. Smith, P. Stuczynski, J. Thompson, N. Tsotakos, J. Turk, Kyle Unruh, Jennifer Webb, S. Whitehead, E. Wisniewski, Ke Anne Zhang, Benjamin A. Motz","doi":"10.1177/25152459211027575","DOIUrl":null,"url":null,"abstract":"Psychology researchers have long attempted to identify educational practices that improve student learning. However, experimental research on these practices is often conducted in laboratory contexts or in a single course, which threatens the external validity of the results. In this article, we establish an experimental paradigm for evaluating the benefits of recommended practices across a variety of authentic educational contexts—a model we call ManyClasses. The core feature is that researchers examine the same research question and measure the same experimental effect across many classes spanning a range of topics, institutions, teacher implementations, and student populations. We report the first ManyClasses study, in which we examined how the timing of feedback on class assignments, either immediate or delayed by a few days, affected subsequent performance on class assessments. Across 38 classes, the overall estimate for the effect of feedback timing was 0.002 (95% highest density interval = [−0.05, 0.05]), which indicates that there was no effect of immediate feedback compared with delayed feedback on student learning that generalizes across classes. Furthermore, there were no credibly nonzero effects for 40 preregistered moderators related to class-level and student-level characteristics. Yet our results provide hints that in certain kinds of classes, which were undersampled in the current study, there may be modest advantages for delayed feedback. More broadly, these findings provide insights regarding the feasibility of conducting within-class randomized experiments across a range of naturally occurring learning environments.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":15.6000,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/25152459211027575","citationCount":"19","resultStr":"{\"title\":\"ManyClasses 1: Assessing the Generalizable Effect of Immediate Feedback Versus Delayed Feedback Across Many College Classes\",\"authors\":\"Emily R. Fyfe, J. D. Leeuw, Paulo F. Carvalho, Robert L. Goldstone, Janelle Sherman, D. Admiraal, Laura K. Alford, Alison Bonner, C. Brassil, Christopher A. Brooks, Tracey Carbonetto, Sau Hou Chang, Laura Cruz, Melina T. Czymoniewicz-Klippel, F. Daniel, M. Driessen, Noel Habashy, Carrie Hanson-Bradley, E. Hirt, Virginia Hojas Carbonell, Daniel K. Jackson, Shay Jones, Jennifer L. Keagy, Brandi Keith, Sarah J. Malmquist, B. McQuarrie, K. Metzger, Maung Min, S. Patil, Ryan S. 
Patrick, Etienne Pelaprat, Maureen L. Petrunich-Rutherford, Meghan R. Porter, Kristina K. Prescott, Cathrine Reck, Terri Renner, E. Robbins, Adam R. Smith, P. Stuczynski, J. Thompson, N. Tsotakos, J. Turk, Kyle Unruh, Jennifer Webb, S. Whitehead, E. Wisniewski, Ke Anne Zhang, Benjamin A. Motz\",\"doi\":\"10.1177/25152459211027575\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Psychology researchers have long attempted to identify educational practices that improve student learning. However, experimental research on these practices is often conducted in laboratory contexts or in a single course, which threatens the external validity of the results. In this article, we establish an experimental paradigm for evaluating the benefits of recommended practices across a variety of authentic educational contexts—a model we call ManyClasses. The core feature is that researchers examine the same research question and measure the same experimental effect across many classes spanning a range of topics, institutions, teacher implementations, and student populations. We report the first ManyClasses study, in which we examined how the timing of feedback on class assignments, either immediate or delayed by a few days, affected subsequent performance on class assessments. Across 38 classes, the overall estimate for the effect of feedback timing was 0.002 (95% highest density interval = [−0.05, 0.05]), which indicates that there was no effect of immediate feedback compared with delayed feedback on student learning that generalizes across classes. Furthermore, there were no credibly nonzero effects for 40 preregistered moderators related to class-level and student-level characteristics. Yet our results provide hints that in certain kinds of classes, which were undersampled in the current study, there may be modest advantages for delayed feedback. More broadly, these findings provide insights regarding the feasibility of conducting within-class randomized experiments across a range of naturally occurring learning environments.\",\"PeriodicalId\":55645,\"journal\":{\"name\":\"Advances in Methods and Practices in Psychological Science\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":15.6000,\"publicationDate\":\"2021-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1177/25152459211027575\",\"citationCount\":\"19\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advances in Methods and Practices in Psychological Science\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1177/25152459211027575\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in Methods and Practices in Psychological Science","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/25152459211027575","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY","Score":null,"Total":0}
ManyClasses 1: Assessing the Generalizable Effect of Immediate Feedback Versus Delayed Feedback Across Many College Classes
Psychology researchers have long attempted to identify educational practices that improve student learning. However, experimental research on these practices is often conducted in laboratory contexts or in a single course, which threatens the external validity of the results. In this article, we establish an experimental paradigm for evaluating the benefits of recommended practices across a variety of authentic educational contexts—a model we call ManyClasses. The core feature is that researchers examine the same research question and measure the same experimental effect across many classes spanning a range of topics, institutions, teacher implementations, and student populations. We report the first ManyClasses study, in which we examined how the timing of feedback on class assignments, either immediate or delayed by a few days, affected subsequent performance on class assessments. Across 38 classes, the overall estimate for the effect of feedback timing was 0.002 (95% highest density interval = [−0.05, 0.05]), which indicates that there was no effect of immediate feedback compared with delayed feedback on student learning that generalizes across classes. Furthermore, there were no credibly nonzero effects for 40 preregistered moderators related to class-level and student-level characteristics. Yet our results provide hints that in certain kinds of classes, which were undersampled in the current study, there may be modest advantages for delayed feedback. More broadly, these findings provide insights regarding the feasibility of conducting within-class randomized experiments across a range of naturally occurring learning environments.
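The overall estimate and 95% highest density interval reported above come from an analysis that pools the feedback-timing effect across classes. As a rough illustration only, and not the authors' preregistered model, the sketch below shows how such a cross-class effect could be estimated with a simple Bayesian hierarchical model in PyMC and summarized with a 95% HDI. The simulated data, priors, and variable names (class_idx, condition, score) are all assumptions made for this example.

```python
# Illustrative sketch only: a hierarchical estimate of an immediate-vs.-delayed
# feedback effect across 38 simulated classes, summarized with a 95% HDI.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(0)
n_classes, n_per_class = 38, 50
class_idx = np.repeat(np.arange(n_classes), n_per_class)      # which class each student is in
condition = rng.integers(0, 2, size=class_idx.size)           # 0 = delayed, 1 = immediate feedback
true_class_effect = rng.normal(0.0, 0.03, size=n_classes)     # hypothetical near-zero class effects
score = rng.normal(0.0, 1.0, size=class_idx.size) + condition * true_class_effect[class_idx]

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 0.5)                  # overall (generalizable) effect of feedback timing
    tau = pm.HalfNormal("tau", 0.5)                 # between-class variability in that effect
    class_effect = pm.Normal("class_effect", mu, tau, shape=n_classes)
    class_mean = pm.Normal("class_mean", 0.0, 1.0, shape=n_classes)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("obs", class_mean[class_idx] + class_effect[class_idx] * condition,
              sigma, observed=score)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

print(az.hdi(idata, var_names=["mu"], hdi_prob=0.95))  # 95% HDI for the overall effect
```

In a model of this kind, `mu` plays the role of the generalizable effect: an HDI that is narrow and centered near zero, like the [−0.05, 0.05] interval reported in the abstract, indicates no credible overall advantage for either feedback timing across classes.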
Journal introduction:
In 2021, Advances in Methods and Practices in Psychological Science (AMPPS) will transition to an open-access journal. The journal publishes innovative developments in research methods, practices, and conduct within psychological science, spanning a wide range of areas and topics and welcoming work that integrates methodological and analytical questions.
AMPPS aims to bring the latest methodological advances to researchers across disciplines, including those who are not methodological experts. Accordingly, the journal seeks submissions that are accessible to readers with varied research interests and that reflect the breadth of research in psychological science.
AMPPS welcomes articles that communicate advances in methods, practices, metascience, and empirical best practices; tutorials, commentaries, and simulation studies on new techniques and research tools; papers that bring advances from specialized subfields to a broader audience; and Registered Replication Reports, which replicate important findings from previously published studies.
The transition to open access is intended to increase accessibility and promote the dissemination of new developments in research methods and practices within psychological science.