Comparing the Effect of Contextualized Versus Generic Automated Feedback on Students' Scientific Argumentation

Margarita Olivera-Aguilar, Hee-Sun Lee, Amy Pallant, Vinetha Belur, Matthew Mulholland, Ou Lydia Liu

ETS Research Report Series, vol. 2022, no. 1, pp. 1-14. Published January 19, 2022. DOI: https://doi.org/10.1002/ets2.12344
Abstract
This study uses a computerized formative assessment system that provides automated scoring and feedback to help students write scientific arguments in a climate change curriculum. We compared the effect of contextualized versus generic automated feedback on students' explanations of scientific claims and attributions of uncertainty to those claims. Classes were randomly assigned to the contextualized feedback condition (227 students from 11 classes) or to the generic feedback condition (138 students from 9 classes). The results indicate that the formative assessment helped students improve both their explanation scores and their uncertainty attribution scores, with larger gains found for uncertainty attribution. Although the contextualized feedback was associated with higher final scores, this effect was moderated by the number of revisions made, the initial score, and gender. We discuss how the results might be related to students' familiarity with writing scientific explanations versus uncertainty attributions at school.
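The abstract does not specify the statistical model used, but a moderation analysis of this kind is commonly expressed as a regression with condition-by-moderator interaction terms. The sketch below is a minimal, hypothetical illustration (not the authors' actual analysis): variable names, the synthetic data, and the OLS specification are all assumptions. Because classes, not students, were randomized, the paper's analysis likely also accounted for clustering (e.g., with a multilevel model), which this sketch omits for brevity.

```python
# Hypothetical sketch of a moderation analysis: does the effect of feedback
# condition on final scores depend on revisions, initial score, and gender?
# Data are synthetic; only the group sizes (227 vs. 138) come from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 365  # 227 contextualized + 138 generic students, as in the study

df = pd.DataFrame({
    "condition": np.repeat(["contextualized", "generic"], [227, 138]),
    "revisions": rng.integers(0, 5, n),       # assumed 0-4 revisions
    "initial_score": rng.integers(0, 7, n),   # assumed 0-6 rubric scale
    "gender": rng.choice(["female", "male"], n),
})
# Synthetic outcome: a condition effect that grows with the number of
# revisions, mimicking the reported moderation pattern.
df["final_score"] = (
    df["initial_score"]
    + 0.5 * df["revisions"] * (df["condition"] == "contextualized")
    + rng.normal(0, 1, n)
)

# Condition-by-moderator interactions carry the moderation effects:
# the formula expands to main effects plus condition:revisions,
# condition:initial_score, and condition:gender terms.
model = smf.ols(
    "final_score ~ condition * (revisions + initial_score + gender)",
    data=df,
).fit()
print(model.summary())
```

In this framing, a significant interaction coefficient (e.g., condition:revisions) would indicate that the advantage of contextualized over generic feedback varies with that moderator, which is the kind of pattern the abstract reports.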