Annette Kinder, Fiona J. Briese, Marius Jacobs, Niclas Dern, Niels Glodny, Simon Jacobs, Samuel Leßmann
Title: Effects of adaptive feedback generated by a large language model: A case study in teacher education
Journal: Computers and Education Artificial Intelligence, Volume 8, Article 100349 (Q1, Social Sciences)
DOI: 10.1016/j.caeai.2024.100349
Publication date: 2024-12-17 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2666920X24001528
Cited by: 0
Abstract
This study investigates the effects of adaptive feedback generated by large language models (LLMs), specifically ChatGPT, on performance in a written diagnostic reasoning task among German pre-service teachers (n = 269). Additionally, the study analyzed user evaluations of the feedback and feedback processing time. Diagnostic reasoning, a critical skill for making informed pedagogical decisions, was assessed through a writing task integrated into a teacher preparation course. Participants were randomly assigned to receive either adaptive feedback generated by ChatGPT or static feedback prepared in advance by a human expert, which was identical for all participants in that condition, before completing a second writing task. The findings reveal that ChatGPT-generated adaptive feedback significantly improved the quality of justification in the students’ writing compared to the static feedback written by an expert. However, no significant difference was observed in decision accuracy between the two groups, suggesting that the type and source of feedback did not impact decision-making processes. Additionally, students who had received LLM-generated adaptive feedback spent more time processing the feedback and subsequently wrote longer texts, indicating longer engagement with the feedback and the task. Participants also rated adaptive feedback as more useful and interesting than static feedback, aligning with previous research on the motivational benefits of adaptive feedback. The study highlights the potential of LLMs like ChatGPT as valuable tools in educational settings, particularly in large courses where providing adaptive feedback is challenging.