Elisabeth Bauer, Michael Sailer, Frank Niklas, Samuel Greiff, Sven Sarbu-Rothsching, Jan M. Zottmann, Jan Kiesewetter, Matthias Stadler, Martin R. Fischer, Tina Seidel, Detlef Urhahne, Maximilian Sailer, Frank Fischer
{"title":"AI-Based Adaptive Feedback in Simulations for Teacher Education: An Experimental Replication in the Field","authors":"Elisabeth Bauer, Michael Sailer, Frank Niklas, Samuel Greiff, Sven Sarbu-Rothsching, Jan M. Zottmann, Jan Kiesewetter, Matthias Stadler, Martin R. Fischer, Tina Seidel, Detlef Urhahne, Maximilian Sailer, Frank Fischer","doi":"10.1111/jcal.13123","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Artificial intelligence, particularly natural language processing (NLP), enables automating the formative assessment of written task solutions to provide adaptive feedback automatically. A laboratory study found that, compared with static feedback (an expert solution), adaptive feedback automated through artificial neural networks enhanced preservice teachers' diagnostic reasoning in a digital case-based simulation. However, the effectiveness of the simulation with the different feedback types and the generalizability to field settings remained unclear.</p>\n </section>\n \n <section>\n \n <h3> Objectives</h3>\n \n <p>We tested the generalizability of the previous findings and the effectiveness of a single simulation session with either feedback type in an experimental field study.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>In regular online courses, 332 preservice teachers at five German universities participated in one of three randomly assigned groups: (1) a simulation group with NLP-based adaptive feedback, (2) a simulation group with static feedback and (3) a no-simulation control group. We analysed the effect of the simulation with the two feedback types on participants' judgement accuracy and justification quality.</p>\n </section>\n \n <section>\n \n <h3> Results and Conclusions</h3>\n \n <p>Compared with static feedback, adaptive feedback significantly enhanced justification quality but not judgement accuracy. 
Only the simulation with adaptive feedback significantly benefited learners' justification quality over the no-simulation control group, while no significant differences in judgement accuracy were found.</p>\n \n <p>Our field experiment replicated the findings of the laboratory study. Only a simulation session with adaptive feedback, unlike static feedback, seems to enhance learners' justification quality but not judgement accuracy. Under field conditions, learners require adaptive support in simulations and can benefit from NLP-based adaptive feedback using artificial neural networks.</p>\n </section>\n </div>","PeriodicalId":48071,"journal":{"name":"Journal of Computer Assisted Learning","volume":"41 1","pages":""},"PeriodicalIF":5.1000,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jcal.13123","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computer Assisted Learning","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/jcal.13123","RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0
Abstract
Background
Artificial intelligence, particularly natural language processing (NLP), enables automated formative assessment of written task solutions and thus the automatic provision of adaptive feedback. A laboratory study found that, compared with static feedback (an expert solution), adaptive feedback automated through artificial neural networks enhanced preservice teachers' diagnostic reasoning in a digital case-based simulation. However, the effectiveness of the simulation with the different feedback types and the generalizability to field settings remained unclear.
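The abstract does not describe the internals of the NLP-based assessment; the sketch below only illustrates the general idea of automated formative assessment: classify which diagnostic aspects a written solution covers, then generate feedback targeting the gaps. A simple keyword heuristic stands in for the trained artificial neural network, and all aspect names, keywords, and feedback texts are hypothetical.

```python
# Minimal sketch of an automated formative-assessment pipeline.
# A trained artificial neural network would normally score the text;
# a keyword heuristic stands in here so the example is self-contained.
# Aspect names, keywords, and feedback wording are illustrative assumptions.

DIAGNOSTIC_ASPECTS = {
    "evidence": ["test score", "observation", "homework"],
    "hypothesis": ["dyslexia", "dyscalculia", "motivation"],
    "conclusion": ["therefore", "suggests", "indicates"],
}

def assess(solution_text: str) -> dict:
    """Flag which diagnostic aspects a written solution covers."""
    text = solution_text.lower()
    return {
        aspect: any(kw in text for kw in keywords)
        for aspect, keywords in DIAGNOSTIC_ASPECTS.items()
    }

def adaptive_feedback(coverage: dict) -> str:
    """Turn the assessment into feedback that targets the missing aspects."""
    missing = [aspect for aspect, present in coverage.items() if not present]
    if not missing:
        return "Your justification covers all key diagnostic aspects."
    return "Consider addressing: " + ", ".join(missing)

coverage = assess("The low test score suggests dyscalculia.")
print(adaptive_feedback(coverage))
```

Static feedback, by contrast, would return the same expert solution regardless of what `assess` finds, which is the contrast the study manipulates.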
Objectives
We tested the generalizability of the previous findings and the effectiveness of a single simulation session with either feedback type in an experimental field study.
Methods
In regular online courses, 332 preservice teachers at five German universities participated in one of three randomly assigned groups: (1) a simulation group with NLP-based adaptive feedback, (2) a simulation group with static feedback and (3) a no-simulation control group. We analysed the effect of the simulation with the two feedback types on participants' judgement accuracy and justification quality.
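The abstract states only that the 332 participants were randomly assigned to the three groups, not how; the sketch below shows one common way to do this (seeded block randomization for balanced group sizes) and is an illustrative assumption, not the study's actual procedure.

```python
import random

# The three experimental conditions named in the abstract.
CONDITIONS = ["adaptive_feedback", "static_feedback", "no_simulation_control"]

def assign_groups(participant_ids, seed=42):
    """Randomly assign participants to the three conditions.

    Shuffling first and then cycling through the conditions keeps
    group sizes as balanced as possible (block randomization).
    The seed makes the assignment reproducible.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

groups = assign_groups(range(332))
```

With 332 participants, this yields groups of 111, 111, and 110.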
Results and Conclusions
Compared with static feedback, adaptive feedback significantly enhanced justification quality but not judgement accuracy. Only the simulation with adaptive feedback significantly benefited learners' justification quality over the no-simulation control group, while no significant differences in judgement accuracy were found.
Our field experiment replicated the findings of the laboratory study. Only a simulation session with adaptive feedback, unlike static feedback, seems to enhance learners' justification quality but not judgement accuracy. Under field conditions, learners require adaptive support in simulations and can benefit from NLP-based adaptive feedback using artificial neural networks.
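Group comparisons like those reported above are typically summarized with a standardized effect size alongside the significance test. The sketch below computes Cohen's d (pooled standard deviation) from scratch; the input scores are made-up toy numbers purely to exercise the function, not the study's data.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups,
    using the pooled standard deviation as the denominator."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Toy justification-quality scores, fabricated for illustration only.
adaptive = [0.72, 0.68, 0.75, 0.70, 0.74]
static = [0.61, 0.65, 0.59, 0.63, 0.60]
d = cohens_d(adaptive, static)
```

A positive d indicates the first group scored higher; by convention, values around 0.2, 0.5, and 0.8 are read as small, medium, and large effects.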
Journal Description:
The Journal of Computer Assisted Learning is an international peer-reviewed journal which covers the whole range of uses of information and communication technology to support learning and knowledge exchange. It aims to provide a medium for communication among researchers as well as a channel linking researchers, practitioners, and policy makers. JCAL is also a rich source of material for master's and PhD students in areas such as educational psychology, the learning sciences, instructional technology, instructional design, collaborative learning, intelligent learning systems, learning analytics, open, distance and networked learning, and educational evaluation and assessment. This is the case for formal (e.g., schools), non-formal (e.g., workplace learning) and informal (e.g., museums and libraries) learning situations and environments. Volumes often include a Special Issue, which provides readers with a broad and in-depth perspective on a specific topic.

First published in 1985, JCAL continues to aim at making the outcomes of contemporary research and experience accessible. During this period there have been major technological advances offering new opportunities and approaches in the use of a wide range of technologies to support learning and knowledge transfer more generally. There is currently much emphasis on the use of network functionality and the challenges its appropriate use poses to teachers/tutors working with students locally and at a distance.

JCAL welcomes:
- Empirical reports, single studies or programmatic series of studies on the use of computers and information technologies in learning and assessment
- Critical and original meta-reviews of literature on the use of computers for learning
- Empirical studies on the design and development of innovative technology-based systems for learning
- Conceptual articles on issues relating to the Aims and Scope