{"title":"Exploring the impact of language models, such as <scp>ChatGPT</scp>, on student learning and assessment","authors":"Araz Zirar","doi":"10.1002/rev3.3433","DOIUrl":null,"url":null,"abstract":"Abstract Recent developments in language models, such as ChatGPT, have sparked debate. These tools can help, for example, dyslexic people, to write formal emails from a prompt and can be used by students to generate assessed work. Proponents argue that language models enhance the student experience and academic achievement. Those concerned argue that language models impede student learning and call for a cautious approach to their adoption. This paper aims to provide insights into the role of language models in reshaping student learning and assessment in higher education. For that purpose, it probes the impact of language models, specifically ChatGPT, on student learning and assessment. It also explores the implications of language models in higher education settings, focusing on their effects on pedagogy and evaluation. Using the Scopus database, a search protocol was employed to identify 25 articles based on relevant keywords and selection criteria. The developed themes suggest that language models may alter how students learn and are assessed. While language models can provide information for problem‐solving and critical thinking, reliance on them without critical evaluation adversely impacts student learning. Language models can also generate teaching and assessment material and evaluate student responses, but their role should be limited to ‘play a specific and defined role’. Integration of language models in student learning and assessment is only helpful if students and educators play an active and effective role in checking the generated material's validity, reliability and accuracy. Propositions and potential research questions are included to encourage future research.","PeriodicalId":45076,"journal":{"name":"Review of Education","volume":"24 ","pages":"0"},"PeriodicalIF":2.7000,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Review of Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/rev3.3433","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0
Abstract
Recent developments in language models, such as ChatGPT, have sparked debate. These tools can help dyslexic people, for example, write formal emails from a prompt, and they can be used by students to generate assessed work. Proponents argue that language models enhance the student experience and academic achievement. Critics argue that language models impede student learning and call for a cautious approach to their adoption. This paper aims to provide insights into the role of language models in reshaping student learning and assessment in higher education. To that end, it probes the impact of language models, specifically ChatGPT, on student learning and assessment. It also explores the implications of language models in higher education settings, focusing on their effects on pedagogy and evaluation. Using the Scopus database, a search protocol was employed to identify 25 articles based on relevant keywords and selection criteria. The themes developed from these articles suggest that language models may alter how students learn and how they are assessed. While language models can provide information for problem-solving and critical thinking, reliance on them without critical evaluation adversely impacts student learning. Language models can also generate teaching and assessment material and evaluate student responses, but they should be limited to 'play a specific and defined role'. Integrating language models into student learning and assessment is only helpful if students and educators play an active and effective role in checking the validity, reliability and accuracy of the generated material. Propositions and potential research questions are included to encourage future research.