ChatGPT and Assessment in Higher Education: A Magic Wand or a Disruptor?

Electronic Journal of e-Learning · IF 2.4 · Q1 (Education & Educational Research) · Pub Date: 2024-02-09 · DOI: 10.34190/ejel.21.5.3114
Maira Klyshbekova, Pamela Abbott
{"title":"ChatGPT 和高等教育评估:魔杖还是破坏者?","authors":"Maira Klyshbekova, Pamela Abbott","doi":"10.34190/ejel.21.5.3114","DOIUrl":null,"url":null,"abstract":"There is a current debate about the extent to which ChatGPT, a natural language AI chatbot, can disrupt processes in higher education settings. The chatbot is capable of not only answering queries in a human-like way within seconds but can also provide long tracts of texts which can be in the form of essays, emails, and coding. In this study, in the context of higher education settings, by adopting an experimental design approach, we applied ChatGPT-3 to a traditional form of assessment to determine its capabilities and limitations. Specifically, we tested its ability to produce an essay on a topic of our choice, created a rubric, and assessed the produced work in accordance with the designed rubric. We then evaluated the chatbot’s work by assessing ChatGPT’s application of its rubric according to a modified version of Paul’s (2005) Intellectual Standards rubric. Using Christensen et al.’s (2015) framework on disruptive innovations, our study found that ChatGPT was capable of completing the set tasks competently, quickly, and easily, like a “magic wand”. However, our findings also challenge the extent to which all of the ChatGPT’s demonstrated capabilities can disrupt this traditional form of assessment, given that there are aspects of its construction and evaluation that the technology is not yet able to replicate as a human expert would. These limitations of the chatbot can provide us with an opportunity for addressing vulnerabilities in traditional forms of assessment in higher education that are subject to academic integrity issues posed by this form of AI. We conclude the article with implications for teachers and higher education institutions by urging them to reconsider and revisit their practices when it comes to assessment.","PeriodicalId":46105,"journal":{"name":"Electronic Journal of e-Learning","volume":null,"pages":null},"PeriodicalIF":2.4000,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ChatGPT and Assessment in Higher Education: A Magic Wand or a Disruptor?\",\"authors\":\"Maira Klyshbekova, Pamela Abbott\",\"doi\":\"10.34190/ejel.21.5.3114\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There is a current debate about the extent to which ChatGPT, a natural language AI chatbot, can disrupt processes in higher education settings. The chatbot is capable of not only answering queries in a human-like way within seconds but can also provide long tracts of texts which can be in the form of essays, emails, and coding. In this study, in the context of higher education settings, by adopting an experimental design approach, we applied ChatGPT-3 to a traditional form of assessment to determine its capabilities and limitations. Specifically, we tested its ability to produce an essay on a topic of our choice, created a rubric, and assessed the produced work in accordance with the designed rubric. We then evaluated the chatbot’s work by assessing ChatGPT’s application of its rubric according to a modified version of Paul’s (2005) Intellectual Standards rubric. Using Christensen et al.’s (2015) framework on disruptive innovations, our study found that ChatGPT was capable of completing the set tasks competently, quickly, and easily, like a “magic wand”. 
However, our findings also challenge the extent to which all of the ChatGPT’s demonstrated capabilities can disrupt this traditional form of assessment, given that there are aspects of its construction and evaluation that the technology is not yet able to replicate as a human expert would. These limitations of the chatbot can provide us with an opportunity for addressing vulnerabilities in traditional forms of assessment in higher education that are subject to academic integrity issues posed by this form of AI. We conclude the article with implications for teachers and higher education institutions by urging them to reconsider and revisit their practices when it comes to assessment.\",\"PeriodicalId\":46105,\"journal\":{\"name\":\"Electronic Journal of e-Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2024-02-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Electronic Journal of e-Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.34190/ejel.21.5.3114\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Electronic Journal of e-Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.34190/ejel.21.5.3114","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract

There is a current debate about the extent to which ChatGPT, a natural language AI chatbot, can disrupt processes in higher education settings. The chatbot is capable not only of answering queries in a human-like way within seconds but also of producing long tracts of text in the form of essays, emails, and code. In this study, set in a higher education context and adopting an experimental design approach, we applied ChatGPT-3 to a traditional form of assessment to determine its capabilities and limitations. Specifically, we tested its ability to produce an essay on a topic of our choice, to create a rubric, and to assess the produced work in accordance with the designed rubric. We then evaluated the chatbot’s work by assessing ChatGPT’s application of its rubric against a modified version of Paul’s (2005) Intellectual Standards rubric. Using Christensen et al.’s (2015) framework on disruptive innovations, our study found that ChatGPT was capable of completing the set tasks competently, quickly, and easily, like a “magic wand”. However, our findings also challenge the extent to which all of ChatGPT’s demonstrated capabilities can disrupt this traditional form of assessment, given that there are aspects of its construction and evaluation that the technology cannot yet replicate as a human expert would. These limitations of the chatbot offer an opportunity to address vulnerabilities in traditional forms of assessment in higher education that are exposed to the academic integrity issues posed by this form of AI. We conclude the article with implications for teachers and higher education institutions, urging them to reconsider and revisit their assessment practices.
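The paper does not state how the chatbot was accessed, so the sketch below is only an illustration of the three chatbot-facing steps described in the abstract (write the essay, construct a rubric, apply the rubric), scripted against the OpenAI chat completions API. The model name, essay topic, and prompts are assumptions made for the example, not the authors' materials; the study's final step, judging the chatbot's rubric application against a modified version of Paul's (2005) Intellectual Standards, remains a task for human evaluators.

```python
# Minimal sketch of a generate-and-grade workflow in the spirit of the study.
# Assumptions (not from the paper): the openai Python package (v1+) is installed,
# OPENAI_API_KEY is set, and gpt-3.5-turbo stands in for the "ChatGPT-3" the
# authors used; the topic, word count, and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt to the chat model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content or ""

topic = "The role of formative feedback in higher education"  # hypothetical topic

# Step 1: the chatbot writes the essay.
essay = ask(f"Write a 500-word academic essay on: {topic}")

# Step 2: the chatbot constructs a marking rubric for the same essay task.
rubric = ask(f"Create a marking rubric with graded criteria for a 500-word academic essay on: {topic}")

# Step 3: the chatbot assesses its own essay against the rubric it produced.
assessment = ask(
    "Assess the following essay against the rubric below, scoring and justifying each criterion.\n\n"
    f"Rubric:\n{rubric}\n\nEssay:\n{essay}"
)

print(assessment)
```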
Source journal: Electronic Journal of e-Learning (Education & Educational Research)
CiteScore: 5.90 · Self-citation rate: 18.20% · Articles per year: 34 · Review time: 20 weeks
Latest articles in this journal:
Exploring Student and AI Generated Texts: Reflections on Reflection Texts
Technostress Impact on Educator Productivity: Gender Differences in Jordan's Higher Education
Quo Vadis, University? A Roadmap for AI and Ethics in Higher Education
Examining Student Characteristics, Self-Regulated Learning Strategies, and Their Perceived Effects on Satisfaction and Academic Performance in MOOCs
Operationalizing a Weighted Performance Scoring Model for Sustainable e-Learning in Medical Education: Insights from Expert Judgement