
Latest publications in Computers and Education Artificial Intelligence

Analyzing K-12 AI education: A large language model study of classroom instruction on learning theories, pedagogy, tools, and AI literacy
Q1 Social Sciences | Pub Date: 2024-09-17 | DOI: 10.1016/j.caeai.2024.100295
Di Wu, Meng Chen, Xu Chen, Xing Liu

There is growing recognition among researchers and stakeholders of the significant impact of artificial intelligence (AI) technology on classroom instruction. As a crucial element in developing AI literacy, AI education in K-12 schools is gaining increasing attention. However, most existing research on K-12 AI education relies on experiential methodologies and lacks quantitative analysis based on extensive classroom data, hindering a comprehensive depiction of the current state of AI education at these levels. To address this gap, this article employs the advanced semantic understanding capabilities of large language models (LLMs) to create an intelligent analysis framework that identifies learning theories, pedagogical approaches, learning tools, and levels of AI literacy in AI classroom instruction. Compared with manual analysis, the LLM-based analysis achieves more than 90% consistency. Our findings, based on 98 classroom instruction videos from central Chinese cities, reveal that current AI classroom instruction insufficiently fosters AI literacy: only 35.71% of lessons address higher-level skills such as evaluating and creating AI, and AI ethics is covered in just 5.1%. We classified AI classroom instruction into three categories: conceptual (50%), heuristic (18.37%), and experimental (31.63%). Correlation analysis suggests a significant relationship between the pedagogical approaches adopted and the development of advanced AI literacy. Specifically, integrating Project-based/Problem-based learning (PBL) with Collaborative learning appears effective in cultivating the capacity to evaluate and create AI.
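The consistency check between LLM and manual coding described above can be illustrated with a minimal sketch. The labels, helper function, and six-lesson sample below are hypothetical, not the authors' framework:

```python
from collections import Counter

def agreement_rate(llm_labels, manual_labels):
    """Fraction of items where the LLM label matches the manual label."""
    matches = sum(a == b for a, b in zip(llm_labels, manual_labels))
    return matches / len(manual_labels)

# Hypothetical labels for six lessons, using the three lesson categories
# named in the abstract (the real study coded 98 videos).
manual = ["conceptual", "heuristic", "experimental",
          "conceptual", "conceptual", "experimental"]
llm = ["conceptual", "heuristic", "experimental",
       "conceptual", "heuristic", "experimental"]

print(agreement_rate(llm, manual))  # 5 of 6 labels agree
print(Counter(llm))                 # distribution of LLM-assigned categories
```

The same agreement rate, computed over all coded dimensions, is how a ">90% consistency" figure could be reported.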

Citations: 0
Examining the relationship between the L2 motivational self system and technology acceptance model post ChatGPT introduction and utilization
Q1 Social Sciences | Pub Date: 2024-09-17 | DOI: 10.1016/j.caeai.2024.100302
Jerry Huang, Atsushi Mizumoto

Since the introduction of the L2 Motivational Self System (L2MSS), numerous studies worldwide have highlighted its effectiveness in elucidating second language acquisition. However, the influence of generative artificial intelligence (GenAI) technology on this model remains largely unexplored. The Technology Acceptance Model (TAM) is a widely employed framework for examining the impact of a new technology, and this study explores the intercorrelations when these two models are considered together. Conducted with 35 second-year university English as a foreign language (EFL) students in the humanities, the study involved two sessions of instructor-led ChatGPT writing workshops, followed by a survey. Data analysis revealed a notable correlation between the L2 Motivational Self System and the Technology Acceptance Model. Particularly noteworthy is the finding that Ought-to L2 Self positively predicts Actual Usage. The study discusses pedagogical and theoretical implications and suggests future research directions.

Citations: 0
Advancing student outcome predictions through generative adversarial networks
Q1 Social Sciences | Pub Date: 2024-09-17 | DOI: 10.1016/j.caeai.2024.100293
Helia Farhood, Ibrahim Joudah, Amin Beheshti, Samuel Muller

Predicting student outcomes is essential in educational analytics for creating personalised learning experiences. The effectiveness of these predictive models relies on having access to sufficient and accurate data. However, privacy concerns and the lack of student consent often restrict data collection, limiting the applicability of predictive models. To tackle this obstacle, we employ Generative Adversarial Networks, a type of Generative AI, to generate tabular data that replicates and enlarges two distinct publicly available student datasets. The ‘Math dataset’ has 395 observations and 33 features, whereas the ‘Exam dataset’ has 1000 observations and 8 features. Using the Python implementations of Conditional Tabular Generative Adversarial Networks and Copula Generative Adversarial Networks, our methodology consists of two phases. In the first, a mirroring approach, we produce synthetic data matching the volume of the real datasets, focusing on privacy and evaluating predictive accuracy. In the second, we augment the real datasets with newly created synthetic observations to fill gaps in datasets that lack student data. Before employing these approaches, we validate the synthetic data using Correlation Analysis, Density Analysis, Correlation Heatmaps, and Principal Component Analysis. We then compare the accuracy of predicting whether students will pass or fail their exams across the original, synthetic, and augmented datasets. Employing Feedforward Neural Networks, Convolutional Neural Networks, and Gradient-boosted Neural Networks, with Bayesian optimisation for hyperparameter tuning, this research methodically examines the impact of synthetic data on prediction accuracy. We implement and optimize these models using Python. Our mirroring approach aims to achieve accuracy rates that closely align with the original data, while our augmenting approach seeks to reach slightly higher accuracy than learning from the original data alone.
Our findings provide actionable insights into leveraging advanced Generative AI techniques to enhance educational outcomes and meet our objectives successfully.
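The mirroring/augmenting evaluation harness described above can be sketched as follows. Everything here is an illustrative assumption: the toy student data, the jitter-based "generator" (a stand-in for the CTGAN/CopulaGAN-style models the study actually uses), and the nearest-centroid classifier (a stand-in for the neural networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in student data: two numeric features (e.g. score, study hours)
# and a pass/fail label. Invented for illustration only.
def make_data(n):
    x_fail = rng.normal([40, 2.0], [8, 0.5], size=(n // 2, 2))
    x_pass = rng.normal([70, 4.0], [8, 0.5], size=(n - n // 2, 2))
    y = np.array([0] * (n // 2) + [1] * (n - n // 2))
    return np.vstack([x_fail, x_pass]), y

# Placeholder "generator": resamples real rows with Gaussian jitter.
def synthesize(X, y, n):
    idx = rng.integers(0, len(X), size=n)
    return X[idx] + rng.normal(0, 1.0, size=(n, X.shape[1])), y[idx]

# Tiny classifier so the harness stays self-contained.
def nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te):
    c0, c1 = X_tr[y_tr == 0].mean(0), X_tr[y_tr == 1].mean(0)
    pred = (np.linalg.norm(X_te - c1, axis=1)
            < np.linalg.norm(X_te - c0, axis=1)).astype(int)
    return (pred == y_te).mean()

X, y = make_data(400)
X_test, y_test = make_data(200)
X_syn, y_syn = synthesize(X, y, 400)                              # mirroring
X_aug, y_aug = np.vstack([X, X_syn]), np.concatenate([y, y_syn])  # augmenting

results = {}
for name, (Xt, yt) in {"real": (X, y), "mirrored": (X_syn, y_syn),
                       "augmented": (X_aug, y_aug)}.items():
    results[name] = nearest_centroid_accuracy(Xt, yt, X_test, y_test)
    print(name, round(results[name], 2))
```

The design point is that the generator and classifier are swappable: the same three-way comparison (real vs. mirrored vs. augmented training data, evaluated on held-out real data) applies unchanged when a GAN and a neural network are plugged in.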

Citations: 0
What motivates future teachers? The influence of Artificial Intelligence on student teachers' career choice
Q1 Social Sciences | Pub Date: 2024-09-14 | DOI: 10.1016/j.caeai.2024.100296
Judit Martínez-Moreno, Dominik Petko

Artificial Intelligence in Education (AIEd) is reshaping the educational landscape and may also be influencing the motivations of aspiring teachers. This paper explores whether considerations related to AIEd play a role in student teachers' decision to become teachers. To this end, the study introduces a new AI subscale within the (D)FIT-Choice scale's Social Utility Value (SUV) factor and validates it with a sample of 183 student teachers. Descriptive statistics reveal high mean scores for traditional motivators such as Intrinsic Value of Teaching, while AI-related factors, although considered, exhibit lower influence. A noticeable disconnect exists between digital motivations and the aspiration to shape the future, suggesting a potential gap in student teachers' understanding of digitalization's future impact. An extreme-group analysis reveals a subset of student teachers who give substantial weight to AI. This group also values Job Security and Making a Social Contribution, suggesting an awareness of AI's societal and professional impacts. Based on these findings, it is recommended that teacher education programs focus on ensuring student teachers understand the impact of AI on education and society.

Citations: 0
LLaVA-docent: Instruction tuning with multimodal large language model to support art appreciation education
Q1 Social Sciences | Pub Date: 2024-09-13 | DOI: 10.1016/j.caeai.2024.100297
Unggi Lee, Minji Jeon, Yunseo Lee, Gyuri Byun, Yoorim Son, Jaeyoon Shin, Hongkyu Ko, Hyeoncheol Kim

Despite the development of various AI systems to support learning across domains, AI assistance for art appreciation education has not been extensively explored. Art appreciation, often perceived as unfamiliar and challenging by most students, can become more accessible with a generative-AI-enabled conversation partner that asks tailored questions and encourages the audience to appreciate artwork deeply. This study explores the application of multimodal large language models (MLLMs) in art appreciation education, focusing on the development of LLaVA-Docent, a model designed to serve as a personal tutor for art appreciation. Our approach involved design and development research, with iterative enhancement to produce a functional MLLM-enabled chatbot along with a data design framework for art appreciation education. To that end, we established a virtual dialogue dataset generated by GPT-4, which was instrumental in training our MLLM, LLaVA-Docent. The performance of LLaVA-Docent was evaluated by benchmarking it against alternative settings, revealing its distinct strengths and weaknesses. Our findings highlight the efficacy of the MLLM-based personalized art appreciation chatbot and demonstrate its applicability as a novel approach through which art appreciation is taught and experienced.

Citations: 0
Analysis of LLMs for educational question classification and generation
Q1 Social Sciences | Pub Date: 2024-09-12 | DOI: 10.1016/j.caeai.2024.100298
Said Al Faraby, Ade Romadhony, Adiwijaya

Large language models (LLMs) like ChatGPT have shown promise in generating educational content, including questions. This study evaluates the effectiveness of LLMs in classifying and generating educational-type questions. We assessed ChatGPT's performance using a dataset of 4,959 user-generated questions labeled into ten categories, employing various prompting techniques and aggregating results with a voting method to enhance robustness. Additionally, we evaluated ChatGPT's accuracy in generating type-specific questions from 100 reading sections sourced from five online textbooks, which were manually reviewed by human evaluators. We also generated questions based on learning objectives and compared their quality to those crafted by human experts, with evaluations by experts and crowdsourced participants.

Our findings reveal that ChatGPT achieved a macro-average F1-score of 0.57 in zero-shot classification, improving to 0.70 when combined with a Random Forest classifier using embeddings. The most effective prompting technique was zero-shot with added definitions, while few-shot and few-shot + Chain of Thought approaches underperformed. The voting method enhanced robustness in classification. In generating type-specific questions, ChatGPT's accuracy was lower than anticipated. However, quality differences between ChatGPT-generated and human-generated questions were not statistically significant, indicating ChatGPT's potential for educational content creation. This study underscores the transformative potential of LLMs in educational practices. By effectively classifying and generating high-quality educational questions, LLMs can reduce the workload on educators and enable personalized learning experiences.
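The voting aggregation and macro-averaged F1 metric mentioned above can be sketched as follows; the question categories, the three prompt runs, and the gold labels are all hypothetical:

```python
from collections import Counter

def majority_vote(runs):
    """Aggregate per-question labels across prompt runs; Counter breaks
    ties in favour of the label seen first."""
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*runs)]

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(scores) / len(scores)

# Three hypothetical prompting runs over five questions.
runs = [["recall", "apply", "recall", "analyze", "apply"],
        ["recall", "recall", "recall", "analyze", "apply"],
        ["apply", "apply", "recall", "analyze", "recall"]]
gold = ["recall", "apply", "recall", "analyze", "apply"]

voted = majority_vote(runs)
print(voted)
print(macro_f1(gold, voted))  # → 1.0 (the vote recovers every gold label)
```

In this toy case each individual run contains errors, but the majority vote cancels them out, which is the robustness effect the abstract describes.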

Citations: 0
“ChatGPT seems too good to be true”: College students’ use and perceptions of generative AI
Q1 Social Sciences | Pub Date: 2024-09-12 | DOI: 10.1016/j.caeai.2024.100294
Clare Baek, Tamara Tate, Mark Warschauer
This study investigates how U.S. college students (N = 1001) perceive and use ChatGPT, exploring its relationship with societal structures and student characteristics. Regression results show that gender, age, major, institution type, and institutional policy significantly influenced ChatGPT use for general, writing, and programming tasks. Students in their 30s–40s were more likely to use ChatGPT frequently than younger students. Non-native English speakers were more likely than native speakers to use ChatGPT frequently for writing, suggesting its potential as a support tool for language learners. Institutional policies allowing ChatGPT use predicted higher use of ChatGPT. Thematic analysis and natural language processing of open-ended responses revealed varied attitudes towards ChatGPT, with some fearing institutional punishment for using ChatGPT and others confident in their appropriate use of ChatGPT. Computer science majors expressed concerns about job displacement due to the advent of generative AI. Higher-income students generally viewed ChatGPT more positively than their lower-income counterparts. Our research underscores how technology can both empower and marginalize within educational settings; we advocate for equitable integration of AI in academic environments for diverse students.
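A regression of the kind reported above can be sketched with a toy logistic model; the two binary predictors, the effect sizes, and the data are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented binary predictors: x1 = non-native English speaker,
# x2 = institutional policy allows ChatGPT; y = frequent ChatGPT use.
n = 500
X = rng.integers(0, 2, size=(n, 2)).astype(float)
true_w = np.array([-1.0, 1.2, 1.5])                  # assumed effects
Xb = np.hstack([np.ones((n, 1)), X])                 # add intercept column
y = (rng.random(n) < 1 / (1 + np.exp(-Xb @ true_w))).astype(float)

# Logistic regression fitted by plain gradient descent on the log-loss.
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-Xb @ w))                    # predicted probabilities
    w -= 0.1 * Xb.T @ (p - y) / n                    # gradient step

print(np.round(w, 2))  # estimates should land near the assumed effects
```

Positive fitted coefficients on both predictors mirror the abstract's findings that non-native speakers and students at permissive institutions report more frequent use; a real analysis would add the remaining covariates and significance tests.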
引用次数: 0
Large language models meet user interfaces: The case of provisioning feedback
Q1 Social Sciences Pub Date : 2024-09-11 DOI: 10.1016/j.caeai.2024.100289
Stanislav Pozdniakov, Jonathan Brazil, Solmaz Abdi, Aneesha Bakharia, Shazia Sadiq, Dragan Gašević, Paul Denny, Hassan Khosravi

Incorporating Generative Artificial Intelligence (GenAI), especially Large Language Models (LLMs), into educational settings presents valuable opportunities to boost the efficiency of educators and enrich the learning experiences of students. A significant portion of the current use of LLMs by educators has involved using conversational user interfaces (CUIs), such as chat windows, for functions like generating educational materials or offering feedback to learners. The ability to engage in real-time conversations with LLMs, which can enhance educators' domain knowledge across various subjects, has been of high value. However, it also presents challenges to LLMs' widespread, ethical, and effective adoption. Firstly, educators must have a degree of expertise, including tool familiarity, AI literacy and prompting to effectively use CUIs, which can be a barrier to adoption. Secondly, the open-ended design of CUIs makes them exceptionally powerful, which raises ethical concerns, particularly when used for high-stakes decisions like grading. Additionally, there are risks related to privacy and intellectual property, stemming from the potential unauthorised sharing of sensitive information. Finally, CUIs are designed for short, synchronous interactions and often struggle and hallucinate when given complex, multi-step tasks (e.g., providing individual feedback based on a rubric on a large scale). To address these challenges, we explored the benefits of transitioning away from employing LLMs via CUIs to the creation of applications with user-friendly interfaces that leverage LLMs through API calls. We first propose a framework for pedagogically sound and ethically responsible incorporation of GenAI into educational tools, emphasizing a human-centred design. 
We then illustrate the application of our framework to the design and implementation of a novel tool called Feedback Copilot, which enables instructors to provide students with personalized qualitative feedback on their assignments in classes of any size. An evaluation involving the generation of feedback from two distinct variations of the Feedback Copilot tool, using numerically graded assignments from 338 students, demonstrates the viability and effectiveness of our approach. Our findings have significant implications for GenAI application researchers, educators seeking to leverage accessible GenAI tools, and educational technologists aiming to transcend the limitations of conversational AI interfaces, thereby charting a course for the future of GenAI in education.
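The shift the authors describe — from pasting material into a chat window to calling an LLM through an API behind a purpose-built interface — can be sketched minimally as follows. This is an illustrative sketch only: the function names (`call_llm`, `build_prompt`), the rubric format, and the stubbed model response are assumptions, not the Feedback Copilot implementation.

```python
# Hedged sketch of rubric-driven feedback generation via an LLM API call.
# The LLM call is stubbed out; in a real tool it would be an HTTP request
# to a hosted model endpoint. All names here are illustrative.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call (e.g. an HTTP POST to a model)."""
    return "Your argument is clear, but support the claims in section 2 with evidence."

def build_prompt(rubric: dict, submission: str, score: float) -> str:
    """Embed the rubric and graded submission in a single structured prompt."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in rubric.items())
    return (
        "You are a teaching assistant. Using the rubric below, write short, "
        f"personalised feedback for a submission graded {score}/100.\n"
        f"Rubric:\n{criteria}\n\nSubmission:\n{submission}"
    )

def feedback_for_class(rubric: dict, submissions: list) -> dict:
    """Generate feedback for every (student_id, text, score) triple at once —
    the batch, asynchronous use case that chat interfaces handle poorly."""
    return {sid: call_llm(build_prompt(rubric, text, score))
            for sid, text, score in submissions}

rubric = {"clarity": "ideas are easy to follow",
          "evidence": "claims are supported by sources"}
out = feedback_for_class(rubric, [("s1", "My essay argues that ...", 72.0)])
print(out["s1"])
```

Because the prompt template, rubric, and output handling live in application code rather than in an educator-typed chat message, the tool can enforce consistent prompting and scale to a whole class without requiring prompting expertise from the user.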

Citations: 0
Assessing the proficiency of large language models in automatic feedback generation: An evaluation study
Q1 Social Sciences Pub Date : 2024-09-11 DOI: 10.1016/j.caeai.2024.100299
Wei Dai, Yi-Shan Tsai, Jionghao Lin, Ahmad Aldino, Hua Jin, Tongguang Li, Dragan Gašević, Guanliang Chen

Assessment feedback is important to student learning. Learning analytics (LA) powered by artificial intelligence exhibits profound potential in helping instructors with the laborious provision of feedback. Inspired by the recent advancements made by Generative Pre-trained Transformer (GPT) models, we conducted a study to examine the extent to which GPT models hold the potential to advance the existing knowledge of LA-supported feedback systems towards improving the efficiency of feedback provision. Therefore, our study explored the ability of two versions of GPT models – i.e., GPT-3.5 (ChatGPT) and GPT-4 – to generate assessment feedback on students' writing assessment tasks, common in higher education, with open-ended topics for a data science-related course. We compared the feedback generated by GPT models (namely GPT-3.5 and GPT-4) with the feedback provided by human instructors in terms of readability, effectiveness (content containing effective feedback components), and reliability (correct assessment on student performance). Results showed that (1) both GPT-3.5 and GPT-4 were able to generate more readable feedback with greater consistency than human instructors, (2) GPT-4 outperformed GPT-3.5 and human instructors in providing feedback containing information about effective feedback dimensions, including feeding-up, feeding-forward, process level, and self-regulation level, and (3) GPT-4 demonstrated higher reliability of feedback compared to GPT-3.5. Based on our findings, we discussed the potential opportunities and challenges of utilising GPT models in assessment feedback generation.
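One of the comparison dimensions above is readability. The paper does not specify its readability measure, but a common choice for this kind of comparison is the Flesch reading-ease score; the sketch below computes it with a rough vowel-group syllable heuristic (an assumption, not the authors' method).

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels;
    # every word contributes at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    Higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)

simple = "The cat sat. The dog ran."
dense = "Multidimensional pedagogical heterogeneity complicates longitudinal evaluation."
print(flesch_reading_ease(simple) > flesch_reading_ease(dense))  # simpler text scores higher
```

Applied to model-generated and instructor-written feedback, a metric like this gives the kind of quantitative readability comparison the study reports.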

Citations: 0
Control knowledge tracing: Modeling students' learning dynamics from a control-theory perspective
Q1 Social Sciences Pub Date : 2024-09-06 DOI: 10.1016/j.caeai.2024.100292
Cheng Ning Loong, Chih-Chen Chang

A student's learning system is a system that guides the student's knowledge acquisition process using available learning resources to produce certain learning outcomes that can be evaluated based on the scores of questions in an assessment. Such a learning system is analogous to a control system, which regulates the process of a plant through a controller in order to generate a desired response that can be inferred from sensor measurements. Inspired by this analogy, this study proposes to model the monitoring of students' knowledge acquisition process from a control-theory viewpoint, which is referred to as control knowledge tracing (CtrKT). The proposed CtrKT comprises a dynamic equation that characterizes the temporal variation of students' knowledge states in response to the effects of learning resources and an observation equation that maps their knowledge states to question scores. With this formulation, CtrKT enables tracking students' knowledge states, predicting their assessment performance, and teaching planning. The insights and accuracy of CtrKT in postulating the knowledge acquisition process are analyzed and validated using experimental data from psychology literature and two naturalistic datasets collected from a civil engineering undergraduate course. Results verify the feasibility of using CtrKT to estimate the overall assessment performance of the participants in the psychology experiments and the students in the naturalistic datasets. Lastly, this study explores the use of CtrKT for teaching scheduling and optimization, discusses its modeling issues, and compares it with other knowledge-tracing approaches.
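The dynamic-equation/observation-equation pairing the abstract describes is the standard state-space form from control theory. As a hedged illustration of that structure — the paper's actual equations may differ, and the matrices A, B, C and the resource input here are invented for the example — a linear discrete-time version looks like this:

```python
import numpy as np

# Illustrative state-space reading of CtrKT's two equations:
#   dynamic:     x[t+1] = A @ x[t] + B @ u[t]   (knowledge state driven by resources)
#   observation: y[t]   = C @ x[t]              (state mapped to question scores)
# A, B, C and u are assumptions for this sketch, not the paper's fitted values.

A = np.array([[0.95, 0.00],
              [0.10, 0.90]])       # retention of, and transfer between, two skills
B = np.array([[0.30],
              [0.05]])             # effect of one learning resource on each skill
C = np.array([[1.0, 0.0],
              [0.5, 0.5]])         # maps the 2-D knowledge state to two question scores

x = np.zeros(2)                    # initial knowledge state: no prior knowledge
for t in range(20):                # twenty study sessions
    u = np.array([1.0])            # resource "dose" applied at session t
    x = A @ x + B @ u              # dynamic equation: knowledge-state update
    y = C @ x                      # observation equation: predicted question scores

print(np.round(y, 2))
```

In this framing, tracking the knowledge state is a state-estimation problem and teaching planning becomes choosing the input sequence u — which is what makes control-theoretic tools applicable.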

Citations: 0