
Computers and Education Artificial Intelligence: Latest Publications

Role of online assessment system in formative evaluation of programming education
Q1 Social Sciences Pub Date: 2025-12-01 DOI: 10.1016/j.caeai.2025.100515
Haitang Wan
With the rapid development of educational informatization, the IntelliAssessment system is increasingly used in the formative evaluation of course teaching. Scalability testing (Python Locust framework) showed a 100 % response rate under 500 concurrent requests (consistent with typical university course sizes), while a 92 % response rate was observed at 1000 concurrent requests (an extreme stress-test scenario). Security validation included a 94 % attack-blocking rate in penetration testing and a 91.0 % F1-score for AI-driven phishing detection. The <2-s real-time feedback window (p50 = 1.2 s, p90 = 1.8 s, p99 = 2.3 s) is maintained for 90 % of interactions under typical loads, with latency degrading only at very high concurrency; pedagogically, this ensures timely formative feedback in most classroom scenarios. A supplementary analysis discusses current security limitations, the evolving nature of security threats, and potential directions for enhancing system security. Statistics show that 27.65 % of the students who participated in the evaluation were very satisfied with the feedback, while 17.4 % found the feedback helpful. Regarding comprehension of the assessment content, 3.15 % of the students indicated that they needed more clarity, suggesting that the wording of the assessment questions still requires improvement. In terms of learning performance, 67.24 % of students scored above the pass mark, indicating that most students mastered the course content.
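The latency and response-rate figures above (p50/p90/p99, percentage of requests answered) can be derived from raw timing samples collected during a load test. This is an illustrative stdlib sketch, not the study's code; the sample latencies are invented:

```python
import statistics

def latency_percentiles(samples_s):
    """Return (p50, p90, p99) from a list of latency samples in seconds."""
    # method="inclusive" keeps cut points monotone on small samples
    cuts = statistics.quantiles(samples_s, n=100, method="inclusive")
    return cuts[49], cuts[89], cuts[98]

def response_rate(answered, total):
    """Fraction of requests that received a response within the deadline."""
    return answered / total

# Hypothetical feedback-latency samples (seconds) from one load-test run:
samples = [0.8, 1.0, 1.1, 1.2, 1.3, 1.5, 1.8, 1.9, 2.0, 2.3]
p50, p90, p99 = latency_percentiles(samples)
rate = response_rate(500, 500)  # all 500 concurrent requests answered
```

In a Locust-style test, `samples` would be the recorded per-request response times and `answered`/`total` the success counters reported by the framework.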
Computers and Education Artificial Intelligence, Vol. 9, Article 100515
Cited: 0
Predictive capability of foundational concepts tests for problem-solving using machine learning concepts: Evaluating project-based learning courses in artificial intelligence literacy education
Q1 Social Sciences Pub Date: 2025-12-01 DOI: 10.1016/j.caeai.2025.100503
Siu Cheung Kong, Chunyu Hou
In the artificial intelligence (AI) era, secondary and university students should be able to apply AI for problem-solving. This study designed and evaluated an AI literacy programme to enhance understanding of machine learning concepts. It also examined how the conceptual understanding gained from two foundational courses (Courses 1 and 2) affected students' application of these concepts in the two subsequent project-based learning courses (Courses 3 and 4). Regression analysis of data from the 566, 566, 470, and 196 student participants enrolled in Courses 1, 2, 3, and 4, respectively, revealed that the post-course concept tests for Courses 1 and 2 accounted for 19.9 % of the variance in the students' problem-solving ability test taken before Course 3. This result indicates that teaching students foundational concepts can develop their ability to solve machine learning-related problems. The post-course concept tests for Courses 1 and 2, together with the pre-course problem-solving ability test for Course 3, collectively explained 27.4 % of the variance in the students' problem-solving ability after completing Course 3. Together with the significant improvement in the paired-samples t-test statistics for the pre- and post-course problem-solving tests of Course 3, this indicates the importance of providing opportunities for students to solve machine learning-related problems. These findings provide empirical evidence to inform the design of curricula for AI literacy programmes. Project-based learning (PBL) is an approach that can provide opportunities for participants to develop problem-solving skills using foundational AI knowledge.
Computers and Education Artificial Intelligence, Vol. 9, Article 100503
Cited: 0
Towards responsible AI in education: Challenges and implications for research and practice
Q1 Social Sciences Pub Date: 2025-12-01 DOI: 10.1016/j.caeai.2024.100345
Teresa Cerratto Pargman, Cormac McGrath, Marcelo Milrad
Computers and Education Artificial Intelligence, Vol. 9, Article 100345
Cited: 0
Enhancing teachers’ AI competency: A professional development intervention study based on intelligent-TPACK framework
Q1 Social Sciences Pub Date: 2025-12-01 DOI: 10.1016/j.caeai.2025.100521
Xiao Tan, Gary Cheng, Man Ho Ling
With the rapid penetration of generative artificial intelligence (AI) in higher education, university teachers' AI competency has become a critical determinant of effective technology integration in teaching. However, systematic and empirically validated intervention frameworks to support the development of this competency remain scarce. To address this gap, this study implemented a six-month professional development (PD) programme grounded in the Intelligent-TPACK framework and evaluated its effectiveness using a quasi-experimental pre-test/post-test design. A total of 64 teachers participated in the PD programme (experimental group), while pre- and post-test data were also collected from 61 teachers who did not participate (control group). Results indicate that the PD programme significantly enhanced AI competency in the experimental group, particularly in the domains of AI Technological Knowledge (AITK) and AI Technological Pedagogical Knowledge (AITPK). After controlling for baseline differences using ANCOVA, the effect size remained above the moderate threshold. A mixed-design ANOVA further confirmed a significant interaction effect between group and time, ruling out maturation effects. Multi-level regression analysis revealed that background variables such as teaching experience, discipline, and professional title had limited predictive power for AI competency gains. Notably, self-perceived participation level did not significantly predict outcomes, whereas attendance rate emerged as a significant positive predictor. Interestingly, negative gain scores were observed in both groups. Follow-up interviews indicated that these scores did not reflect an actual decline in AI competency but rather a metacognitive recalibration, in which teachers shifted from unconscious incompetence to conscious incompetence, a pattern consistent with the Dunning–Kruger effect.
This finding offers a novel theoretical perspective on the mechanism of change underlying the intervention. Overall, the PD programme based on the Intelligent-TPACK framework effectively enhanced university teachers’ AI competency and provides a systematic and evidence-based model for future PD initiatives in the AI era.
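Standardized effect sizes of the kind reported above ("above the moderate threshold") are commonly expressed as Cohen's d, the mean difference divided by a pooled standard deviation. A minimal stdlib sketch with invented pre/post competency scores, not the study's data:

```python
from statistics import mean, stdev

def cohens_d(pre, post):
    """Cohen's d between two score lists, using a pooled standard deviation."""
    n1, n2 = len(pre), len(post)
    s1, s2 = stdev(pre), stdev(post)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(post) - mean(pre)) / pooled

# Hypothetical AI-competency scores before and after the PD programme:
pre_scores = [60, 65, 70]
post_scores = [70, 75, 80]
d = cohens_d(pre_scores, post_scores)  # 10-point mean gain / pooled SD of 5
```

By the usual convention, d around 0.5 is moderate and around 0.8 is large; a negative gain score for one participant is simply post minus pre falling below zero.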
Computers and Education Artificial Intelligence, Vol. 9, Article 100521
Cited: 0
The effectiveness of an AI-integrated VR oral training application in reducing public speaking anxiety and interview anxiety
Q1 Social Sciences Pub Date: 2025-11-29 DOI: 10.1016/j.caeai.2025.100514
Peiwen Huang, Yanling Hwang, Jui Ling Hsu, Chien Fand Peng, Cheng Han Tsai, Chih Yao Wang
Despite the growing importance of English oral communication skills, traditional language learning approaches show limited effectiveness in simultaneously addressing psychological barriers and speaking proficiency among college students. While previous studies have explored anxiety reduction or speaking enhancement separately, a significant gap exists in research examining integrated approaches that tackle Public Speaking Anxiety (PSA), Interview Anxiety, and English-speaking proficiency improvement simultaneously. This study investigated whether an AI-integrated VR oral training application could effectively address these interconnected challenges. A quasi-experimental design was employed with 20 English-major students from a university in central Taiwan. Participants completed five training sessions using Meta Quest 2 headsets and an AI-integrated VR oral training application providing tailored feedback on pronunciation, grammar, and fluency based on IELTS standards. Pre- and post-intervention assessments utilized validated instruments, including the Personal Report of Public Speaking Anxiety (PRPSA) and the Measure of Anxiety in Selection Interviews (MASI), alongside comprehensive speaking proficiency measures. Results demonstrated significant improvements in English speaking proficiency, including increased sentence length and word count, with grammatical errors and incomplete sentences decreasing markedly (p < .001). Concurrently, significant reductions in both PRPSA and MASI scores (p < .05) were observed, though lexical diversity showed a slight decline. VR-related motion-sickness symptoms were mildly alleviated, and participants' perceived control increased significantly (p < .05), while interest and attention levels remained stable. These findings suggest that AI-integrated VR oral training applications can effectively enhance English speaking proficiency while simultaneously reducing anxiety levels and improving self-efficacy among English learners.
The study addresses a critical research gap by demonstrating the potential of integrated technological approaches to tackle multiple barriers to effective English oral communication, offering promising implications for language education and anxiety management in academic contexts.
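The significance tests above compare each participant's pre- and post-intervention anxiety scores, i.e. paired-samples t-tests: the mean of the per-participant differences divided by its standard error. A stdlib sketch with invented PRPSA-style scores (lower = less anxiety), purely illustrative:

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(before, after):
    """Paired-samples t statistic for the mean of (before - after)."""
    diffs = [b - a for b, a in zip(before, after)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical anxiety scores for five participants, pre vs. post the
# five VR training sessions (values invented for illustration):
pre  = [120, 115, 130, 110, 125]
post = [118, 112, 129, 106, 123]
t = paired_t(pre, post)  # positive t here means anxiety scores fell
```

The p-value would then be read from a t distribution with n-1 degrees of freedom (4 here), which in practice is done with a statistics package rather than by hand.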
Computers and Education Artificial Intelligence, Vol. 10, Article 100514
Cited: 0
Stretching AI's reach: Assessing an AI-driven feedback system for extended academic writing
Q1 Social Sciences Pub Date: 2025-11-28 DOI: 10.1016/j.caeai.2025.100511
Jim Lo, Christy Wong, Agnes Ng, Pinna Wong, Denise Cheung, Pauli Lai
Advances in large language models (LLMs) enable timely and scalable writing evaluation. Previous research has shown that LLM-driven conversational systems, such as ChatGPT, can provide feedback on short essays. However, it is unclear whether AI can effectively evaluate more demanding genres. This study investigates a custom-built writing feedback system developed at a Hong Kong university that uses OpenAI's GPT-4 Turbo (0125-preview) to provide rubric-based feedback on a 1500-word academic report. Guided by a detailed, rubric-aligned prompt, the system generated 333 feedback items from 37 undergraduates, which were analysed for accuracy, tone, and inclusion of examples. The analysis showed that most feedback was accurate and addressed both strengths and weaknesses, but over half lacked concrete examples. Often recycling phrases from rubric descriptors, the feedback was largely generic and occasionally inaccurate. Interview data from six students revealed that the AI feedback was valued for its coverage, efficiency, and constructive tone, yet its generic nature undermined its usefulness. Despite these limitations, students expressed interest in receiving both AI and teacher feedback for the efficiency and coverage that AI offers, alongside the specificity and relevance of teacher input. These findings suggest that employing a well-crafted prompt on an AI model with a large context window does not necessarily guarantee substantive feedback. Therefore, educators using AI-driven feedback systems should thoroughly assess these systems' capacity to handle extended academic writing. Future research could explore ways to refine prompts and system design for long-form writing assignments.
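Rubric-aligned prompting of the kind described above amounts to assembling a system message from the rubric and sending it with the student's report. The sketch below shows only the message construction; the rubric criteria and wording are invented placeholders (the study's actual prompt is not public), and the GPT-4 Turbo API call itself is omitted:

```python
# Hypothetical rubric; the study's actual criteria are not reproduced here.
RUBRIC = {
    "Organisation": "Logical structure with clear sections and transitions.",
    "Argumentation": "Claims supported by evidence and analysis.",
    "Language": "Accurate, formal academic register.",
}

def build_feedback_messages(report_text, rubric):
    """Assemble chat messages requesting rubric-based, example-backed feedback."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in rubric.items())
    system = (
        "You are a writing tutor. Give feedback on the report below, one "
        "comment per rubric criterion, quoting a concrete example from the "
        "text for each comment.\nRubric:\n" + criteria
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": report_text},
    ]

messages = build_feedback_messages("<1500-word academic report here>", RUBRIC)
```

Note the explicit "quoting a concrete example" instruction: the study's finding that over half the feedback items lacked concrete examples suggests such an instruction alone is not sufficient with long inputs, which is the paper's central caution.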
Computers and Education Artificial Intelligence, Vol. 10, Article 100511
Cited: 0
Affective computing in online higher education: A systematic literature review
Q1 Social Sciences Pub Date: 2025-11-20 DOI: 10.1016/j.caeai.2025.100499
Krist Shingjergji, Deniz Iren, Corrie Urlings, Roland Klemke
Although affective states play a crucial role in education, they are often difficult to communicate and observe in online learning environments. This challenge has led to growing research on systems that can automatically detect affective states. This systematic literature review used PRISMA to analyze 96 studies on affective computing in online higher education published between 2019 and 2024. The findings show that the most frequently studied affective states include learning-centered states, such as engagement, confusion, frustration, and sentiment, as well as basic emotions, such as happiness, anger, sadness, surprise, and fear.
Terminology often overlaps, and basic emotions are commonly used as proxies for learning-centered states. The most commonly used modality is facial expression, and the dominant detection approach is deep learning, particularly convolutional neural networks. Most studies rely on self-collected datasets that, due to privacy concerns, are not publicly shared, limiting reproducibility and generalizability. FER2013, collected in a generic context, and DAiSEE, collected in an online educational setting, are the most widely used public datasets. A key limitation is that most systems are not evaluated in real classrooms, revealing a gap between technological development and educational application. Ethical considerations are often overlooked; when they are addressed, privacy is the main focus. Finally, the review’s findings highlight the need for stronger integration between education and technology through interdisciplinary collaboration and real-world validation.
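The proxy relationship noted above, basic emotions standing in for learning-centered states, can be made concrete as a post-processing step over frame-level classifier output. The mapping below is an invented illustration, not a validated instrument, and the FER2013-style labels are only an assumed input format:

```python
from collections import Counter

# Hypothetical proxy mapping from basic-emotion labels to learning-centered
# states; any real mapping would require empirical validation.
EMOTION_TO_STATE = {
    "happiness": "engagement",
    "surprise": "engagement",
    "neutral": "engagement",
    "sadness": "boredom",
    "fear": "confusion",
    "anger": "frustration",
}

def session_state(frame_emotions):
    """Majority learning-centered state over a session's frame-level predictions."""
    states = [EMOTION_TO_STATE.get(e, "unknown") for e in frame_emotions]
    return Counter(states).most_common(1)[0][0]

# Per-frame emotion predictions from a hypothetical facial-expression CNN:
frames = ["happiness", "neutral", "fear", "neutral", "surprise"]
label = session_state(frames)
```

The review's point is precisely that such many-to-one mappings blur terminology: several distinct emotions collapse into one learning-centered label, so conclusions depend heavily on the chosen mapping.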
{"title":"Affective computing in online higher education: A systematic literature review","authors":"Krist Shingjergji ,&nbsp;Deniz Iren ,&nbsp;Corrie Urlings ,&nbsp;Roland Klemke","doi":"10.1016/j.caeai.2025.100499","DOIUrl":"10.1016/j.caeai.2025.100499","url":null,"abstract":"<div><div>Although affective states play a crucial role in education, they are often difficult to communicate and observe in online learning environments. This challenge has led to growing research on systems that can automatically detect affective states. This systematic literature review used PRISMA to analyze 96 studies on affective computing in online higher education, published between 2019 and 2024. The findings show that the most frequently studied affective states include learning-centered states, such as engagement, confusion, frustration, sentiment, as well as basic emotions, such as happiness, anger, sadness, surprise, and fear.</div><div>Terminology often overlaps, and basic emotions are commonly used as proxies for learning-centered states. The most used modality is facial expression, with the dominant detection approach being deep learning, particularly convolutional neural networks. Most studies rely on self-collected datasets that, due to privacy concerns, are not publicly shared, limiting reproducibility and generalizability. FER2013, collected in a generic context, and DAiSEE, collected in an online educational setting, are the most used public datasets. A key limitation is that most systems are not evaluated in real classrooms, revealing a gap between technological development, and educational application. Ethical considerations are often overlooked, with privacy, when addressed, being the main focus. 
Finally, the review’s findings highlight the need for stronger integration between education and technology through interdisciplinary collaboration and real-world validation.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100499"},"PeriodicalIF":0.0,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145739286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
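The review above reports that facial expression is the most used modality, that convolutional neural networks are the dominant detection approach, and that FER2013 (48×48 grayscale face crops labeled with seven basic emotions) is a common public dataset. As a purely illustrative sketch, not any system surveyed in the review, a minimal PyTorch classifier for FER2013-style input might look like the following (layer widths and the two-block depth are assumptions chosen for brevity):

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Minimal CNN for 48x48 grayscale faces, 7 basic-emotion classes."""

    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 48x48 -> 48x48
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # -> 24x24
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 12x12
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))  # logits, one per emotion

model = EmotionCNN()
batch = torch.randn(4, 1, 48, 48)  # four fake grayscale face crops
logits = model(batch)              # shape: (4, 7)
```

A real system of the kind the review surveys would add training on labeled data, face detection/cropping upstream, and, as the review notes, evaluation in actual classrooms rather than on benchmark images alone.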
Exploring AI-generated feedback in peer-discussion contexts: A mixed-methods study of essay writing in secondary classrooms 在同行讨论环境中探索人工智能生成的反馈:中学课堂论文写作的混合方法研究
Q1 Social Sciences Pub Date : 2025-11-18 DOI: 10.1016/j.caeai.2025.100504
Irina Engeness
This mixed-methods study investigates how an AI-powered writing tool providing automated feedback compares with peer-generated feedback in supporting secondary students' essay writing. Two research questions guided the study: (1) Did using an AI-powered tool with AI-generated feedback yield greater gains in writing quality from the first to the final draft compared with a standard editor with peer feedback? (2) How did students’ engagement in the writing process differ in the target (used AI-generated feedback) and comparison (used peer feedback) groups, as evidenced by teacher–student and peer–peer interactions and how were these patterns associated with their conceptual understanding of essay content?
Eighty-one ninth-grade students from six Norwegian classrooms participated, with three classes using the AI-powered Essay Assessment Technology (EAT) and three relying on peer feedback. Quantitative analyses of first and final drafts showed that both groups improved, but students using EAT achieved statistically significant gains in writing quality. However, moderate inter-rater reliability limits the strength of these findings.
Qualitative analysis of classroom video data revealed distinct engagement patterns. Students in the EAT group drew on AI-generated “covered” subthemes (ideas already present in their writing) and “suggested” subthemes (relevant ideas not yet included) to refine their essays, fostering more systematic discussions of essay content. In contrast, students in the peer-feedback group focused more on surface-level issues, such as spelling and word count, with less consistent attention to essay content.
These findings suggest that AI-generated feedback, when embedded in peer discussion and teacher-facilitated classrooms, can strengthen the development of students’ conceptual understanding of essay content. Analyses indicate that structured AI feedback supported greater gains compared with peer feedback alone. The study highlights the pedagogical potential of AI-powered tools as part of formative assessment practices, while underscoring the critical role of teacher facilitation and structured feedback in fostering deeper engagement with essay content.
Irina Engeness. "Exploring AI-generated feedback in peer-discussion contexts: A mixed-methods study of essay writing in secondary classrooms." Computers and Education Artificial Intelligence, vol. 9, Article 100504 (2025-11-18). DOI: 10.1016/j.caeai.2025.100504
Citations: 0
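The abstract above flags moderate inter-rater reliability as a limitation of the quantitative findings. A standard statistic for such agreement between two essay raters is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below (the ten draft scores and the 1–4 scale are invented for illustration, not taken from the study) computes it with only the standard library:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance from each rater's marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters scoring ten drafts on a 1-4 quality scale.
a = [1, 2, 2, 3, 3, 3, 4, 4, 2, 1]
b = [1, 2, 3, 3, 3, 2, 4, 3, 2, 1]
print(round(cohen_kappa(a, b), 3))  # → 0.589
```

A value near 0.59 sits in the "moderate" band of the commonly used Landis–Koch scale, which is the kind of reliability the abstract describes as limiting the strength of the writing-quality comparison.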
From intuition to action: Exploring teachers’ ethical awareness in the use of AI tools in education 从直觉到行动:探索教师在教育中使用人工智能工具的道德意识
Q1 Social Sciences Pub Date : 2025-11-18 DOI: 10.1016/j.caeai.2025.100502
Chun Sing Maxwell Ho , John Chi-Kin Lee
This study investigates how teachers perceive and understand ethical issues arising from the use of artificial intelligence (AI) in education, and identifies the factors influencing their ethical awareness. Grounded in the Social Intuitionist Model (SIM), which emphasizes intuitive and emotionally driven moral judgments, the research explores teachers’ ethical perceptions as shaped by classroom experiences (micro/meso contexts) and broader social, cultural, and institutional influences (macro contexts) surrounding AI integration. Using a case study approach, data were collected through semi-structured interviews with 26 teachers from primary and secondary schools, selected for their experience with AI integration. Thematic analysis revealed a four-pathway intuitive process: Functional-First, Triggered Ethical Awakening, Ethical Reflection and Reevaluation, and Ethical Adjustment. Teachers initially adopted AI for efficiency, but ethical awareness emerged through discomfort with issues such as student overreliance, biased content, and privacy breaches. Factors influencing awareness were categorized into individual (e.g., reflective disposition, technical understanding), interpersonal (e.g., peer dialogue), and school elements (e.g., workload, institutional support). The findings revealed that ethical awareness is dynamic and socially embedded, often initiated by emotional responses and reinforced through professional interactions. The study contributes original insights into the intuitive mechanisms of ethical recognition for teachers in the Chinese context. It underscores the need for structured ethical training, supportive school environments, and policy alignment to foster responsible AI use in education.
Chun Sing Maxwell Ho, John Chi-Kin Lee. "From intuition to action: Exploring teachers' ethical awareness in the use of AI tools in education." Computers and Education Artificial Intelligence, vol. 9, Article 100502 (2025-11-18). DOI: 10.1016/j.caeai.2025.100502
Citations: 0
Modeling the sustainability perspectives on personalized digital games for digital citizenship education: A PLS-SEM approach 数字化公民教育中个性化数字游戏的可持续性视角建模:PLS-SEM方法
Q1 Social Sciences Pub Date : 2025-11-17 DOI: 10.1016/j.caeai.2025.100498
Patcharin Panjaburee , Gwo-Jen Hwang , Ungsinun Intarakamhang , Niwat Srisawasdi
As digital citizenship becomes an essential educational priority in the digital age, there is a growing need for sustainable and engaging instructional designs that foster students' ethical and responsible use of technology. Addressing this gap, this study modeled the sustainability perspectives underlying personalized digital game-based learning through a partial least squares structural equation modeling (PLS-SEM) approach. A longitudinal repeated-measures design was conducted with 372 lower secondary students in Thailand, using fuzzy logic and decision tree algorithms to personalize ethical digital scenarios. The proposed model examined how pedagogical design, content quality, usability, behavioral decisions, and motivation shape students' perceptions of sustainability. Results indicated that sustained motivation at later learning stages was the strongest predictor of perceived sustainability, while pedagogical and experiential factors exerted significant indirect effects through motivational engagement. The analysis also confirmed the longitudinal influence of early motivational experiences on later engagement, emphasizing the importance of adaptive feedback and reflective learning processes. These findings advance understanding of how AI-driven personalization can promote sustainable digital citizenship learning by integrating adaptive pathways, culturally relevant content, and motivational scaffolds to support long-term behavioral change. Implications for educational design, pedagogy, and policy are discussed to guide the development of scalable AI-supported learning environments.
Patcharin Panjaburee, Gwo-Jen Hwang, Ungsinun Intarakamhang, Niwat Srisawasdi. "Modeling the sustainability perspectives on personalized digital games for digital citizenship education: A PLS-SEM approach." Computers and Education Artificial Intelligence, vol. 9, Article 100498 (2025-11-17). DOI: 10.1016/j.caeai.2025.100498
Citations: 0
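The study above personalizes ethical digital scenarios with fuzzy logic and decision tree algorithms. As a hedged sketch of the fuzzy-logic half only (the tier names, the 0–100 score scale, and the membership ranges below are invented for illustration and are not the study's parameters), triangular membership functions can map a learner's pre-assessment score to the scenario tier it fits best:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def scenario_level(score):
    """Pick the ethical-scenario tier with the highest fuzzy membership.

    The three overlapping sets below stand in for the kind of fuzzy
    partitioning of learner scores the study describes; real systems
    would tune these ranges and typically combine them with other
    inputs (e.g. decision-tree features).
    """
    memberships = {
        "introductory": triangular(score, -1, 0, 50),
        "intermediate": triangular(score, 25, 50, 75),
        "advanced": triangular(score, 50, 100, 101),
    }
    return max(memberships, key=memberships.get)

print(scenario_level(30))  # → introductory (0.4 vs 0.2 vs 0.0)
print(scenario_level(62))  # → intermediate
print(scenario_level(95))  # → advanced
```

The overlap between adjacent sets is the point of the fuzzy approach: a score of 40 has nonzero membership in two tiers, so the assignment degrades gracefully near the boundaries instead of switching abruptly as a crisp threshold would.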