
Latest publications in Computers and Education Artificial Intelligence

Conversational AI in children's home literacy learning: effectiveness, advantages, challenges, and family perception
Q1 Social Sciences Pub Date: 2026-01-24 DOI: 10.1016/j.caeai.2026.100549
Shuang Quan , Xintian Tu-Shea , Yi Ding , Yao Du , Qingxiao Zheng , Laney E. Gerdich
This study investigates the effectiveness, affordances, limitations, and family perceptions of conversational AI for home literacy learning compared with human instruction. We developed a large language model (LLM)-powered conversational AI system, named Vovo, to teach children vocabulary and co-construct stories using structured literacy pedagogy. The system was tested in home environments over six weeks with 10 families and their children aged 3–7 (M = 5.4). Across 150 learning sessions, Vovo delivered structured literacy instruction as effectively as parents, though children achieved higher learning outcomes when learning with parents. Video analysis revealed Vovo's advantages in pedagogical consistency, language modeling, and verbal socioemotional support, alongside challenges in speech recognition, instructional persistence, nonverbal social cues, and phoneme instruction. Parents perceived Vovo as intelligent, useful, and trustworthy, while expecting a multimodal design to improve engagement. Children perceived Vovo as smart and fun but still preferred learning with parents due to emotional bonding. As one of the first studies to embed structured literacy pedagogy into a home-based conversational AI system, this research contributes empirical insights into the evolving role of AI in home literacy environments. It also underscores the importance of socially responsive AI design in early education and calls for future designs that support parent-child-AI triadic interactions to optimize AI in home literacy learning.
Citations: 0
Artificial intelligence literacy at school: A systematic review with a focus on psychological foundations
Q1 Social Sciences Pub Date: 2026-01-21 DOI: 10.1016/j.caeai.2026.100551
Shuyan Feng, Astrid Carolus
Artificial Intelligence (AI) is significantly changing school education. The increasing prevalence of AI calls for a framework of AI-related literacy specifically tailored to the educational context. A growing body of research has attempted to conceptualise AI literacy (AIL) from different disciplinary perspectives and with different foci. This systematic review aims to provide a comprehensive overview of the definitions and psychological dimensions of AIL in school education by addressing the following questions: how is AIL defined and conceptualised, what are the dimensions of AIL, and which psychological dimensions are included. A total of 2642 records were identified from various databases, and 58 peer-reviewed articles were retrieved for this systematic review, which strictly followed the PRISMA guidelines. The findings propose different definitions of AIL for teachers, students, and other educational professionals, and identify dimensions that include cognitive, emotional, psychological, and behavioural constructs. In more detail, the review identifies six dimensions for teachers, such as contextual knowledge and continuous professional growth. For students, eight dimensions were identified, including AI-related thinking capacity and preparation for AI careers. Certain dimensions, such as AI knowledge and skills, AI ethics and societal implications, generative-AI-specific competency, and, most importantly, the psychological dimension consisting of cognitive and non-cognitive elements, were found to be shared across all target groups. Furthermore, personalisation and contextual adaptability emerged as additional key dimensions. In sum, the findings offer valuable insights for future research and practical guidance for decision-making in AI education, particularly in the areas of curriculum design, implementation, and assessment.
Citations: 0
Enhancing AI literacy for educators: Where to start and to what end?
Q1 Social Sciences Pub Date: 2026-01-21 DOI: 10.1016/j.caeai.2026.100550
Bo Pei , Jie Lu , Zhaowei Zhang , Priscilla Tuffour , Sanghoon Park
As AI has become an integral part of current teaching and learning practices, educators' capacities to use AI technologies effectively and responsibly are closely tied to the quality of instruction and student learning outcomes. To provide a comprehensive examination of how the relevant capacities can be cultivated, this study conducted a systematic literature review of AI literacy for educators. Informed by Bloom's Taxonomy, we investigated the existing research through a layered progression structure across five dimensions: definitions of educators' AI literacy, fundamental knowledge for understanding AI, AI educational practices, educators' perspectives on AI applications, and pedagogies for integrating AI. The findings of this study revealed three key dimensions (i.e., human-AI interactions, harnessing AI tools, and ethical and societal implications) of educators' AI literacy and highlighted the importance of the interrelationships among these dimensions. Furthermore, our study identified the fundamental knowledge that educators need to understand, the instructional scenarios in which AI applications are applied, the associated opportunities and challenges, and the pedagogical approaches that have been proposed to effectively scaffold educators' engagement with AI. Overall, this literature review underscores the multidimensional and context-relevant nature of AI literacy for educators, which develops from the interplay of multiple competencies within specific educational contexts. Finally, the study concludes by discussing implications for designing training and professional development programs that better prepare educators to navigate AI-driven educational environments.
Citations: 0
Large language models for education: An open-source paradigm for automated Q&A in the graduate classroom
Q1 Social Sciences Pub Date: 2026-01-14 DOI: 10.1016/j.caeai.2026.100546
Ryann M. Perez, Marie Shimogawa, Yanan Chang, Xinning Li, Hoang Anh T. Phan, Jason G. Marmorstein, Evan S.K. Yanagawa, E. James Petersson
Large Language Models (LLMs) offer scalable educational support but face barriers regarding accuracy, cost, and learning depth. To interrogate these limitations, we developed the Teaching Assistant for Specialized Knowledge (TAsk), a retrieval-augmented-generation-enabled, educator-curated pipeline. In this nine-week pilot study (N = 33 participants), we deployed TAsk in a graduate-level biological chemistry course. We compared TAsk against human expert teaching assistants (TAs) using a blinded review process and analyzed inquiry depth. We observed three major findings related to potential pedagogical decisions and educational theory. First, TAsk delivered effective feedback that was specific and adaptive, significantly outperforming expert TAs in overall correctness. However, human TAs remained superior in tailoring responses to course nuances. Second, behavioral analysis based on educational scaffolding frameworks, such as Bloom's Taxonomy and the Zone of Proximal Development (ZPD), identified a cognitive bypass risk: frequent users submitted significantly fewer higher-order queries than infrequent users. Third, benchmarking demonstrated that smaller models could approach frontier-model performance when optimized, suggesting that future costs for TAsk can be reduced significantly. Finally, we validated a confabulation detection algorithm, hypothesizing that it could help students calibrate trust in model outputs in future iterations of TAsk. Taken together, these contributions establish TAsk as a validated framework for higher education learning while highlighting the critical need for pedagogical scaffolding around LLMs.
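The abstract does not reproduce TAsk's implementation, but the retrieval-augmented generation (RAG) idea it describes, grounding an LLM's answer in educator-curated course material, can be sketched in a few lines. Everything below (the document set, function names, and the bag-of-words scoring) is an illustrative assumption, not TAsk's actual code:

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase word tokens with trailing punctuation stripped."""
    return [w.strip(".,?!").lower() for w in text.split()]

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Rank educator-curated snippets by similarity to the student query."""
    q = Counter(tokenize(query))
    return sorted(docs, key=lambda d: cosine(q, Counter(tokenize(d))), reverse=True)[:k]

def build_prompt(query, docs, k=2):
    """Assemble the grounded prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(query, docs, k))
    return f"Answer using only this course material:\n{context}\n\nQuestion: {query}"

# Hypothetical course snippets for a biological chemistry class.
course_docs = [
    "Enzymes lower the activation energy of biochemical reactions.",
    "Peptide bonds link amino acids into polypeptide chains.",
    "The citric acid cycle oxidizes acetyl-CoA to CO2.",
]
prompt = build_prompt("How do enzymes affect activation energy?", course_docs, k=1)
```

A production pipeline would replace the bag-of-words scorer with dense embeddings and pass `prompt` to a model API, but the grounding logic, which is what limits confabulation, is the same.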
Citations: 0
Generative AI in higher education: A bibliometric review of emerging trends, power dynamics, and global research landscapes
Q1 Social Sciences Pub Date: 2026-01-09 DOI: 10.1016/j.caeai.2026.100544
Kun Dai , Yabing Liu , Xiaofan Zhang
The rapid evolution of Generative Artificial Intelligence (GenAI) is reshaping higher education (HE), offering transformative opportunities for academic engagement while posing significant challenges to academic integrity, ethical frameworks, and global research power dynamics. This study maps the recent (2022–2025) research landscape of GenAI in HE through a bibliometric analysis of 2762 articles from the Web of Science Core Collection. Employing multipolarity as an analytical lens, this study examines the power dynamics within this research domain as reflected in publication records from different countries (or regions). Findings highlight surging global interest in GenAI in HE, with contributions led by the US, China, and the UK, alongside rising participation from non-Western scholars and institutions. By identifying the major topics, this study uncovers a more nuanced trajectory of GenAI-related discourse in HE. By examining publication status, contributors, and research topics, this study provides insights for stakeholders navigating the complexities of GenAI integration into HE and suggests trajectories for future research in this rapidly evolving field.
Citations: 0
LLM sentiment quantification reveals selective alignment with human course-evaluation raters
Q1 Social Sciences Pub Date: 2026-01-06 DOI: 10.1016/j.caeai.2026.100545
Joyce W. Lacy , Chi Nnoka , Zachary Jock , Cathleen Morreale
Student course evaluations contain rich qualitative feedback in the form of comments written in response to open-ended questions. However, this qualitative data, which may be more nuanced and detailed than quantitative ratings, often goes unexamined in both administrative and research settings due to the labor-intensive nature of manual analysis. We investigate whether large language models (LLMs), including BERT, RoBERTa, and OpenAI model variants, can accurately replicate human judgments of sentiment in these comments. We compare masked and generative language models, using both naïve and fine-tuned approaches, to analyze a curated dataset of 1000 de-identified course evaluation responses. Results show that some artificial intelligence (AI) models can approach human inter-rater reliability remarkably well, and quickly, even with limited tuning or training data. However, performance varied, and not all models were able to produce a reliable sentiment analysis, even after training. This has implications for future avenues of qualitative data analysis within course evaluations, as well as for the large repositories of course evaluations held at institutions of higher education. Importantly, care should be taken when selecting an AI model, as this decision has ramifications for the reliability and validity of the generated output.
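The abstract frames model quality in terms of inter-rater reliability with human coders. A standard statistic for this is Cohen's kappa, which corrects raw agreement for chance. A minimal self-contained sketch, where the label lists are invented for illustration and the paper's actual data and metric choice may differ:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same label independently.
    expected = sum(freq_a[l] * freq_b[l] for l in set(freq_a) | set(freq_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical sentiment labels for six course-evaluation comments.
human = ["pos", "neg", "pos", "neu", "pos", "neg"]
model = ["pos", "neg", "pos", "pos", "pos", "neg"]
kappa = cohens_kappa(human, model)  # 0.7 on this toy data
```

Values near 1 indicate near-human consistency while values near 0 indicate chance-level labeling; scikit-learn's `cohen_kappa_score` computes the same quantity for larger datasets.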
Citations: 0
The AI literacy heptagon: A structured approach to AI literacy in higher education
Q1 Social Sciences Pub Date: 2026-01-05 DOI: 10.1016/j.caeai.2026.100540
Veronika Hackl , Alexandra Elena Müller , Maximilian Sailer
This integrative literature review addresses the conceptualization and implementation of AI Literacy (AIL) in Higher Education (HE) by examining recent research literature. Through an analysis of publications (2021–2024), we explore (1) how AIL is defined and conceptualized in current research, particularly in HE, and how it can be delineated from related concepts such as Data Literacy, Media Literacy, and Computational Literacy; (2) how various definitions can be synthesized into a comprehensive working definition; and (3) how scientific insights can be effectively translated into educational practice. Our analysis identifies seven central dimensions of AIL: technical, applicational, critical thinking, ethical, social, integrational, and legal. These are synthesized in the AI Literacy Heptagon, deepening conceptual understanding and supporting the structured development of AIL in HE. The study aims to bridge the gap between theoretical AIL conceptualizations and practical implementation in academic curricula.
Citations: 0
Modeling generative AI adoption in higher education: An integrated TAM–TPB–SDT framework with SEM validation
Q1 Social Sciences Pub Date : 2026-01-03 DOI: 10.1016/j.caeai.2026.100541
Dina Tbaishat , Omar AlFandi , Faten Hamad , Syed Muhammad Salman Bukhari , Suha Al Muhaissen
This study investigates the determinants of university students' adoption of generative artificial intelligence (GAI) tools in higher education. Integrating the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and Self-Determination Theory (SDT), it develops and tests a complete model that captures cognitive, social, and motivational influences on adoption. A cross-sectional survey was conducted among 517 undergraduate and postgraduate students at Jordanian universities. The data were analyzed using structural equation modeling (SEM) with a two-step approach: confirmatory factor analysis (CFA) to validate the measurement model, followed by SEM to test the hypothesized structural relationships. Reliability, validity, measurement invariance across gender, and mediation effects were assessed. The integrated model showed excellent fit and substantial explanatory power, accounting for 83 % of the variance in behavioral intention and 81.6 % in actual AI use. Relatedness, perceived usefulness, attitude, and autonomy emerged as significant predictors of intention, while behavioral intention and competence predicted actual use. The ease of use strongly influenced usefulness, and mediation analysis confirmed indirect effects through usefulness and attitude. The model was invariant across gender groups, supporting its generalizability. This research extends TAM and TPB by integrating SDT's psychological needs, highlighting relatedness and competence as novel drivers of adoption. It provides the first empirical evidence from Jordan, a region underrepresented in the literature, highlighting that motivational dynamics carry greater weight than social norms in collectivist educational contexts. The study advances theoretical models of technology adoption and offers practical insights for universities and policymakers on promoting responsible and sustainable integration of AI in education.
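The mediation logic the abstract describes (indirect effects of ease of use on intention through usefulness and attitude) can be illustrated with a minimal sketch. A full CFA/SEM analysis requires a dedicated package, so the following uses plain numpy on synthetic data; the variable roles (X = ease of use, M = perceived usefulness, Y = behavioral intention), effect sizes, and error scales are all hypothetical, not the study's data.

```python
import numpy as np

# Synthetic data mimicking a simple X -> M -> Y mediation chain.
rng = np.random.default_rng(42)
n = 517  # sample size borrowed from the study for realism
X = rng.normal(size=n)                                   # ease of use
M = 0.6 * X + rng.normal(scale=0.8, size=n)              # usefulness (a-path)
Y = 0.5 * M + 0.2 * X + rng.normal(scale=0.8, size=n)    # intention (b-path + direct)

def ols_slope(x, y):
    """Least-squares slope of y on x, with an intercept column."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[1]

a = ols_slope(X, M)                       # a-path: X -> M
# b-path and direct effect: regress Y on M and X jointly
A = np.column_stack([np.ones(n), M, X])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
b, c_prime = coef[1], coef[2]

indirect = a * b                          # mediated effect of X on Y via M
total = ols_slope(X, Y)                   # total effect of X on Y
print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}, "
      f"direct={c_prime:.2f}, total={total:.2f}")
```

With OLS estimates in linear models, the total effect decomposes exactly as direct plus indirect (total = c' + a·b), which is the identity mediation analyses of this kind rest on.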
{"title":"Modeling generative AI adoption in higher education: An integrated TAM–TPB–SDT framework with SEM validation","authors":"Dina Tbaishat ,&nbsp;Omar AlFandi ,&nbsp;Faten Hamad ,&nbsp;Syed Muhammad Salman Bukhari ,&nbsp;Suha Al Muhaissen","doi":"10.1016/j.caeai.2026.100541","DOIUrl":"10.1016/j.caeai.2026.100541","url":null,"abstract":"<div><div>This study investigates the determinants of university students' adoption of generative artificial intelligence (GAI) tools in higher education. Integrating the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and Self-Determination Theory (SDT), it develops and tests a complete model that captures cognitive, social, and motivational influences on adoption. A cross-sectional survey was conducted among 517 undergraduate and postgraduate students at Jordanian universities. The data were analyzed using structural equation modeling (SEM) with a two-step approach: confirmatory factor analysis (CFA) to validate the measurement model, followed by SEM to test the hypothesized structural relationships. Reliability, validity, measurement invariance across gender, and mediation effects were assessed. The integrated model showed excellent fit and substantial explanatory power, accounting for 83 % of the variance in behavioral intention and 81.6 % in actual AI use. Relatedness, perceived usefulness, attitude, and autonomy emerged as significant predictors of intention, while behavioral intention and competence predicted actual use. The ease of use strongly influenced usefulness, and mediation analysis confirmed indirect effects through usefulness and attitude. The model was invariant across gender groups, supporting its generalizability. This research extends TAM and TPB by integrating SDT's psychological needs, highlighting relatedness and competence as novel drivers of adoption. 
It provides the first empirical evidence from Jordan, a region underrepresented in the literature, highlighting that motivational dynamics carry greater weight than social norms in collectivist educational contexts. The study advances theoretical models of technology adoption and offers practical insights for universities and policymakers on promoting responsible and sustainable integration of AI in education.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100541"},"PeriodicalIF":0.0,"publicationDate":"2026-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Empowering university teachers in higher education: A generative AI-responsive competency framework
Q1 Social Sciences Pub Date : 2026-01-03 DOI: 10.1016/j.caeai.2026.100542
Daner Sun , Shen Ba , Yingying Cha , Jiahui Yu , Feng-Kuang Chiang , Hai Min Dai , Cher-Ping Lim
The integration of generative artificial intelligence (GenAI) into higher education necessitates a reconceptualization of teacher competencies, moving beyond technical proficiency to encompass pedagogical strategies for fostering critical, ethical, and developmentally appropriate student-AI collaboration. Existing competency frameworks, however, exhibit notable limitations in equipping university teachers with actionable guidance for designing GenAI-mediated learning experiences that cultivate their students' higher-order thinking and subject knowledge. In response, this paper develops and proposes a GenAI-responsive competency framework for university teachers to supplement existing frameworks and address areas that are not sufficiently covered or elaborated. Developed through a systematic analysis of digital and AI-related competency frameworks, the proposed model is grounded in constructivist learning theory, sociological perspectives, and student-centered pedagogy. Its theoretical and practical robustness was further refined through iterative expert review and consultation. The resulting framework comprises four core dimensions: GenAI Literacy, Curriculum/Learning Design, Teaching and Learning, and Assessment. Each dimension is articulated through a dual perspective: teachers' own proficiency and their capacity to foster students' critical engagement with GenAI. Competency progression is structured across three developmental levels: Basic, Intermediate, and Advanced, representing a continuum from technical awareness to guided application, and ultimately to critical and creative integration. The proposed framework supports teachers' ongoing professional growth and enhances their ability to facilitate student autonomy, ethical reasoning, and collaborative engagement with GenAI. It provides a structured yet flexible tool for self-assessment, instructional design, and targeted professional development in higher education, thereby advancing the discourse on effective and responsible human-AI collaboration.
{"title":"Empowering university teachers in higher education: A generative AI-responsive competency framework","authors":"Daner Sun ,&nbsp;Shen Ba ,&nbsp;Yingying Cha ,&nbsp;Jiahui Yu ,&nbsp;Feng-Kuang Chiang ,&nbsp;Hai Min Dai ,&nbsp;Cher-Ping Lim","doi":"10.1016/j.caeai.2026.100542","DOIUrl":"10.1016/j.caeai.2026.100542","url":null,"abstract":"<div><div>The integration of generative artificial intelligence (GenAI) into higher education necessitates a reconceptualization of teacher competencies, moving beyond technical proficiency to encompass pedagogical strategies for fostering critical, ethical, and developmentally appropriate student-AI collaboration. Existing competency frameworks, however, exhibit notable limitations in equipping university teachers with actionable guidance for designing GenAI-mediated learning experiences that cultivate their students’ higher-order thinking and subject knowledge. In response, this paper develops and proposes a GenAI-responsive competency framework for university teachers to supplement existing frameworks and address areas that are not sufficiently covered or elaborated. Developed through a systematic analysis of digital and AI-related competency frameworks, the proposed model is grounded in constructivist learning theory, sociological perspectives, and student-centered pedagogy. Its theoretical and practical robustness was further refined through iterative expert review and consultation. The resulting framework comprises four core dimensions: <em>GenAI Literacy</em>, <em>Curriculum/Learning Design</em>, <em>Teaching and Learning</em>, and <em>Assessment</em>. Each dimension is articulated through a dual perspective: teachers’ own proficiency and their capacity to foster students’ critical engagement with GenAI. 
Competency progression is structured across three developmental levels: <em>Basic</em>, <em>Intermediate</em>, and <em>Advanced</em>, representing a continuum from technical awareness to guided application, and ultimately to critical and creative integration. The proposed framework supports teachers’ ongoing professional growth and enhances their ability to facilitate student autonomy, ethical reasoning, and collaborative engagement with GenAI. It provides a structured yet flexible tool for self-assessment, instructional design, and targeted professional development in higher education, thereby advancing the discourse on effective and responsible human-AI collaboration.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100542"},"PeriodicalIF":0.0,"publicationDate":"2026-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Less stress, better scores, same learning: The dissociation of performance and learning in AI-supported programming education
Q1 Social Sciences Pub Date : 2025-12-24 DOI: 10.1016/j.caeai.2025.100537
Patrick Bassner, Ben Lenk-Ostendorf, Ramona Beinstingel, Tobias Wasner, Stephan Krusche

Introduction

Generative AI is reshaping programming education, yet its effects on conceptual learning, intrinsic motivation, and cognitive load remain unclear. This study tests whether assistance deepens understanding or primarily boosts task completion, and how scaffolded versus answer-giving designs matter.

Objectives

This study compares performance, learning, cognitive load, frustration, and motivation across three AI support types, and examines students’ perceptions.

Methods

A three-arm randomized controlled trial was conducted in an introductory programming (CS1) course at TUM (N=275). Participants completed a 90-minute exercise on concurrency, implementing a parallel sum with threading in one of three conditions: (1) Iris, a scaffolded tutor providing calibrated hints while withholding full solutions; (2) ChatGPT, unrestricted assistance that can provide complete solutions; (3) no-AI control using traditional web resources. Pre- and post-knowledge tests and a code comprehension task measured learning, while auto-graded test coverage measured performance. Validated scales captured intrinsic, germane, and extraneous cognitive load, frustration, and intrinsic motivation.
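A standard way to compare pre/post knowledge gains across the three arms of such a trial is a one-way ANOVA on the gain scores. The sketch below computes the F statistic from first principles on synthetic data; the group sizes (summing to the study's N = 275) and gain distributions are illustrative assumptions, not the study's data.

```python
import numpy as np

# Synthetic gain scores (post minus pre) for three hypothetical arms.
rng = np.random.default_rng(7)
gains = {
    "iris":    rng.normal(loc=1.0, scale=2.0, size=92),
    "chatgpt": rng.normal(loc=1.0, scale=2.0, size=92),
    "control": rng.normal(loc=1.0, scale=2.0, size=91),
}

def one_way_anova_F(groups):
    """F = between-group mean square / within-group mean square."""
    all_vals = np.concatenate(list(groups.values()))
    grand = all_vals.mean()
    k = len(groups)                       # number of groups
    n = all_vals.size                     # total sample size
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups.values())
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
    return (ss_between / (k - 1)) / (ss_within / (n - k))

F = one_way_anova_F(gains)
dof_within = sum(g.size for g in gains.values()) - len(gains)
print(f"F(2, {dof_within}) = {F:.2f}")
```

With equal population means, as simulated here, F fluctuates around 1; a null result like the one reported (no differential learning gains) corresponds to an F near that range rather than in the rejection region.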

Results

Both AI groups achieved substantially higher exercise scores than the control group, with distinct distributions: ChatGPT users clustered at high scores, control participants at low scores, and Iris users spread across the full range. Despite these performance gains, neither AI condition produced greater pre–post knowledge gains or code-comprehension advantages. Both AI groups reported lower frustration and reduced extraneous and germane load than the control group, while intrinsic load did not differ. Only Iris increased intrinsic motivation. Students rated ChatGPT as easier to use and more helpful.

Conclusion

In this setting, generative AI acted primarily as a performance aid rather than a learning enhancer. Scaffolded, hint-first design preserved motivational benefits, whereas AI providing unrestricted solutions encouraged a “comfort trap” where students’ preferences misaligned with pedagogical effectiveness. These findings motivate scaffolded AI integration and assessment designs resilient to environments where performance no longer reliably tracks understanding.
{"title":"Less stress, better scores, same learning: The dissociation of performance and learning in AI-supported programming education","authors":"Patrick Bassner,&nbsp;Ben Lenk-Ostendorf,&nbsp;Ramona Beinstingel,&nbsp;Tobias Wasner,&nbsp;Stephan Krusche","doi":"10.1016/j.caeai.2025.100537","DOIUrl":"10.1016/j.caeai.2025.100537","url":null,"abstract":"<div><h3>Introduction</h3><div>Generative AI is reshaping programming education, yet its effects on conceptual learning, intrinsic motivation, and cognitive load remain unclear. This study tests whether assistance deepens understanding or primarily boosts task completion, and how scaffolded versus answer-giving designs matter.</div></div><div><h3>Objectives</h3><div>This study compares performance, learning, cognitive load, frustration, and motivation across three AI support types, and examines students’ perceptions.</div></div><div><h3>Methods</h3><div>A three-arm randomized controlled trial was conducted in an introductory programming (CS1) course at TUM (N=275). Participants completed a 90-minute exercise on concurrency, implementing a parallel sum with threading in one of three conditions: (1) <em>Iris</em>, a scaffolded tutor providing calibrated hints while withholding full solutions; (2) <em>ChatGPT</em>, unrestricted assistance that can provide complete solutions; (3) no-AI control using traditional web resources. Pre- and post-knowledge tests and a code comprehension task measured learning, while auto-graded test coverage measured performance. Validated scales captured intrinsic, germane, and extraneous cognitive load, frustration, and intrinsic motivation.</div></div><div><h3>Results</h3><div>Both AI groups achieved substantially higher exercise scores than the control group, with distinct distributions: <em>ChatGPT</em> users clustered at high scores, control participants at low scores, and <em>Iris</em> users spread across the full range. 
Despite these performance gains, neither AI condition produced greater pre–post knowledge gains or code-comprehension advantages. Both AI groups reported lower frustration and reduced extraneous and germane load than the control group, while intrinsic load did not differ. Only <em>Iris</em> increased intrinsic motivation. Students rated <em>ChatGPT</em> as easier to use and more helpful.</div></div><div><h3>Conclusion</h3><div>In this setting, generative AI acted primarily as a performance aid rather than a learning enhancer. Scaffolded, hint-first design preserved motivational benefits, whereas AI providing unrestricted solutions encouraged a “comfort trap” where students’ preferences misaligned with pedagogical effectiveness. These findings motivate scaffolded AI integration and assessment designs resilient to environments where performance no longer reliably tracks understanding.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100537"},"PeriodicalIF":0.0,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146022835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0