Conversational AI in children's home literacy learning: effectiveness, advantages, challenges, and family perception
Pub Date: 2026-01-24 | DOI: 10.1016/j.caeai.2026.100549
Shuang Quan , Xintian Tu-Shea , Yi Ding , Yao Du , Qingxiao Zheng , Laney E. Gerdich
This study investigates the effectiveness, affordances, limitations, and family perceptions of conversational AI for home literacy learning compared with parent-led instruction. We developed a large language model (LLM)-powered conversational AI system, named Vovo, to teach children vocabulary and co-construct stories using structured literacy pedagogy. The system was tested in home environments over six weeks with 10 families and their children aged 3–7 (M = 5.4). Across 150 learning sessions, Vovo delivered structured literacy instruction as effectively as parents, though children achieved higher learning outcomes when learning with parents. Video analysis revealed Vovo's advantages in pedagogical consistency, language modeling, and verbal socioemotional support, alongside challenges in speech recognition, instructional persistence, nonverbal social cues, and phoneme instruction. Parents perceived Vovo as intelligent, useful, and trustworthy, while expecting a multimodal design to improve engagement. Children perceived Vovo as smart and fun but still preferred learning with parents because of emotional bonding. As one of the first studies to embed structured literacy pedagogy into a home-based conversational AI system, this research contributes empirical insights into the evolving role of AI in home literacy environments. It also underscores the importance of socially responsive AI design in early education and calls for future designs that support parent-child-AI triadic interactions to optimize AI in home literacy learning.
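The abstract does not describe Vovo's implementation; as a rough illustration of the kind of turn-taking loop an LLM-powered vocabulary tutor might use, the sketch below pairs a structured-literacy-style system prompt with the OpenAI chat API. The model name, prompt wording, and example word are assumptions, not the authors' design.

```python
# Minimal sketch of an LLM-driven vocabulary tutoring loop in the spirit of Vovo.
# All prompt wording, the model name, and the word list are illustrative
# assumptions; the paper does not publish Vovo's implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a friendly literacy tutor for a child aged 3-7. "
    "Follow a structured sequence for each target word: "
    "1) say the word and a child-friendly definition, "
    "2) use it in a short example sentence, "
    "3) ask the child to use it in their own sentence, "
    "4) give brief, encouraging corrective feedback. "
    "Keep every turn under three sentences."
)

def tutor_turn(history: list[dict], child_utterance: str) -> str:
    """Append the child's (speech-recognized) utterance and return the tutor's reply."""
    history.append({"role": "user", "content": child_utterance})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
        temperature=0.7,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Example session for one target word
history: list[dict] = []
print(tutor_turn(history, "Let's learn the word 'enormous'."))
print(tutor_turn(history, "The elephant is enormous!"))
```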
{"title":"Conversational AI in children's home literacy learning: effectiveness, advantages, challenges, and family perception","authors":"Shuang Quan , Xintian Tu-Shea , Yi Ding , Yao Du , Qingxiao Zheng , Laney E. Gerdich","doi":"10.1016/j.caeai.2026.100549","DOIUrl":"10.1016/j.caeai.2026.100549","url":null,"abstract":"<div><div>This study investigates the effectiveness, affordances, limitations, and family perceptions of conversational AI for home literacy learning vs. human. We developed a large language model (LLM)-powered conversational AI system, named Vovo, to teach children vocabulary and co-construct stories using structured literacy pedagogy. The system was tested in home environments over six weeks with 10 families and their children aged 3–7 (<em>M</em> = 5.4). Across 150 learning sessions, Vovo delivered structured literacy instruction as effectively as parents, though children achieved higher learning outcomes when learning with parents. Video analysis revealed Vovo's advantages in pedagogical consistency, language modeling, and verbal socioemotional support, while facing challenges in speech recognition, instructional persistence, nonverbal social cues, and phoneme instruction. Parents perceived Vovo as intelligent, useful, and trustworthy, while expecting a multimodal design to improve engagement. Children perceived Vovo smart and fun but still preferred learning with parents due to emotional bonding. As one of the first studies to embed structured literacy pedagogy into home-based conversational AI system, this research contributes empirical insights into the evolving role of AI in home literacy environments. It also underscores the socially responsive AI design in early education and calls for future design that support parent-child-AI triadic interactions to optimize AI in home literacy learning.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100549"},"PeriodicalIF":0.0,"publicationDate":"2026-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial intelligence literacy at school: A systematic review with a focus on psychological foundations
Pub Date: 2026-01-21 | DOI: 10.1016/j.caeai.2026.100551
Shuyan Feng, Astrid Carolus
Artificial Intelligence (AI) is significantly changing school education. The increasing prevalence of AI calls for a framework of AI-related literacy specifically tailored to the educational context. A growing body of research has attempted to conceptualise AI literacy (AIL) from different disciplinary perspectives and with different foci. This systematic review aims to provide a comprehensive overview of definitions and psychological dimensions of AIL in school education by addressing the following questions: how is AIL defined and conceptualised, what are the dimensions of AIL, and which psychological dimensions are included. A total of 2642 records were identified from various databases, and 58 peer-reviewed articles were retained for this systematic review, which strictly followed the PRISMA guidelines. The review proposes distinct definitions of AIL for teachers, students, and other educational professionals, and identifies dimensions that include cognitive, emotional, psychological, and behavioural constructs. In more detail, it identifies six dimensions for teachers, such as contextual knowledge and continuous professional growth. For students, eight dimensions were identified, including AI-related thinking capacity and preparation for AI careers. Certain dimensions, such as AI knowledge and skills, AI ethics and societal implications, generative AI-specific competency, and, most importantly, the psychological dimension consisting of cognitive and non-cognitive elements, were found to be shared across all target groups. Furthermore, personalisation and contextual adaptability emerged as additional key dimensions. In sum, the findings offer valuable insights for future research and practical guidance for decision-making in AI education, particularly in the areas of curriculum design, implementation, and assessment.
{"title":"Artificial intelligence literacy at school: A systematic review with a focus on psychological foundations","authors":"Shuyan Feng, Astrid Carolus","doi":"10.1016/j.caeai.2026.100551","DOIUrl":"10.1016/j.caeai.2026.100551","url":null,"abstract":"<div><div>Artificial Intelligence (AI) is significantly changing school education. The increasing prevalence of AI calls for a framework of AI-related literacy specifically tailored to the educational context. A growing body of research has attempted to conceptualise AI literacy (AIL) from different disciplinary perspectives and with different foci. This systematic review aims to provide a comprehensive overview of definitions and psychological dimensions of AIL in school education by addressing the following questions: how is AIL defined and conceptualised, what are the dimensions of AIL, and what psychological dimensions are included. A total of 2642 records were identified from various databases, and 58 peer-reviewed articles were retrieved for this systematic review, which strictly followed the PRISMA guidelines. The findings propose different definitions of AIL for teachers, students and other educational professionals, and identifies dimensions that include cognitive, emotional, psychological, and behavioural constructs. More detailed, the review identifies six dimensions for teachers, such as contextual knowledge and continuous professional growth. For students, eight dimensions were identified, including AI-related thinking capacity and preparation for AI careers. Certain dimensions, such as AI knowledge and skills, AI ethics and societal implications, generative AI-specific competency, and most importantly, the psychological dimension consisting of cognitive and non-cognitive elements, were found to be shared across all target groups. Furthermore, personalisation and contextual adaptability emerged as additional key dimensions. In sum, the findings offer valuable insights for future research and practical guidance for decision-making in AI education, particularly in the areas of curriculum design, implementation, and assessment.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100551"},"PeriodicalIF":0.0,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146022839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing AI literacy for educators: Where to start and to what end?
Pub Date: 2026-01-21 | DOI: 10.1016/j.caeai.2026.100550
Bo Pei , Jie Lu , Zhaowei Zhang , Priscilla Tuffour , Sanghoon Park
As AI has become an integral part of current teaching and learning practices, educators' capacities to use AI technologies effectively and responsibly are closely tied to the quality of instruction and student learning outcomes. To provide a comprehensive examination of how these capacities can be cultivated, this study conducted a systematic literature review of AI literacy for educators. Informed by Bloom's Taxonomy, we investigated the existing research through a layered progression structure spanning five dimensions: definitions of educators' AI literacy, fundamental knowledge for understanding AI, AI educational practices, educators' perspectives of AI applications, and pedagogies of integrating AI. The findings of this study revealed three key dimensions (i.e., human-AI interactions, harnessing AI tools, and ethical and societal implications) of educators' AI literacy and highlighted the importance of interrelationships among these dimensions. Furthermore, our study identified the fundamental knowledge that educators need to understand, the instructional scenarios in which AI applications are applied, the associated opportunities and challenges, and the pedagogical approaches that have been proposed to effectively scaffold educators' engagement with AI. Overall, this literature review underscores the multidimensional and context-dependent nature of AI literacy for educators, which develops from the interplay of multiple competencies within specific educational contexts. Finally, the study concludes by discussing actionable implications for designing training and professional development programs that better prepare educators to navigate AI-driven educational environments.
{"title":"Enhancing AI literacy for educators: Where to start and to what end?","authors":"Bo Pei , Jie Lu , Zhaowei Zhang , Priscilla Tuffour , Sanghoon Park","doi":"10.1016/j.caeai.2026.100550","DOIUrl":"10.1016/j.caeai.2026.100550","url":null,"abstract":"<div><div>As AI has become an integral part in current teaching and learning practices, educators' capacities to use AI technologies effectively and responsibly are closely tied to the quality of instruction and student learning outcomes. To provide a comprehensive examination for cultivating the relevant capacities, this study conducted a systematic literature review about AI literacy for educators. Informed by Bloom's Taxonomy, we investigated the existing research with a layered progression structure from five dimensions: definitions of educators' AI literacy, fundamental knowledge for understanding of AI, AI educational practices, educators' perspectives of AI applications, and pedagogies of integrating AI. The findings of this study revealed three key dimensions (i.e., human-AI interactions, harnessing AI tools, and ethical and societal implications) of educators' AI literacy and highlighted the importance of interrelationships among these dimensions. Furthermore, our study identified the fundamental knowledge that educators need to understand, the instructional scenarios in which the AI applications are applied, the associated opportunities and challenges, and the pedagogical approaches that have been proposed to effectively scaffold educators' engagement with AI. Overall, this literature review underscores the multidimensional and context-relevance nature of AI literacy for educators, developing from the interplay of multiple competencies within specific educational contexts. Finally, the study concludes by discussing implications for providing actionable insights to design training and professional development programs that better prepare educators to navigate the AI-driven educational environments.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100550"},"PeriodicalIF":0.0,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146022838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large language models for education: An open-source paradigm for automated Q&A in the graduate classroom
Pub Date: 2026-01-14 | DOI: 10.1016/j.caeai.2026.100546
Ryann M. Perez, Marie Shimogawa, Yanan Chang, Xinning Li, Hoang Anh T. Phan, Jason G. Marmorstein, Evan S.K. Yanagawa, E. James Petersson
Large Language Models (LLMs) offer scalable educational support but face barriers regarding accuracy, cost, and learning depth. To interrogate these limitations, we developed the Teaching Assistant for Specialized Knowledge (TAsk), a retrieval-augmented generation (RAG)-enabled, educator-curated pipeline. In this nine-week pilot study (N = 33 participants), we deployed TAsk in a graduate-level biological chemistry course. We compared TAsk against human expert teaching assistants (TAs) using a blinded review process and analyzed inquiry depth. We observed three major findings related to potential pedagogical decisions and educational theory. First, TAsk delivered effective feedback that was specific and adaptive, significantly outperforming expert TAs in overall correctness. However, human TAs remained superior in tailoring responses to course nuances. Second, behavioral analysis based on educational scaffolding frameworks, such as Bloom's Taxonomy and the Zone of Proximal Development (ZPD), identified a cognitive bypass risk: frequent users submitted significantly fewer higher-order queries than infrequent users. Third, benchmarking demonstrated that smaller models could approach frontier-model performance when optimized, suggesting that the cost of running TAsk can be reduced significantly in future deployments. Finally, we validated a confabulation detection algorithm, hypothesizing that this algorithm could help students calibrate trust in model outputs in future iterations of TAsk. Taken together, these contributions establish TAsk as a validated framework for higher education learning while highlighting the critical need for pedagogical scaffolding for LLMs.
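The abstract names TAsk as a retrieval-augmented, educator-curated pipeline but gives no implementation details. The sketch below shows the generic RAG pattern such a pipeline could follow, using sentence-transformers for retrieval and the OpenAI client for generation; the corpus snippets, model choices, and prompt are illustrative assumptions, not the authors' implementation.

```python
# Generic retrieval-augmented generation (RAG) sketch for a course Q&A assistant.
# Corpus, model names, and prompt are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer
from openai import OpenAI

# Educator-curated course materials (placeholder snippets)
corpus = [
    "Thioamide substitutions can quench fluorophores via photoinduced electron transfer.",
    "Native chemical ligation joins peptide fragments through a C-terminal thioester.",
    "Unnatural amino acids are incorporated using orthogonal tRNA/synthetase pairs.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_vecs = embedder.encode(corpus, normalize_embeddings=True)

def answer(question: str, k: int = 2) -> str:
    """Retrieve the k most similar snippets and ground the LLM answer in them."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    top_idx = np.argsort(corpus_vecs @ q_vec)[::-1][:k]
    context = "\n".join(corpus[i] for i in top_idx)
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; a smaller open model could be swapped in
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the provided course context; "
                        "if the context is insufficient, say so."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("How do thioamides affect fluorescence?"))
```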
{"title":"Large language models for education: An open-source paradigm for automated Q&A in the graduate classroom","authors":"Ryann M. Perez, Marie Shimogawa, Yanan Chang, Xinning Li, Hoang Anh T. Phan, Jason G. Marmorstein, Evan S.K. Yanagawa, E. James Petersson","doi":"10.1016/j.caeai.2026.100546","DOIUrl":"10.1016/j.caeai.2026.100546","url":null,"abstract":"<div><div>Large Language Models (LLMs) offer scalable educational support, but face barriers regarding accuracy, cost, and learning depth. To interrogate these limitations, we developed the Teaching Assistant for Specialized Knowledge (TAsk), a retrieval-augmented generation enabled and educator curated pipeline. In this nine-week pilot study (N = 33 participants), we deployed TAsk in a graduate-level biological chemistry course. We compared TAsk against human expert teaching assistants (TAs) using blinded review process and analyzed inquiry depth. We observed three major findings related to potential pedagogical decisions and educational theory. First, TAsk delivered effective feedback that was specific and adaptive as it significantly outperformed expert TAs in overall correctness. However, human TAs remained superior in tailoring responses to course nuances. Second, behavioral analysis based on educational scaffolding techniques, such as Bloom's Taxonomy and the Zone of Proximal Development (ZPD), identified a cognitive bypass risk where frequent users submitted significantly fewer higher-order queries compared to infrequent users. Third, benchmarking demonstrated that smaller models could approach frontier model performance when optimized, suggesting future costs can be reduced significantly for TAsk in the pilot study. Finally, we validated a confabulation detection algorithm, hypothesizing that this algorithm could help students calibrate trust in model outputs in future iterations of TAsk. Taken together, these contributions establish TAsk as a validated framework for higher education learning while highlighting the critical need for pedagogical scaffolding for LLMs.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100546"},"PeriodicalIF":0.0,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146022836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generative AI in higher education: A bibliometric review of emerging trends, power dynamics, and global research landscapes
Pub Date: 2026-01-09 | DOI: 10.1016/j.caeai.2026.100544
Kun Dai , Yabing Liu , Xiaofan Zhang
The rapid evolution of Generative Artificial Intelligence (GenAI) is reshaping higher education (HE), offering transformative opportunities for academic engagement while posing significant challenges to academic integrity, ethical frameworks, and global research power dynamics. This study maps the recent (2022–2025) research landscape of GenAI in HE through a bibliometric analysis of 2762 articles from the Web of Science Core Collection. Employing multipolarity as an analytical lens, it examines the power dynamics within this research domain as reflected in publication records from different countries (or regions). Findings highlight surging global interest in GenAI in HE, with contributions led by the US, China, and the UK, alongside rising participation from non-Western scholars and institutions. By identifying the major topics, this study uncovers a more nuanced trajectory of GenAI-related discourse in HE. By examining publication status, contributors, and research topics, it provides insights for stakeholders navigating the complexities of GenAI integration into HE and suggests trajectories for future research in this rapidly evolving field.
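As one concrete illustration of a basic bibliometric step implied by the abstract, the sketch below counts publications by country and year from a tabular export of bibliographic records using pandas. The column names and sample rows are assumptions about a typical Web of Science export, not the authors' actual pipeline.

```python
# Minimal sketch of one bibliometric step: counting GenAI-in-HE publications by
# country and year. "Year" and "Countries" are assumed export column names.
import pandas as pd

records = pd.DataFrame({
    "Year": [2022, 2023, 2023, 2024, 2024, 2025],
    "Countries": ["USA", "China; USA", "UK", "China", "USA; UK", "Australia"],
})

# A record with multiple affiliations is credited to each listed country.
exploded = (
    records.assign(Country=records["Countries"].str.split("; "))
           .explode("Country")
)
counts = exploded.groupby(["Country", "Year"]).size().unstack(fill_value=0)
print(counts)  # rows: countries, columns: publication years
```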
{"title":"Generative AI in higher education: A bibliometric review of emerging trends, power dynamics, and global research landscapes","authors":"Kun Dai , Yabing Liu , Xiaofan Zhang","doi":"10.1016/j.caeai.2026.100544","DOIUrl":"10.1016/j.caeai.2026.100544","url":null,"abstract":"<div><div>The rapid evolution of Generative Artificial Intelligence (GenAI) is reshaping higher education (HE), offering transformative opportunities for academic engagement while posing significant challenges to academic integrity, ethical frameworks, and global research power dynamics. This study maps the recent (2022–2025) research landscape of GenAI in HE through a bibliometric analysis of 2762 articles from the Web of Science Core Collection. Employing multipolarity as an analytical lens, this study examines the power dynamics within this research domain reflected by publication records from different countries (or regions). Findings highlight surging global interest in GenAI in HE, with contributions led by the US, China, and the UK, alongside rising participation from non-Western scholars and institutions. By identifying the major topics, this study uncovers a more nuanced trajectory of GenAI-related discourse in HE. By examining publication status, contributors, and research topics, this study provides insights for stakeholders navigating the complexities of GenAI integration into HE and suggests trajectories for future research in this rapidly evolving field.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100544"},"PeriodicalIF":0.0,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146022837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LLM sentiment quantification reveals selective alignment with human course-evaluation raters
Pub Date: 2026-01-06 | DOI: 10.1016/j.caeai.2026.100545
Joyce W. Lacy , Chi Nnoka , Zachary Jock , Cathleen Morreale
Student course evaluations contain rich qualitative feedback in the form of comments written in response to open-ended questions. However, this qualitative data, which may be more nuanced and detailed than quantitative ratings, often goes unexamined in both administrative and research settings due to the labor-intensive nature of manual analysis. We investigate whether large language models (LLMs), including BERT, RoBERTa, and OpenAI model variants, can accurately replicate human judgments of sentiment in these comments. We compare masked and generative language models, using both naïve and fine-tuned approaches, to analyze a curated dataset of 1000 de-identified course evaluation responses. Results show that some artificial intelligence (AI) models can approach human inter-rater reliability remarkably well, and do so quickly, with limited tuning or training data. However, performance varied, and not all models were able to produce a reliable sentiment analysis, even after training. This has implications for future avenues of qualitative data analysis within course evaluations, as well as for the large repositories of course evaluations held at institutions of higher education. Importantly, care should be taken when selecting an AI model, as this decision has ramifications for the reliability and validity of the generated output.
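As a minimal illustration of the masked-language-model branch of such an analysis, the sketch below scores comments with a Hugging Face transformers sentiment pipeline. The checkpoint and example comments are placeholders; the study's fine-tuning and evaluation setup is not reproduced here.

```python
# Scoring course-evaluation comments with an off-the-shelf sentiment classifier.
# The checkpoint and comments are placeholders, not the study's materials.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # placeholder checkpoint
)

comments = [
    "The instructor explained difficult concepts clearly and was always available.",
    "Lectures were disorganized and the grading felt arbitrary.",
]

for comment, result in zip(comments, classifier(comments)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}
    print(f"{result['label']:<8} ({result['score']:.2f})  {comment}")
```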
{"title":"LLM sentiment quantification reveals selective alignment with human course-evaluation raters","authors":"Joyce W. Lacy , Chi Nnoka , Zachary Jock , Cathleen Morreale","doi":"10.1016/j.caeai.2026.100545","DOIUrl":"10.1016/j.caeai.2026.100545","url":null,"abstract":"<div><div>Student course evaluations contain rich qualitative feedback in the form of comments written in response to open-ended questions. However, this qualitative data, which may be more nuanced and detailed than quantitative ratings, is often unexamined in both administrative and research settings due to the labor-intensive nature of manual analysis. We investigate whether large language models (LLMs), including BERT, RoBERTa, and OpenAI model variants, can accurately replicate human judgments of sentiment in these comments. We compare masked and generative language models, using both naïve and fine-tuned approaches, to analyze a curated dataset of 1000 de-identified course evaluation responses. Results show that some artificial intelligence (AI) models can approach inter-rater reliability with humans remarkably well and quickly with limited tuning or training data provided. However, performance varied and not all models were able to produce a reliable sentiment analysis, even after training. This has implications for future avenues of qualitative data analysis within course evaluations as well as the large repositories of course evaluations available at institutions of higher education. Importantly, consideration should be taken when selecting an AI model as this decision has ramifications for the reliability and validity of the generated output.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100545"},"PeriodicalIF":0.0,"publicationDate":"2026-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The AI literacy heptagon: A structured approach to AI literacy in higher education
Pub Date: 2026-01-05 | DOI: 10.1016/j.caeai.2026.100540
Veronika Hackl , Alexandra Elena Müller , Maximilian Sailer
This integrative literature review addresses the conceptualization and implementation of AI Literacy (AIL) in Higher Education (HE) by examining recent research literature. Through an analysis of publications (2021–2024), we explore (1) how AIL is defined and conceptualized in current research, particularly in HE, and how it can be delineated from related concepts such as Data Literacy, Media Literacy, and Computational Literacy; (2) how various definitions can be synthesized into a comprehensive working definition; and (3) how scientific insights can be effectively translated into educational practice. Our analysis identifies seven central dimensions of AIL: technical, applicational, critical thinking, ethical, social, integrational, and legal. These are synthesized in the AI Literacy Heptagon, deepening conceptual understanding and supporting the structured development of AIL in HE. The study aims to bridge the gap between theoretical AIL conceptualizations and their practical implementation in academic curricula.
{"title":"The AI literacy heptagon: A structured approach to AI literacy in higher education","authors":"Veronika Hackl , Alexandra Elena Müller , Maximilian Sailer","doi":"10.1016/j.caeai.2026.100540","DOIUrl":"10.1016/j.caeai.2026.100540","url":null,"abstract":"<div><div>The integrative literature review addresses the conceptualization and implementation of AI Literacy (AIL) in Higher Education (HE) by examining recent research literature. Through an analysis of publications (2021–2024), we explore (1) how AIL is defined and conceptualized in current research, particularly in HE, and how it can be delineated from related concepts such as Data Literacy, Media Literacy, and Computational Literacy; (2) how various definitions can be synthesized into a comprehensive working definition, and (3) how scientific insights can be effectively translated into educational practice. Our analysis identifies seven central dimensions of AIL: technical, applicational, critical thinking, ethical, social, integrational, and legal. These are synthesized in the AI Literacy Heptagon, deepening conceptual understanding and supporting the structured development of AIL in HE. The study aims to bridge the gap between theoretical AIL conceptualizations and the practical implementation in academic curricula.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100540"},"PeriodicalIF":0.0,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling generative AI adoption in higher education: An integrated TAM–TPB–SDT framework with SEM validation
Pub Date: 2026-01-03 | DOI: 10.1016/j.caeai.2026.100541
Dina Tbaishat , Omar AlFandi , Faten Hamad , Syed Muhammad Salman Bukhari , Suha Al Muhaissen
This study investigates the determinants of university students' adoption of generative artificial intelligence (GAI) tools in higher education. Integrating the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and Self-Determination Theory (SDT), it develops and tests an integrated model that captures cognitive, social, and motivational influences on adoption. A cross-sectional survey was conducted among 517 undergraduate and postgraduate students at Jordanian universities. The data were analyzed using structural equation modeling (SEM) with a two-step approach: confirmatory factor analysis (CFA) to validate the measurement model, followed by SEM to test the hypothesized structural relationships. Reliability, validity, measurement invariance across gender, and mediation effects were assessed. The integrated model showed excellent fit and substantial explanatory power, accounting for 83 % of the variance in behavioral intention and 81.6 % in actual AI use. Relatedness, perceived usefulness, attitude, and autonomy emerged as significant predictors of intention, while behavioral intention and competence predicted actual use. Perceived ease of use strongly influenced perceived usefulness, and mediation analysis confirmed indirect effects through usefulness and attitude. The model was invariant across gender groups, supporting its generalizability. This research extends TAM and TPB by integrating SDT's psychological needs, highlighting relatedness and competence as novel drivers of adoption. It provides the first empirical evidence from Jordan, a region underrepresented in the literature, showing that motivational dynamics carry greater weight than social norms in collectivist educational contexts. The study advances theoretical models of technology adoption and offers practical insights for universities and policymakers on promoting responsible and sustainable integration of AI in education.
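One plausible way to write down the structural relations reported in the abstract (predictors of behavioral intention and actual use, plus the ease-of-use path) is sketched below; the symbols and equation form are shorthand for illustration, not the authors' notation.

```latex
% Shorthand formalization of the reported structural relations (symbols assumed).
\begin{align}
  \mathrm{BI}  &= \beta_{1}\,\mathrm{REL} + \beta_{2}\,\mathrm{PU}
                + \beta_{3}\,\mathrm{ATT} + \beta_{4}\,\mathrm{AUT} + \zeta_{1}
                && \text{(intention: relatedness, usefulness, attitude, autonomy)}\\
  \mathrm{USE} &= \beta_{5}\,\mathrm{BI} + \beta_{6}\,\mathrm{COMP} + \zeta_{2}
                && \text{(actual use: intention and competence)}\\
  \mathrm{PU}  &= \beta_{7}\,\mathrm{PEOU} + \zeta_{3}
                && \text{(usefulness driven by ease of use)}
\end{align}
```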
{"title":"Modeling generative AI adoption in higher education: An integrated TAM–TPB–SDT framework with SEM validation","authors":"Dina Tbaishat , Omar AlFandi , Faten Hamad , Syed Muhammad Salman Bukhari , Suha Al Muhaissen","doi":"10.1016/j.caeai.2026.100541","DOIUrl":"10.1016/j.caeai.2026.100541","url":null,"abstract":"<div><div>This study investigates the determinants of university students' adoption of generative artificial intelligence (GAI) tools in higher education. Integrating the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and Self-Determination Theory (SDT), it develops and tests a complete model that captures cognitive, social, and motivational influences on adoption. A cross-sectional survey was conducted among 517 undergraduate and postgraduate students at Jordanian universities. The data were analyzed using structural equation modeling (SEM) with a two-step approach: confirmatory factor analysis (CFA) to validate the measurement model, followed by SEM to test the hypothesized structural relationships. Reliability, validity, measurement invariance across gender, and mediation effects were assessed. The integrated model showed excellent fit and substantial explanatory power, accounting for 83 % of the variance in behavioral intention and 81.6 % in actual AI use. Relatedness, perceived usefulness, attitude, and autonomy emerged as significant predictors of intention, while behavioral intention and competence predicted actual use. The ease of use strongly influenced usefulness, and mediation analysis confirmed indirect effects through usefulness and attitude. The model was invariant across gender groups, supporting its generalizability. This research extends TAM and TPB by integrating SDT's psychological needs, highlighting relatedness and competence as novel drivers of adoption. It provides the first empirical evidence from Jordan, a region underrepresented in the literature, highlighting that motivational dynamics carry greater weight than social norms in collectivist educational contexts. The study advances theoretical models of technology adoption and offers practical insights for universities and policymakers on promoting responsible and sustainable integration of AI in education.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100541"},"PeriodicalIF":0.0,"publicationDate":"2026-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empowering university teachers in higher education: A generative AI-responsive competency framework
Pub Date: 2026-01-03 | DOI: 10.1016/j.caeai.2026.100542
Daner Sun , Shen Ba , Yingying Cha , Jiahui Yu , Feng-Kuang Chiang , Hai Min Dai , Cher-Ping Lim
The integration of generative artificial intelligence (GenAI) into higher education necessitates a reconceptualization of teacher competencies, moving beyond technical proficiency to encompass pedagogical strategies for fostering critical, ethical, and developmentally appropriate student-AI collaboration. Existing competency frameworks, however, exhibit notable limitations in equipping university teachers with actionable guidance for designing GenAI-mediated learning experiences that cultivate their students’ higher-order thinking and subject knowledge. In response, this paper develops and proposes a GenAI-responsive competency framework for university teachers to supplement existing frameworks and address areas that are not sufficiently covered or elaborated. Developed through a systematic analysis of digital and AI-related competency frameworks, the proposed model is grounded in constructivist learning theory, sociological perspectives, and student-centered pedagogy. It was further refined through iterative expert review and consultation to strengthen its theoretical and practical robustness. The resulting framework comprises four core dimensions: GenAI Literacy, Curriculum/Learning Design, Teaching and Learning, and Assessment. Each dimension is articulated through a dual perspective: teachers’ own proficiency and their capacity to foster students’ critical engagement with GenAI. Competency progression is structured across three developmental levels: Basic, Intermediate, and Advanced, representing a continuum from technical awareness to guided application, and ultimately to critical and creative integration. The proposed framework supports teachers’ ongoing professional growth and enhances their ability to facilitate student autonomy, ethical reasoning, and collaborative engagement with GenAI. It provides a structured yet flexible tool for self-assessment, instructional design, and targeted professional development in higher education, thereby advancing the discourse on effective and responsible human-AI collaboration.
{"title":"Empowering university teachers in higher education: A generative AI-responsive competency framework","authors":"Daner Sun , Shen Ba , Yingying Cha , Jiahui Yu , Feng-Kuang Chiang , Hai Min Dai , Cher-Ping Lim","doi":"10.1016/j.caeai.2026.100542","DOIUrl":"10.1016/j.caeai.2026.100542","url":null,"abstract":"<div><div>The integration of generative artificial intelligence (GenAI) into higher education necessitates a reconceptualization of teacher competencies, moving beyond technical proficiency to encompass pedagogical strategies for fostering critical, ethical, and developmentally appropriate student-AI collaboration. Existing competency frameworks, however, exhibit notable limitations in equipping university teachers with actionable guidance for designing GenAI-mediated learning experiences that cultivate their students’ higher-order thinking and subject knowledge. In response, this paper develops and proposes a GenAI-responsive competency framework for university teachers to supplement existing frameworks and address areas that are not sufficiently covered or elaborated. Developed through a systematic analysis of digital and AI-related competency frameworks, the proposed model is grounded in constructivist learning theory, sociological perspectives, and student-centered pedagogy. Its theoretical and practical robustness was further refined through iterative expert review and consultation. The resulting framework comprises four core dimensions: <em>GenAI Literacy</em>, <em>Curriculum/Learning Design</em>, <em>Teaching and Learning</em>, and <em>Assessment</em>. Each dimension is articulated through a dual perspective: teachers’ own proficiency and their capacity to foster students’ critical engagement with GenAI. Competency progression is structured across three developmental levels: <em>Basic</em>, <em>Intermediate</em>, and <em>Advanced</em>, representing a continuum from technical awareness to guided application, and ultimately to critical and creative integration. The proposed framework supports teachers’ ongoing professional growth and enhances their ability to facilitate student autonomy, ethical reasoning, and collaborative engagement with GenAI. It provides a structured yet flexible tool for self-assessment, instructional design, and targeted professional development in higher education, thereby advancing the discourse on effective and responsible human-AI collaboration.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100542"},"PeriodicalIF":0.0,"publicationDate":"2026-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Less stress, better scores, same learning: The dissociation of performance and learning in AI-supported programming education
Pub Date: 2025-12-24 | DOI: 10.1016/j.caeai.2025.100537
Patrick Bassner, Ben Lenk-Ostendorf, Ramona Beinstingel, Tobias Wasner, Stephan Krusche
Introduction
Generative AI is reshaping programming education, yet its effects on conceptual learning, intrinsic motivation, and cognitive load remain unclear. This study tests whether assistance deepens understanding or primarily boosts task completion, and how scaffolded versus answer-giving designs matter.
Objectives
This study compares performance, learning, cognitive load, frustration, and motivation across three AI support types, and examines students’ perceptions.
Methods
A three-arm randomized controlled trial was conducted in an introductory programming (CS1) course at TUM (N=275). Participants completed a 90-minute exercise on concurrency, implementing a parallel sum with threading in one of three conditions: (1) Iris, a scaffolded tutor providing calibrated hints while withholding full solutions; (2) ChatGPT, unrestricted assistance that can provide complete solutions; (3) no-AI control using traditional web resources. Pre- and post-knowledge tests and a code comprehension task measured learning, while auto-graded test coverage measured performance. Validated scales captured intrinsic, germane, and extraneous cognitive load, frustration, and intrinsic motivation.
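The abstract does not state the exercise's implementation language; purely as an illustration of the task students faced, the sketch below computes a parallel sum with threads in Python.

```python
# Minimal sketch of the exercise task described in the abstract: summing a list
# in parallel with threads. The course's actual language and starter code are
# not given; this Python version is an illustrative assumption.
import threading

def parallel_sum(values: list[int], n_threads: int = 4) -> int:
    """Split `values` into chunks, sum each chunk in its own thread, combine."""
    chunk_size = (len(values) + n_threads - 1) // n_threads
    partial_sums = [0] * n_threads

    def worker(idx: int) -> None:
        start = idx * chunk_size
        partial_sums[idx] = sum(values[start:start + chunk_size])

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partial_sums)

assert parallel_sum(list(range(1, 101))) == 5050  # quick sanity check
```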
Results
Both AI groups achieved substantially higher exercise scores than the control group, with distinct distributions: ChatGPT users clustered at high scores, control participants at low scores, and Iris users spread across the full range. Despite these performance gains, neither AI condition produced greater pre–post knowledge gains or code-comprehension advantages. Both AI groups reported lower frustration and reduced extraneous and germane load than the control group, while intrinsic load did not differ. Only Iris increased intrinsic motivation. Students rated ChatGPT as easier to use and more helpful.
Conclusion
In this setting, generative AI acted primarily as a performance aid rather than a learning enhancer. Scaffolded, hint-first design preserved motivational benefits, whereas AI providing unrestricted solutions encouraged a “comfort trap” where students’ preferences misaligned with pedagogical effectiveness. These findings motivate scaffolded AI integration and assessment designs resilient to environments where performance no longer reliably tracks understanding.
{"title":"Less stress, better scores, same learning: The dissociation of performance and learning in AI-supported programming education","authors":"Patrick Bassner, Ben Lenk-Ostendorf, Ramona Beinstingel, Tobias Wasner, Stephan Krusche","doi":"10.1016/j.caeai.2025.100537","DOIUrl":"10.1016/j.caeai.2025.100537","url":null,"abstract":"<div><h3>Introduction</h3><div>Generative AI is reshaping programming education, yet its effects on conceptual learning, intrinsic motivation, and cognitive load remain unclear. This study tests whether assistance deepens understanding or primarily boosts task completion, and how scaffolded versus answer-giving designs matter.</div></div><div><h3>Objectives</h3><div>This study compares performance, learning, cognitive load, frustration, and motivation across three AI support types, and examines students’ perceptions.</div></div><div><h3>Methods</h3><div>A three-arm randomized controlled trial was conducted in an introductory programming (CS1) course at TUM (N=275). Participants completed a 90-minute exercise on concurrency, implementing a parallel sum with threading in one of three conditions: (1) <em>Iris</em>, a scaffolded tutor providing calibrated hints while withholding full solutions; (2) <em>ChatGPT</em>, unrestricted assistance that can provide complete solutions; (3) no-AI control using traditional web resources. Pre- and post-knowledge tests and a code comprehension task measured learning, while auto-graded test coverage measured performance. Validated scales captured intrinsic, germane, and extraneous cognitive load, frustration, and intrinsic motivation.</div></div><div><h3>Results</h3><div>Both AI groups achieved substantially higher exercise scores than the control group, with distinct distributions: <em>ChatGPT</em> users clustered at high scores, control participants at low scores, and <em>Iris</em> users spread across the full range. Despite these performance gains, neither AI condition produced greater pre–post knowledge gains or code-comprehension advantages. Both AI groups reported lower frustration and reduced extraneous and germane load than the control group, while intrinsic load did not differ. Only <em>Iris</em> increased intrinsic motivation. Students rated <em>ChatGPT</em> as easier to use and more helpful.</div></div><div><h3>Conclusion</h3><div>In this setting, generative AI acted primarily as a performance aid rather than a learning enhancer. Scaffolded, hint-first design preserved motivational benefits, whereas AI providing unrestricted solutions encouraged a “comfort trap” where students’ preferences misaligned with pedagogical effectiveness. These findings motivate scaffolded AI integration and assessment designs resilient to environments where performance no longer reliably tracks understanding.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100537"},"PeriodicalIF":0.0,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146022835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}