首页 > 最新文献

Computers and Education Artificial Intelligence最新文献

英文 中文
How reliable are large language models in analyzing the quality of written lesson plans? A mixed-methods study from a teacher internship program 大型语言模型在分析书面教案质量方面有多可靠?一项来自教师实习项目的混合方法研究
Q1 Social Sciences Pub Date : 2025-12-23 DOI: 10.1016/j.caeai.2025.100538
Dennis Hauk, Nina Soujon
This study investigates the reliability of Large Language Models (LLMs) in evaluating the quality of written lesson plans from pre-service teachers. A total of 32 lesson plans, each ranging from 60 to 100 pages, were collected during a teacher internship program for civic education pre-service teachers. Using the ChatGPT-o1 reasoning model, we compared a human expert standard with LLM coding outcomes in a two-phase explanatory sequential mixed-methods design that combined quantitative reliability testing with a qualitative follow-up analysis to interpret inter-dimensional patterns of agreement. Quantitatively, overall reliability across six qualitative components of written lesson plans (Content Transformation, Task Creation, Adaptation, Goal Clarification, Contextualization and Sequencing) reached a moderate alignment in identifying explicit instructional features (α = .689; 73.8 % exact agreement). Qualitative analyses further revealed that the LLM struggled with high-inferential criteria, such as the depth of pedagogical reasoning and the coherence of instructional decisions, as it often relied on surface-level textual cues rather than deeper contextual understanding. These findings indicate that LLMs can support teacher educators and educational researchers as a design-stage screening tool, but human judgment remains essential for interpreting complex pedagogical constructs in written lesson plans and for ensuring the ethical and pedagogical integrity of evaluation processes. We outline implications for integrating LLM-based analysis into teacher education and emphasize improved prompt design and systematic human oversight to ensure reliable qualitative use.
本研究探讨了大型语言模型(llm)在评估职前教师书面教案质量方面的可靠性。在公民教育职前教师实习项目中,共收集了32份60 ~ 100页不等的教案。使用chatgpt - 01推理模型,我们将人类专家标准与LLM编码结果在两阶段解释顺序混合方法设计中进行了比较,该设计将定量可靠性测试与定性后续分析相结合,以解释一致性的多维模式。定量地,书面课程计划的六个定性组成部分(内容转换、任务创建、适应、目标澄清、情境化和排序)的总体可靠性在识别明确的教学特征方面达到了适度的一致性(α = 0.689; 73.8%的精确一致性)。定性分析进一步表明,法学硕士在高推理标准(如教学推理的深度和教学决策的连贯性)上挣扎,因为它往往依赖于表层的文本线索,而不是更深入的上下文理解。这些发现表明,法学硕士可以作为一种设计阶段的筛选工具来支持教师教育工作者和教育研究人员,但人类的判断对于解释书面课程计划中复杂的教学结构以及确保评估过程的道德和教学完整性仍然至关重要。我们概述了将基于法学硕士的分析整合到教师教育中的意义,并强调改进的及时设计和系统的人为监督,以确保可靠的定性使用。
{"title":"How reliable are large language models in analyzing the quality of written lesson plans? A mixed-methods study from a teacher internship program","authors":"Dennis Hauk,&nbsp;Nina Soujon","doi":"10.1016/j.caeai.2025.100538","DOIUrl":"10.1016/j.caeai.2025.100538","url":null,"abstract":"<div><div>This study investigates the reliability of Large Language Models (LLMs) in evaluating the quality of written lesson plans from pre-service teachers. A total of 32 lesson plans, each ranging from 60 to 100 pages, were collected during a teacher internship program for civic education pre-service teachers. Using the ChatGPT-o1 reasoning model, we compared a human expert standard with LLM coding outcomes in a two-phase explanatory sequential mixed-methods design that combined quantitative reliability testing with a qualitative follow-up analysis to interpret inter-dimensional patterns of agreement. Quantitatively, overall reliability across six qualitative components of written lesson plans (Content Transformation, Task Creation, Adaptation, Goal Clarification, Contextualization and Sequencing<em>)</em> reached a moderate alignment in identifying explicit instructional features (α = .689; 73.8 % exact agreement). Qualitative analyses further revealed that the LLM struggled with high-inferential criteria, such as the depth of pedagogical reasoning and the coherence of instructional decisions, as it often relied on surface-level textual cues rather than deeper contextual understanding. These findings indicate that LLMs can support teacher educators and educational researchers as a design-stage screening tool, but human judgment remains essential for interpreting complex pedagogical constructs in written lesson plans and for ensuring the ethical and pedagogical integrity of evaluation processes. We outline implications for integrating LLM-based analysis into teacher education and emphasize improved prompt design and systematic human oversight to ensure reliable qualitative use.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100538"},"PeriodicalIF":0.0,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Undergraduate students’ learning outcomes with ChatGPT: A meta-analytic study 大学生ChatGPT学习效果的元分析研究
Q1 Social Sciences Pub Date : 2025-12-22 DOI: 10.1016/j.caeai.2025.100536
Fangfang Mo , Jing Huang , Yao Yang , Zafer Özen , Yukiko Maeda , F. Richard Olenchak
ChatGPT has gained substantial attention in the field of higher education, particularly for its potential to enhance undergraduate students' learning outcomes. To better understand ChatGPT's impact, we conducted a meta-analysis evaluating the effects of ChatGPT applications on undergraduate students' learning outcomes, with data collected from studies published between January 1st, 2023, and May 31st, 2025. Our search across nine academic databases identified 5555 potential studies, of which 66 met the pre-defined inclusion criteria and were selected for meta-analysis. The meta-analysis incorporated 129 effect sizes, allowing us to estimate the overall impact of ChatGPT on undergraduate students' learning across a variety of academic disciplines. The results suggested that ChatGPT applications had a large positive effect (Hedges' g = 1.14, SE = 0.185) on undergraduate students' learning outcomes. The results of this study highlight undergraduate students' overall positive experiences with ChatGPT. These findings contribute to the growing body of literature on the role of artificial intelligence (AI) in higher education, offering critical insights for educators, administrators, and policymakers seeking to enhance undergraduate students' learning outcomes by integrating AI technologies like ChatGPT into academic curricula.
ChatGPT在高等教育领域获得了大量关注,特别是因为它有可能提高本科生的学习成果。为了更好地理解ChatGPT的影响,我们进行了一项meta分析,评估了ChatGPT应用对本科生学习成果的影响,数据收集自2023年1月1日至2025年5月31日发表的研究。我们在9个学术数据库中检索了5555项潜在研究,其中66项符合预定义的纳入标准,并被选中进行meta分析。荟萃分析纳入了129个效应量,使我们能够估计ChatGPT对不同学科本科生学习的总体影响。结果表明,ChatGPT应用对大学生的学习成果有较大的正向影响(Hedges' g = 1.14, SE = 0.185)。本研究的结果突出了大学生对聊天技术的总体积极体验。这些发现为越来越多的关于人工智能(AI)在高等教育中的作用的文献做出了贡献,为教育工作者、管理人员和政策制定者提供了重要的见解,他们希望通过将ChatGPT等人工智能技术整合到学术课程中来提高本科生的学习成果。
{"title":"Undergraduate students’ learning outcomes with ChatGPT: A meta-analytic study","authors":"Fangfang Mo ,&nbsp;Jing Huang ,&nbsp;Yao Yang ,&nbsp;Zafer Özen ,&nbsp;Yukiko Maeda ,&nbsp;F. Richard Olenchak","doi":"10.1016/j.caeai.2025.100536","DOIUrl":"10.1016/j.caeai.2025.100536","url":null,"abstract":"<div><div>ChatGPT has gained substantial attention in the field of higher education, particularly for its potential to enhance undergraduate students' learning outcomes. To better understand ChatGPT's impact, we conducted a meta-analysis evaluating the effects of ChatGPT applications on undergraduate students' learning outcomes, with data collected from studies published between January 1st, 2023, and May 31st, 2025. Our search across nine academic databases identified 5555 potential studies, of which 66 met the pre-defined inclusion criteria and were selected for meta-analysis. The meta-analysis incorporated 129 effect sizes, allowing us to estimate the overall impact of ChatGPT on undergraduate students' learning across a variety of academic disciplines. The results suggested that ChatGPT applications had a large positive effect (Hedges' <span><math><mrow><mi>g</mi></mrow></math></span> = 1.14, <em>SE</em> = 0.185) on undergraduate students' learning outcomes. The results of this study highlight undergraduate students' overall positive experiences with ChatGPT. These findings contribute to the growing body of literature on the role of artificial intelligence (AI) in higher education, offering critical insights for educators, administrators, and policymakers seeking to enhance undergraduate students' learning outcomes by integrating AI technologies like ChatGPT into academic curricula.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100536"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A LLM-based pedagogical framework for active, inquiry-based and adaptive learning in L2 writing 基于法学硕士的第二语言写作主动、探究性和适应性学习的教学框架
Q1 Social Sciences Pub Date : 2025-12-20 DOI: 10.1016/j.caeai.2025.100535
Ruonan Wang , Yan Yin , Yongbo Cao
Traditional L2 writing instruction often struggles to provide personalized, process-oriented feedback and engage student motivation. While generative AI like ChatGPT offers a potential solution, its application lacks a robust pedagogical foundation. This study proposes an innovative framework that integrates ChatGPT into L2 writing through a synthesis of active, inquiry-based, and adaptive learning principles. Within the framework, learners occupy the central position, undergoing the teaching implementation of LLM-based six-step writing instruction process to inquisitively learn, experiencing the teaching assessment of three-dimensional writing evaluation to actively acquire, and benefiting from the teaching reflection of plan for subsequent writing instruction to adaptively improve. Based on a quasi-experimental study involving 50 sophomores, the framework is proved to be effective, enhancing significantly learners’ writing outcomes and motivation. These findings add to the limited body of research on the utilization of ChatGPT in education, providing valuable implications for research and pedagogical practices in L2 writing.
传统的第二语言写作教学往往难以提供个性化的、以过程为导向的反馈和调动学生的积极性。虽然像ChatGPT这样的生成式人工智能提供了一个潜在的解决方案,但它的应用缺乏强大的教学基础。本研究提出了一个创新的框架,通过综合主动、探究式和适应性学习原则,将ChatGPT整合到第二语言写作中。在该框架内,学习者处于中心位置,经历了基于llm的六步写作教学过程的教学实施,进行了探究式学习,经历了三维写作评价的教学评估,进行了主动习得,受益于后续写作教学计划的教学反思,进行了自适应改进。通过对50名二年级学生的准实验研究,证明了该框架的有效性,显著提高了学习者的写作成绩和写作动机。这些发现补充了有限的关于ChatGPT在教育中的应用的研究,为第二语言写作的研究和教学实践提供了有价值的启示。
{"title":"A LLM-based pedagogical framework for active, inquiry-based and adaptive learning in L2 writing","authors":"Ruonan Wang ,&nbsp;Yan Yin ,&nbsp;Yongbo Cao","doi":"10.1016/j.caeai.2025.100535","DOIUrl":"10.1016/j.caeai.2025.100535","url":null,"abstract":"<div><div>Traditional L2 writing instruction often struggles to provide personalized, process-oriented feedback and engage student motivation. While generative AI like ChatGPT offers a potential solution, its application lacks a robust pedagogical foundation. This study proposes an innovative framework that integrates ChatGPT into L2 writing through a synthesis of active, inquiry-based, and adaptive learning principles. Within the framework, learners occupy the central position, undergoing the teaching implementation of LLM-based six-step writing instruction process to inquisitively learn, experiencing the teaching assessment of three-dimensional writing evaluation to actively acquire, and benefiting from the teaching reflection of plan for subsequent writing instruction to adaptively improve. Based on a quasi-experimental study involving 50 sophomores, the framework is proved to be effective, enhancing significantly learners’ writing outcomes and motivation. These findings add to the limited body of research on the utilization of ChatGPT in education, providing valuable implications for research and pedagogical practices in L2 writing.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100535"},"PeriodicalIF":0.0,"publicationDate":"2025-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
EvalYaks: Instruction tuning datasets and LoRA fine-tuned models for automated scoring of CEFR B2 speaking assessment transcripts EvalYaks:指令调优数据集和LoRA微调模型,用于CEFR B2口语评估成绩单的自动评分
Q1 Social Sciences Pub Date : 2025-12-20 DOI: 10.1016/j.caeai.2025.100539
Nicy Scaria , Silvester John Joseph Kennedy , Thomas Latinovich , Deepak Subramani
Relying on human experts to evaluate the Common European Framework of Reference for Languages (CEFR) speaking assessments in an e-learning environment creates scalability challenges, as it limits how quickly and widely assessments can be conducted. We aim to automate the evaluation of CEFR B2 English speaking assessments in e-learning environments from conversation transcripts. First, we evaluate the capability of leading open source and commercial Large Language Models (LLMs) to score a candidate’s performance across various criteria in the CEFR B2 speaking exam in both global and India-specific contexts. Next, we create a new expert-validated, CEFR-aligned synthetic conversational dataset with transcripts that are rated at different assessment scores. In addition, new instruction-tuned datasets are developed from the English Vocabulary Profile (up to CEFR B2 level) and the CEFR-SP WikiAuto datasets. Finally, using these new datasets, we perform parameter efficient instruction tuning of Mistral Instruct 7B v0.2 to develop a family of models called EvalYaks. Four models in this family are for assessing the four sections of the CEFR B2 speaking exam, one for identifying the CEFR level of vocabulary and generating level-specific vocabulary, and another for detecting the CEFR level of text and generating level-specific text. EvalYaks achieved an average acceptable accuracy of 96 %, a degree of variation of 0.35 levels, achieving performance competitive with state-of-the-art frontier models like GPT-4o and Gemini Flash 2.5. Furthermore, a pilot validation on real-world learner transcripts verified the model’s transferability to real-world assessment contexts. This demonstrates that a 7B parameter LLM instruction tuned with high-quality CEFR-aligned assessment data can effectively evaluate and score CEFR B2 English speaking assessments, offering a promising solution for scalable, automated language proficiency evaluation. The methodology is adaptable to other regional contexts and CEFR levels through appropriate data generation and validation protocols.
在电子学习环境中,依靠人类专家来评估欧洲语言共同参考框架(CEFR)的口语评估会带来可扩展性方面的挑战,因为它限制了评估的进行速度和范围。我们的目标是通过会话记录在电子学习环境中自动评估CEFR B2英语口语评估。首先,我们评估了领先的开源和商业大型语言模型(llm)的能力,以在全球和印度特定背景下为候选人在CEFR B2口语考试中的各种标准打分。接下来,我们创建一个新的专家验证的、与cefr一致的合成会话数据集,其中包含按不同评估分数评分的文本。此外,从English Vocabulary Profile(达到CEFR B2级别)和CEFR- sp WikiAuto数据集开发了新的指令调优数据集。最后,利用这些新数据集,我们对Mistral instruction 7B v0.2进行参数高效指令调优,开发了一系列称为EvalYaks的模型。该系列中有四个模型用于评估CEFR B2口语考试的四个部分,一个用于识别CEFR词汇水平并生成特定级别的词汇,另一个用于检测CEFR文本水平并生成特定级别的文本。EvalYaks实现了96%的平均可接受精度,0.35水平的变化程度,实现了与最先进的前沿模型(如gpt - 40和Gemini Flash 2.5)竞争的性能。此外,对真实世界学习者成绩单的试点验证验证了该模型在真实世界评估环境中的可移植性。这表明7B参数LLM指令与高质量的CEFR一致的评估数据进行了调整,可以有效地评估和评分CEFR B2英语口语测试,为可扩展的自动化语言能力评估提供了一个有前途的解决方案。通过适当的数据生成和验证协议,该方法可适应其他区域情况和CEFR级别。
{"title":"EvalYaks: Instruction tuning datasets and LoRA fine-tuned models for automated scoring of CEFR B2 speaking assessment transcripts","authors":"Nicy Scaria ,&nbsp;Silvester John Joseph Kennedy ,&nbsp;Thomas Latinovich ,&nbsp;Deepak Subramani","doi":"10.1016/j.caeai.2025.100539","DOIUrl":"10.1016/j.caeai.2025.100539","url":null,"abstract":"<div><div>Relying on human experts to evaluate the Common European Framework of Reference for Languages (CEFR) speaking assessments in an e-learning environment creates scalability challenges, as it limits how quickly and widely assessments can be conducted. We aim to automate the evaluation of CEFR B2 English speaking assessments in e-learning environments from conversation transcripts. First, we evaluate the capability of leading open source and commercial Large Language Models (LLMs) to score a candidate’s performance across various criteria in the CEFR B2 speaking exam in both global and India-specific contexts. Next, we create a new expert-validated, CEFR-aligned synthetic conversational dataset with transcripts that are rated at different assessment scores. In addition, new instruction-tuned datasets are developed from the English Vocabulary Profile (up to CEFR B2 level) and the CEFR-SP WikiAuto datasets. Finally, using these new datasets, we perform parameter efficient instruction tuning of Mistral Instruct 7B v0.2 to develop a family of models called <em>EvalYaks</em>. Four models in this family are for assessing the four sections of the CEFR B2 speaking exam, one for identifying the CEFR level of vocabulary and generating level-specific vocabulary, and another for detecting the CEFR level of text and generating level-specific text. <em>EvalYaks</em> achieved an <em>average acceptable accuracy</em> of 96 %, a degree of variation of 0.35 levels, achieving performance competitive with state-of-the-art frontier models like GPT-4o and Gemini Flash 2.5. Furthermore, a pilot validation on real-world learner transcripts verified the model’s transferability to real-world assessment contexts. This demonstrates that a 7B parameter LLM instruction tuned with high-quality CEFR-aligned assessment data can effectively evaluate and score CEFR B2 English speaking assessments, offering a promising solution for scalable, automated language proficiency evaluation. The methodology is adaptable to other regional contexts and CEFR levels through appropriate data generation and validation protocols.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100539"},"PeriodicalIF":0.0,"publicationDate":"2025-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Can students judge like experts? A large-scale study on the pedagogical quality of AI and human personalized formative feedback 学生能像专家一样判断吗?人工智能教学质量与人类个性化形成性反馈的大规模研究
Q1 Social Sciences Pub Date : 2025-12-18 DOI: 10.1016/j.caeai.2025.100533
Tanya Nazaretsky , Hagit Gabbay , Tanja Käser
While feedback is essential for guiding student learning, providing timely and personalized guidance in large-scale educational settings remains a significant challenge. Generative AI offers a scalable solution, yet little is known about students’ perceptions of AI-generated feedback. In this paper, we aim to investigate how the identity of the feedback provider (human vs. AI) affects students’ ability to assess feedback quality and whether their judgments are biased. We propose a comprehensive rubric for assessing the pedagogical quality of formative feedback. We use it to compare the objective quality of AI-generated and human-crafted feedback (N = 979). Next, using data collected from 472 STEM students, we examine the extent to which students’ perceptions of the same feedback align with those of the experts. Our contribution is threefold. First, by introducing a structured rubric, we address the need for more standardized and reliable methods to assess the pedagogical quality of AI-generated feedback. Second, our analysis indicates that the pedagogical quality of AI-generated feedback is, in practice, comparable to that of human-authored feedback. However, both types exhibit limitations, particularly in addressing metacognitive aspects. Third, students’ evaluations are largely influenced by their perceptions of the feedback provider’s credibility rather than the actual quality of the feedback itself. This pattern is consistent across all academic levels, genders, and fields of study. Our findings underscore the need for targeted strategies to enhance students’ ability to evaluate feedback objectively and to improve the pedagogical quality of AI-generated feedback, thereby strengthening the effectiveness of AI-powered educational feedback systems.
虽然反馈对指导学生学习至关重要,但在大规模教育环境中提供及时和个性化的指导仍然是一项重大挑战。生成式人工智能提供了一个可扩展的解决方案,但我们对学生对人工智能生成的反馈的看法知之甚少。在本文中,我们旨在研究反馈提供者的身份(人类与人工智能)如何影响学生评估反馈质量的能力,以及他们的判断是否有偏见。我们提出了一个评估形成性反馈的教学质量的综合标准。我们用它来比较人工智能生成和人工制作反馈的客观质量(N = 979)。接下来,使用从472名STEM学生收集的数据,我们检查了学生对相同反馈的看法与专家的看法一致的程度。我们的贡献是三重的。首先,通过引入结构化的标题,我们解决了对更标准化和可靠的方法的需求,以评估人工智能生成的反馈的教学质量。其次,我们的分析表明,在实践中,人工智能生成的反馈的教学质量与人类撰写的反馈相当。然而,这两种类型都有局限性,特别是在处理元认知方面。第三,学生的评价在很大程度上受到他们对反馈提供者可信度的看法的影响,而不是反馈本身的实际质量。这种模式在所有学术水平、性别和研究领域都是一致的。我们的研究结果强调需要有针对性的策略来提高学生客观评估反馈的能力,并提高人工智能生成反馈的教学质量,从而加强人工智能教育反馈系统的有效性。
{"title":"Can students judge like experts? A large-scale study on the pedagogical quality of AI and human personalized formative feedback","authors":"Tanya Nazaretsky ,&nbsp;Hagit Gabbay ,&nbsp;Tanja Käser","doi":"10.1016/j.caeai.2025.100533","DOIUrl":"10.1016/j.caeai.2025.100533","url":null,"abstract":"<div><div>While feedback is essential for guiding student learning, providing timely and personalized guidance in large-scale educational settings remains a significant challenge. Generative AI offers a scalable solution, yet little is known about students’ perceptions of AI-generated feedback. In this paper, we aim to investigate how the identity of the feedback provider (human vs. AI) affects students’ ability to assess feedback quality and whether their judgments are biased. We propose a comprehensive rubric for assessing the pedagogical quality of formative feedback. We use it to compare the objective quality of AI-generated and human-crafted feedback (N = 979). Next, using data collected from 472 STEM students, we examine the extent to which students’ perceptions of the same feedback align with those of the experts. Our contribution is threefold. First, by introducing a structured rubric, we address the need for more standardized and reliable methods to assess the pedagogical quality of AI-generated feedback. Second, our analysis indicates that the pedagogical quality of AI-generated feedback is, in practice, comparable to that of human-authored feedback. However, both types exhibit limitations, particularly in addressing metacognitive aspects. Third, students’ evaluations are largely influenced by their perceptions of the feedback provider’s credibility rather than the actual quality of the feedback itself. This pattern is consistent across all academic levels, genders, and fields of study. Our findings underscore the need for targeted strategies to enhance students’ ability to evaluate feedback objectively and to improve the pedagogical quality of AI-generated feedback, thereby strengthening the effectiveness of AI-powered educational feedback systems.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100533"},"PeriodicalIF":0.0,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
From teachers to chatbots: Scaffolded corrective feedback and student trust in online L2 English classrooms 从教师到聊天机器人:第二语言在线课堂的架式纠正反馈和学生信任
Q1 Social Sciences Pub Date : 2025-12-16 DOI: 10.1016/j.caeai.2025.100530
Ali Soyoof , Barry Lee Reynolds , Ehsan Rassaei , Chian-Wen Kao , Xuan Van Ha
Teacher corrective feedback (TCF) plays a vital role in second language (L2) learning. Recent studies have examined feedback provided by both human teachers and large language models (LLMs). However, little is known about how students' trust differs toward scaffolded corrective feedback (SCF)—that is, feedback that incrementally progresses from indirect to direct during interaction—when it is provided by an LLM such as ChatGPT versus a language teacher. To address this gap, this study compared the effects of SCF, delivered by language teachers and ChatGPT, on L2 learning outcomes and student trust. Using a mixed-methods design, 40 lower-intermediate Iranian learners of English as a foreign language were randomly assigned to two conditions to receive scaffolded CF on English article usage from either a teacher or ChatGPT across four sessions. Learning gains obtained from immediate and delayed post-tests were analyzed using ANOVA and paired-sample t-tests, while semi-structured interviews and feedback interaction logs were examined using thematic analysis. Results showed that students in the teacher-delivered feedback group significantly outperformed those in the ChatGPT-delivered feedback group on both post- and delayed post-tests. Qualitative analyses suggested that this advantage stemmed from higher trust in the teacher, driven by the teacher's personalized emotional and technical support. The findings highlight that while ChatGPT can serve as a feedback tool in L2 instruction, its effectiveness depends on teacher mediation that attends to learners' individual differences and affective needs.
教师纠正反馈在第二语言学习中起着至关重要的作用。最近的研究检查了人类教师和大型语言模型(llm)提供的反馈。然而,很少有人知道学生对架式纠正反馈(SCF)的信任有什么不同,即在互动过程中从间接到直接的反馈,当它由法学硕士(如ChatGPT)和语言教师提供时。为了解决这一差距,本研究比较了语言教师和ChatGPT提供的SCF对第二语言学习成果和学生信任的影响。采用混合方法设计,40名伊朗英语作为外语的中低水平学习者被随机分配到两个条件下,在四个阶段中接受教师或ChatGPT关于英语文章使用的框架CF。使用方差分析和配对样本t检验分析即时和延迟后测试获得的学习收益,而使用主题分析检查半结构化访谈和反馈交互日志。结果显示,教师反馈组的学生在后测试和延迟后测试中都明显优于chatgpt反馈组的学生。定性分析表明,这种优势源于对教师的更高信任,这是由教师个性化的情感和技术支持所驱动的。研究结果强调,虽然ChatGPT可以作为第二语言教学的反馈工具,但其有效性取决于教师对学习者个体差异和情感需求的调解。
{"title":"From teachers to chatbots: Scaffolded corrective feedback and student trust in online L2 English classrooms","authors":"Ali Soyoof ,&nbsp;Barry Lee Reynolds ,&nbsp;Ehsan Rassaei ,&nbsp;Chian-Wen Kao ,&nbsp;Xuan Van Ha","doi":"10.1016/j.caeai.2025.100530","DOIUrl":"10.1016/j.caeai.2025.100530","url":null,"abstract":"<div><div>Teacher corrective feedback (TCF) plays a vital role in second language (L2) learning. Recent studies have examined feedback provided by both human teachers and large language models (LLMs). However, little is known about how students' trust differs toward scaffolded corrective feedback (SCF)—that is, feedback that incrementally progresses from indirect to direct during interaction—when it is provided by an LLM such as ChatGPT versus a language teacher. To address this gap, this study compared the effects of SCF, delivered by language teachers and ChatGPT, on L2 learning outcomes and student trust. Using a mixed-methods design, 40 lower-intermediate Iranian learners of English as a foreign language were randomly assigned to two conditions to receive scaffolded CF on English article usage from either a teacher or ChatGPT across four sessions. Learning gains obtained from immediate and delayed post-tests were analyzed using ANOVA and paired-sample <em>t</em>-tests, while semi-structured interviews and feedback interaction logs were examined using thematic analysis. Results showed that students in the teacher-delivered feedback group significantly outperformed those in the ChatGPT-delivered feedback group on both post- and delayed post-tests. Qualitative analyses suggested that this advantage stemmed from higher trust in the teacher, driven by the teacher's personalized emotional and technical support. The findings highlight that while ChatGPT can serve as a feedback tool in L2 instruction, its effectiveness depends on teacher mediation that attends to learners' individual differences and affective needs.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100530"},"PeriodicalIF":0.0,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Towards responsible AI in education: A Delphi-AHP-based framework for evaluating educational large language models 面向负责任的教育人工智能:基于delphi - ahp的教育大型语言模型评估框架
Q1 Social Sciences Pub Date : 2025-12-15 DOI: 10.1016/j.caeai.2025.100534
Pingrong Lin, Qin Deng, Yanbian Zhou
As large language models (LLMs) become deeply integrated into the educational landscape, evaluation criteria focusing solely on performance are insufficient to mitigate the risks of value misalignment and socioethical concerns. To steer educational LLMs towards responsible and beneficial development, this study aims to construct a multidimensional evaluation framework grounded in educational theory. Initially, a preliminary pool of evaluation indicators was established on the basis of a review of the literature and pedagogical theories. The Delphi method was subsequently employed to refine the indicator structure by integrating opinions from 21 cross-disciplinary experts. The analytic hierarchy process (AHP) was then applied to weigh these indicators and determine their priorities. The final framework comprises 5 first-level indicators and 21 s-level indicators. Learning effectiveness, knowledge construction capability, and social alignment are assigned critical weights, whereas intelligent interaction capability is less prioritized. Among the second-level indicators, information veracity was weighted the highest, while educational equity had the weakest influence. This study not only provides direction for the development and optimization of educational LLMs but also offers a reference for establishing a responsible artificial intelligence in education ecosystem.
随着大型语言模型(llm)深入融入教育领域,仅关注表现的评估标准不足以减轻价值错位和社会伦理问题的风险。为了引导教育学法学硕士走向负责任和有益的发展,本研究旨在构建一个基于教育理论的多维评估框架。首先,在回顾文献和教学理论的基础上,建立了初步的评价指标库。随后采用德尔菲法,通过整合21位跨学科专家的意见来完善指标结构。然后应用层次分析法(AHP)对这些指标进行权衡并确定其优先级。最终框架包括5个一级指标和21个s级指标。学习效率、知识构建能力和社会一致性被赋予了关键权重,而智能交互能力的优先级较低。在二级指标中,信息真实性的权重最高,而教育公平的影响最弱。本研究不仅为教育法学硕士的发展和优化提供了方向,也为建立负责任的教育人工智能生态系统提供了参考。
{"title":"Towards responsible AI in education: A Delphi-AHP-based framework for evaluating educational large language models","authors":"Pingrong Lin,&nbsp;Qin Deng,&nbsp;Yanbian Zhou","doi":"10.1016/j.caeai.2025.100534","DOIUrl":"10.1016/j.caeai.2025.100534","url":null,"abstract":"<div><div>As large language models (LLMs) become deeply integrated into the educational landscape, evaluation criteria focusing solely on performance are insufficient to mitigate the risks of value misalignment and socioethical concerns. To steer educational LLMs towards responsible and beneficial development, this study aims to construct a multidimensional evaluation framework grounded in educational theory. Initially, a preliminary pool of evaluation indicators was established on the basis of a review of the literature and pedagogical theories. The Delphi method was subsequently employed to refine the indicator structure by integrating opinions from 21 cross-disciplinary experts. The analytic hierarchy process (AHP) was then applied to weigh these indicators and determine their priorities. The final framework comprises 5 first-level indicators and 21 s-level indicators. Learning effectiveness, knowledge construction capability, and social alignment are assigned critical weights, whereas intelligent interaction capability is less prioritized. Among the second-level indicators, information veracity was weighted the highest, while educational equity had the weakest influence. This study not only provides direction for the development and optimization of educational LLMs but also offers a reference for establishing a responsible artificial intelligence in education ecosystem.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100534"},"PeriodicalIF":0.0,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145791849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Towards contextual-based AI: A scoping review of artificial intelligence in X reality for personalized learning 迈向基于情境的人工智能:X现实中用于个性化学习的人工智能的范围综述
Q1 Social Sciences Pub Date : 2025-12-15 DOI: 10.1016/j.caeai.2025.100523
Zifeng Liu , Serene Cheon , Austin Stanbury , Xinyue Jiao , Wanli Xing , Hyo Kang
This systematic review synthesizes 54 peer-reviewed studies published between 2019 and 2025 that examine how artificial intelligence (AI) and extended reality (XR) technologies are integrated to support adaptive and personalized learning. The studies were analyzed across multiple dimensions, including learning contexts, AI applications, adaptive input parameters, software and hardware used, and evaluation methods. The findings indicate growing research interest in AI–XR integration, with the majority of studies focused on procedural training and STEM education. Across these studies, AI is frequently used in multifaceted roles, most notably as a provider of real-time adaptive feedback, conversational agent, and a generator of instructional content. Despite these promising developments, the review identifies several critical limitations. While generative AI, particularly large language models (LLMs) such as GPT, has been widely used for conversational interactions, learner profile data remains largely underutilized. Inputs such as prior knowledge and motivation are rarely incorporated. Most implementations rely on a single adaptive strategy, typically driven by performance-based measures such as pre-quiz scores or task completion. As a result, they do not fully exploit the multimodal sensing capabilities of XR platforms (e.g., eye tracking, gesture recognition, environmental tracking), which could support context-sensitive, dynamically generated 3D content aligned with when, where, and how learners need support. Current evaluations of AI–XR systems also remain dominated by short-term performance outcomes, with limited attention to knowledge transfer and critical thinking. These findings highlight key opportunities for designing context-aware, learner-centered AI–XR systems and call for future research that more fully leverages multimodal data, incorporates richer learner profile information, and is grounded in explicit pedagogical models.
本系统综述综合了2019年至2025年间发表的54项同行评审研究,研究了人工智能(AI)和扩展现实(XR)技术如何集成以支持自适应和个性化学习。这些研究从多个维度进行了分析,包括学习背景、人工智能应用、自适应输入参数、使用的软件和硬件以及评估方法。研究结果表明,AI-XR集成的研究兴趣日益浓厚,大多数研究都集中在程序性培训和STEM教育上。在这些研究中,人工智能经常被用于多方面的角色,最明显的是作为实时自适应反馈的提供者、对话代理和教学内容的生成器。尽管有这些有希望的发展,审查指出了几个关键的局限性。虽然生成式人工智能,特别是大型语言模型(llm),如GPT,已广泛用于会话交互,但学习者简介数据仍未得到充分利用。像先验知识和动机这样的输入很少被纳入。大多数实现依赖于单一的自适应策略,通常由基于性能的度量(如预考分数或任务完成情况)驱动。因此,它们没有充分利用XR平台的多模态传感功能(例如,眼动追踪、手势识别、环境跟踪),这些功能可以支持上下文敏感的、动态生成的3D内容,这些内容与学习者需要支持的时间、地点和方式保持一致。目前对AI-XR系统的评估仍然以短期绩效结果为主,对知识转移和批判性思维的关注有限。这些发现强调了设计情境感知、以学习者为中心的AI-XR系统的关键机遇,并呼吁未来的研究更充分地利用多模态数据,结合更丰富的学习者概况信息,并以明确的教学模型为基础。
{"title":"Towards contextual-based AI: A scoping review of artificial intelligence in X reality for personalized learning","authors":"Zifeng Liu ,&nbsp;Serene Cheon ,&nbsp;Austin Stanbury ,&nbsp;Xinyue Jiao ,&nbsp;Wanli Xing ,&nbsp;Hyo Kang","doi":"10.1016/j.caeai.2025.100523","DOIUrl":"10.1016/j.caeai.2025.100523","url":null,"abstract":"<div><div>This systematic review synthesizes 54 peer-reviewed studies published between 2019 and 2025 that examine how artificial intelligence (AI) and extended reality (XR) technologies are integrated to support adaptive and personalized learning. The studies were analyzed across multiple dimensions, including learning contexts, AI applications, adaptive input parameters, software and hardware used, and evaluation methods. The findings indicate growing research interest in AI–XR integration, with the majority of studies focused on procedural training and STEM education. Across these studies, AI is frequently used in multifaceted roles, most notably as a provider of real-time adaptive feedback, conversational agent, and a generator of instructional content. Despite these promising developments, the review identifies several critical limitations. While generative AI, particularly large language models (LLMs) such as GPT, has been widely used for conversational interactions, learner profile data remains largely underutilized. Inputs such as prior knowledge and motivation are rarely incorporated. Most implementations rely on a single adaptive strategy, typically driven by performance-based measures such as pre-quiz scores or task completion. As a result, they do not fully exploit the multimodal sensing capabilities of XR platforms (e.g., eye tracking, gesture recognition, environmental tracking), which could support context-sensitive, dynamically generated 3D content aligned with when, where, and how learners need support. Current evaluations of AI–XR systems also remain dominated by short-term performance outcomes, with limited attention to knowledge transfer and critical thinking. These findings highlight key opportunities for designing context-aware, learner-centered AI–XR systems and call for future research that more fully leverages multimodal data, incorporates richer learner profile information, and is grounded in explicit pedagogical models.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100523"},"PeriodicalIF":0.0,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Large language models in education: a systematic review of empirical applications, benefits, and challenges 教育中的大型语言模型:对经验应用、益处和挑战的系统回顾
Q1 Social Sciences Pub Date : 2025-12-13 DOI: 10.1016/j.caeai.2025.100529
Yuhong Shi, Kun Yu, Yifei Dong, Fang Chen
The rapid advancement of Large Language Models (LLMs), particularly following the release of ChatGPT in November 2022, has significantly transformed educational methodologies. This systematic review aims to synthesize empirical studies published between November 2022 and March 2025, examining the implementation and effectiveness of LLMs in educational settings. 88 empirical studies identified key applications, benefits, and challenges associated with LLM integration in education. Our findings reveal that LLMs are utilized across various educational contexts in six primary applications, with Intelligent Tutoring Systems being particularly prominent. The benefits include improved academic performance, increased student engagement, enhanced accessibility, optimized resource utilization, and strengthened cognitive and skill development. However, challenges such as student over-reliance on AI, technical reliability issues, assessment fairness, and privacy concerns were identified. This review provides educators, researchers, and policymakers with evidence-based insights and practical guidance for effective LLM integration, contributing to the ongoing transformation of teaching and learning in the era of Generative Artificial Intelligence (GenAI) technology.
大型语言模型(llm)的快速发展,特别是在2022年11月ChatGPT发布之后,极大地改变了教育方法。本系统综述旨在综合2022年11月至2025年3月期间发表的实证研究,考察法学硕士在教育环境中的实施和有效性。88项实证研究确定了法学硕士在教育中整合的关键应用、好处和挑战。我们的研究结果表明,法学硕士在六种主要应用中被用于各种教育背景,其中智能辅导系统尤为突出。其好处包括提高学习成绩、提高学生参与度、增强可及性、优化资源利用以及加强认知和技能发展。然而,我们也发现了一些挑战,如学生对人工智能的过度依赖、技术可靠性问题、评估公平性和隐私问题。这篇综述为教育工作者、研究人员和政策制定者提供了有效整合法学硕士的基于证据的见解和实践指导,有助于在生成人工智能(GenAI)技术时代进行教学和学习的转变。
{"title":"Large language models in education: a systematic review of empirical applications, benefits, and challenges","authors":"Yuhong Shi,&nbsp;Kun Yu,&nbsp;Yifei Dong,&nbsp;Fang Chen","doi":"10.1016/j.caeai.2025.100529","DOIUrl":"10.1016/j.caeai.2025.100529","url":null,"abstract":"<div><div>The rapid advancement of Large Language Models (LLMs), particularly following the release of ChatGPT in November 2022, has significantly transformed educational methodologies. This systematic review aims to synthesize empirical studies published between November 2022 and March 2025, examining the implementation and effectiveness of LLMs in educational settings. 88 empirical studies identified key applications, benefits, and challenges associated with LLM integration in education. Our findings reveal that LLMs are utilized across various educational contexts in six primary applications, with Intelligent Tutoring Systems being particularly prominent. The benefits include improved academic performance, increased student engagement, enhanced accessibility, optimized resource utilization, and strengthened cognitive and skill development. However, challenges such as student over-reliance on AI, technical reliability issues, assessment fairness, and privacy concerns were identified. This review provides educators, researchers, and policymakers with evidence-based insights and practical guidance for effective LLM integration, contributing to the ongoing transformation of teaching and learning in the era of Generative Artificial Intelligence (GenAI) technology.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100529"},"PeriodicalIF":0.0,"publicationDate":"2025-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145791850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Evaluating lab assistant chatbot on student learning and behaviors in a programming short course 在编程短期课程中评估实验室助理聊天机器人对学生学习和行为的影响
Q1 Social Sciences Pub Date : 2025-12-11 DOI: 10.1016/j.caeai.2025.100527
Thanapon Noraset, Akara Supratak, Chaiyong Ragkhitwetsagul, Nubthong Worathong, Suppawong Tuarob
The rise of generative AI has increased interest in its application as an intelligent lab assistant in programming education, but concerns persist over its educational value and potential exploitation. While previous work supports using a customized chatbot as an assistant that provides specific guidance rather than allowing students to prompt responses freely, empirical evidence directly comparing these approaches is still lacking. This study evaluates the impact of two chatbot designs, Unrestricted and Assistant, on student learning and behavior in a short Python programming course. Through a controlled experiment involving 42 participants, we found that students using the Assistant chatbot, which provided guidance through preset and free-text prompts without offering direct solutions, showed significantly greater improvement from pre- to post-test than those using an Unrestricted chatbot. Analysis of over 1000 chatbot interactions revealed a strong preference for free-text input and a high rate of attempted exploits among participants. Additionally, prompt injection tests demonstrated the Assistant chatbot’s partial vulnerability to hijacking attempts. These findings highlight the benefits and limitations of AI assistants in programming education, underscoring the importance of guided interaction design to support learning while minimizing exploitation.
生成式人工智能的兴起增加了人们对其在编程教育中作为智能实验室助手的应用的兴趣,但对其教育价值和潜在开发的担忧仍然存在。虽然之前的研究支持使用定制的聊天机器人作为助手,提供具体的指导,而不是让学生自由地做出反应,但直接比较这些方法的经验证据仍然缺乏。本研究评估了两种聊天机器人的设计,无限制和助理,对学生的学习和行为在一个简短的Python编程课程的影响。通过一项涉及42名参与者的对照实验,我们发现使用Assistant聊天机器人的学生从测试前到测试后的进步明显大于使用无限制聊天机器人的学生。Assistant聊天机器人通过预设和自由文本提示提供指导,而不提供直接的解决方案。对1000多个聊天机器人交互的分析显示,参与者对自由文本输入的强烈偏好和企图利用漏洞的高比率。此外,提示注入测试还显示了Assistant聊天机器人对劫持企图的部分脆弱性。这些发现强调了人工智能助手在编程教育中的好处和局限性,强调了指导性交互设计在支持学习的同时最大限度地减少剥削的重要性。
{"title":"Evaluating lab assistant chatbot on student learning and behaviors in a programming short course","authors":"Thanapon Noraset,&nbsp;Akara Supratak,&nbsp;Chaiyong Ragkhitwetsagul,&nbsp;Nubthong Worathong,&nbsp;Suppawong Tuarob","doi":"10.1016/j.caeai.2025.100527","DOIUrl":"10.1016/j.caeai.2025.100527","url":null,"abstract":"<div><div>The rise of generative AI has increased interest in its application as an intelligent lab assistant in programming education, but concerns persist over its educational value and potential exploitation. While previous work supports using a customized chatbot as an assistant that provides specific guidance rather than allowing students to prompt responses freely, empirical evidence directly comparing these approaches is still lacking. This study evaluates the impact of two chatbot designs, Unrestricted and Assistant, on student learning and behavior in a short Python programming course. Through a controlled experiment involving 42 participants, we found that students using the Assistant chatbot, which provided guidance through preset and free-text prompts without offering direct solutions, showed significantly greater improvement from pre- to post-test than those using an Unrestricted chatbot. Analysis of over 1000 chatbot interactions revealed a strong preference for free-text input and a high rate of attempted exploits among participants. Additionally, prompt injection tests demonstrated the Assistant chatbot’s partial vulnerability to hijacking attempts. These findings highlight the benefits and limitations of AI assistants in programming education, underscoring the importance of guided interaction design to support learning while minimizing exploitation.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"10 ","pages":"Article 100527"},"PeriodicalIF":0.0,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145791894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
期刊
Computers and Education Artificial Intelligence
全部 Acc. Chem. Res. ACS Applied Bio Materials ACS Appl. Electron. Mater. ACS Appl. Energy Mater. ACS Appl. Mater. Interfaces ACS Appl. Nano Mater. ACS Appl. Polym. Mater. ACS BIOMATER-SCI ENG ACS Catal. ACS Cent. Sci. ACS Chem. Biol. ACS Chemical Health & Safety ACS Chem. Neurosci. ACS Comb. Sci. ACS Earth Space Chem. ACS Energy Lett. ACS Infect. Dis. ACS Macro Lett. ACS Mater. Lett. ACS Med. Chem. Lett. ACS Nano ACS Omega ACS Photonics ACS Sens. ACS Sustainable Chem. Eng. ACS Synth. Biol. Anal. Chem. BIOCHEMISTRY-US Bioconjugate Chem. BIOMACROMOLECULES Chem. Res. Toxicol. Chem. Rev. Chem. Mater. CRYST GROWTH DES ENERG FUEL Environ. Sci. Technol. Environ. Sci. Technol. Lett. Eur. J. Inorg. Chem. IND ENG CHEM RES Inorg. Chem. J. Agric. Food. Chem. J. Chem. Eng. Data J. Chem. Educ. J. Chem. Inf. Model. J. Chem. Theory Comput. J. Med. Chem. J. Nat. Prod. J PROTEOME RES J. Am. Chem. Soc. LANGMUIR MACROMOLECULES Mol. Pharmaceutics Nano Lett. Org. Lett. ORG PROCESS RES DEV ORGANOMETALLICS J. Org. Chem. J. Phys. Chem. J. Phys. Chem. A J. Phys. Chem. B J. Phys. Chem. C J. Phys. Chem. Lett. Analyst Anal. Methods Biomater. Sci. Catal. Sci. Technol. Chem. Commun. Chem. Soc. Rev. CHEM EDUC RES PRACT CRYSTENGCOMM Dalton Trans. Energy Environ. Sci. ENVIRON SCI-NANO ENVIRON SCI-PROC IMP ENVIRON SCI-WAT RES Faraday Discuss. Food Funct. Green Chem. Inorg. Chem. Front. Integr. Biol. J. Anal. At. Spectrom. J. Mater. Chem. A J. Mater. Chem. B J. Mater. Chem. C Lab Chip Mater. Chem. Front. Mater. Horiz. MEDCHEMCOMM Metallomics Mol. Biosyst. Mol. Syst. Des. Eng. Nanoscale Nanoscale Horiz. Nat. Prod. Rep. New J. Chem. Org. Biomol. Chem. Org. Chem. Front. PHOTOCH PHOTOBIO SCI PCCP Polym. Chem.
×
引用
GB/T 7714-2015
复制
MLA
复制
APA
复制
导出至
BibTeX EndNote RefMan NoteFirst NoteExpress
×
0
微信
客服QQ
Book学术公众号 扫码关注我们
反馈
×
意见反馈
请填写您的意见或建议
请填写您的手机或邮箱
×
提示
您的信息不完整,为了账户安全,请先补充。
现在去补充
×
提示
您因"违规操作"
具体请查看互助需知
我知道了
×
提示
现在去查看 取消
×
提示
确定
Book学术官方微信
Book学术文献互助
Book学术文献互助群
群 号:604180095
Book学术
文献互助 智能选刊 最新文献 互助须知 联系我们:info@booksci.cn
Book学术提供免费学术资源搜索服务,方便国内外学者检索中英文文献。致力于提供最便捷和优质的服务体验。
Copyright © 2023 Book学术 All rights reserved.
ghs 京公网安备 11010802042870号 京ICP备2023020795号-1