
Latest publications in Computers and Education: Artificial Intelligence

The promise and limits of LLMs in constructing proofs and hints for logic problems in intelligent tutoring systems
Q1 Social Sciences Pub Date: 2025-10-28 DOI: 10.1016/j.caeai.2025.100490
Sutapa Dey Tithi, Arun Kumar Ramesh, Clara DiMarco, Xiaoyi Tian, Nazia Alam, Kimia Fazeli, Tiffany Barnes
Intelligent tutoring systems have demonstrated effectiveness in teaching formal propositional logic proofs, but their reliance on template-based explanations limits their ability to provide personalized student feedback. While large language models (LLMs) offer promising capabilities for dynamic feedback generation, they risk producing hallucinations or pedagogically unsound explanations. We evaluated the stepwise accuracy of LLMs in constructing multi-step symbolic logic proofs, comparing six prompting techniques across four state-of-the-art LLMs on 358 propositional logic problems. Results show that DeepSeek-V3 achieved superior performance, with up to 86.7% accuracy on stepwise proof construction, and excelled particularly on simpler rules. We further used the best-performing LLM to generate explanatory hints for 1050 unique student problem-solving states from a logic ITS and evaluated them on four criteria with both an LLM grader and human expert ratings on a 20% sample. Our analysis finds that LLM-generated hints were 75% accurate and rated highly by human evaluators on consistency and clarity, but performed less well in explaining why the hint was provided or its larger context. Our results demonstrate that LLMs may be used to augment tutoring systems with logic tutoring hints, but those hints require additional modifications to ensure accuracy and pedagogical appropriateness.
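The "stepwise accuracy" metric implies that each derivation step can be validated independently against an inference rule. A minimal Python sketch of such a per-step check, using an illustrative tuple encoding of formulas and only the modus ponens rule (neither is taken from the paper's system):

```python
def applies_modus_ponens(premises, conclusion):
    """Return True if `conclusion` follows from `premises` by modus ponens.

    Formulas are encoded as nested tuples, e.g. ('->', 'P', 'Q') for P -> Q;
    this encoding is a stand-in for whatever the ITS uses internally.
    """
    for f in premises:
        if isinstance(f, tuple) and f[0] == '->':
            antecedent, consequent = f[1], f[2]
            if antecedent in premises and consequent == conclusion:
                return True
    return False

# A proof is scored step by step: each claimed step either matches a rule or not.
step_ok = applies_modus_ponens({'P', ('->', 'P', 'Q')}, 'Q')   # valid step
step_bad = applies_modus_ponens({'Q', ('->', 'P', 'Q')}, 'P')  # affirming the consequent
```

Grading an LLM-proposed proof then reduces to running such checks over every step, which is what makes a stepwise accuracy figure like 86.7% measurable.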
Citations: 0
How AI literacy correlates with affective, behavioral, cognitive and contextual variables: A systematic review
Q1 Social Sciences Pub Date: 2025-10-28 DOI: 10.1016/j.caeai.2025.100493
Arne Bewersdorff, Claudia Nerdel, Xiaoming Zhai
This systematic review maps the empirical landscape of AI literacy by examining its correlations with a diverse array of affective, behavioral, cognitive and contextual variables. Building on the review of AI literacy scales by Lintner (2024), we analyzed 31 empirical studies that applied six of those AI literacy scales, covering 14 countries and a range of participant groups. Our findings reveal robust correlations of AI literacy with AI self-efficacy, positive AI attitudes, motivation, and digital competencies, and negative correlations with AI anxiety and negative AI attitudes. Personal factors such as age appear largely uncorrelated with AI literacy. The review reveals measurement challenges regarding AI literacy: discrepancies between self-assessment scales and performance-based tests suggest that metacognitive biases like the Dunning-Kruger effect may inflate certain correlations with self-assessment AI literacy scales. Despite these challenges, the robust findings provide a solid foundation for future research.
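The correlations such reviews aggregate are typically Pearson coefficients. A self-contained sketch of the statistic, run on fabricated toy scores (not data from any reviewed study), illustrating a negative literacy-anxiety relationship:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated Likert-style scores for illustration only:
literacy = [2.1, 3.4, 3.9, 4.2, 4.8]
anxiety = [4.5, 3.8, 3.1, 2.9, 2.0]
r = pearson_r(literacy, anxiety)  # strongly negative, r close to -1
```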
Citations: 0
Unmasking the impacts of self-evaluation in AI-supported writing instruction on EFL learners’ emotion regulation, self-competence, motivation, and writing achievement
Q1 Social Sciences Pub Date: 2025-10-28 DOI: 10.1016/j.caeai.2025.100494
Tahereh Heydarnejad
This study explores the impact of embedding self-evaluation within AI-supported writing instruction on learners’ cognitive emotion regulation, self-competence, motivation, and writing achievement. Conducted at a high school in Iran, the research utilized a quantitative quasi-experimental pretest-posttest design involving two intact pre-intermediate writing classes randomly assigned to an experimental group and a control group. The experimental group received instruction that combined AI tools with structured self-evaluation activities, whereas the control group followed a traditional teaching approach without AI integration or self-evaluation. Data were collected using the Cognitive Emotion Regulation Questionnaire, the Self-Competence Scale, the Academic Motivation Scale, and standardized writing assessments. Statistical analyses, including Chi-square tests and t-tests, indicated that the experimental group significantly outperformed the control group across all measured variables, demonstrating improvements in cognitive emotion regulation, self-competence, motivation, and writing achievement. These results underscore the value of integrating self-evaluation practices alongside AI tools to enhance learner outcomes in EFL writing contexts.
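The group comparison described above rests on an independent-samples t-test. A minimal sketch using Welch's t statistic on fabricated post-test scores (the study's own data and exact test variant are not reproduced here):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples: the larger |t|,
    the stronger the evidence for a difference in group means."""
    va, vb = variance(a), variance(b)  # sample variances (n - 1 denominator)
    return (mean(a) - mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

# Toy post-test writing scores, illustrative only:
experimental = [78, 82, 85, 88, 90, 84]
control = [70, 72, 75, 74, 77, 73]
t = welch_t(experimental, control)  # clearly positive: experimental mean is higher
```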
Citations: 0
Optimization method for academic English content based on generative adversarial networks and data augmentation
Q1 Social Sciences Pub Date: 2025-10-24 DOI: 10.1016/j.caeai.2025.100492
Hui Gao
With the globalization of academic exchanges, the importance of academic English writing quality has become increasingly prominent. Especially for non-native speakers, grammar and language quality in academic English writing significantly affect the readability and academic value of articles. Therefore, this study proposes an academic English content optimization method based on generative adversarial networks and data augmentation. The method uses Transformer as the generator, combines generative adversarial networks with data augmentation techniques to generate high-quality pseudo error correction sentence pairs, and optimizes model performance through policy gradient methods. Although academic English is used as the application context in this study, the architecture can be adapted to other English writing genres given appropriate training corpora. In the reported experiments, at 500 iterations the precision was 0.98 and the recall was 0.10. The accuracy-2, F1 score, mean absolute error, correlation coefficient index, and accuracy-7 values of the proposed academic English content optimization model were 87.8, 89.2, 0.05, 0.69, and 97.6. The proposed model has higher accuracy and efficiency on multiple datasets, which can effectively optimize various types of English grammar errors, providing new solutions for content optimization in academic English writing.
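The precision/recall figures above follow from standard edit-level counts. A short sketch showing how such numbers arise; the counts are illustrative, chosen only to reproduce the same 0.98/0.10 pattern, and are not the paper's:

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive, and
    false-negative edit counts, as commonly used to score grammatical
    error correction output (a sketch, not the paper's exact pipeline)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Very precise but low-coverage corrections: few wrong edits, many misses.
p, r, f1 = prf(tp=49, fp=1, fn=441)  # p = 0.98, r = 0.10
```

High precision with low recall, as reported, means the corrections the model does make are almost always right, but it leaves most errors untouched.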
Citations: 0
Engagements with GPT responses and learner prompts in ChatGPT-based learning of English argumentative writing logic and their impacts
Q1 Social Sciences Pub Date: 2025-10-22 DOI: 10.1016/j.caeai.2025.100489
Ruofei Zhang, Di Zou, Haoran Xie, Fu Lee Wang
ChatGPT can be defined as a chatbot powered by OpenAI's GPT language models, which has shown promise in improving English-as-a-foreign-language (EFL) writing knowledge and skills. However, its application to developing EFL argumentative writing logic remains largely unexplored, despite the importance of this area. Moreover, existing studies have highlighted deep learner engagement with ChatGPT-based learning but have not examined how engagement varies between two key components of this learning method: GPT responses (GPT's messages to learners) and learner prompts (learners' messages to GPT). To better understand the mechanisms and efficacy of ChatGPT-based learning for EFL argumentative writing, we developed a discipline-specific GPT-4-powered chatbot for learning English argumentative writing logic. Forty-two Chinese university students used the tool for 45–75 min. Learner engagement in GPT responses and learner prompts was assessed via eye movements on corresponding interface areas of ChatGPT recorded by a Tobii eye-tracker. Their learning outcomes were assessed via pre-post-delayed tests and pre-post writing tasks. Semi-structured interviews were also administered. Our findings revealed that learners engaged with GPT responses frequently but for short durations, and with learner prompts infrequently but for longer durations. Engagement in GPT responses appears to facilitate logic knowledge development, whereas engagement in learner prompts may be associated with challenges in developing writing logic. Based on the results, we explored the factors influencing the patterns and impacts of learner engagement with ChatGPT-based learning of English argumentative writing logic and offered implications for future implementation of this learning method.
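The engagement pattern reported above (frequent short fixations versus infrequent long ones) can be made concrete by aggregating fixations per area of interest (AOI). A sketch with invented AOI names and durations, not the study's Tobii export format:

```python
from collections import defaultdict

def engagement_by_aoi(fixations):
    """Aggregate eye-tracking fixations into per-AOI counts and total dwell
    time (ms). `fixations` is a list of (aoi_name, duration_ms) pairs."""
    counts = defaultdict(int)
    dwell = defaultdict(int)
    for aoi, dur in fixations:
        counts[aoi] += 1
        dwell[aoi] += dur
    return dict(counts), dict(dwell)

# Toy data echoing the reported pattern: many short fixations on GPT
# responses, few long fixations on learner prompts.
fix = [("gpt_response", 180)] * 8 + [("learner_prompt", 900)] * 2
counts, dwell = engagement_by_aoi(fix)
```

On this toy data the prompt area receives far fewer fixations yet more total dwell time, which is exactly the frequency-versus-duration distinction the study draws.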
Citations: 0
AI tools and POE model in educational technology Learning: Exploring participant experiences using thematic analysis
Q1 Social Sciences Pub Date: 2025-10-11 DOI: 10.1016/j.caeai.2025.100488
Sandy I-Ching Wang, Eric Zhi-Feng Liu
In an era where international education trends increasingly prioritize the integration of artificial intelligence (AI), there is a critical need to understand how students effectively use these tools to foster innovative and deep learning. This study addresses this gap by investigating higher education students’ experiences with advanced AI tools within a nine-week instructional experiment structured by the Predict-Observe-Explain (POE) model. Our primary motivation was to explore how a structured, inquiry-based framework could scaffold the development of sophisticated AI literacy, guiding students toward strategic human-AI partnerships. We employed a qualitative case study design, collecting data from 17 students at a Taiwanese university through written focus group interviews. Participants were granted free access to premium-tier generative AI tools, including ChatGPT and NotebookLM. Findings reveal that students developed sophisticated, task-aligned workflows by strategically combining multiple AI tools, a progression significantly accelerated by institutional scaffolding. Participants reported substantial benefits, including enhanced efficiency and deeper cognitive engagement, while also navigating persistent challenges such as accuracy concerns and technical limitations. They adopted adaptive strategies, including cross-tool verification, prompt refinement, and critical evaluation, to mitigate these issues. The study further demonstrates that AI tools were particularly effective in supporting research question refinement and academic reasoning. This research makes several contributions to the field of educational technology. It provides empirical evidence that inquiry-based models like POE are effective for guiding AI tool integration and fostering higher-order cognitive skills.
Citations: 0
Are pre-service teachers ready to teach the Alpha generation? The impact of pre-service teachers' ChatGPT literacy levels on behavioral intentions toward ChatGPT-4.0
Q1 Social Sciences Pub Date: 2025-10-06 DOI: 10.1016/j.caeai.2025.100486
Eylem Kılıç, Firas Almasri, H. Eray Çelik
This study seeks to enhance our understanding of how pre-service teachers working with the Alpha Generation (PSTAG) interact with the Technology Acceptance Model (TAM) in the context of ChatGPT. It specifically examines their perceptions of ease of use (PEOU), perceived usefulness (PU), and behavioral intention (BI) toward ChatGPT-4o, utilizing an extended version of the TAM. The survey method was used, and 450 PSTAG participated in the current study. Data were collected through a survey including the ChatGPT literacy scale (ChatGPT-LS) and TAM to determine PSTAG's ChatGPT-4o literacy level and its relationship with PEOU, PU, and BI. Thirteen hypotheses are developed to test the proposed model. All but one of the hypotheses are supported. This study shows that PEOU and PU play a key role in BI's use of ChatGPT-4o, and the sub-dimensions of the ChatGPT-LS have a statistically significant effect on PEOU and PU. Technical proficiency was found to have no positive effect on PU. It can be suggested that PSTAG's ChatGPT literacy level should be improved through courses to increase their behavioral intention to use ChatGPT-4o for educational purposes.
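The PEOU/PU-to-BI paths tested in TAM studies like this one are, at heart, regression coefficients. A minimal single-predictor least-squares sketch on fabricated Likert-style scores (the study's actual model is multivariate and the data here are invented):

```python
def ols_slope_intercept(x, y):
    """Simple least-squares fit y = slope * x + intercept; the slope is a
    minimal stand-in for a TAM path coefficient such as PU -> BI."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Fabricated 5-point Likert-style scores (illustration only):
pu = [2, 3, 3, 4, 5, 5]   # perceived usefulness
bi = [2, 3, 4, 4, 5, 5]   # behavioral intention
slope, intercept = ols_slope_intercept(pu, bi)  # positive slope: higher PU, higher BI
```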
Citations: 0
A systematic literature review of generative artificial intelligence (GenAI) literacy in schools
Q1 Social Sciences Pub Date: 2025-10-06 DOI: 10.1016/j.caeai.2025.100487
Joonhyeong Park
Given the rapid integration of generative artificial intelligence (GenAI) technologies, such as large language models, into educational contexts, fostering students’ GenAI literacy has become essential. However, previous AI literacy frameworks may inadequately reflect specific competencies necessary for proficient GenAI use. To address this gap, this study aimed to conceptualise a GenAI specific literacy framework tailored explicitly for educational settings and systematically examine recent research trends concerning GenAI literacy. Employing a systematic literature review approach, 51 empirical studies published in 2023 and 2024 were selected and analysed based on five identified competencies of GenAI literacy: (1) know and understand GenAI, (2) use and apply GenAI, (3) evaluate and incorporate GenAI, (4) GenAI ethics, and (5) attitudes towards GenAI. The findings indicate that students demonstrated moderate understanding of GenAI concepts but frequently faced challenges in prompt engineering and critical evaluation of AI-generated outputs. Ethical considerations, particularly related to academic integrity, privacy, and data security, were highlighted as significant concerns. Furthermore, positive student attitudes towards GenAI, including curiosity and self-efficacy, emerged as vital components enhancing engagement with GenAI tools. A five-step interaction model was proposed to help in fostering students' GenAI literacy, emphasising iterative and dynamic engagement with GenAI tools. This study underscores the necessity of explicitly integrating GenAI-specific competencies into educational practices and recommends clear institutional policies, and further empirical research to support the responsible, effective, and reflective use of GenAI in school settings.
Citations: 0
Objective measurement of AI literacy: Development and validation of the AI competency objective scale (AICOS)
Q1 Social Sciences Pub Date: 2025-10-04 DOI: 10.1016/j.caeai.2025.100485
André Markus , Astrid Carolus , Carolin Wienrich
As Artificial Intelligence (AI) becomes increasingly pervasive in everyday life, AI literacy is widely recognized as a set of essential competencies for navigating AI-driven environments safely, responsibly, and effectively. There is a growing need to assess this construct, for example, to inform targeted educational interventions. Although several measurement tools already exist, many show limitations regarding subjective data collection methods, differentiation between target groups, validity, and the integration of recent developments, such as Generative AI literacy. To address these limitations, this study introduces the AI Competency Objective Scale (AICOS), an instrument grounded in a competency-oriented framework that enables a more objective assessment of AI literacy sub-competencies. AICOS draws on established theoretical models, integrates validated items from prior instruments, and explicitly incorporates Generative AI literacy as a distinct dimension. The AICOS provides a sound and comprehensive measure of AI literacy, and initial analyses indicate the potential for a modular structure. A preliminary short version of the scale has also been developed. Due to its methodological foundation, extensive validation, and incorporation of recent technological advancements, the test represents a valuable tool for scientific research and practical applications in educational and professional contexts. The AICOS contributes to the standardization of AI literacy assessment and supports the targeted development of AI-related competencies across diverse populations.
Citations: 0
Leveraging an LLM-enhanced bilingual conversational agent for EFL children’s dialogic reading: Insights from children, parents, and educators
Q1 Social Sciences Pub Date: 2025-09-30 DOI: 10.1016/j.caeai.2025.100484
Feiwen Xiao , Zhaohui Li , Jiaju Lin , Xiaohan Zou , Dandan Yang , Wenting Zou , Jinjun Xiong
Dialogic reading, a technique in which adults and children engage in interactive discussions around a story, has been shown to improve children’s language and literacy development. Despite its evidence-based benefits, its adoption among families with English as a Foreign Language (EFL) backgrounds has been particularly challenging due to limited English proficiency, restricted conversational skills, and a low inclination to read in English. This paper presents “Storio”, an e-book integrated with a bilingual large language model (LLM)-based conversational agent named “Mia”, used as a design probe to investigate interactions between EFL children (N=17) and the agent, and to gather insights from parents (N=19) and educators (N=2). The findings indicate that the bilingual agent effectively supports language output, fosters interactive experiences, and promotes language skills. The study offers valuable design implications for the development of LLM-based interactive e-books tailored to the needs of children with diverse linguistic and cultural backgrounds.
Citations: 0