
Latest articles from Computers and Education Artificial Intelligence

Evaluating the potential of ChatGPT-reformulated essays as written feedback in L2 writing
Q1 Social Sciences Pub Date: 2025-11-17 DOI: 10.1016/j.caeai.2025.100500
Yingzhao Chen
Reformulation is a form of written corrective feedback to help second language (L2) learners improve their writing. This study examined whether ChatGPT could produce reformulations that (1) retain the meanings of the original essays and (2) are linguistically more developed than learners’ original essays. In addition, three types of ChatGPT prompts were compared to see which type yielded better reformulations. One thousand two hundred argumentative essays written for the TOEFL iBT® independent writing task were submitted to ChatGPT. ROUGE-L scores, used as a proxy for meaning retention, showed that ChatGPT reformulations largely retained the meaning of the original essays. A qualitative examination was conducted to examine the major types of changes ChatGPT made. For linguistic features, the ChatGPT reformulations were compared with the original essays for syntactic complexity, lexical sophistication, lexical diversity, and cohesion. Results showed that while ChatGPT reformulations were more developed for most linguistic features than the original essays, the reformulations did worse in cohesion. ChatGPT prompts with specific instructions produced reformulations with more developed linguistic features than a generic prompt. Findings were discussed in terms of how to use ChatGPT to generate reformulations and how to use the reformulations to improve L2 writing.
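ROUGE-L, as used above, scores overlap via the longest common subsequence between two texts. As a rough illustration of what the metric computes, here is a minimal pure-Python sketch of the ROUGE-L F-measure over whitespace tokens; the study's actual tooling and tokenization are not specified, so this is an assumption for illustration only.

```python
def lcs_len(a, b):
    # Dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_f(reference: str, candidate: str, beta: float = 1.0) -> float:
    # ROUGE-L F-measure: LCS-based recall (vs. reference) and
    # precision (vs. candidate), combined with weight beta.
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    recall = lcs / len(ref)
    precision = lcs / len(cand)
    return (1 + beta ** 2) * precision * recall / (recall + beta ** 2 * precision)

print(rouge_l_f("the cat sat on the mat", "the cat sat on the mat"))  # prints 1.0
```

An identical pair scores 1.0, a partial paraphrase scores somewhere in between, and disjoint texts score 0.0, which is why a high corpus-level ROUGE-L suggests the reformulations largely preserved the original wording and meaning.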
Citations: 0
An exploration of the role of generative AI in fostering creativity in architectural learning environments
Q1 Social Sciences Pub Date: 2025-11-17 DOI: 10.1016/j.caeai.2025.100501
Carlos Medel-Vera , Sandy Britton , William Francis Gates
This paper explores the role of generative AI (GenAI) in supporting creativity within architectural education through the lens of a student-led AI drawing competition. The research addresses two questions: (1) how creative are students' text prompts and the resulting AI-generated images, and is there a relationship between them? and (2) to what extent do students perceive GenAI as a supportive tool in their creative process? Drawing on a mixed-methods approach, the study combines semantic analysis of text prompts, aesthetic evaluation of AI-generated images, and a Creativity Support Index (CSI) survey, complemented by sentiment analysis of student feedback. The semantic analysis reveals varying levels of conceptual richness across prompts, with higher divergence correlating to more open-ended and expressive image results. The CSI data indicates strong support for exploratory and goal-directed creativity, with high scores in exploration and results-worth-effort dimensions. These findings suggest that GenAI can function as both a collaborator and provocateur in design pedagogy, facilitating creative ideation while inviting new pedagogical strategies centred on prompt literacy and reflective design. The study concludes by discussing implications for integrating AI tools into design education, emphasising the pedagogical value of prompt literacy, and calling for further research on creative agency and authorship in hybrid human–AI workflows.
Citations: 0
Trajectories of AI policy in higher education: Interpretations, discourses, and enactments of students and teachers
Q1 Social Sciences Pub Date: 2025-11-15 DOI: 10.1016/j.caeai.2025.100496
Jack Tsao
Generative artificial intelligence (GenAI) in higher education has introduced a spectrum of ethical challenges, significantly impacting learning outcomes, pedagogies, and assessments. Based on the experiences and perspectives of students and teachers at a research-intensive university in Hong Kong, the study draws on qualitative interview data with 58 undergraduate and graduate students and 12 teachers conducted in early 2025. Through the concept of policy trajectories (Ball, 1993; Ball et al., 2012), the research analyses the interconnections between material contexts and discursive constructions in how AI policies (and their absence) are framed, interpreted, enacted, and resisted. The findings reveal general concerns about academic integrity, fairness, equity, privacy, and data security, including specifically the invisible labour in dealing with ambiguous policies, uneven enforcement strategies, loopholes to avoid detection, disparities in access to state-of-the-art tools, and the cognitive and other developmental impacts due to overreliance on GenAI tools. Institutional ambiguity in policy supported experimentation and the appearance of progress, but risked individualising failure on teachers and students. Some actionable insights for university leaders and policymakers, teaching development centres, and individual teachers and programme coordinators include clearer messaging, the need for adaptive policies and guidelines with ongoing student and teacher participation, availability of digital libraries of toolkits, case studies and other resources, building in early “failure experiences”, and exposing students to authentic real-world applications and encounters to cultivate awareness on the limitations of GenAI. Ultimately, policy responses need to be both contextually and pragmatically sensitive, requiring on-the-ground experimentation and care by teachers.
Citations: 0
Assessing students’ DRIVE: A framework to evaluate learning through interactions with generative AI
Q1 Social Sciences Pub Date: 2025-11-13 DOI: 10.1016/j.caeai.2025.100497
Manuel Oliveira, Carlos Zednik, Gunter Bombaerts, Bert Sadowski, Rianne Conijn
As generative AI (GenAI) transforms how students learn and work, higher education must rethink its assessment strategies. This paper introduces a conceptual framework, DRIVE, and a taxonomy to help educators evaluate student learning based on their interactions with GenAI chatbots. Although existing research maps student-GenAI interactions to writing outcomes, practice-oriented tools for assessing evidence of domain-specific learning beyond general AI literacy skills or general writing skills remain underexplored. We propose that GenAI interactions can serve as a valid indicator of learning by revealing how students steer the interaction (Directive Reasoning Interaction) and articulate acquired knowledge into the dialogue with AI (Visible Expertise). We conducted a multi-methods analysis of GenAI interaction annotations (n = 1450) from graded essays (n = 70) in STEM writing-intensive courses. A strong positive correlation was found between the quality of GenAI interactions and final essay scores, validating the feasibility of this assessment approach. Furthermore, our taxonomy revealed distinct GenAI interaction profiles: High essay scores were connected to a ”targeted improvement partnership” focused on text refinement, whereas high interaction scores were linked to a ”collaborative intellectual partnership” centered on idea development. In contrast, below-average scores were associated with ”basic information retrieval” or ”passive task delegation” profiles. These findings demonstrate how the assessment method (output vs. process focus) may shape students’ GenAI usage. Traditional assessment can reinforce text optimization, while process-focused evaluation may reward an exploratory partnership with AI. The DRIVE framework and the taxonomy offer educators and researchers a practical tool to design assessments that capture learning in AI-integrated classrooms.
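The core quantitative claim above is a correlation between interaction-quality scores and final essay grades. As a minimal sketch of that kind of analysis, here is a Pearson correlation in pure Python; the paired scores are hypothetical placeholders, not the study's data, and the study's exact statistical procedure is not specified.

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation between paired scores.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-essay data: annotated interaction-quality score vs. final grade.
quality = [2.1, 3.4, 1.0, 4.2, 3.8]
grades = [6.0, 7.5, 5.0, 9.0, 8.0]
print(round(pearson_r(quality, grades), 3))
```

A value near +1 on data like this is what the paper means by a "strong positive correlation" between how students interact with the chatbot and how their essays are graded.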
Citations: 0
Digital equity and computational thinking privilege: The case of first-year engineering and computing students' attitudes towards artificial intelligence
Q1 Social Sciences Pub Date: 2025-11-01 DOI: 10.1016/j.caeai.2025.100495
Noemi V. Mendoza Diaz , So Yoon Yoon , Nancy Gertrudiz Salvador
Attitudes can constitute barriers to engineering, computing, and artificial intelligence (AI) enculturation, contributing to and resulting from digital inequity. Building upon research on computational thinking privilege, we explored first-year students' (a) perceived future impact of AI on their career prospects and (b) backgrounds (e.g., gender, underrepresented minority (URM) status, and First-Generation status) associated with their attitudes toward AI, computational thinking, and course performance. Computational thinking was measured using our newly validated Engineering Computational Thinking Diagnostic (ECTD), while course performance was assessed based on final grades in an introductory computing course at a Southwestern institution—the first coding experience for many students. For the fall 2021 participant cohort of 163 first-year engineering and computing students, 40.9 % expressed positive attitudes toward AI in their career prospects, with 48.9 % of them having prior computer science course experience. Regarding their backgrounds, the number of CS courses taken before college significantly correlated with their attitudes toward AI, ECTD scores, and course grades—irrespective of gender, URM status, residence, First-Generation, or First-Time-in-College status. These findings support the notion that computational thinking privilege, shaped by prior exposure and access to resources, contributes to digital inequity and influences attitudes. Specifically, students' cognitive attitudes toward AI have the potential to shape AI literacy and education, potentially perpetuating inequities in an increasingly AI-driven world.
Citations: 0
Conceptualizing AI literacies for children and youth: A systematic review on the design of AI literacy educational programs
Q1 Social Sciences Pub Date: 2025-10-30 DOI: 10.1016/j.caeai.2025.100491
Osnat Atias, Areej Mawasi
The growing presence of Artificial Intelligence (AI) in society increases the exposure of children and youth to these technologies. In response, recent research introduced educational programs that foster AI knowledge and competencies, collectively comprising AI literacy. This study presents a systematic review of 23 articles published up to 2023 describing AI literacy programs for children and youth. We examined: (1) motivations for teaching AI literacy, (2) conceptualizations of AI literacy that informed program design, and (3) learning theories and pedagogical methods employed. The analysis identified five motivational themes: workforce, informed users, purposeful creators, advocacy, and social good. Seventeen AI literacy frameworks and conceptual models were identified and grouped into four themes: competency-based, computational, sociotechnical, and practice-based. Application of a three-dimensional model of literacy (operational, sociocultural, and critical), shows that the operational dimension predominates in both frameworks and program designs, the sociocultural dimension is less accentuated, and the critical dimension is least evident. Cognitive constructivism emerged as the dominant learning theory guiding program design, often supported by hands-on activities and project-based learning methods. This systematic review advances understanding of the conceptual drivers shaping AI literacy programs for children and youth. The findings highlight the need for stronger conceptualizations of sociocultural and critical AI literacies and for their more balanced integration into educational programs. Addressing these gaps would better support broad motivations for teaching AI to children and youth, such as fostering social and ethical understanding and agency, and guide future research towards more comprehensive and critically informed frameworks.
Citations: 0
The promise and limits of LLMs in constructing proofs and hints for logic problems in intelligent tutoring systems
Q1 Social Sciences Pub Date: 2025-10-28 DOI: 10.1016/j.caeai.2025.100490
Sutapa Dey Tithi, Arun Kumar Ramesh, Clara DiMarco, Xiaoyi Tian, Nazia Alam, Kimia Fazeli, Tiffany Barnes
Intelligent tutoring systems have demonstrated effectiveness in teaching formal propositional logic proofs, but their reliance on template-based explanations limits their ability to provide personalized student feedback. While large language models (LLMs) offer promising capabilities for dynamic feedback generation, they risk producing hallucinations or pedagogically unsound explanations. We evaluated the stepwise accuracy of LLMs in constructing multi-step symbolic logic proofs, comparing six prompting techniques across four state-of-the-art LLMs on 358 propositional logic problems. Results show that DeepSeek-V3 achieved superior performance with up to 86.7 % accuracy on stepwise proof construction and excelled particularly in simpler rules. We further used the best-performing LLM to generate explanatory hints for 1050 unique student problem-solving states from a logic ITS and evaluated them on 4 criteria with both an LLM grader and human expert ratings on a 20 % sample. Our analysis finds that LLM-generated hints were 75 % accurate and rated highly by human evaluators on consistency and clarity, but did not perform as well in explaining why the hint was provided or its larger context. Our results demonstrate that LLMs may be used to augment tutoring systems with logic tutoring hints, but those hints require additional modifications to ensure accuracy and pedagogical appropriateness.
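Stepwise accuracy here means each individual proof step must be a valid application of an inference rule. As a toy illustration, here is a checker for a single modus ponens step; the tuple encoding of formulas is an assumption for illustration only, not the representation used by the study or its ITS.

```python
# Formulas as nested tuples: ('->', P, Q) is the implication P -> Q;
# atomic propositions are plain strings.
def modus_ponens(premises, conclusion):
    """Return True iff `conclusion` follows from `premises` by one
    application of modus ponens: from P and (P -> Q), infer Q."""
    for p in premises:
        if ('->', p, conclusion) in premises:
            return True
    return False

premises = {'A', ('->', 'A', 'B')}
print(modus_ponens(premises, 'B'))  # True: from A and A -> B, infer B
print(modus_ponens(premises, 'C'))  # False: no rule application yields C
```

A grader built this way can score an LLM-constructed proof step by step, which is the kind of evaluation the stepwise-accuracy figures above rely on.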
Citations: 0
How AI literacy correlates with affective, behavioral, cognitive and contextual variables: A systematic review
Q1 Social Sciences | Pub Date: 2025-10-28 | DOI: 10.1016/j.caeai.2025.100493
Arne Bewersdorff , Claudia Nerdel , Xiaoming Zhai
This systematic review maps the empirical landscape of AI literacy by examining its correlations with a diverse array of affective, behavioral, cognitive and contextual variables. Building on the review of AI literacy scales by Lintner (2024), we analyzed 31 empirical studies that applied six of those AI literacy scales, covering 14 countries and a range of participant groups. Our findings reveal robust correlations of AI literacy with AI self-efficacy, positive AI attitudes, motivation, and digital competencies, and negative correlations with AI anxiety and negative AI attitudes. Personal factors such as age appear largely uncorrelated with AI literacy. The review reveals measurement challenges regarding AI literacy: discrepancies between self-assessment scales and performance-based tests suggest that metacognitive biases like the Dunning-Kruger effect may inflate certain correlations with self-assessment AI literacy scales. Despite these challenges, the robust findings provide a solid foundation for future research.
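The bivariate correlations the review aggregates are typically Pearson coefficients. A self-contained sketch of the computation follows; the score vectors are invented for illustration and the perfect ±1.0 values are an artifact of the toy data, not findings from the review:

```python
from math import sqrt

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: AI literacy rises with self-efficacy, falls with anxiety.
literacy      = [1, 2, 3, 4, 5]
self_efficacy = [2, 4, 6, 8, 10]
anxiety       = [10, 8, 6, 4, 2]
print(round(pearson_r(literacy, self_efficacy), 3))  # 1.0
print(round(pearson_r(literacy, anxiety), 3))        # -1.0
```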
Citations: 0
Unmasking the impacts of self-evaluation in AI-supported writing instruction on EFL learners' emotion regulation, self-competence, motivation, and writing achievement
Q1 Social Sciences | Pub Date: 2025-10-28 | DOI: 10.1016/j.caeai.2025.100494
Tahereh Heydarnejad
This study explores the impact of embedding self-evaluation within AI-supported writing instruction on learners’ cognitive emotion regulation, self-competence, motivation, and writing achievement. Conducted at a high school in Iran, the research utilized a quantitative quasi-experimental pretest-posttest design involving two intact pre-intermediate writing classes randomly assigned to an experimental group and a control group. The experimental group received instruction that combined AI tools with structured self-evaluation activities, whereas the control group followed a traditional teaching approach without AI integration or self-evaluation. Data were collected using the Cognitive Emotion Regulation Questionnaire, the Self-Competence Scale, the Academic Motivation Scale, and standardized writing assessments. Statistical analyses, including Chi-square tests and t-tests, indicated that the experimental group significantly outperformed the control group across all measured variables, demonstrating improvements in cognitive emotion regulation, self-competence, motivation, and writing achievement. These results underscore the value of integrating self-evaluation practices alongside AI tools to enhance learner outcomes in EFL writing contexts.
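The group comparisons in a pretest-posttest design of this kind typically reduce to a two-sample t statistic. The sketch below computes Welch's t (which does not assume equal variances) from scratch; the scores are invented, and a full analysis would also include the study's Chi-square tests and a p-value from the t distribution:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)  # sample variances (n - 1 denominator)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical posttest writing scores for the two groups.
experimental = [80, 82, 84]
control      = [70, 72, 74]
print(round(welch_t(experimental, control), 2))  # 6.12
```

A large positive t here would favor the experimental group, matching the direction of the reported findings.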
Citations: 0
Optimization method for academic English content based on generative adversarial networks and data augmentation
Q1 Social Sciences | Pub Date: 2025-10-24 | DOI: 10.1016/j.caeai.2025.100492
Hui Gao
With the globalization of academic exchanges, the importance of academic English writing quality has become increasingly prominent. Especially for non-native speakers, grammar and language quality in academic English writing significantly affect the readability and academic value of articles. Therefore, this study proposes an academic English content optimization method based on generative adversarial networks and data augmentation. The method uses Transformer as the generator, combines generative adversarial networks with data augmentation techniques to generate high-quality pseudo error correction sentence pairs, and optimizes model performance through policy gradient methods. Although academic English is used as the application context in this study, the architecture can be adapted to other English writing genres given appropriate training corpora. In the reported results, after 500 iterations the precision was 0.98 and the recall was 0.10. The accuracy-2, F1 score, mean absolute error, correlation coefficient index, and accuracy-7 values of the proposed academic English content optimization model were 87.8, 89.2, 0.05, 0.69, and 97.6. The proposed model has higher accuracy and efficiency on multiple datasets, which can effectively optimize various types of English grammar errors, providing new solutions for content optimization in academic English writing.
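Precision and recall for grammatical error correction are commonly computed as set overlap between predicted and gold-standard correction edits. A minimal sketch follows; the `(token_index, correction)` edit representation and the example edits are illustrative assumptions, not the paper's actual evaluation pipeline:

```python
def precision_recall(predicted: set, gold: set) -> tuple[float, float]:
    """Precision and recall of predicted correction edits against gold edits."""
    tp = len(predicted & gold)  # true positives: edits present in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical edits as (token_index, corrected_token) pairs.
gold = {(2, "has"), (5, "an"), (9, "studies"), (12, "were")}
pred = {(2, "has"), (5, "an")}
print(precision_recall(pred, gold))  # (1.0, 0.5)
```

This also shows how a conservative corrector can reach the high-precision, low-recall regime reported above: every proposed edit is right, but most gold edits go unproposed.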
Citations: 0