
Latest publications in Computers and Education Artificial Intelligence

A systematic review of conversational AI tools in ELT: Publication trends, tools, research methods, learning outcomes, and antecedents
Q1 Social Sciences Pub Date : 2024-09-06 DOI: 10.1016/j.caeai.2024.100291
Wan Yee Winsy Lai, Ju Seong Lee

This review analyzed the trends in conversational AI tools in ELT from January 2013 to November 2023. The study examined 32 papers, focusing on publication trends, tool types, research methods, learning outcomes, and factors influencing their use. Findings revealed a gradual increase in publications, with 4 (12%) from 2013 to 2021, 13 (41%) in 2022, and 15 (47%) in 2023. All studies (100%) were conducted in Asian EFL contexts. Among the AI chatbots, Google Assistant (25%) was the most widely used. Quasi-experimental (45%) and cross-sectional (41%) research designs were commonly employed. Mixed-method (50%) approaches were prevalent for data collection and analysis. Conversational AI yielded positive outcomes in affective (43%) and cognitive skills (41%). The main factors influencing user perceptions or behaviors were individual (47%) and microsystem layers (31%). Future studies should (a) include diverse contexts beyond Asia, (b) consider the use of up-to-date tools (e.g., ChatGPT), (c) employ rigorous experimental designs, (d) explore behavioral learning outcomes, and (e) investigate broader environmental factors. This systematic review enhances current knowledge of recent research trends, identifies environmental factors influencing the use of conversational AI tools in ELT, and provides insights for future research and practice in this rapidly evolving field.

Citations: 0
AI chatbots in programming education: Students’ use in a scientific computing course and consequences for learning
Q1 Social Sciences Pub Date : 2024-09-02 DOI: 10.1016/j.caeai.2024.100290
Suzanne Groothuijsen , Antoine van den Beemt , Joris C. Remmers , Ludo W. van Meeuwen

Teaching and learning in higher education require adaptation following students' inevitable use of AI chatbots. This study contributes to the empirical literature on students' use of AI chatbots and how they influence learning. The aim of this study is to identify how to adapt programming education in higher engineering education. A mixed-methods case study was conducted of a scientific computing course in a Mechanical Engineering Master's program at Eindhoven University of Technology in the Netherlands. Data consisted of 29 student questionnaires, a semi-structured group interview with three students, a semi-structured interview with the teacher, and 29 students' grades. Results show that students used ChatGPT for error checking and debugging of code, increasing conceptual understanding, generating and optimizing solution code, explaining code, and solving mathematical problems. While students reported advantages of using ChatGPT, the teacher expressed concerns over declining code quality and student learning. Furthermore, both the students and the teacher perceived a negative influence of ChatGPT usage on pair programming, and consequently on student collaboration. The findings suggest that learning objectives should be formulated in more detail, to highlight essential programming skills, and be expanded to include the use of AI tools. Complex programming assignments remain appropriate in programming education, but pair programming as a didactic approach should be reconsidered in light of the growing use of AI chatbots.

Citations: 0
Developing and validating measures for AI literacy tests: From self-reported to objective measures
Q1 Social Sciences Pub Date : 2024-08-30 DOI: 10.1016/j.caeai.2024.100282
Thomas K.F. Chiu , Yifan Chen , King Woon Yau , Ching-sing Chai , Helen Meng , Irwin King , Savio Wong , Yeung Yam

The majority of AI literacy studies have designed and developed self-reported questionnaires to assess AI learning and understanding. These studies assessed students' perceived AI capability rather than AI literacy, because self-perceptions are seldom an accurate account of true ability. International assessment programs that use objective measures to assess science, mathematical, digital, and computational literacy back up this argument. Furthermore, because AI education research is still in its infancy, the current definition of AI literacy in the literature may not meet the needs of young students. Therefore, this study aims to develop and validate an AI literacy test for school students within the interdisciplinary project known as AI4future. Engineering and education researchers created and selected 25 multiple-choice questions to accomplish this goal, and school teachers validated them while developing an AI curriculum for middle schools. A total of 2390 students in grades 7 to 9 took the test. We used a Rasch model to investigate the discrimination, reliability, and validity of the items. The results showed that the model met the unidimensionality assumption and yielded a set of reliable and valid items, indicating the quality of the test. The test enables AI education researchers and practitioners to appropriately evaluate their AI-related education interventions.
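The Rasch (1PL) analysis described above can be sketched with plain numpy. The sketch below simulates dichotomous responses to a 25-item test and recovers item difficulties by joint maximum likelihood; the data are simulated, not the AI4future responses, and the simple gradient scheme stands in for the specialized IRT software such studies typically use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate dichotomous responses under a Rasch (1PL) model:
# P(correct) = sigmoid(theta_person - b_item).
n_persons, n_items = 2000, 25          # 25 items, as in the test described above
theta = rng.normal(0.0, 1.0, n_persons)
b_true = np.linspace(-2.0, 2.0, n_items)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

responses = (rng.random((n_persons, n_items)) <
             sigmoid(theta[:, None] - b_true[None, :])).astype(float)

# Joint maximum-likelihood estimation by alternating gradient ascent.
theta_hat = np.zeros(n_persons)
b_hat = np.zeros(n_items)
lr = 0.5
for _ in range(200):
    p = sigmoid(theta_hat[:, None] - b_hat[None, :])
    resid = responses - p              # gradient of the Bernoulli log-likelihood
    theta_hat += lr * resid.mean(axis=1)
    b_hat -= lr * resid.mean(axis=0)
    b_hat -= b_hat.mean()              # fix the scale: difficulties sum to zero

# Estimated difficulties should track the true ones closely.
corr = np.corrcoef(b_hat, b_true)[0, 1]
print(round(corr, 3))
```

With a sample of this size the estimated and true difficulties correlate almost perfectly, which is the kind of item-quality evidence a Rasch analysis provides.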

Citations: 0
The effect of student acceptance on learning outcomes: AI-generated short videos versus paper materials
Q1 Social Sciences Pub Date : 2024-08-27 DOI: 10.1016/j.caeai.2024.100286
Yidi Zhang , Margarida Lucas , Pedro Bem-haja , Luís Pedro

The use of video and paper-based materials is widespread in foreign language learning (FLL). It is well established that the level of acceptance of these materials influences learning outcomes, but there is a lack of evidence regarding the use of videos generated by artificial intelligence (AI) and their impact on these aspects. This paper used linear mixed models and path analysis to investigate the influence of student acceptance of AI-generated short videos on learning outcomes, compared to paper-based materials. Student acceptance was assessed based on perceived ease of use (PEU), perceived usefulness (PU), attitude (A), intentions (I), and concentration (C). The results indicate that both AI-generated short videos and paper-based materials can significantly enhance learning outcomes. AI-generated short videos are more likely to be accepted by students with lower pre-test scores and may lead to more significant learning outcomes when PEU, PU, A, I and C are at higher levels. On the other hand, paper-based materials are more likely to be accepted by students with higher pre-test scores and may lead to more significant learning outcomes when PEU, PU, A, I and C are at lower levels. These findings offer empirical evidence supporting the use of AI-generated short videos in FLL and provide suggestions for selecting appropriate learning materials in different FLL contexts.
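The path-analysis logic in the abstract — acceptance sitting between pre-test scores and learning outcomes — can be illustrated with two ordinary least-squares regressions on toy data. Every number below is fabricated for illustration and does not come from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy path model echoing the abstract's logic: pre-test score influences
# an acceptance composite (PEU/PU/A/I/C), which in turn influences gain.
n = 200
pre = rng.normal(60, 10, n)
accept = 5 - 0.03 * pre + rng.normal(0, 0.5, n)             # path a: pre -> acceptance
gain = 2 + 1.5 * accept + 0.01 * pre + rng.normal(0, 1, n)  # paths b and c'

def ols(y, *cols):
    # Least-squares coefficients with an intercept column prepended.
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(accept, pre)[1]         # effect of pre-test score on acceptance
b = ols(gain, accept, pre)[1]   # effect of acceptance on gain, pre-test held fixed
indirect = a * b                # mediated (indirect) effect of pre-test score
print(round(a, 3), round(b, 3), round(indirect, 3))
```

In this toy setup, lower pre-test scores raise acceptance (negative path a), and acceptance raises the learning gain (positive path b), mirroring the pattern the abstract reports for AI-generated videos.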

Citations: 0
Teachers' and students' perceptions of AI-generated concept explanations: Implications for integrating generative AI in computer science education
Q1 Social Sciences Pub Date : 2024-08-27 DOI: 10.1016/j.caeai.2024.100283
Soohwan Lee, Ki-Sang Song

The educational application of Generative AI (GAI) has garnered significant interest, sparking discussions about the pedagogical value of GAI-generated content. This study investigates the perceived effectiveness of concept explanations produced by GAI compared to those created by human teachers, focusing on the programming concepts of sequence, selection, and iteration. The research also explores teachers' and students' ability to discern the source of these explanations. Participants included 11 teachers and 70 sixth-grade students who were presented with concept explanations created by teachers or generated by ChatGPT. They were asked to evaluate the helpfulness of the explanations and identify their source. Results indicated that teachers found GAI-generated explanations more helpful for sequence and selection concepts, while preferring teacher-created explanations for iteration (χ2(2, N = 11) = 10.062, p = .007, ω = .595). In contrast, students showed varying abilities to distinguish between AI-generated and teacher-created explanations across concepts, with significant differences observed (χ2(2, N = 70) = 22.127, p < .001, ω = .399). Notably, students demonstrated difficulty in identifying the source of explanations for the iteration concept (χ2(1, N = 70) = 8.45, p = .004, φ = .348). Qualitative analysis of open-ended responses revealed that teachers and students employed similar criteria for evaluating explanations but differed in their ability to discern the source. Teachers focused on pedagogical effectiveness, while students prioritized relatability and clarity. The findings highlight the importance of considering both teachers' and students' perspectives when integrating GAI into computer science education. The study proposes strategies for designing GAI-based explanations that cater to learners' needs and emphasizes the necessity of explicit AI literacy instruction. Limitations and future research directions are discussed, underlining the need for larger-scale studies and experimental designs that assess the impact of GAI on actual learning outcomes.
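The chi-square statistics above follow the standard goodness-of-fit recipe: compare observed counts against chance-level expectations and report an effect size such as φ = sqrt(χ²/N). A minimal sketch, using hypothetical counts (the abstract reports only the statistics, not the raw frequencies):

```python
import math

import numpy as np

# Hypothetical counts: of N = 70 students, how many labelled an
# explanation's source correctly vs. incorrectly for one concept.
# These numbers are invented for illustration, not the study's data.
observed = np.array([21, 49])                 # correct, incorrect
expected = np.array([35.0, 35.0])             # 50/50 guessing baseline

chi_sq = float(((observed - expected) ** 2 / expected).sum())
p_value = math.erfc(math.sqrt(chi_sq / 2.0))  # exact tail for df = 1
phi = math.sqrt(chi_sq / observed.sum())      # effect size: phi = sqrt(chi2/N)

print(chi_sq, round(p_value, 4), phi)         # chi_sq = 11.2, phi = 0.4
```

For df = 1 the chi-square tail probability reduces to the complementary error function, so no statistics library is needed; with more than two categories a library routine (e.g. a chi-square survival function) would replace the `erfc` line.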

Citations: 0
Exploring social and computer science students’ perceptions of AI integration in (foreign) language instruction
Q1 Social Sciences Pub Date : 2024-08-26 DOI: 10.1016/j.caeai.2024.100285
Kosta Dolenc , Mihaela Brumen

Artificial intelligence (AI) has gained acceptance in the field of education. Nevertheless, existing research on AI in education, particularly in foreign language (FL) learning and teaching, is notably limited in scope and depth. In the present study, we addressed this research gap by investigating social and computer science students' perceptions of the integration and use of AI-based technologies in education, focusing specifically on foreign language teaching. Using an online questionnaire, we analysed factors such as students' field of study, gender differences, and the type of AI used. The questionnaire included statements categorised into thematic clusters, with responses measured on a five-point Likert scale. Statistical analysis, including chi-square tests and Cohen's d, revealed that individuals studying computer science, males, and supporters of generative AI are more likely to use AI tools for educational purposes. They perceive fewer barriers to the integration of AI into FL education. Social science students and women are less likely to use AI tools in FL education and express scepticism about their potential to improve academic outcomes. They tend to be more critical or cautious regarding the role of AI in FL education. They view AI as a valuable tool that enhances the learning experience but, at the same time, recognise the irreplaceable role of human teachers. The study highlights the need for targeted educational initiatives to address gender and disciplinary gaps in AI adoption, promote informed discussions on AI in education, and develop balanced AI integration strategies to improve FL learning. These findings suggest educators and policymakers should implement comprehensive AI training programs and ethical guidelines for responsible AI use in (FL) education.
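Cohen's d, one of the effect sizes the authors report alongside their chi-square tests, is straightforward to compute from two groups of Likert responses. The ratings below are invented for illustration; only the pooled-standard-deviation formula is taken as given.

```python
import numpy as np

# Hypothetical 5-point Likert ratings of "I use AI tools for learning"
# from two student groups (illustrative only, not the study's data).
cs_students = np.array([4, 5, 4, 3, 5, 4, 4, 5, 3, 4])
ss_students = np.array([3, 2, 4, 3, 2, 3, 4, 2, 3, 3])

def cohens_d(a, b):
    # Pooled-standard-deviation form of Cohen's d for two independent groups.
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) +
                  (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

d = cohens_d(cs_students, ss_students)
print(round(d, 2))   # d > 0.8 is conventionally a "large" effect
```

By the usual conventions (0.2 small, 0.5 medium, 0.8 large), the toy groups above show a large difference, the kind of gap the abstract describes between computer science and social science students.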

Citations: 0
AI in essay-based assessment: Student adoption, usage, and performance
Q1 Social Sciences Pub Date : 2024-08-24 DOI: 10.1016/j.caeai.2024.100288
David Smerdon

The rise of generative artificial intelligence (AI) has sparked debate in education about whether to ban AI tools for assessments. This study explores the adoption and impact of AI tools on an undergraduate research proposal assignment using a mixed-methods approach. From a sample of 187 students, 69 completed a survey, with 46 (67%) reporting the use of AI tools. AI-using students were significantly more likely to be higher-performing, with a pre-semester average GPA of 5.46 compared to 4.92 for non-users (7-point scale, p = .025). Most students used AI assistance for the highest-weighted components of the task, such as the research topic and methods section, using AI primarily for generating research ideas and gathering feedback. Regression analysis suggests that there was no statistically significant effect of AI use on student performance in the task, with the preferred regression specification estimating an effect size of less than 1 mark out of 100. The qualitative analysis identified six main themes of AI usage: idea generation, writing assistance, literature search, grammar checking, statistical analysis, and overall learning impact. These findings indicate that while AI tools are widely adopted, their impact on academic performance is neutral, suggesting a potential for integration into educational practices without compromising academic integrity.
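The regression setup described above — assignment marks regressed on an AI-use indicator, with prior GPA as a control — can be sketched with numpy's least squares on simulated data. Everything below is simulated under the abstract's headline finding (a true AI effect of zero); none of it is the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated version of the abstract's setup: 69 survey respondents,
# ~67% AI users, marks on a 0-100 scale driven by prior GPA only
# (the true effect of AI use is set to zero, matching the finding).
n = 69
gpa = rng.normal(5.2, 0.6, n)                  # 7-point GPA scale
ai_use = (rng.random(n) < 0.67).astype(float)  # AI-use indicator
marks = 40 + 6.0 * gpa + 0.0 * ai_use + rng.normal(0, 5, n)

# OLS via least squares: marks ~ intercept + ai_use + gpa.
X = np.column_stack([np.ones(n), ai_use, gpa])
beta, *_ = np.linalg.lstsq(X, marks, rcond=None)
print(round(beta[1], 2))   # estimated AI-use effect, in marks out of 100
```

Because GPA drives both the outcome and (in the real study) the propensity to use AI, controlling for it is what lets the estimated AI coefficient land near zero rather than absorbing the users' higher baseline performance.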

Evaluating technological and instructional factors influencing the acceptance of AIGC-assisted design courses
Q1 Social Sciences Pub Date : 2024-08-24 DOI: 10.1016/j.caeai.2024.100287
Qianling Jiang , Yuzhuo Zhang , Wei Wei , Chao Gu

Purpose

This study aims to explore the key factors influencing design students' acceptance of AIGC-assisted design courses, providing specific strategies for course design to help students better learn this new technology and enhance their competitiveness in the design industry. The research focuses on evaluating technological and course-level factors, providing actionable insights for course developers.

Design/methodology/approach

The research establishes and validates evaluation dimensions and indicators affecting acceptance, using structured questionnaires to collect data, and employs factor analysis and weight analysis to determine the importance of each factor.
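The factor-and-weight pipeline described above can be sketched as follows. This is an illustrative sketch with simulated Likert-style responses and a simple squared-loading weighting scheme; the paper's exact extraction and weighting methods are not specified in the abstract.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Placeholder survey data: 200 respondents x 9 indicator items on a 1-5 scale
X = rng.integers(1, 6, size=(200, 9)).astype(float)

# Standardize items, then extract 3 latent dimensions
X_std = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(X_std)

# Loadings: how strongly each indicator loads on each latent dimension
loadings = fa.components_.T  # shape (n_items, n_factors)

# One simple weight per indicator: squared loadings normalized within each factor
weights = loadings**2 / (loadings**2).sum(axis=0)
print(np.round(weights, 3))
```

In practice, the number of factors would be chosen from eigenvalues or fit indices on the real questionnaire data rather than fixed in advance.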

Findings

The results of the study reveal that the main dimensions influencing student acceptance include technology application and innovation, teaching content and methods, and extracurricular learning support and resources. Regarding indicators, data privacy, timeliness of extracurricular learning support, and availability of extracurricular learning resources are identified as the most critical factors.

Originality

The uniqueness of this study lies in providing specific course design strategies for AIGC-assisted design courses based on the weight analysis results for different dimensions and indicators. These strategies aim to help students better adapt to these courses and enhance their acceptance. Furthermore, the conclusions and recommendations of this study offer valuable insights for educational institutions and instructors, promoting further optimization and development of AIGC-assisted design courses.

Evaluating the psychometric properties of ChatGPT-generated questions
Q1 Social Sciences Pub Date : 2024-08-22 DOI: 10.1016/j.caeai.2024.100284
Shreya Bhandari , Yunting Liu , Yerin Kwak , Zachary A. Pardos

Not much is known about how LLM-generated questions compare to gold-standard, traditional formative assessments concerning their difficulty and discrimination parameters, which are valued properties in the psychometric measurement field. We follow a rigorous measurement methodology to compare a set of ChatGPT-generated questions, produced from one lesson summary in a textbook, to existing questions from a published Creative Commons textbook. To do this, we collected and analyzed responses from 207 test respondents who answered questions from both item pools and used a linking methodology to compare IRT properties between the two pools. We find that neither the difficulty nor discrimination parameters of the 15 items in each pool differ statistically significantly, with some evidence that the ChatGPT items were marginally better at differentiating different respondent abilities. The response time also does not differ significantly between the two sources of items. The ChatGPT-generated items showed evidence of unidimensionality and did not affect the unidimensionality of the original set of items when tested together. Finally, through a fine-grained learning objective labeling analysis, we found greater similarity in the learning objective distribution of ChatGPT-generated items and the items from the target OpenStax lesson (0.9666) than between ChatGPT-generated items and adjacent OpenStax lessons (0.6859 for the previous lesson and 0.6153 for the subsequent lesson). These results corroborate our conclusion that generative AI can produce algebra items of similar quality to existing textbook questions that share the same construct or constructs as those questions.
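A full 2PL IRT calibration with linking, as the study performs, needs dedicated software; classical test theory gives quick proxies for the same two item properties. A sketch on a simulated 0/1 response matrix (placeholder data, not the study's responses):

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder scored responses: 207 respondents x 15 items, 1 = correct
responses = (rng.random((207, 15)) < 0.6).astype(float)

# Classical difficulty: proportion of respondents answering each item correctly
difficulty = responses.mean(axis=0)

# Discrimination proxy: point-biserial correlation of each item with the
# rest-score (total minus that item), so the item doesn't correlate with itself
total = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])
print(np.round(difficulty, 2))
print(np.round(discrimination, 2))
```

Note that CTT difficulty runs opposite to IRT difficulty (higher proportion correct = easier item), and these proxies are sample-dependent, which is precisely the limitation IRT linking addresses.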

Understanding the perception of design students towards ChatGPT
Q1 Social Sciences Pub Date : 2024-08-22 DOI: 10.1016/j.caeai.2024.100281
Vigneshkumar Chellappa, Yan Luximon

The benefits of artificial intelligence (AI)-enabled language models, such as ChatGPT, have contributed to their growing popularity in education. However, there is currently a lack of evidence regarding the perception of ChatGPT, specifically among design students. This study aimed to understand product design (PD) and user experience design (UXD) students' views on ChatGPT, focusing on an Indian university. The study employed a survey research design, utilizing questionnaires as the primary data collection method. The collected data (n = 149) were analyzed using descriptive statistics (i.e., frequency, percentage, average, and standard deviation (SD)). Inferential statistics (i.e., one-way ANOVA) were used to examine significant differences between programs of study, gender, and academic level. The findings indicate that the students expressed admiration for the capabilities of ChatGPT and found it to be an interesting and helpful tool for their studies. In addition, the students' motivation towards using ChatGPT was moderate. Furthermore, the study observed significant differences between PD and UXD students, as well as differences based on gender and academic level on certain variables. Notably, UXD students reported that ChatGPT does not understand their questions well, and formulating effective prompts for the tool was more challenging for them than for PD students. Based on the findings, the study recommends how educators should consider integrating ChatGPT into design education curricula and pedagogical practices.
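The one-way ANOVA named above tests whether mean perception scores differ across more than two groups (e.g., academic levels). A minimal sketch with simulated placeholder scores, since the study's raw responses are not available here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Placeholder 5-point perception scores for three academic levels
# (hypothetical group means; not the study's data)
year1 = rng.normal(3.8, 0.7, 50)
year2 = rng.normal(3.5, 0.7, 50)
year3 = rng.normal(3.4, 0.7, 49)

# One-way ANOVA: is at least one group mean different from the others?
f_stat, p_value = stats.f_oneway(year1, year2, year3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant F only says some group differs; pairwise post-hoc tests (e.g., Tukey's HSD) would be needed to say which.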

The insights aim to contribute to refining the use of ChatGPT in educational settings and exploring avenues for improving its effectiveness, ultimately advancing the field of AI in design education.