Pub Date: 2024-11-06 | DOI: 10.1109/TLT.2024.3491864
Lifang Bai;Yijia Wei
ChatGPT can promptly reformulate a text and improve its quality in content and form while preserving the original meaning. Yet, little is known about how learners respond to such reformulations. Here, we employed a three-stage writing task (composing-comparison-rewriting) to investigate how learners notice, integrate, and perceive ChatGPT's reformulations as feedback in an English as a foreign language (EFL) classroom context. We collected learners’ written notes made while comparing their original texts with the reformulations, categorizing them by the language-related type of noticing (vocabulary, discourse, and form) and the quality of noticing (depth of processing: low, intermediate, and high). We also analyzed the instances of reformulations integrated into learners’ rewriting and their answers to a questionnaire. The results showed that: 1) the reformulations directed learners’ attention to the gaps in their original writing, especially in word choice, and prompted them to integrate ChatGPT-generated changes into their rewriting; 2) the number of instances integrated into the rewriting was directly related to the quantity, type, and quality of noticing in the comparison stage; and 3) learners generally appreciated the pedagogical value of ChatGPT in EFL writing, particularly during the revision stage, although they believed that ChatGPT might occasionally misinterpret their intentions. The study suggests that ChatGPT's reformulations should be complemented with peer and teacher feedback to create a comprehensive and personalized learning environment.
{"title":"Exploring EFL Learners’ Integration and Perceptions of ChatGPT's Text Revisions: A Three-Stage Writing Task Study","authors":"Lifang Bai;Yijia Wei","doi":"10.1109/TLT.2024.3491864","DOIUrl":"https://doi.org/10.1109/TLT.2024.3491864","url":null,"abstract":"ChatGPT can promptly reformulate a text and improve its quality in content and form while preserving the original meaning. Yet, little is known about how learners respond to such reformulations. Here, we employed a three-stage writing task (composing-comparison-rewriting) to investigate how learners notice, integrate, and perceive ChatGPT's reformulations as feedback in English as a foreign language (EFL) classroom context. We collected learners’ written notes made during the comparison of their original texts and the reformulations, categorizing them based on the language-related type of noticing (vocabulary, discourse, and form) and quality of noticing (depth of processing: low, intermediate, and high). We also analyzed the instances of reformulations integrated into learners’ rewriting and their answers to a questionnaire. The results showed that: 1) the reformulations directed learners’ attention to the gaps in their original writing, especially in word choice, and prompted them to integrate ChatGPT-generated changes in their rewriting; 2) the number of instances integrated into the rewriting was directly related to the quantity, type, and quality of noticing in the comparison stage; and 3) learners generally appreciated the pedagogical value of ChatGPT in EFL writing, particularly during the revision stage, although they believe occasionally ChatGPT might misinterpret their intentions. The study suggests that ChatGPT's reformulations should be complemented with peer and teacher feedback to create a comprehensive and personalized learning environment.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2215-2226"},"PeriodicalIF":2.9,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-05 | DOI: 10.1109/TLT.2024.3492073
Clemens Drieschner;Ferdinand Xiong;Florian Bajraktari;Matthias C. Utesch;Jörg Weking;Helmut Krcmar
The growing demand for sustainable products and services has led to the emergence of a range of entrepreneurial opportunities for innovative business solutions. These new opportunities require novel entrepreneurial skills to design, grow, and maintain sustainable businesses. However, existing educational games do not address the specific skills needed for sustainable entrepreneurship. This work introduces an interactive business game that enables engaging, practical learning to prepare learners to become the entrepreneurs of tomorrow. Our business game is intended to increase learners’ self-assessed skills and self-confidence by at least one point on a five-point Likert scale. The game is embedded in a didactic framework accompanied by a lecture and a discussion session in which learners manage their bike rental business and reflect on their decisions. The game scenario allows the players to guide their business through three growth stages: survival, growth, and expansion. In this study, 151 high school students participated in the evaluation across all stages. They assessed their knowledge in a pretest and a posttest and judged the developed game on its ease of use. The learners’ sustainable entrepreneurship knowledge and skills improved significantly. This study thus confirms the blended game-based approach as a viable teaching tool for this area of expertise. Furthermore, a holistic didactic framework, with an introduction to the game and a subsequent debriefing, also contributes to the success of the business game. A sign test and bootstrapping with confidence intervals provided a simple means of statistically quantifying the learners’ improvement.
{"title":"Measuring the Effectiveness of Learning Economic Principles in a Sustainable Entrepreneurship Game-Based Setting: A Bootstrapping Approach","authors":"Clemens Drieschner;Ferdinand Xiong;Florian Bajraktari;Matthias C. Utesch;Jörg Weking;Helmut Krcmar","doi":"10.1109/TLT.2024.3492073","DOIUrl":"https://doi.org/10.1109/TLT.2024.3492073","url":null,"abstract":"The growing demand for sustainable products and services has led to the emergence of a range of entrepreneurial opportunities for innovative business solutions. These new opportunities require novel entrepreneurial skills to design, grow, and maintain sustainable businesses. However, existing educational games do not address the specific skills needed for sustainable entrepreneurship. This work introduces an interactive business game that enables learning in an engaging and practical way to prepare learners to become the entrepreneurs of tomorrow. Our business game is intended to increase learners’ self-assessed skills and self-confidence by at least one point on the five-point Likert scale. The game is embedded into a didactic framework accompanied by a lecture and discussion part where learners manage their bike rental business and self-reflect on their decisions. The game scenario allows the players to guide their business through three growth stages: survival, growth, and expansion. In this study, 151 high school students participated in the evaluation across all stages. They assessed their knowledge in a pretest and a posttest and judged the developed game on its ease of use. The learners’ sustainable entrepreneurship knowledge and skills improved significantly. This study thus confirms the mixed approach as a viable teaching tool for this area of expertise. Furthermore, a holistic didactic framework with an introduction to the game and subsequent debriefing contributes to the success of the business game too. A sign test and bootstrapping with confidence intervals constituted a simple means of statistically quantifying the learners’ improvement.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"18 ","pages":"1-12"},"PeriodicalIF":2.9,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10745173","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142859204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-04 | DOI: 10.1109/TLT.2024.3491801
Yuto Tomikawa;Ayaka Suzuki;Masaki Uto
The automatic generation of reading comprehension questions, referred to as question generation (QG), is attracting attention in the field of education. To achieve efficient educational applications of QG methods, it is desirable to generate questions with difficulty levels that are appropriate for each learner's reading ability. Therefore, in recent years, several difficulty-controllable QG methods have been proposed. However, conventional methods generate only questions and cannot produce question–answer pairs. Furthermore, such methods ignore the relationship between question difficulty and learner ability, making it challenging to ascertain the appropriate difficulty levels for each learner. To address these issues, in this article, we propose a method for generating question–answer pairs based on difficulty, defined using a statistical model known as item response theory. The proposed difficulty-controllable generation is achieved by extending two pretrained transformer models: bidirectional encoder representations from transformers and text-to-text transfer transformer. In addition, because learners' abilities are generally not knowable in advance, we propose an adaptive QG framework that efficiently estimates the learners' abilities while generating and presenting questions with difficulty levels suitable for their abilities. Through experiments involving real data, we confirmed that the proposed method can generate question–answer pairs with difficulty levels that align with the learners' abilities while efficiently estimating their abilities.
{"title":"Adaptive Question–Answer Generation With Difficulty Control Using Item Response Theory and Pretrained Transformer Models","authors":"Yuto Tomikawa;Ayaka Suzuki;Masaki Uto","doi":"10.1109/TLT.2024.3491801","DOIUrl":"https://doi.org/10.1109/TLT.2024.3491801","url":null,"abstract":"The automatic generation of reading comprehension questions, referred to as question generation (QG), is attracting attention in the field of education. To achieve efficient educational applications of QG methods, it is desirable to generate questions with difficulty levels that are appropriate for each learner's reading ability. Therefore, in recent years, several difficulty-controllable QG methods have been proposed. However, conventional methods generate only questions and cannot produce question–answer pairs. Furthermore, such methods ignore the relationship between question difficulty and learner ability, making it challenging to ascertain the appropriate difficulty levels for each learner. To address these issues, in this article, we propose a method for generating question–answer pairs based on difficulty, defined using a statistical model known as item response theory. The proposed difficulty-controllable generation is achieved by extending two pretrained transformer models: bidirectional encoder representations from transformers and text-to-text transfer transformer. In addition, because learners' abilities are generally not knowable in advance, we propose an adaptive QG framework that efficiently estimates the learners' abilities while generating and presenting questions with difficulty levels suitable for their abilities. Through experiments involving real data, we confirmed that the proposed method can generate question–answer pairs with difficulty levels that align with the learners' abilities while efficiently estimating their abilities.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2240-2252"},"PeriodicalIF":2.9,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10742557","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-30 | DOI: 10.1109/TLT.2024.3486630
Jiwon Kim;Jack Miller;Kexin Wang;Michael C. Dorneich;Eliot Winer;Lori J. Brown
This study introduces an augmented reality (AR) authoring tool tailored for flight instructors without technical expertise. While AR offers potential in aviation weather education and instructors desire to use it in the classroom, they face challenges due to limited digital proficiency and the complexity of authoring tools. Many existing AR authoring tools prioritize technical aspects over user experience. To address these challenges, a no-programming-required AR authoring tool was developed based on instructor-informed requirements, such as incorporating features for flight waypoints and weather phenomena. A total of 41 participants tested the tool by crafting three AR learning modules. After using the tool, there was a significant increase in participants’ confidence in AR content creation (+30%), the AR authoring process (+51%), and interactive AR development (+50%). In addition, there was a significant decrease in their concerns about technical complexity (–19%), mental effort (–30%), and time consumption (–30%). Participants rated the incorporated functions as highly preferable and indicated that the tool has high usability. Participants completed the most challenging task quickly and with a low cognitive load. The findings demonstrate the tool's effectiveness in enabling participants to competently and efficiently author AR content while reducing technical concerns. Such tools can facilitate the integration of AR technology into the classroom, offering students improved access to interactive 3-D visualizations of dynamic subjects, such as aviation weather, which require students to mentally visualize weather conditions and understand their manifestations.
{"title":"Empowering Instructors: Augmented Reality Authoring Toolkit for Aviation Weather Education","authors":"Jiwon Kim;Jack Miller;Kexin Wang;Michael C. Dorneich;Eliot Winer;Lori J. Brown","doi":"10.1109/TLT.2024.3486630","DOIUrl":"https://doi.org/10.1109/TLT.2024.3486630","url":null,"abstract":"This study introduces an augmented reality (AR) authoring tool tailored for flight instructors without technical expertise. While AR offers potential in aviation weather education and instructors desire to use it in the classroom, they face challenges due to limited digital proficiency and complexity of authoring tools. Many existing AR authoring tools prioritize technical aspects over user experience. To address these challenges, a no-programming-required AR authoring tool was developed based on instructor-informed requirements, such as incorporating features of flight waypoints and weather phenomena. A total of 41 participants tested the tool by crafting three AR learning modules. After using the tool, there was a significant increase in participants’ confidence in AR content creation (+30%), AR authoring process (+51%), and interactive AR development (+50%). In addition, there was a significant decrease in their concerns about technical complexity (–19%), mental effort (–30%), and time consumption (–30%). Participants rated the incorporated functions highly preferable and indicated the tool has high usability. Participants completed the most challenging task quickly and with a low cognitive load. The findings demonstrate the tool's effectiveness in enabling participants to competently and efficiently author AR content, reducing technical concerns. Such tools can facilitate the integration of AR technology into the classroom, offering students improved access to interactive 3-D visualizations of dynamic subjects, such as aviation weather, which require students to mentally visualize weather conditions and understand their manifestations.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2195-2206"},"PeriodicalIF":2.9,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142636251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-30 | DOI: 10.1109/TLT.2024.3488086
Ruxin Zheng;Huifen Xu;Minjuan Wang;Jijian Lu
This study investigates the impact of artificial general intelligence (AGI)-assisted project-based learning (PBL) on students’ higher order thinking and self-efficacy. Based on input from 17 experts, four key roles of AGI in supporting PBL were identified: information retrieval, information processing, information generation, and feedback evaluation. An educational experiment was then conducted with 198 eighth-grade students from two middle schools in China, using a pretest and posttest design. The students were divided into three groups: Experimental Group A (AGI-assisted PBL), Control Group B (PBL without AGI assistance), and Control Group C (traditional teaching methods). A scale was administered to assess students’ higher order thinking and self-efficacy before and after the experiment. In addition, semistructured interviews were conducted with 12 students from Experimental Group A to gather qualitative data on their perceptions of AGI-assisted PBL. The results indicated that students in Experimental Group A had significantly higher scores in higher order thinking and self-efficacy compared to those in Control Groups B and C, demonstrating the positive impact of AGI in supporting PBL learning.
{"title":"The Impact of Artificial General Intelligence-Assisted Project-Based Learning on Students’ Higher Order Thinking and Self-Efficacy","authors":"Ruxin Zheng;Huifen Xu;Minjuan Wang;Jijian Lu","doi":"10.1109/TLT.2024.3488086","DOIUrl":"https://doi.org/10.1109/TLT.2024.3488086","url":null,"abstract":"This study investigates the impact of artificial general intelligence (AGI)-assisted project-based learning (PBL) on students’ higher order thinking and self-efficacy. Based on input from 17 experts, four key roles of AGI in supporting PBL were identified: information retrieval, information processing, information generation, and feedback evaluation. An educational experiment was then conducted with 198 eighth-grade students from two middle schools in China, using a pretest and posttest design. The students were divided into three groups: Experimental Group A (AGI-assisted PBL), Control Group B (PBL without AGI assistance), and Control Group C (traditional teaching methods). A scale was administered to assess students’ higher order thinking and self-efficacy before and after the experiment. In addition, semistructured interviews were conducted with 12 students from Experimental Group A to gather qualitative data on their perceptions of AGI-assisted PBL. The results indicated that students in Experimental Group A had significantly higher scores in higher order thinking and self-efficacy compared to those in Control Groups B and C, demonstrating the positive impact of AGI in supporting PBL learning.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2207-2214"},"PeriodicalIF":2.9,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-29 | DOI: 10.1109/TLT.2024.3487898
Fabio Buttussi;Luca Chittaro
Educational virtual environments (EVEs) can enable effective learning experiences on various devices, including smartphones, using nonimmersive virtual reality (VR). To this end, researchers and educators should identify the most appropriate pedagogical techniques, not starting from scratch but exploring which traditional e-learning and VR techniques can be effectively combined or adapted to EVEs. In this direction, this article explores whether test questions, a typical e-learning technique, can be effectively employed in an EVE through a careful, well-blended design. We also consider the active performance of procedures, a typical VR technique, to evaluate whether test questions can be synergistic with it or whether they instead break presence and are detrimental to learning. The between-subjects study we describe involved 120 participants in four conditions: with/without test questions and active/passive procedure performance. The EVE was run on a smartphone, using nonimmersive VR, and taught hand hygiene procedures for infectious disease prevention. Results showed that introducing test questions did not break presence but surprisingly increased it, especially when combined with active procedure performance. Participants’ self-efficacy increased after using the EVE regardless of condition, and the different conditions did not significantly change engagement. Moreover, participants who had answered test questions in the EVE omitted fewer steps in an assessment of learning transfer. Finally, test questions increased participants’ satisfaction. Overall, these greater-than-expected benefits support the adoption of the proposed test question design in EVEs based on nonimmersive VR.
{"title":"Embedding Test Questions in Educational Mobile Virtual Reality: A Study on Hospital Hygiene Procedures","authors":"Fabio Buttussi;Luca Chittaro","doi":"10.1109/TLT.2024.3487898","DOIUrl":"https://doi.org/10.1109/TLT.2024.3487898","url":null,"abstract":"Educational virtual environments (EVEs) can enable effective learning experiences on various devices, including smartphones, using nonimmersive virtual reality (VR). To this purpose, researchers and educators should identify the most appropriate pedagogical techniques, not restarting from scratch but exploring which traditional e-learning and VR techniques can be effectively combined or adapted to EVEs. In this direction, this article explores if test questions, a typical e-learning technique, can be effectively employed in an EVE through a careful well-blended design. We also consider the active performance of procedures, a typical VR technique, to evaluate if test questions can be synergic with it or if they can instead break presence and be detrimental to learning. The between-subject study we describe involved 120 participants in four conditions: with/without test questions and active/passive procedure performance. The EVE was run on a smartphone, using nonimmersive VR, and taught hand hygiene procedures for infectious disease prevention. Results showed that introducing test questions did not break presence but surprisingly increased it, especially when combined with active procedure performance. Participants’ self-efficacy increased after using the EVE regardless of condition, and the different conditions did not significantly change engagement. Moreover, participants who had answered test questions in the EVE showed a reduction in the number of omitted steps in an assessment of learning transfer. Finally, test questions increased participants’ satisfaction. Overall, these greater-than-expected benefits support the adoption of the proposed test question design in EVEs based on nonimmersive VR.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2253-2265"},"PeriodicalIF":2.9,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10737683","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-25 | DOI: 10.1109/TLT.2024.3486749
Fan Ouyang;Mingyue Guo;Ning Zhang;Xianping Bai;Pengcheng Jiao
Artificial general intelligence (AGI) has gained increasing global attention as the field of large language models undergoes rapid development. Due to its human-like cognitive abilities, an AGI system has great potential to help instructors provide detailed, comprehensive, and individualized feedback to students throughout the educational process. ChatGPT, as a preliminary version of such a system, has the potential to improve programming education. In programming, students often have difficulties writing code and debugging errors, and ChatGPT can provide intelligent feedback to support students’ programming learning process. This research implemented intelligent feedback generated by ChatGPT to facilitate collaborative programming among student groups and compared the effects of ChatGPT's intelligent feedback with instructors’ manual feedback on programming. This research employed a variety of learning analytics methods to analyze students’ computer programming performances, cognitive and regulation discourses, and programming behaviors. Results indicated that no substantial differences in students’ programming knowledge acquisition or group-level programming product quality were identified between the instructor manual feedback and ChatGPT intelligent feedback conditions. ChatGPT intelligent feedback facilitated students’ regulation-oriented collaborative programming, while instructor manual feedback facilitated cognition-oriented collaborative discussions during programming. Compared with the instructor manual feedback, ChatGPT intelligent feedback was perceived by students as having more obvious strengths as well as weaknesses. Drawing from the results, this research offered pedagogical and analytical insights to enhance the integration of ChatGPT into programming education in the higher education context. This research also provided a new perspective on facilitating collaborative learning experiences among students, instructors, and the AGI system.
{"title":"Comparing the Effects of Instructor Manual Feedback and ChatGPT Intelligent Feedback on Collaborative Programming in China's Higher Education","authors":"Fan Ouyang;Mingyue Guo;Ning Zhang;Xianping Bai;Pengcheng Jiao","doi":"10.1109/TLT.2024.3486749","DOIUrl":"https://doi.org/10.1109/TLT.2024.3486749","url":null,"abstract":"Artificial general intelligence (AGI) has gained increasing global attention as the field of large language models undergoes rapid development. Due to its human-like cognitive abilities, the AGI system has great potential to help instructors provide detailed, comprehensive, and individualized feedback to students throughout the educational process. ChatGPT, as a preliminary version of the AGI system, has the potential to improve programming education. In programming, students often have difficulties in writing codes and debugging errors, whereas ChatGPT can provide intelligent feedback to support students’ programming learning process. This research implemented intelligent feedback generated by ChatGPT to facilitate collaborative programming among student groups and further compared the effects of ChatGPT with instructors’ manual feedback on programming. This research employed a variety of learning analytics methods to analyze students’ computer programming performances, cognitive and regulation discourses, and programming behaviors. Results indicated that no substantial differences were identified in students’ programming knowledge acquisition and group-level programming product quality when both instructor manual feedback and ChatGPT intelligent feedback were provided. ChatGPT intelligent feedback facilitated students’ regulation-oriented collaborative programming, while instructor manual feedback facilitated cognition-oriented collaborative discussions during programming. Compared to the instructor manual feedback, ChatGPT intelligent feedback was perceived by students as having more obvious strengths as well as weaknesses. Drawing from the results, this research offered pedagogical and analytical insights to enhance the integration of ChatGPT into programming education at the higher education context. This research also provided a new perspective on facilitating collaborative learning experiences among students, instructors, and the AGI system.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2227-2239"},"PeriodicalIF":2.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-21 | DOI: 10.1109/TLT.2024.3456072
Chris Dede
{"title":"Guest Editorial Intelligence Augmentation: The Owl of Athena","authors":"Chris Dede","doi":"10.1109/TLT.2024.3456072","DOIUrl":"https://doi.org/10.1109/TLT.2024.3456072","url":null,"abstract":"","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2154-2155"},"PeriodicalIF":2.9,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10726640","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142452739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-07 | DOI: 10.1109/TLT.2024.3475741
Yussy Chinchay;César A. Collazos;Javier Gomez;Germán Montoro
This research focuses on the assessment of attention to identify the design needs for optimized learning technologies for children with autism. Within a single case study incorporating a multiple-baseline design involving baseline, intervention, and postintervention phases, we developed an application enabling personalized attention strategies. These strategies were assessed for their efficacy in enhancing attentional abilities during digital learning tasks. Data analysis of children's interaction experience, support requirements, task completion time, and attentional patterns was conducted using a tablet-based application. The findings contribute to a comprehensive understanding of how children with autism engage with digital learning activities and underscore the significance of personalized attention strategies. Key interaction design principles were identified to address attention-related challenges and promote engagement in the learning experience. This study advances the development of inclusive digital learning environments for children on the autism spectrum by leveraging attention assessment.
{"title":"Designing Learning Technologies: Assessing Attention in Children With Autism Through a Single Case Study","authors":"Yussy Chinchay;César A. Collazos;Javier Gomez;Germán Montoro","doi":"10.1109/TLT.2024.3475741","DOIUrl":"https://doi.org/10.1109/TLT.2024.3475741","url":null,"abstract":"This research focuses on the assessment of attention to identify the design needs for optimized learning technologies for children with autism. Within a single case study incorporating a multiple-baseline design involving baseline, intervention, and postintervention phases, we developed an application enabling personalized attention strategies. These strategies were assessed for their efficacy in enhancing attentional abilities during digital learning tasks. Data analysis of children's interaction experience, support requirements, task completion time, and attentional patterns was conducted using a tablet-based application. The findings contribute to a comprehensive understanding of how children with autism engage with digital learning activities and underscore the significance of personalized attention strategies. Key interaction design principles were identified to address attention-related challenges and promote engagement in the learning experience. This study advances the development of inclusive digital learning environments for children on the autism spectrum by leveraging attention assessment.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2172-2182"},"PeriodicalIF":2.9,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10706829","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-19 | DOI: 10.1109/TLT.2024.3464560
Yu Bai;Jun Li;Jun Shen;Liang Zhao
The potential of artificial intelligence (AI) in transforming education has received considerable attention. This study aims to explore the potential of large language models (LLMs) in assisting students with studying for and passing standardized exams, even though many regard the excitement around them as hype. Using primary education as an example, this research investigates whether ChatGPT-3.5 can achieve satisfactory performance on the Chinese Primary School Exams and whether it can be used as a teaching aid or tutor. We designed an experimental framework and constructed a benchmark comprising 4800 questions collected from 48 tasks in Chinese elementary education settings. Through automatic and manual evaluations, we observed that ChatGPT-3.5’s pass rate was below the required level of accuracy for most tasks, and the correctness of ChatGPT-3.5’s answer interpretations was unsatisfactory. These results revealed a discrepancy between the findings and our initial expectations. However, comparative experiments between ChatGPT-3.5 and ChatGPT-4 indicated significant improvements in model performance, demonstrating the potential of using LLMs as teaching aids. This article also investigates the use of a trans-prompting strategy to reduce the impact of language bias and enhance question understanding. We present a comparison of the models' performance and the improvement under the trans-lingual problem decomposition prompting mechanism. Finally, we discuss the challenges associated with the appropriate application of AI-driven language models, along with future directions and limitations in the field of AI for education.
{"title":"Investigating the Efficacy of ChatGPT-3.5 for Tutoring in Chinese Elementary Education Settings","authors":"Yu Bai;Jun Li;Jun Shen;Liang Zhao","doi":"10.1109/TLT.2024.3464560","DOIUrl":"https://doi.org/10.1109/TLT.2024.3464560","url":null,"abstract":"The potential of artificial intelligence (AI) in transforming education has received considerable attention. This study aims to explore the potential of large language models (LLMs) in assisting students with studying and passing standardized exams, while many people think it is a hype situation. Using primary education as an example, this research investigates whether ChatGPT-3.5 can achieve satisfactory performance on the Chinese Primary School Exams and whether it can be used as a teaching aid or tutor. We designed an experimental framework and constructed a benchmark that comprises 4800 questions collected from 48 tasks in Chinese elementary education settings. Through automatic and manual evaluations, we observed that ChatGPT-3.5’s pass rate was below the required level of accuracy for most tasks, and the correctness of ChatGPT-3.5’s answer interpretation was unsatisfactory. These results revealed a discrepancy between the findings and our initial expectations. However, the comparative experiments between ChatGPT-3.5 and ChatGPT-4 indicated significant improvements in model performance, demonstrating the potential of using LLMs as a teaching aid. This article also investigates the use of the trans-prompting strategy to reduce the impact of language bias and enhance question understanding. We present a comparison of the models' performance and the improvement under the trans-lingual problem decomposition prompting mechanism. Finally, we discuss the challenges associated with the appropriate application of AI-driven language models, along with future directions and limitations in the field of AI for education.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2156-2171"},"PeriodicalIF":2.9,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142517999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}