Pub Date: 2024-10-30 | DOI: 10.1109/TLT.2024.3488086
Ruxin Zheng;Huifen Xu;Minjuan Wang;Jijian Lu
This study investigates the impact of artificial general intelligence (AGI)-assisted project-based learning (PBL) on students' higher order thinking and self-efficacy. Based on input from 17 experts, four key roles of AGI in supporting PBL were identified: information retrieval, information processing, information generation, and feedback evaluation. An educational experiment was then conducted with 198 eighth-grade students from two middle schools in China, using a pretest and posttest design. The students were divided into three groups: Experimental Group A (AGI-assisted PBL), Control Group B (PBL without AGI assistance), and Control Group C (traditional teaching methods). A scale was administered to assess students' higher order thinking and self-efficacy before and after the experiment. In addition, semistructured interviews were conducted with 12 students from Experimental Group A to gather qualitative data on their perceptions of AGI-assisted PBL. The results indicated that students in Experimental Group A scored significantly higher on higher order thinking and self-efficacy than those in Control Groups B and C, demonstrating the positive impact of AGI support on PBL.
{"title":"The Impact of Artificial General Intelligence-Assisted Project-Based Learning on Students’ Higher Order Thinking and Self-Efficacy","authors":"Ruxin Zheng;Huifen Xu;Minjuan Wang;Jijian Lu","doi":"10.1109/TLT.2024.3488086","DOIUrl":"https://doi.org/10.1109/TLT.2024.3488086","url":null,"abstract":"This study investigates the impact of artificial general intelligence (AGI)-assisted project-based learning (PBL) on students’ higher order thinking and self-efficacy. Based on input from 17 experts, four key roles of AGI in supporting PBL were identified: information retrieval, information processing, information generation, and feedback evaluation. An educational experiment was then conducted with 198 eighth-grade students from two middle schools in China, using a pretest and posttest design. The students were divided into three groups: Experimental Group A (AGI-assisted PBL), Control Group B (PBL without AGI assistance), and Control Group C (traditional teaching methods). A scale was administered to assess students’ higher order thinking and self-efficacy before and after the experiment. In addition, semistructured interviews were conducted with 12 students from Experimental Group A to gather qualitative data on their perceptions of AGI-assisted PBL. The results indicated that students in Experimental Group A had significantly higher scores in higher order thinking and self-efficacy compared to those in Control Groups B and C, demonstrating the positive impact of AGI in supporting PBL learning.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2207-2214"},"PeriodicalIF":2.9,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-29 | DOI: 10.1109/TLT.2024.3487898
Fabio Buttussi;Luca Chittaro
Educational virtual environments (EVEs) can enable effective learning experiences on various devices, including smartphones, using nonimmersive virtual reality (VR). To this end, researchers and educators should identify the most appropriate pedagogical techniques, not starting from scratch but exploring which traditional e-learning and VR techniques can be effectively combined or adapted to EVEs. In this direction, this article explores whether test questions, a typical e-learning technique, can be effectively employed in an EVE through a careful, well-blended design. We also consider the active performance of procedures, a typical VR technique, to evaluate whether test questions can be synergistic with it or whether they instead break presence and are detrimental to learning. The between-subjects study we describe involved 120 participants in four conditions: with/without test questions and active/passive procedure performance. The EVE was run on a smartphone, using nonimmersive VR, and taught hand hygiene procedures for infectious disease prevention. Results showed that introducing test questions did not break presence but surprisingly increased it, especially when combined with active procedure performance. Participants' self-efficacy increased after using the EVE regardless of condition, and the different conditions did not significantly change engagement. Moreover, participants who had answered test questions in the EVE omitted fewer steps in an assessment of learning transfer. Finally, test questions increased participants' satisfaction. Overall, these greater-than-expected benefits support the adoption of the proposed test question design in EVEs based on nonimmersive VR.
{"title":"Embedding Test Questions in Educational Mobile Virtual Reality: A Study on Hospital Hygiene Procedures","authors":"Fabio Buttussi;Luca Chittaro","doi":"10.1109/TLT.2024.3487898","DOIUrl":"https://doi.org/10.1109/TLT.2024.3487898","url":null,"abstract":"Educational virtual environments (EVEs) can enable effective learning experiences on various devices, including smartphones, using nonimmersive virtual reality (VR). To this purpose, researchers and educators should identify the most appropriate pedagogical techniques, not restarting from scratch but exploring which traditional e-learning and VR techniques can be effectively combined or adapted to EVEs. In this direction, this article explores if test questions, a typical e-learning technique, can be effectively employed in an EVE through a careful well-blended design. We also consider the active performance of procedures, a typical VR technique, to evaluate if test questions can be synergic with it or if they can instead break presence and be detrimental to learning. The between-subject study we describe involved 120 participants in four conditions: with/without test questions and active/passive procedure performance. The EVE was run on a smartphone, using nonimmersive VR, and taught hand hygiene procedures for infectious disease prevention. Results showed that introducing test questions did not break presence but surprisingly increased it, especially when combined with active procedure performance. Participants’ self-efficacy increased after using the EVE regardless of condition, and the different conditions did not significantly change engagement. Moreover, participants who had answered test questions in the EVE showed a reduction in the number of omitted steps in an assessment of learning transfer. Finally, test questions increased participants’ satisfaction. Overall, these greater-than-expected benefits support the adoption of the proposed test question design in EVEs based on nonimmersive VR.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2253-2265"},"PeriodicalIF":2.9,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10737683","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-25 | DOI: 10.1109/TLT.2024.3486749
Fan Ouyang;Mingyue Guo;Ning Zhang;Xianping Bai;Pengcheng Jiao
Artificial general intelligence (AGI) has gained increasing global attention as the field of large language models undergoes rapid development. Owing to its human-like cognitive abilities, an AGI system has great potential to help instructors provide detailed, comprehensive, and individualized feedback to students throughout the educational process. ChatGPT, as a preliminary version of an AGI system, has the potential to improve programming education. In programming, students often have difficulty writing code and debugging errors, whereas ChatGPT can provide intelligent feedback to support students' programming learning process. This research implemented intelligent feedback generated by ChatGPT to facilitate collaborative programming among student groups and compared the effects of ChatGPT feedback with instructors' manual feedback on programming. This research employed a variety of learning analytics methods to analyze students' computer programming performances, cognitive and regulation discourses, and programming behaviors. Results indicated no substantial differences in students' programming knowledge acquisition or group-level programming product quality between the instructor manual feedback and ChatGPT intelligent feedback conditions. ChatGPT intelligent feedback facilitated students' regulation-oriented collaborative programming, while instructor manual feedback facilitated cognition-oriented collaborative discussions during programming. Compared to instructor manual feedback, ChatGPT intelligent feedback was perceived by students as having more pronounced strengths as well as weaknesses. Drawing on these results, this research offered pedagogical and analytical insights to enhance the integration of ChatGPT into programming education in the higher education context. This research also provided a new perspective on facilitating collaborative learning experiences among students, instructors, and the AGI system.
{"title":"Comparing the Effects of Instructor Manual Feedback and ChatGPT Intelligent Feedback on Collaborative Programming in China's Higher Education","authors":"Fan Ouyang;Mingyue Guo;Ning Zhang;Xianping Bai;Pengcheng Jiao","doi":"10.1109/TLT.2024.3486749","DOIUrl":"https://doi.org/10.1109/TLT.2024.3486749","url":null,"abstract":"Artificial general intelligence (AGI) has gained increasing global attention as the field of large language models undergoes rapid development. Due to its human-like cognitive abilities, the AGI system has great potential to help instructors provide detailed, comprehensive, and individualized feedback to students throughout the educational process. ChatGPT, as a preliminary version of the AGI system, has the potential to improve programming education. In programming, students often have difficulties in writing codes and debugging errors, whereas ChatGPT can provide intelligent feedback to support students’ programming learning process. This research implemented intelligent feedback generated by ChatGPT to facilitate collaborative programming among student groups and further compared the effects of ChatGPT with instructors’ manual feedback on programming. This research employed a variety of learning analytics methods to analyze students’ computer programming performances, cognitive and regulation discourses, and programming behaviors. Results indicated that no substantial differences were identified in students’ programming knowledge acquisition and group-level programming product quality when both instructor manual feedback and ChatGPT intelligent feedback were provided. ChatGPT intelligent feedback facilitated students’ regulation-oriented collaborative programming, while instructor manual feedback facilitated cognition-oriented collaborative discussions during programming. Compared to the instructor manual feedback, ChatGPT intelligent feedback was perceived by students as having more obvious strengths as well as weaknesses. Drawing from the results, this research offered pedagogical and analytical insights to enhance the integration of ChatGPT into programming education at the higher education context. This research also provided a new perspective on facilitating collaborative learning experiences among students, instructors, and the AGI system.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2227-2239"},"PeriodicalIF":2.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-21 | DOI: 10.1109/TLT.2024.3456072
Chris Dede
{"title":"Guest Editorial Intelligence Augmentation: The Owl of Athena","authors":"Chris Dede","doi":"10.1109/TLT.2024.3456072","DOIUrl":"https://doi.org/10.1109/TLT.2024.3456072","url":null,"abstract":"","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2154-2155"},"PeriodicalIF":2.9,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10726640","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142452739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-07 | DOI: 10.1109/TLT.2024.3475741
Yussy Chinchay;César A. Collazos;Javier Gomez;Germán Montoro
This research focuses on the assessment of attention to identify the design needs for optimized learning technologies for children with autism. Within a single case study incorporating a multiple-baseline design involving baseline, intervention, and postintervention phases, we developed an application enabling personalized attention strategies. These strategies were assessed for their efficacy in enhancing attentional abilities during digital learning tasks. Data analysis of children's interaction experience, support requirements, task completion time, and attentional patterns was conducted using a tablet-based application. The findings contribute to a comprehensive understanding of how children with autism engage with digital learning activities and underscore the significance of personalized attention strategies. Key interaction design principles were identified to address attention-related challenges and promote engagement in the learning experience. This study advances the development of inclusive digital learning environments for children on the autism spectrum by leveraging attention assessment.
{"title":"Designing Learning Technologies: Assessing Attention in Children With Autism Through a Single Case Study","authors":"Yussy Chinchay;César A. Collazos;Javier Gomez;Germán Montoro","doi":"10.1109/TLT.2024.3475741","DOIUrl":"https://doi.org/10.1109/TLT.2024.3475741","url":null,"abstract":"This research focuses on the assessment of attention to identify the design needs for optimized learning technologies for children with autism. Within a single case study incorporating a multiple-baseline design involving baseline, intervention, and postintervention phases, we developed an application enabling personalized attention strategies. These strategies were assessed for their efficacy in enhancing attentional abilities during digital learning tasks. Data analysis of children's interaction experience, support requirements, task completion time, and attentional patterns was conducted using a tablet-based application. The findings contribute to a comprehensive understanding of how children with autism engage with digital learning activities and underscore the significance of personalized attention strategies. Key interaction design principles were identified to address attention-related challenges and promote engagement in the learning experience. This study advances the development of inclusive digital learning environments for children on the autism spectrum by leveraging attention assessment.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2172-2182"},"PeriodicalIF":2.9,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10706829","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-19 | DOI: 10.1109/TLT.2024.3464560
Yu Bai;Jun Li;Jun Shen;Liang Zhao
The potential of artificial intelligence (AI) to transform education has received considerable attention. This study explores the potential of large language models (LLMs) to assist students in studying for and passing standardized exams, even though many consider such claims to be overhyped. Using primary education as an example, this research investigates whether ChatGPT-3.5 can achieve satisfactory performance on the Chinese Primary School Exams and whether it can be used as a teaching aid or tutor. We designed an experimental framework and constructed a benchmark comprising 4800 questions collected from 48 tasks in Chinese elementary education settings. Through automatic and manual evaluations, we observed that ChatGPT-3.5's pass rate was below the required level of accuracy for most tasks, and the correctness of its answer interpretations was unsatisfactory. These results fell short of our initial expectations. However, comparative experiments between ChatGPT-3.5 and ChatGPT-4 indicated significant improvements in model performance, demonstrating the potential of using LLMs as teaching aids. This article also investigates the use of a trans-prompting strategy to reduce the impact of language bias and enhance question understanding, and presents a comparison of the models' performance and the improvement achieved under the trans-lingual problem decomposition prompting mechanism. Finally, we discuss the challenges associated with the appropriate application of AI-driven language models, along with future directions and limitations in the field of AI for education.
{"title":"Investigating the Efficacy of ChatGPT-3.5 for Tutoring in Chinese Elementary Education Settings","authors":"Yu Bai;Jun Li;Jun Shen;Liang Zhao","doi":"10.1109/TLT.2024.3464560","DOIUrl":"https://doi.org/10.1109/TLT.2024.3464560","url":null,"abstract":"The potential of artificial intelligence (AI) in transforming education has received considerable attention. This study aims to explore the potential of large language models (LLMs) in assisting students with studying and passing standardized exams, while many people think it is a hype situation. Using primary education as an example, this research investigates whether ChatGPT-3.5 can achieve satisfactory performance on the Chinese Primary School Exams and whether it can be used as a teaching aid or tutor. We designed an experimental framework and constructed a benchmark that comprises 4800 questions collected from 48 tasks in Chinese elementary education settings. Through automatic and manual evaluations, we observed that ChatGPT-3.5’s pass rate was below the required level of accuracy for most tasks, and the correctness of ChatGPT-3.5’s answer interpretation was unsatisfactory. These results revealed a discrepancy between the findings and our initial expectations. However, the comparative experiments between ChatGPT-3.5 and ChatGPT-4 indicated significant improvements in model performance, demonstrating the potential of using LLMs as a teaching aid. This article also investigates the use of the trans-prompting strategy to reduce the impact of language bias and enhance question understanding. We present a comparison of the models' performance and the improvement under the trans-lingual problem decomposition prompting mechanism. Finally, we discuss the challenges associated with the appropriate application of AI-driven language models, along with future directions and limitations in the field of AI for education.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2156-2171"},"PeriodicalIF":2.9,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142517999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-17 | DOI: 10.1109/TLT.2024.3462892
Xiangping Cui;Chen Du;Jun Shen;Susan Zhang;Juan Xu
Research shows that gamified learning experiences can effectively address persistent problems students face in online learning, such as a lack of sustained motivation and a tendency toward burnout, thereby improving the effectiveness of online learning. However, how to enhance the gamified learning experience in online learning, and how the gamified learning experience affects online learning effectiveness, remain to be explored further. Based on the theory of gamified learning experience, this research article uses structural equation modeling to explore the relationships among three dimensions of the experience (situation-based cognitive experience, collaboration-based social experience, and motivation-based subjectivity experience) and online learning effectiveness. The results indicate that the three dimensions are significantly and positively correlated with one another and that each has a significant positive impact on online learning effectiveness. The motivation-based subjectivity experience has the greatest impact, while the other two dimensions have similar, and also significant, positive impacts on online learning effectiveness. Finally, the article makes recommendations based on these conclusions, aiming to provide a research foundation for enhancing the gamified learning experience and improving the effectiveness of online learning.
{"title":"Impact of Gamified Learning Experience on Online Learning Effectiveness","authors":"Xiangping Cui;Chen Du;Jun Shen;Susan Zhang;Juan Xu","doi":"10.1109/TLT.2024.3462892","DOIUrl":"10.1109/TLT.2024.3462892","url":null,"abstract":"Research shows that gamified learning experiences can effectively improve the outstanding issues of students in online learning, such as lack of continuous motivation and easy burnout, thereby improving the effectiveness of online learning. However, how to enhance the gamified learning experience in online learning, and what impact there is between the gamified learning experience and the effectiveness of online learning, remain to be further explored. This research article is based on the theory of gamified learning experience and uses structural equation modeling methodology to explore the relationship among the three dimensions of situation-based cognitive experience, collaboration-based social experience, and motivation-based subjectivity experience and the effectiveness of online learning. The results indicate that there is a significant positive correlation among the three dimensions, and all three dimensions have a significant positive impact on the online learning effectiveness. The subjective experience based on motivation has the greatest impact on the online learning effectiveness, and the other two dimensions have a significant positive impact on the online learning effectiveness. The impact on online learning effectiveness is similar. Finally, the article makes recommendations based on the research conclusions, expecting to provide a research foundation for enhancing the gamified learning experience and improving the effectiveness of online learning.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2130-2139"},"PeriodicalIF":2.9,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-10 | DOI: 10.1109/TLT.2024.3451050
Seng Chee Tan;Kay Wijekumar;Huaqing Hong;Justin Olmanson;Robert Twomey;Tanmay Sinha
{"title":"Guest Editorial Education in the World of ChatGPT and Generative AI","authors":"Seng Chee Tan;Kay Wijekumar;Huaqing Hong;Justin Olmanson;Robert Twomey;Tanmay Sinha","doi":"10.1109/TLT.2024.3451050","DOIUrl":"https://doi.org/10.1109/TLT.2024.3451050","url":null,"abstract":"","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"2062-2064"},"PeriodicalIF":2.9,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10673879","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142174017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-09 | DOI: 10.1109/TLT.2024.3456447
Alejandra J. Magana;Syed Tanzim Mubarrat;Dominic Kao;Bedrich Benes
Fostering productive engagement within teams has been found to improve student learning outcomes. Consequently, characterizing productive and unproductive time during teamwork sessions is a critical preliminary step toward increasing engagement in teamwork meetings. However, research from the cognitive sciences has mainly focused on characterizing levels of productive engagement. Thus, the theoretical contribution of this study is a characterization of active and passive forms of engagement, as well as negative and positive forms of engagement. In tandem, researchers have used computer-based methods to supplement quantitative and qualitative analyses when investigating teamwork engagement. Yet, these studies have been limited to information extracted primarily from a single data stream, for instance, text data from discussion forums or video data from recordings. We developed an artificial intelligence (AI)-based automatic system that detects productive and unproductive engagement during live teamwork sessions. The technical contribution of this study is the use of three data streams from an interactive session: audio, video, and text. We automatically analyze them and determine each team's level of engagement: productive engagement, unproductive engagement, disengagement, or idle. The AI-based system was validated against hand-coded data. We used the system to characterize productive and unproductive engagement patterns in teams using deep learning methods. Results showed that there were $>$