Experimenting With Soft Robotics in Education: A Systematic Literature Review From 2006 to 2022
Pub Date: 2024-03-05 | DOI: 10.1109/TLT.2024.3372894
Israel Ulises Cayetano-Jiménez;Erick Axel Martinez-Ríos;Rogelio Bustamante-Bello;Ricardo A. Ramírez-Mendoza;María Soledad Ramírez-Montoya
Educational robotics (ER) is a discipline of applied robotics focused on teaching robot design, analysis, application, and operation. Traditionally, ER has favored rigid robots, overlooking the potential of soft robots (SRs). While rigid robots provide insights into dynamics, kinematics, and control, they offer limited opportunities to explore mechanical design and material properties in depth. In this regard, SRs present an opportunity to expand educational topics and activities in robotics through their unique bioinspired properties and accessibility. Despite their promise, there is a notable lack of research on SRs as educational tools, which limits the identification of research avenues that could promote their adoption in educational settings. This study conducts a systematic literature review to elucidate the impact of SRs across academic levels, pedagogical strategies, prevalent artificial muscles, educational activities, and assessment methods. The findings indicate a strong focus on K-12 workshops that use soft pneumatic actuators. Furthermore, SRs have fostered the development of fabrication and mechanical design skills beyond mere programming tasks. However, few studies analyze their use in higher education or their impact on learning outcomes, suggesting a critical need for comprehensive evaluations of their effectiveness rather than sole reliance on surveys of student feedback. Thus, there is an opportunity to explore and evaluate the use of SRs in more advanced settings and multidisciplinary activities, together with rigorous assessment of their influence on learning outcomes. In doing so, we aim to provide a foundation for integrating SRs into the ER curriculum, potentially transforming teaching methodologies and enriching students' learning experiences.
{"title":"Experimenting With Soft Robotics in Education: A Systematic Literature Review From 2006 to 2022","authors":"Israel Ulises Cayetano-Jiménez;Erick Axel Martinez-Ríos;Rogelio Bustamante-Bello;Ricardo A. Ramírez-Mendoza;María Soledad Ramírez-Montoya","doi":"10.1109/TLT.2024.3372894","DOIUrl":"10.1109/TLT.2024.3372894","url":null,"abstract":"Educational robotics (ER) is a discipline of applied robotics focused on teaching robot design, analysis, application, and operation. Traditionally, ER has favored rigid robots, overlooking the potential of soft robots (SRs). While rigid robots offer insights into dynamics, kinematics, and control, they have limitations in exploring the depths of mechanical design and material properties. In this regard, SRs present an opportunity to expand educational topics and activities in robotics through their unique bioinspired properties and accessibility. Despite their promise, there is a notable lack of research on SRs as educational tools, limiting the identification of research avenues that could promote their adoption in educational settings. This study conducts a systematic literature review to elucidate the impact of SRs across academic levels, pedagogical strategies, prevalent artificial muscles, educational activities, and assessment methods. The findings indicate a significant focus on K-12 workshops utilizing soft pneumatic actuators. Furthermore, SRs have fostered the development of fabrication and mechanical design skills beyond mere programming tasks. However, there is a shortage of studies analyzing their use in higher education or their impact on learning outcomes, suggesting a critical need for comprehensive evaluations to determine their effectiveness, rather than solely relying on surveys for student feedback. Thus, there is an opportunity to explore and evaluate the use of SRs in more advanced settings and multidisciplinary activities, urging for rigorous assessments of their influence on learning outcomes. By undertaking this, we aim to provide a foundation for integrating SRs into the ER curriculum, potentially transforming teaching methodologies and enriching students' learning experiences.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"1261-1278"},"PeriodicalIF":3.7,"publicationDate":"2024-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10460415","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140045730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Medical Training Through Learning From Mistakes by Interacting With an Ill-Trained Reinforcement Learning Agent
Pub Date: 2024-03-04 | DOI: 10.1109/TLT.2024.3372508
Yasar C. Kakdas;Sinan Kockara;Tansel Halic;Doga Demirel
This article presents a 3-D medical simulation that employs reinforcement learning (RL) and interactive RL (IRL) to teach and assess the procedure of donning and doffing personal protective equipment (PPE). The simulation is motivated by the need for effective, safe, and remote training techniques in medicine, particularly in light of the COVID-19 pandemic. The simulation has two modes: a tutorial mode and an assessment mode. In the tutorial mode, a computer-controlled, initially ill-trained agent uses RL to learn the correct sequence of donning the PPE by trial and error. This allows students to experience many outlier cases they might not encounter in an in-class educational model. In the assessment mode, an IRL-based method evaluates how effectively the participant corrects the mistakes made by the RL agent. Each time the RL agent interacts with the environment and performs an action, the participant provides positive or negative feedback on the action taken. Following the assessment, participants receive a score based on the accuracy of their feedback and the time taken for the RL agent to learn the correct sequence. An experiment was conducted with two groups of ten participants each. The first group received RL-assisted training for donning PPE, followed by an IRL-based assessment. The second group watched a video of the RL agent demonstrating only the correct donning order, without outlier cases, replicating traditional training, before undergoing the same assessment as the first group. Results showed that RL-assisted training with many outlier cases was more effective than traditional training with only regular cases. Moreover, combining RL with IRL significantly enhanced the participants' performance: 90% of the participants finished the assessment with perfect scores within three iterations, whereas only 10% of those who did not receive RL-assisted training did so, highlighting the substantial impact of integrating RL and IRL on participants' overall achievement.
{"title":"Enhancing Medical Training Through Learning From Mistakes by Interacting With an Ill-Trained Reinforcement Learning Agent","authors":"Yasar C. Kakdas;Sinan Kockara;Tansel Halic;Doga Demirel","doi":"10.1109/TLT.2024.3372508","DOIUrl":"10.1109/TLT.2024.3372508","url":null,"abstract":"This article presents a 3-D medical simulation that employs reinforcement learning (RL) and interactive RL (IRL) to teach and assess the procedure of donning and doffing personal protective equipment (PPE). The simulation is motivated by the need for effective, safe, and remote training techniques in medicine, particularly in light of the COVID-19 pandemic. The simulation has two modes: a tutorial mode and an assessment mode. In the tutorial mode, a computer-based, ill-trained RL agent utilizes RL to learn the correct sequence of donning the PPE by trial and error. This allows students to experience many outlier cases they might not encounter in an in-class educational model. In the assessment mode, an IRL-based method is used to evaluate how effective the participant is at correcting the mistakes performed by the RL agent. Each time the RL agent interacts with the environment and performs an action, the participants provide positive or negative feedback regarding the action taken. Following the assessment, participants receive a score based on the accuracy of their feedback and the time taken for the RL agent to learn the correct sequence. An experiment was conducted using two groups, each consisting of ten participants. The first group received RL-assisted training for donning PPE, followed by an IRL-based assessment. Meanwhile, the second group observed a video featuring the RL agent demonstrating only the correct donning order without outlier cases, replicating traditional training, before undergoing the same assessment as the first group. Results showed that RL-assisted training with many outlier cases was more effective than traditional training with only regular cases. Moreover, combining RL with IRL significantly enhanced the participants' performance. Notably, 90% of the participants finished the assessment with perfect scores within three iterations. In contrast, only 10% of those who did not engage in RL-assisted training finished the assessment with a perfect score, highlighting the substantial impact of RL and IRL integration on participants’ overall achievement.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"1248-1260"},"PeriodicalIF":3.7,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140037300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward an AI Knowledge Assistant for Context-Aware Learning Experiences in Software Capstone Project Development
Pub Date: 2024-03-03 | DOI: 10.1109/TLT.2024.3396735
Andrés Neyem;Luis A. González;Marcelo Mendoza;Juan Pablo Sandoval Alcocer;Leonardo Centellas;Carlos Paredes
Software assistants have significantly impacted software development for both practitioners and students, particularly in capstone projects. The effectiveness of these tools varies with their knowledge sources: assistants with localized, domain-specific knowledge may have limitations, while tools such as ChatGPT, trained on broad datasets, might offer recommendations that do not always match the specific objectives of a capstone course. Addressing a gap in current educational technology, this article introduces an AI Knowledge Assistant designed to overcome the limitations of existing tools by enhancing the quality and relevance of large language model (LLM) output. It achieves this by integrating contextual knowledge from a local “lessons learned” database tailored to the capstone course. We conducted a study with 150 students who used the assistant during their capstone course. Integrated into the Kanban project tracking system, the assistant offered recommendations using different strategies: direct searches in the lessons-learned database, direct queries to a generative pretrained transformer (GPT) model, query enrichment with lessons learned before submission to GPT and Large Language Model Meta AI (LLaMA) models, and query enhancement with Stack Overflow data before GPT processing. Survey results underscored a strong preference among students for direct LLM queries and for queries enriched with local repository insights, highlighting the assistant's practical value. Furthermore, our linguistic analysis demonstrated that texts generated by the LLM closely mirrored the linguistic standards and topical relevance of university course requirements. This alignment not only fosters a deeper understanding of course content but also enhances the material's applicability to real-world scenarios.
{"title":"Toward an AI Knowledge Assistant for Context-Aware Learning Experiences in Software Capstone Project Development","authors":"Andrés Neyem;Luis A. González;Marcelo Mendoza;Juan Pablo Sandoval Alcocer;Leonardo Centellas;Carlos Paredes","doi":"10.1109/TLT.2024.3396735","DOIUrl":"10.1109/TLT.2024.3396735","url":null,"abstract":"Software assistants have significantly impacted software development for both practitioners and students, particularly in capstone projects. The effectiveness of these tools varies based on their knowledge sources; assistants with localized domain-specific knowledge may have limitations, while tools, such as ChatGPT, using broad datasets, might offer recommendations that do not always match the specific objectives of a capstone course. Addressing a gap in current educational technology, this article introduces an AI Knowledge Assistant specifically designed to overcome the limitations of the existing tools by enhancing the quality and relevance of large language models (LLMs). It achieves this through the innovative integration of contextual knowledge from a local “lessons learned” database tailored to the capstone course. We conducted a study with 150 students using the assistant during their capstone course. Integrated into the Kanban project tracking system, the assistant offered recommendations using different strategies: direct searches in the lessons learned database, direct queries to a generative pretrained transformers (GPT) model, query enrichment with lessons learned before submission to GPT and large language model meta AI (LLaMa) models, and query enhancement with Stack Overflow data before GPT processing. Survey results underscored a strong preference among students for direct LLM queries and those enriched with local repository insights, highlighting the assistant's practical value. Furthermore, our linguistic analysis conclusively demonstrated that texts generated by the LLM closely mirrored the linguistic standards and topical relevance of university course requirements. This alignment not only fosters a deeper understanding of course content but also significantly enhances the material's applicability to real-world scenarios.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"1639-1654"},"PeriodicalIF":3.7,"publicationDate":"2024-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140826943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Multimode Teaching Behavior Analysis: A Pipeline-Based Event Segmentation and Description
Pub Date: 2024-03-02 | DOI: 10.1109/TLT.2024.3396159
Qiuyu Zheng;Zengzhao Chen;Mengke Wang;Yawen Shi;Shaohui Chen;Zhi Liu
The rationality and effectiveness of classroom teaching behavior directly influence the quality of classroom instruction. Analyzing teaching behavior intelligently can provide robust data support for teacher development and teaching supervision. By observing teachers' verbal and nonverbal behaviors in the classroom, valuable data on teacher–student interaction, classroom atmosphere, and teacher–student rapport can be obtained. However, traditional approaches to teaching behavior analysis primarily focus on student groups in the classroom, neglecting intelligent analysis of, and intervention in, teacher behavior. Moreover, these methods often rely on manual annotation and decision making, which are time consuming, labor intensive, and inefficient. To address these limitations, this article proposes an automated multimode teaching behavior analysis framework, AMTBA. First, a model for segmenting classroom events is introduced, which logically partitions teacher behavior sequences. Next, high-performing deep learning models are applied to the multimode analysis and identification of the segmented classroom events, enabling fine-grained measurement of teacher behavior in terms of verbal interaction, emotion, gaze, and position. Overall, we establish a uniform description framework. We apply AMTBA to eight classrooms and use the resulting teacher behavior data to analyze differences. The empirical results reveal differences in teacher behavior across teacher types, teaching modes, and classes. These findings provide an efficient solution for large-scale, multidisciplinary educational analysis and demonstrate the practical value of AMTBA in educational analytics.
{"title":"Automated Multimode Teaching Behavior Analysis: A Pipeline-Based Event Segmentation and Description","authors":"Qiuyu Zheng;Zengzhao Chen;Mengke Wang;Yawen Shi;Shaohui Chen;Zhi Liu","doi":"10.1109/TLT.2024.3396159","DOIUrl":"10.1109/TLT.2024.3396159","url":null,"abstract":"The rationality and the effectiveness of classroom teaching behavior directly influence the quality of classroom instruction. Analyzing teaching behavior intelligently can provide robust data support for teacher development and teaching supervision. By observing verbal and nonverbal behaviors of teachers in the classroom, valuable data on teacher–student interaction, classroom atmosphere, and teacher–student rapport can be obtained. However, traditional approaches of teaching behavior analysis primarily focus on student groups in the classroom, neglecting intelligent analysis and intervention of teacher behavior. Moreover, these traditional methods often rely on manual annotation and decision making, which are time consuming and labor intensive, and cannot efficiently facilitate analysis. To address these limitations, this article proposes an innovative automated multimode teaching behavior analysis framework, known as AMTBA. First, a model for segmenting classroom events is introduced, which separates teacher behavior sequences logically. Next, this article utilizes deep learning strategies with optimal performance to conduct multimode analysis and identification of split classroom events, enabling the fine-grained measurement of teacher's behavior in terms of verbal interaction, emotion, gaze, and position. Overall, we establish a uniform description framework. The AMTBA framework is utilized to analyze eight classrooms, and the obtained teacher behavior data are used to analyze differences. The empirical results reveal the differences of teacher behavior in different types of teachers, different teaching modes, and different classes. These findings provide an efficient solution for large-scale and multidisciplinary educational analysis and demonstrate the practical value of AMTBA in educational analytics.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"1717-1733"},"PeriodicalIF":3.7,"publicationDate":"2024-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140827092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of an Intelligent Tutoring System That Assesses Internal Visualization Skills in Engineering Using Multimodal Triangulation
Pub Date: 2024-03-02 | DOI: 10.1109/TLT.2024.3396393
Hanall Sung;Martina A. Rau;Barry D. Van Veen
In many science, technology, engineering, and mathematics (STEM) domains, instruction on foundational concepts relies heavily on visuals. Instructors often assume that students can mentally visualize concepts, but many students struggle with internal visualization skills, that is, the ability to mentally visualize information. To address this issue, we developed both a formal and an informal assessment of students' internal visualization skills in the context of engineering instruction. To validate the assessments, we used data triangulation methods, drawing on data from two separate studies: a small-scale lab experiment and a larger-scale classroom study. Our studies demonstrate that an intelligent tutoring system with interactive visual representations can serve as an informal assessment of students' internal visualization skills, predicting their performance on a formal assessment of these skills. Our study enriches the methodological and theoretical underpinnings of educational research and practice in multiple ways: it contributes to research methodologies by illustrating how multimodal triangulation can be used for test development, to theories of learning by offering pathways to assessing internal visualization skills that are not directly observable, and to instructional practices in STEM education by enabling instructors to determine when and where to provide additional scaffolding.
{"title":"Development of an Intelligent Tutoring System That Assesses Internal Visualization Skills in Engineering Using Multimodal Triangulation","authors":"Hanall Sung;Martina A. Rau;Barry D. Van Veen","doi":"10.1109/TLT.2024.3396393","DOIUrl":"10.1109/TLT.2024.3396393","url":null,"abstract":"In many science, technology, engineering, and mathematics (STEM) domains, instruction on foundational concepts heavily relies on visuals. Instructors often assume that students can mentally visualize concepts but students often struggle with internal visualization skills—the ability to mentally visualize information. In order to address this issue, we developed a formal, as well as an informal assessment of students’ internal visualization skills in the context of engineering instruction. To validate the assessments, we used data triangulation methods. We drew on data from two separate studies conducted in a small-scale lab experiment and in a larger-scale classroom context. Our studies demonstrate that an intelligent tutoring system with interactive visual representations can serve as an informal assessment of students’ internal visualization skills, predicting their performance on a formal assessment of these skills. Our study enriches methodological and theoretical underpinnings in educational research and practices in multiple ways: it contributes to research methodologies by illustrating how multimodal triangulation can be used for test development, theories of learning by offering pathways to assessing internal visualization skills that are not directly observable, and instructional practices in STEM education by enabling instructors to determine when and where they should provide additional scaffoldings.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"17 ","pages":"1625-1638"},"PeriodicalIF":3.7,"publicationDate":"2024-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140826860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-26 | DOI: 10.1109/TLT.2024.3369690
Jijian Lu;Ruxin Zheng;Zikun Gong;Huifen Xu
Generative artificial intelligence (AI) has emerged as a noteworthy milestone and a consequential advancement across major disciplines in science and technology. This study explores the effects of generative AI-assisted preservice teaching skills training on preservice teachers' self-efficacy and higher order thinking. The participants were 215 preservice mathematics, science, and computer teachers from a university in China. A pretest–posttest quasi-experimental design was implemented with an experimental group (teaching skills training assisted by generative AI) and a control group (teaching skills training by traditional methods), measuring the teacher self-efficacy and higher order thinking of both groups before and after the experiment. A semistructured interview with open-ended questions was then administered to 25 preservice teachers in the experimental group to elicit their views on generative AI-assisted teaching. The results showed that the scores of preservice teachers in the experimental group, who used generative AI for their professional development, were considerably higher than those of the control group, both in teacher self-efficacy (F