Pub Date: 2025-06-17 | DOI: 10.1109/TLT.2025.3580536
Fulan Fan;Siyu Wang;Mai Dinuer · Mai Hemuti;Xin Nie;Laurence T. Yang
Intelligence augmentation can offer personalized learning resources and pathways tailored to each student’s unique characteristics and needs. Among these advancements, the large language model (LLM) agent has ushered in a new revolution in education. In this study, we constructed a metacognitive reflective learning scaffold (MRLS) grounded in metacognitive theory and reflective learning principles to provide conceptual support for students during their reflective practices. In addition, we developed a metacognitive reflective learning agent (MRLA) on the Coze platform, designed to deliver personalized guidance and assistance throughout the reflective learning process. We conducted a 16-week $2 \times 2$ quasi-experimental study at Z University in China, where participants were randomly assigned to four groups. Throughout the research process, we collected dialogue data from students using the Coze platform, as well as reflection reports submitted via the XueXiTong platform, for quantitative analysis. Empirical results demonstrated that both the MRLS and MRLA significantly enhanced students’ metacognition, indicating that the MRLS offers precise guidance for students’ reflective learning processes, enabling them to better comprehend and articulate their reflections. The MRLA equips students with more convenient, efficient, and intelligent resources, significantly augmenting the metacognitive training support that would otherwise be provided by teachers. This study emphasizes the validity and necessity of the MRLS and MRLA for cultivating students’ metacognitive ability and provides insights for the future application of LLM agents and learning scaffolds to optimize students’ learning processes.
Enhancing Students’ Metacognition With Innovative IA-Based Metacognitive Reflective Learning Tool. IEEE Transactions on Learning Technologies, vol. 18, pp. 699–715.
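A $2 \times 2$ factorial design like the one described above is typically analyzed with a two-way ANOVA. The sketch below uses invented gain scores for a balanced design (the group sizes, scores, and resulting effect are all hypothetical, not the study's data):

```python
# Hypothetical, balanced 2x2 design (factor A: MRLS present/absent;
# factor B: MRLA present/absent); all scores are invented for illustration.
cells = {
    (1, 1): [4.1, 4.3, 3.9, 4.4],   # MRLS + MRLA
    (1, 0): [3.6, 3.8, 3.5, 3.7],   # MRLS only
    (0, 1): [3.4, 3.6, 3.3, 3.5],   # MRLA only
    (0, 0): [2.9, 3.1, 2.8, 3.0],   # neither
}
n = 4                                # replicates per cell
scores = [s for cell in cells.values() for s in cell]
grand = sum(scores) / len(scores)

def marginal(level, axis):
    """Mean of all scores at one level of one factor."""
    vals = [s for key, cell in cells.items() if key[axis] == level for s in cell]
    return sum(vals) / len(vals)

cell_mean = {k: sum(v) / n for k, v in cells.items()}
ss_a = 2 * n * sum((marginal(l, 0) - grand) ** 2 for l in (0, 1))
ss_b = 2 * n * sum((marginal(l, 1) - grand) ** 2 for l in (0, 1))
ss_within = sum((s - cell_mean[k]) ** 2 for k, v in cells.items() for s in v)
ss_ab = sum(n * (cell_mean[k] - marginal(k[0], 0) - marginal(k[1], 1) + grand) ** 2
            for k in cells)

f_a = ss_a / (ss_within / (4 * (n - 1)))   # df_A = 1, df_within = 12
print(f"F(MRLS) = {f_a:.1f}")
```

For a balanced design the sums of squares partition exactly (SS_A + SS_B + SS_AB + SS_within = SS_total), which is a convenient sanity check on hand-rolled code like this.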
Pub Date: 2025-06-12 | DOI: 10.1109/TLT.2025.3579226
Rommel Gutierrez;Alexandra Maldonado Navarro;William Villegas-Ch;Aracely Mera-Navarrete
This study explores the use of a serious game integrating the RIASEC Test within an interactive narrative for vocational guidance. The game generates dynamic, personalized scenarios in real time using artificial intelligence, enhancing engagement and authenticity. Implemented with 200 students from high school and university levels, the study evaluated its effectiveness through satisfaction surveys, qualitative interviews, and pregame and postgame responses. Results showed high satisfaction (4.2–4.5 on a five-point Likert scale) and a significant correlation between game responses and academic performance, aligning vocational interests with academic strengths. These findings demonstrate the potential of serious games to personalize career guidance and support comprehensive student development.
The Use of Interactive Narratives in Educational Games to Assess Vocational Interests: An Application of the RIASEC Test Integrated With OpenAI. IEEE Transactions on Learning Technologies, vol. 18, pp. 639–651.
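The reported correlation between game responses and academic performance is the kind of relationship a Pearson coefficient captures. A minimal sketch with invented per-student data (the scores below are illustrative, not the study's):

```python
import math
from statistics import mean

# Hypothetical data: mean postgame Likert response per student (1-5 scale)
# and their academic average (0-10 scale); values invented for illustration.
likert = [4.2, 3.8, 4.5, 3.1, 4.0, 4.4, 3.5, 4.1]
grades = [8.1, 7.2, 8.8, 6.0, 7.5, 8.5, 6.9, 7.8]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"r = {pearson(likert, grades):.2f}")
```

In practice one would also report a p-value or confidence interval for r; with only a handful of students, a strong-looking coefficient can still be unstable.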
Pub Date: 2025-06-06 | DOI: 10.1109/TLT.2025.3577950
Jiwon Lai Kim;Gahgene Gweon;Muhsin Menekse
According to the interactive–constructive–active–passive (ICAP) framework, engaging in constructive cognitive modes yields better learning outcomes than active modes. Also, prior studies on educational chatbots suggest that enhancing chatbot humanness can improve learning. However, these two ideas have not been fully explored together, especially within the context of text-based, disembodied chatbots. This study investigates the impact of the cognitive engagement modes (constructive versus active) and chatbot humanness (humanized versus nonhumanized) on learning outcomes and five dimensions of learning motivation. We conducted a two-by-two factorial user experiment with 55 chatbot users. Data were analyzed through a mixed-method approach to examine the main and interaction effects of the two independent variables. Regarding learning outcomes, our data showed that learners who interacted with constructive chatbots showed higher learning outcomes than those who interacted with active chatbots. In addition, learners who interacted with nonhumanized chatbots reported higher learning outcomes than those who interacted with humanized chatbots. Lastly, we observed a significant interaction effect between the two independent variables on tension-pressure and perceived competence, which are two dimensions of learning motivation. Our study extended the applicability of the ICAP framework to the domain of chatbot-based learning, challenged the assumption that the humanness of chatbots can lead to improved learning outcomes, and underscored the importance of exploring both the cognitive engagement modes and the humanness of chatbots when designing chatbots to enhance users’ learning motivation.
The Effects of Cognitive Engagement and Humanness of Chatbots on Learning Outcomes and Motivation. IEEE Transactions on Learning Technologies, vol. 18, pp. 652–665.
Pub Date: 2025-06-02 | DOI: 10.1109/TLT.2025.3575936
Linda Greta Dui;Chiara Piazzalunga;Simone Toffoli;Milad Malavolti;Francesca Lunardini;Nicola Basilico;Matteo Luperto;Michele Antonazzi;N. Alberto Borghese;Luca Maggi;Arso Savanovic;Andrej Krpic;Stefania Fontolan;Marisa Bortolozzo;Sandro Franceschini;Stefano Bonometti;Cristiano Termine;Simona Ferrante
The screening of specific learning disabilities faces many challenges, such as: 1) the lack of an educational alliance between schools, families, and clinicians; 2) the lack of quantitative data about children’s difficulties and their progression over time; and 3) inefficient access to care when it is truly needed. To address these issues, this work presents ESSENCE, a platform aimed at supporting schools throughout the entire process, from identifying children with difficulties to reporting cases to child neuropsychiatrists. Following an iterative co-design process with all relevant stakeholders, several system components were defined, developed, and refined in response to user feedback. The final prototype was field-tested over one year by approximately 70 children, their teachers, and some clinicians. Compliance, the System Usability Scale, and custom satisfaction questionnaires were used to evaluate the system. Compliance was high, with at least five sessions conducted in 80% of the weeks. Usability was rated as good by 82% of children, and 92% were satisfied with the experience. The quantitative data collected through ESSENCE enhance the process of identifying specific learning disabilities, making it more targeted and beneficial for both children in need and the health-care system.
Co-Design and Field Testing of a Platform to Support the Screening of Learning Difficulties. IEEE Transactions on Learning Technologies, vol. 18, pp. 684–698.
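The compliance figure quoted above (the share of weeks with at least five sessions) reduces to a simple proportion. A sketch with invented weekly session counts (the numbers below are illustrative, not the study's logs):

```python
# Hypothetical weekly session counts for one class over a ten-week span;
# compliance = share of weeks with at least five completed sessions.
weekly_sessions = [6, 5, 7, 4, 5, 6, 5, 3, 6, 5]
MIN_SESSIONS = 5

compliant_weeks = sum(1 for w in weekly_sessions if w >= MIN_SESSIONS)
compliance = compliant_weeks / len(weekly_sessions)
print(f"compliance: {compliance:.0%}")  # 8 of 10 weeks meet the threshold
```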
Pub Date: 2025-04-25 | DOI: 10.1109/TLT.2025.3564610
Yukun Xu;Hui Li
Primary school students, despite their vulnerability to cyberattacks, lack targeted cybersecurity education. Using Scopus and Google Scholar, this scoping review analyzed 15 articles (2014–2024) following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Extension for Scoping Reviews guidelines to examine the landscape of cybersecurity education for this age group. Four key themes emerged: educational tools and methods, theoretical perspectives on online risks, parental engagement, and children's online behaviors and risks. Findings revealed that while students possess some awareness, their understanding is often superficial, leading to overconfidence and risky online practices. A disconnect was observed between parents and teachers regarding responsibility and effective safety practices. Furthermore, limited implementation and outdated resources hinder the effectiveness of promising pedagogical approaches such as digital games and interactive platforms. This review highlights the urgent need for comprehensive cybersecurity education fostering critical thinking and stakeholder collaboration. It provides practical implications for educators, parents, and policymakers to promote a culture of online safety alongside recommendations for future research.
Cybersecurity Matters for Primary School Students: A Scoping Review of the Trends, Challenges, and Opportunities. IEEE Transactions on Learning Technologies, vol. 18, pp. 513–529.
Pub Date: 2025-04-24 | DOI: 10.1109/TLT.2025.3564177
Liuying Gong;Jingyuan Chen;Fei Wu
The capabilities of large language models (LLMs) in language comprehension, conversational interaction, and content generation have led to their widespread adoption across various educational stages and contexts. Given the fundamental role of education, concerns are rising about whether LLMs can serve as competent teachers. To address the challenge of comprehensively evaluating the competencies of LLMs as teachers, a systematic quantitative evaluation based on the competency model has emerged as a valuable approach. Our study, grounded in the teacher competency model and drawing from 14 existing scales, constructed an evaluation framework called TeacherComp. Based on TeacherComp, we evaluated six LLMs from OpenAI across four dimensions: knowledge, skills, values, and traits. Through comparisons between LLMs’ responses and human norms, we found that: 1) with each successive update, LLMs have shown overall improvements in knowledge, while their skills dimension scores have increasingly aligned with human norms; 2) there are both commonalities and differences in the performance of various LLMs regarding values and traits. For instance, while they all tend to exhibit more negative traits than humans, their morals can vary; and 3) LLMs with reduced security, constructed using jailbreak techniques, exhibit values and traits more closely aligned with human norms. Building on these findings, we provided interpretations and suggestions for the application of LLMs in various educational contexts. Overall, this study helps teachers and students use LLMs in appropriate contexts and provides developers with guidance for future iterations, thereby advancing the role of LLMs in empowering education.
Is ChatGPT a Competent Teacher? Systematic Evaluation of Large Language Models on the Competency Model. IEEE Transactions on Learning Technologies, vol. 18, pp. 530–541.
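Comparing an LLM's response on a scale against a human norm usually reduces to standardization against the norm's mean and standard deviation. A minimal sketch (the scale name, norm values, score, and alignment threshold are all assumptions for illustration, not values from the paper):

```python
# Hypothetical check of one LLM scale score against a human norm.
def z_score(score, norm_mean, norm_sd):
    """How many norm standard deviations the score sits from the norm mean."""
    return (score - norm_mean) / norm_sd

llm_agreeableness = 4.6          # LLM's mean item score on a 1-5 scale (invented)
norm_mean, norm_sd = 3.8, 0.4    # human norm for the same scale (invented)

z = z_score(llm_agreeableness, norm_mean, norm_sd)
aligned = abs(z) < 1.0           # "aligned" = within one SD of the human norm
print(f"z = {z:.1f}, aligned with norm: {aligned}")
```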
Pub Date: 2025-04-21 | DOI: 10.1109/TLT.2025.3555649
Ronak R. Mohanty;Peter Selly;Lindsey Brenner;Shantanu Vyas;Cassidy R. Nelson;Jason B. Moats;Joseph L. Gabbard;Ranjana K. Mehta
Immersive extended reality (XR) technologies, including augmented reality (AR), virtual reality, and mixed reality, are transforming the landscape of education and training through experiences that promote skill acquisition and enhance memory retention. These technologies have notably improved decision making and situational awareness in public safety training. Despite the promise of these advancements, XR adoption for emergency response has been slow. This hesitancy can be partially attributed to a lack of guidance for integrating these novel technologies into existing curricula. This work aims to guide instructional designers, curriculum developers, and technologists in seamlessly integrating immersive technologies into public safety training curricula. This work provides a comprehensive account of our collaboration with instructional designers, public safety personnel, and subject matter experts in developing an AR-based training curriculum for the Sort, Assess, Life-saving Interventions, Treatment/Transport triage technique used in mass casualty incidents (MCIs). In addition, we introduce a systematic framework for public safety curriculum development based on the Analyze, Design, Develop, Implement, Evaluate instructional design model. Leveraging a human-centered design approach, we first analyze the necessity for immersive learning in public safety. Next, we identify the obstacles in developing XR training experiences and describe the construction of a training prototype through iterative evaluations based on stakeholder feedback. Finally, we share qualitative insights from iterative evaluations with firefighters and emergency medical technicians performing MCI triage tasks in AR, supplemented by survey questionnaires and semistructured interviews. Our goal is to provide a blueprint for a successful integration of immersive technologies into public safety training curricula.
From Discovery to Design and Implementation: A Guide on Integrating Immersive Technologies in Public Safety Training. IEEE Transactions on Learning Technologies, vol. 18, pp. 387–401.
Pub Date: 2025-04-18 | DOI: 10.1109/TLT.2025.3562298
Sohail Ahmed Soomro;Halar Haleem;Bertrand Schneider;Georgi V. Georgiev
This study presents a monocular approach for capturing students' prototyping activities and interactions in digital-fabrication-based makerspaces. The proposed method uses images from a single camera and applies object reidentification, tracking, and depth estimation algorithms to track and uniquely label participants in the space, extracting both spatial and temporal information. A case study was conducted by recording a lab session in a digital-fabrication-based makerspace where students from a university undergraduate program turned their product ideas into tangible prototypes using digital fabrication. Moreover, a creativity test was conducted to assess individual creative competence. The findings reveal that the monocular approach effectively captures interactions among team members and instructors. It also identifies prototyping activities at individual and team levels. Furthermore, results demonstrate that the students with high and low creativity scores exhibit distinct patterns of interaction with instructors and teammates. Those with high creativity scores worked more independently and less collaboratively. Students with low creativity scores worked more collaboratively and less independently. The proposed monocular approach can be used in formal educational settings for student evaluation and prototyping activities. In addition, instructors can use this approach to assess and tailor teaching methods by promptly intervening and providing structures and scaffolding support to assist struggling students.
{"title":"Capturing Activities and Interactions in Makerspaces Using Monocular Computer Vision","authors":"Sohail Ahmed Soomro;Halar Haleem;Bertrand Schneider;Georgi V. Georgiev","doi":"10.1109/TLT.2025.3562298","DOIUrl":"https://doi.org/10.1109/TLT.2025.3562298","url":null,"abstract":"This study presents a monocular approach for capturing students' prototyping activities and interactions in digital-fabrication-based makerspaces. The proposed method uses images from a single camera and applies object reidentification, tracking, and depth estimation algorithms to track and uniquely label participants in the space, extracting both spatial and temporal information. A case study was conducted by recording a lab session in a digital-fabrication-based makerspace where students from a university undergraduate program turned their product ideas into tangible prototypes using digital fabrication. Moreover, a creativity test was conducted to assess individual creative competence. The findings reveal that the monocular approach effectively captures interactions among team members and instructors. It also identifies prototyping activities at individual and team levels. Furthermore, results demonstrate that the students with high and low creativity scores exhibit distinct patterns of interaction with instructors and teammates. Those with high creativity scores worked more independently and less collaboratively. Students with low creativity scores worked more collaboratively and less independently. The proposed monocular approach can be used in formal educational settings for student evaluation and prototyping activities. 
In addition, instructors can use this approach to assess and tailor teaching methods by promptly intervening and providing structures and scaffolding support to assist struggling students.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"18 ","pages":"470-483"},"PeriodicalIF":2.9,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10970091","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143900583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
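The reidentification step described above — matching people across frames so each participant keeps a persistent label — can be sketched as a toy tracker over appearance embeddings. This is a minimal illustrative sketch, not the authors' pipeline: the class name, cosine-similarity matching, and threshold are assumptions, and a real system would add detection, motion models, and depth estimation.

```python
import math

class MonocularTracker:
    """Toy reidentification tracker: assigns persistent IDs to
    per-frame detections by matching appearance embeddings."""

    def __init__(self, match_threshold=0.8):
        self.match_threshold = match_threshold
        self.tracks = {}   # track_id -> last seen embedding
        self.next_id = 0

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def update(self, embeddings):
        """Match one frame's detection embeddings to existing tracks;
        unmatched detections open new tracks. Returns assigned IDs."""
        ids = []
        for emb in embeddings:
            best_id, best_sim = None, self.match_threshold
            for tid, prev in self.tracks.items():
                sim = self._cosine(emb, prev)
                if sim > best_sim:
                    best_id, best_sim = tid, sim
            if best_id is None:          # unseen person: start a new track
                best_id = self.next_id
                self.next_id += 1
            self.tracks[best_id] = emb   # refresh the appearance model
            ids.append(best_id)
        return ids
```

With per-frame IDs in hand, spatial information (positions, depth) and temporal information (who is near whom, for how long) can be aggregated per participant, which is what enables the interaction analysis the abstract describes.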
Providing timely and personalized feedback on open-ended student responses is a challenge in education due to the increased workloads and time constraints educators face. While existing research has explored how learning analytics approaches can support feedback provision, previous studies have not sufficiently investigated educators' perspectives of how these strategies affect the assessment process. This article reports on the findings of a study that aimed to evaluate the impact of an artificial intelligence (AI)-driven platform designed to assist educators in the assessment and feedback process. Leveraging large language models and learning analytics, the platform supports educators by offering tag-based recommendations and AI-generated feedback to enhance the quality and efficiency of open-response evaluations. A controlled experiment involving 65 higher education instructors assessed the platform's effectiveness in real-world environments. Using the technology acceptance model, this study investigated the platform's usefulness and relevance from the instructors' perspectives. Moreover, we collected data from the platform's usage to identify patterns in instructors' behavior for different scenarios. Results indicate that AI-driven feedback significantly improved instructors' ability to provide detailed personalized feedback in less time. This study contributes to the growing research on AI applications in educational assessment and highlights key considerations for adopting AI-driven tools in instructional settings.
{"title":"Empowering Instructors With AI: Evaluating the Impact of an AI-Driven Feedback Tool in Learning Analytics","authors":"Cleon Xavier;Luiz Rodrigues;Newarney Costa;Rodrigues Neto;Gabriel Alves;Taciana Pontual Falcão;Dragan Gašević;Rafael Ferreira Mello","doi":"10.1109/TLT.2025.3562379","DOIUrl":"https://doi.org/10.1109/TLT.2025.3562379","url":null,"abstract":"Providing timely and personalized feedback on open-ended student responses is a challenge in education due to the increased workloads and time constraints educators face. While existing research has explored how learning analytic approaches can support feedback provision, previous studies have not sufficiently investigated educators' perspectives of how these strategies affect the assessment process. This article reports on the findings of a study that aimed to evaluate the impact of an artificial intelligence (AI)-driven platform designed to assist educators in the assessment and feedback process. Leveraging large language models and learning analytics, the platform supports educators by offering tag-based recommendations and AI-generated feedback to enhance the quality and efficiency of open-response evaluations. A controlled experiment involving 65 higher education instructors assessed the platform's effectiveness in real-world environments. Using the technology acceptance model, this study investigated the platform's usefulness and relevance from the instructors' perspectives. Moreover, we collected data from the platform's usage to identify partners in instructors' behavior for different scenarios. Results indicate that AI-driven feedback significantly improved instructors' ability to provide detailed personalized feedback in less time. 
This study contributes to the growing research on AI applications in educational assessment and highlights key considerations for adopting AI-driven tools in instructional settings.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"18 ","pages":"498-512"},"PeriodicalIF":2.9,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10970108","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143949241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
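The tag-based recommendation idea — the instructor selects rubric tags and the platform proposes ranked feedback snippets — might be sketched as follows. This is a hypothetical illustration, not the platform's actual API: the tag names, templates, and rank-by-past-acceptance heuristic are all assumptions.

```python
# Hypothetical tag -> feedback-template mapping, not the platform's data.
TAG_TEMPLATES = {
    "missing_evidence": "Support your claim with a concrete example or citation.",
    "unclear_thesis": "State your main argument explicitly in the opening paragraph.",
    "good_structure": "The answer is well organized; keep this structure.",
}

def recommend_feedback(tags, acceptance_history):
    """Return feedback suggestions for the selected tags, ranked by how
    often the instructor has accepted each tag's suggestion before."""
    ranked = sorted(
        (t for t in tags if t in TAG_TEMPLATES),
        key=lambda t: acceptance_history.get(t, 0),
        reverse=True,
    )
    return [TAG_TEMPLATES[t] for t in ranked]
```

In a fuller system, an LLM would then expand the selected template into response-specific prose; ranking by acceptance history is one simple way such a tool could adapt to an individual instructor's style.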
Pub Date : 2025-04-16DOI: 10.1109/TLT.2025.3561332
Xueqiao Zhang;Chao Zhang;Jianwen Sun;Jun Xiao;Yi Yang;Yawei Luo
Large language models (LLMs) have significantly advanced smart education in the artificial general intelligence era. A promising application lies in the automatic generation of instructional design for curriculum and learning activities, focusing on two key aspects: 1) customized generation: generating niche-targeted teaching content based on students' varying learning abilities and states and 2) intelligent optimization: iteratively optimizing content based on feedback from learning effectiveness or test scores. Currently, a single LLM cannot effectively manage the entire process, posing a challenge for designing intelligent teaching plans. To address these issues, we developed EduPlanner, an LLM-based multiagent system comprising an evaluator agent, an optimizer agent, and a question analyst, working in adversarial collaboration to generate customized and intelligent instructional design for curriculum and learning activities. Taking mathematics lessons as our example, EduPlanner employs a novel Skill-Tree structure to accurately model the background mathematics knowledge of student groups, personalizing instructional design for curriculum and learning activities according to students' knowledge levels and learning abilities. In addition, we introduce the CIDDP, an LLM-based 5-D evaluation module encompassing Clarity, Integrity, Depth, Practicality, and Pertinence, to comprehensively assess mathematics lesson plan quality and bootstrap intelligent optimization. Experiments conducted on the GSM8K and Algebra datasets demonstrate that EduPlanner excels in evaluating and optimizing instructional design for curriculum and learning activities. Ablation studies further validate the significance and effectiveness of each component within the framework.
{"title":"EduPlanner: LLM-Based Multiagent Systems for Customized and Intelligent Instructional Design","authors":"Xueqiao Zhang;Chao Zhang;Jianwen Sun;Jun Xiao;Yi Yang;Yawei Luo","doi":"10.1109/TLT.2025.3561332","DOIUrl":"https://doi.org/10.1109/TLT.2025.3561332","url":null,"abstract":"Large language models (LLMs) have significantly advanced smart education in the artificial general intelligence era. A promising application lies in the automatic generalization of instructional design for curriculum and learning activities, focusing on two key aspects: 1) <italic>customized generation:</i> generating niche-targeted teaching content based on students' varying learning abilities and states and 2) <italic>intelligent optimization:</i> iteratively optimizing content based on feedback from learning effectiveness or test scores. Currently, a single large LLM cannot effectively manage the entire process, posing a challenge for designing intelligent teaching plans. To address these issues, we developed EduPlanner, an LLM-based multiagent system comprising an evaluator agent, an optimizer agent, and a question analyst, working in adversarial collaboration to generate customized and intelligent instructional design for curriculum and learning activities. Taking mathematics lessons as our example, EduPlanner employs a novel Skill-Tree structure to accurately model the background mathematics knowledge of student groups, personalizing instructional design for curriculum and learning activities according to students' knowledge levels and learning abilities. In addition, we introduce the CIDDP, an LLM-based 5-D evaluation module encompassing <bold>C</b>larity, <bold>I</b>ntegrity, <bold>D</b>epth, <bold>P</b>racticality, and <bold>P</b>ertinence, to comprehensively assess mathematics lesson plan quality and bootstrap intelligent optimization. 
Experiments conducted on the GSM8K and Algebra datasets demonstrate that EduPlanner excels in evaluating and optimizing instructional design for curriculum and learning activities. Ablation studies further validate the significance and effectiveness of each component within the framework.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"18 ","pages":"416-427"},"PeriodicalIF":2.9,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143888394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
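A Skill-Tree in the sense described above — prerequisite-ordered skills annotated with a student group's mastery — could be sketched like this. The node names, mastery scores, threshold, and "teaching frontier" heuristic are illustrative assumptions, not the paper's actual data structure.

```python
# Hypothetical Skill-Tree sketch: nodes are skills, children are skills
# that require the parent as a prerequisite.
class SkillNode:
    def __init__(self, name, mastery=0.0):
        self.name = name
        self.mastery = mastery   # group-level mastery estimate in [0, 1]
        self.children = []       # skills building on this one

    def add_child(self, child):
        self.children.append(child)
        return child

def frontier(node, threshold=0.6):
    """Return the teaching frontier: the shallowest skills whose mastery
    falls below the threshold, walking from prerequisites downward.
    Skills below an unmastered prerequisite are deferred."""
    if node.mastery < threshold:
        return [node.name]
    targets = []
    for child in node.children:
        targets.extend(frontier(child, threshold))
    return targets
```

An instructional-design agent could then generate lesson content only for the frontier skills, which is one way group knowledge modeling can drive the customized generation the abstract describes.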