Pub Date: 2024-08-31 | DOI: 10.1007/s10639-024-12963-x
Feifei Wang, Alan C. K. Cheung, Ching Sing Chai, Jin Liu
Just as learners perceive interactivity when interacting with instructors or peer learners in traditional learning environments, they can perceive interactivity when interacting with artificial intelligence (AI) in AI-supported learning environments. Advances in AI, such as generative AI tools including ChatGPT and Midjourney, enhance learners' perceived interactivity, thereby facilitating learning through AI-enabled interaction. However, education research has lacked a scale for measuring the perceived interactivity of learner-AI interaction. This study develops a 17-item scale to assess the extent to which a learner perceives interactivity with AI across four dimensions: responsiveness, personalization, learner control, and learning engagement. The sample comprised 422 Chinese university students for the first administration and 306 university students for the second. Both exploratory factor analysis and confirmatory factor analysis verified the factor structure of the scale. Cronbach's alpha was 0.948 for the whole scale, and the values for the four dimensions ranged between 0.820 and 0.915. Results suggested that the scale is a reliable and valid instrument. The study also found that perceived interactivity of learner-AI interaction was significantly associated with the AI tools used, learners' behavioral intentions to use AI in learning, months of using AI in learning, and average duration of each session of AI use in learning, but not with age, gender, education level, or field of education. Finally, theoretical and practical implications are discussed.
Title: Development and validation of the perceived interactivity of learner-AI interaction scale
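The reliability coefficient reported above can be computed directly from an item-score matrix. The following is a minimal illustrative sketch of the standard Cronbach's alpha formula (not the authors' analysis code), assuming a NumPy array of respondents × items:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scale scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)
```

With perfectly parallel items the statistic reaches 1.0; a whole-scale value of 0.948, as reported, indicates high internal consistency.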
The demand to incorporate informatics into primary education is seen as a critical necessity both today and for the future of modern societies. Numerous countries are currently revising their primary education curricula to incorporate informatics concepts and computational thinking skills. Although many successful initiatives have been implemented, countries commonly encounter shared obstacles related to teacher competence development, concept selection, learning content design, and the pedagogical approaches employed. This study explored the effectiveness of three pedagogical approaches on primary school students' learning of informatics concepts. Mixed-method research with a concurrent embedded design, in the form of a quasi-experimental study, was conducted to investigate the effectiveness of the three pedagogical approaches (two unplugged: role-play and hands-on; one plugged: technology-mediated). A total of 55 fourth-grade students participated in the intervention, where the instructional content focused on the five core concepts of informatics in primary school through 15 activities. Based on students' pretest and posttest results, as well as their reflections, unique advantages and drawbacks of the three pedagogical approaches were revealed. Gender differences in the results, reflections, and pedagogical approaches were each investigated. Although variations were noted in task completion and reflective outcomes, it is crucial to recognise that the effectiveness of any approach may be contingent upon other contextual factors. The findings of this study are significant in terms of the potential influence of various pedagogical approaches on future educational practices, as well as policies for instructional designers at the primary school level.
Title: To plug or not to plug: exploring pedagogical differences for teaching informatics in primary schools
Authors: Gabrielė Stupurienė, Tatjana Jevsikova, Yasemin Gülbahar, Anita Juškevičienė, Austėja Gindulytė, Agnė Juodagalvytė
Pub Date: 2024-08-31 | DOI: 10.1007/s10639-024-13000-7
Pub Date: 2024-08-31 | DOI: 10.1007/s10639-024-12999-z
Mihyun Son, Minsu Ha
Digital literacy is essential for scientific literacy in a digital world. Although the NGSS practices include many activities that require digital literacy, most studies have examined digital literacy from a generic perspective rather than a curricular context. This study aimed to develop a self-report tool to measure elements of digital literacy among middle and high school students in the context of science practice. Using Messick's validity framework, Rasch analysis was conducted to ensure the tool's validity. Initial items were developed from the NGSS, the KSES, other countries' curricula, and the related research literature. The final 38 items were reviewed by expert scientists and administered to 1194 students for statistical analysis. The results indicated that the tool could be divided into five dimensions of digital literacy in the context of science practice: collecting and recording data, analyzing and interpreting (statistics), analyzing and interpreting (tools), generating conclusions, and sharing and presenting. Item fit and reliability were analyzed. Most items did not show significant gender or school-level differences, but scores increased with grade level. Boys tended to perform better than girls, and this difference did not change with grade level. The analyzing and interpreting (tools) dimension showed the largest differences across school levels. The developed measurement tool suggests that digital literacy in the context of science practice is distinct from generic digital literacy, requiring a multi-contextual approach to teaching. Furthermore, the gender gap was evident in all areas and did not decrease at higher school levels, particularly in STEM-related items such as math and computational languages, indicating a need for focused education for girls. The tool developed in this study can serve as a baseline for teachers to identify students' levels and for students to set learning goals. It provides information on how digital literacy can be taught within a curricular context.
Title: Development of a digital literacy measurement tool for middle and high school students in the context of scientific practice
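For readers unfamiliar with the Rasch analysis mentioned above, the core of the model is a one-parameter logistic item-response function. A minimal illustrative sketch (not the study's implementation):

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Rasch model: probability that a person with ability theta answers an
    item of difficulty b correctly, P = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

When ability equals difficulty the probability is exactly 0.5; higher ability raises it, higher difficulty lowers it — fitting these two parameter sets to response data is what a Rasch calibration does.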
Engagement in self-regulated learning (SRL) may improve academic achievement and support the development of lifelong learning skills. Despite its educational potential, many students find SRL challenging. Educational chatbots have the potential to scaffold or externally regulate SRL processes by interacting with students in an adaptive way. However, to our knowledge, researchers have yet to establish whether and how the educational chatbots developed so far have (1) promoted learning processes pertaining to SRL and (2) improved student learning performance on different tasks. To contribute this new knowledge to the field, we conducted a systematic literature review of studies on educational chatbots that can be linked to processes of SRL, following the PRISMA guidelines. We collected and reviewed publications published between 2012 and 2023, and identified 27 publications for analysis. We found that educational chatbots have so far mainly supported learners to identify learning resources, enact appropriate learning strategies, and metacognitively monitor their studying. Limited guidance has been provided to students to set learning goals, create learning plans, reflect on their prior studying, and adapt their future studying. Most of the chatbots in the reviewed corpus of studies appeared to promote productive SRL processes and boost the learning performance of students across different domains, confirming the potential of this technology to support SRL. However, in some studies the chatbot interventions showed non-significant or mixed effects.
In this paper, we also discuss the findings and provide recommendations for future research.
Title: How educational chatbots support self-regulated learning? A systematic review of the literature
Authors: Rui Guan, Mladen Raković, Guanliang Chen, Dragan Gašević
Pub Date: 2024-08-30 | DOI: 10.1007/s10639-024-12881-y
Pub Date: 2024-08-30 | DOI: 10.1007/s10639-024-12984-6
Deniz Mertkan Gezgin, Tuğba Türk Kurtça
The purpose of this research is to create a reliable and valid scale to assess AIlessphobia in Education (the fear of being without Artificial Intelligence in education) among university students. In three phases, a sample of 1378 undergraduate students from different faculties at a public university participated in the reliability and validity investigations of the scale during the 2023–2024 academic year. Expert comments were obtained to assess the scale's face validity and content validity. The first group sample (n = 420) underwent exploratory factor analysis (EFA), the second group sample (n = 510) underwent confirmatory factor analysis (CFA), and the third group sample (n = 448) underwent criterion-related validity testing. EFA revealed that the scale had a two-factor structure with 18 items that explained 56.23% of the total variance. The CFA verified the scale's two-factor structure and produced good fit values (χ²/df = 2.25, CFI = 0.99, TLI = 0.99, NFI = 0.98, IFI = 0.99, SRMR = 0.049, RMSEA = 0.050 [0.42–0.57]). The first factor showed acceptable values for Guttman's lambda (λ = 0.930–0.948), McDonald's omega (ω = 0.923–0.929), and Cronbach's alpha (α = 0.925–0.935). Similarly, the second factor showed acceptable values for these measures (λ = 0.851–0.880, ω = 0.850–0.879, α = 0.847–0.877). Overall, the entire scale demonstrated acceptable values for Cronbach's alpha (0.925–0.935), McDonald's omega (0.922–0.942), and Guttman's lambda (0.940–0.942). Additionally, the scale exhibited a positive and statistically significant correlation with the Fırat Netlessphobia Scale, indicating satisfactory criterion validity. Cross-gender invariance analysis was also performed, showing that gender invariance was achieved. The results indicate that this scale is valid and reliable for university students. In conclusion, the scale fills a critical gap in educational research by providing a reliable tool to measure students' fears and anxieties about the absence of Artificial Intelligence (AI) in their learning experiences. By accurately assessing this unique form of anxiety, educators and policymakers can develop targeted interventions to better understand and mitigate students' fears and support the integration of AI in education, thereby enhancing its constructive contribution to learning.
Title: Developing the AIlessphobia in education scale and examining its psychometric characteristics
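The RMSEA point estimate reported above can be recovered from the model chi-square, its degrees of freedom, and the sample size. The following is an illustrative sketch of the standard textbook formula (not the authors' code; the bracketed interval in the abstract is as reported, not recomputed here):

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Point estimate of RMSEA from the model chi-square statistic, its
    degrees of freedom, and the sample size n: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```

A model whose chi-square does not exceed its degrees of freedom yields an RMSEA of 0; values around 0.05, as reported, are conventionally read as close fit.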
Pub Date: 2024-08-29 | DOI: 10.1007/s10639-024-12970-y
Hüseyin Ateş
Integrating Augmented Reality (AR) technology into Intelligent Tutoring Systems (ITS) has the potential to enhance science education outcomes among middle school students. The purpose of this research was to determine the benefits of an ITS-AR system over traditional science teaching methods regarding science learning outcomes, motivation, engagement, and student confidence in science education. Using a quasi-experimental setup with a pretest–posttest and a control group, the research compared the effects of the ITS-AR system with conventional science teaching. In the experiment, the ITS-AR system offered tailored feedback, adaptable learning routes, and targeted assistance to students based on their requirements and advancement. It also helped them visualize intricate scientific notions and experiments using AR technology. The findings indicated that the ITS-AR system significantly improved science learning outcomes compared to the conventional teaching method. Additionally, the students using the ITS-AR system were more motivated, engaged, and confident in their science education than those in the control group. These results point towards the benefits of combining AR with ITS to boost science education results and heighten student involvement and enthusiasm in science studies. This research highlights the potential for incorporating artificial intelligence into science teaching and the creation of efficient ITS-AR tools for science education.
Title: Integrating augmented reality into intelligent tutoring systems to enhance science education outcomes
Pub Date: 2024-08-29 | DOI: 10.1007/s10639-024-12962-y
Soojeong Jeong, Justin Rague, Kaylee Litson, David F. Feldon, M. Jeannette Lawler, Kenneth Plummer
Decision-based learning (DBL) is a novel pedagogical approach intended to improve students' conditional knowledge and problem-solving skills by exposing them to a sequence of branching learning decisions. The DBL software provided students with ample opportunities to engage in the expert decision-making processes involved in complex problem-solving and to receive just-in-time instruction and scaffolds at each decision point. The purpose of this study was to examine the effects of DBL on undergraduate students' learning performance in introductory physics courses, as well as the mediating roles of cognitive load and self-testing in such effects. We used a quasi-experimental posttest design across two sections of an online introductory physics course with a total of N = 390 participants. Contrary to our initial hypothesis, DBL instruction did not have a direct effect on cognitive load and had no indirect effect on student performance through cognitive load. Results also indicated that while DBL did not directly impact students' physics performance, self-testing positively mediated the relationship between DBL and student performance.
Our findings underscore the importance of students' use of self-testing, which plays a crucial role when engaging with DBL, as it can influence the effort invested in the domain task and thereby optimize learning performance.
Title: Effects of decision-based learning on student performance in introductory physics: The mediating roles of cognitive load and self-testing
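The mediation finding above rests on the product-of-coefficients (a×b) logic: a is the effect of the predictor on the mediator, b the effect of the mediator on the outcome controlling for the predictor. A minimal ordinary-least-squares sketch of that logic on synthetic data (illustrative only, not the study's analysis):

```python
import numpy as np

def indirect_effect(x: np.ndarray, m: np.ndarray, y: np.ndarray) -> float:
    """Indirect effect a*b in a simple mediation model: a is the slope of the
    mediator m on the predictor x; b is the slope of the outcome y on m,
    controlling for x. Both paths are fit by ordinary least squares."""
    ones = np.ones_like(x)
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0][2]
    return a * b
```

In practice the significance of a×b is usually assessed with a bootstrap confidence interval rather than a single point estimate.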
Pub Date : 2024-08-28
DOI: 10.1007/s10639-024-12995-3
Meina Zhu, Min Young Doo, Sara Masoud, Yaoxian Huang
This study examines the influences of learners’ motivation, self-monitoring, and self-management on learning satisfaction in online learning environments. The participants were 185 undergraduates and 99 graduate students majoring in computer science and engineering. The participants’ motivation, self-monitoring, self-management, and learning satisfaction were measured using a questionnaire. Results indicated that motivation, self-monitoring, and self-management significantly influenced learning satisfaction, and the three factors together accounted for approximately 60% of the variance in learning satisfaction. Motivation was the most influential factor in learning satisfaction. Group differences emerged between undergraduates and graduate students in the influences of motivation, self-monitoring, and self-management on learning satisfaction. Compared to undergraduate students, graduate students had statistically higher scores in motivation, self-monitoring, and self-management, but not in learning satisfaction. The three factors also influenced undergraduate and graduate students differently in the regression analysis results. Motivation and self-monitoring, but not self-management, influenced undergraduates’ learning satisfaction, whereas motivation and self-management, but not self-monitoring, influenced graduates’ learning satisfaction. Further studies are needed to investigate the reasons for the group differences. The implications are that instructors need to utilize self-directed learning (SDL) strategies extensively to enhance learning satisfaction in online learning. In addition, designers, instructors, and institutions should tailor learning strategies more effectively for their target audience given the differences in the influence of SDL on learning satisfaction between undergraduates and graduates.
{"title":"The influence of SDL on learning satisfaction in online learning and group differences between undergraduates and graduates","authors":"Meina Zhu, Min Young Doo, Sara Masoud, Yaoxian Huang","doi":"10.1007/s10639-024-12995-3","DOIUrl":"https://doi.org/10.1007/s10639-024-12995-3","url":null,"abstract":"<p>This study examines the influences of learners’ motivation, self-monitoring, and self-management on learning satisfaction in online learning environments. The participants were 185 undergraduates and 99 graduate students majoring in computer science and engineering. The participants’ motivation, self-monitoring, self-management, and learning satisfaction were measured using a questionnaire. Results indicated that motivation, self-monitoring, and self-management significantly influenced learning satisfaction, and the three factors together accounted for approximately 60% of the variance in learning satisfaction. Motivation was the most influential factor in learning satisfaction. Group differences emerged between undergraduates and graduate students in the influences of motivation, self-monitoring, and self-management on learning satisfaction. Compared to undergraduate students, graduate students had statistically higher scores in motivation, self-monitoring, and self-management, but not in learning satisfaction. The three factors also influenced undergraduate and graduate students differently in the regression analysis results. Motivation and self-monitoring, but not self-management, influenced undergraduates’ learning satisfaction, whereas motivation and self-management, but not self-monitoring, influenced graduates’ learning satisfaction. Further studies are needed to investigate the reasons for the group differences. The implications are that instructors need to utilize self-directed learning (SDL) strategies extensively to enhance learning satisfaction in online learning.
In addition, designers, instructors, and institutions should tailor the learning strategies more effectively for their target audience given the differences in the influence of SDL on learning satisfaction between undergraduates and graduates.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"122 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
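The headline regression finding above (three SDL factors jointly explaining roughly 60% of the variance in satisfaction) can be illustrated with ordinary least squares. This is a minimal sketch on synthetic data, not the study's dataset or analysis code; the coefficients and sample split are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 284  # 185 undergraduates + 99 graduate students, as in the study

# Synthetic predictors: motivation, self-monitoring, self-management
X = rng.normal(size=(n, 3))
# Satisfaction driven mostly by motivation (assumed weights), plus noise
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.25 * X[:, 2] + rng.normal(scale=0.5, size=n)

# Fit OLS with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2: proportion of satisfaction variance the three factors explain
resid = y - A @ coef
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(f"R^2 = {r2:.2f}")
```

Because motivation gets the largest synthetic weight, it also carries the largest standardized coefficient, mirroring the paper's "most influential factor" result.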
Pub Date : 2024-08-28
DOI: 10.1007/s10639-024-12988-2
Ji Hyun Yu, Devraj Chauhan
This paper presents a comprehensive analysis of the major themes in Natural Language Processing (NLP) applications for personalized learning, derived from a Latent Dirichlet Allocation (LDA) examination of top educational technology journals from 2014 to 2023. Our methodology involved collecting a corpus of relevant journal articles, applying LDA for thematic extraction, and conducting sentiment analysis on the identified themes. Four predominant themes were identified: Emotionally Intelligent NLP for Enhanced Writing Education, Interactive Conversational Tutors, Semantic and Sentiment Analysis in Video-based Learning, and Algorithmic Personalization in Massive Open Online Courses (MOOCs). The study highlights the growing importance of emotional intelligence in NLP, the development of AI-powered conversational tutors, and the strategic use of NLP to extract insights from multimedia content. Moreover, the study reveals a uniformly positive sentiment towards NLP’s potential in education, despite remaining challenges and the need for ethical consideration. No significant sentiment differences were found across the four themes, indicating a consensus on NLP’s value in diverse educational applications. This research underscores ongoing innovation in NLP to enhance personalized learning experiences and points to a promising future for its empirical validation and application in educational settings.
{"title":"Trends in NLP for personalized learning: LDA and sentiment analysis insights","authors":"Ji Hyun Yu, Devraj Chauhan","doi":"10.1007/s10639-024-12988-2","DOIUrl":"https://doi.org/10.1007/s10639-024-12988-2","url":null,"abstract":"<p>This paper presents a comprehensive analysis of the major themes in Natural Language Processing (NLP) applications for personalized learning, derived from a Latent Dirichlet Allocation (LDA) examination of top educational technology journals from 2014 to 2023. Our methodology involved collecting a corpus of relevant journal articles, applying LDA for thematic extraction, and conducting sentiment analysis on the identified themes. Four predominant themes were identified: Emotionally Intelligent NLP for Enhanced Writing Education, Interactive Conversational Tutors, Semantic and Sentiment Analysis in Video-based Learning, and Algorithmic Personalization in Massive Open Online Courses (MOOCs). The study highlights the growing importance of emotional intelligence in NLP, the development of AI-powered conversational tutors, and the strategic use of NLP to extract insights from multimedia content. Moreover, the study reveals a uniformly positive sentiment towards NLP’s potential in education, despite remaining challenges and the need for ethical consideration. No significant sentiment differences were found across the four themes, indicating a consensus on NLP’s value in diverse educational applications.
This research underscores ongoing innovation in NLP to enhance personalized learning experiences and suggests a promising future for its empirical validation and application in educational settings.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"58 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-28
DOI: 10.1007/s10639-024-12982-8
Guanyao Xu, Aiqing Yu, Cong Xu, Xianquan Liu, Guy Trainin
Students’ behavior and academic achievement improve when technology is integrated into instruction in ways that support learning. Technology integration competency (TIC) has become a critical capacity for second language (L2) pre-service teachers in the digital era. However, the significance of such competency has been neglected in the field of teaching Chinese as a second language (TCSL). This study developed a framework to investigate pre-service TCSL teachers’ TIC. The framework was developed by modifying the existing International Society for Technology in Education Standards for Educators (ISTE-SE) and was tailored to the context of L2 teaching. The framework also incorporated artificial intelligence (AI) concepts in addition to being content based. The structural relationships between pre-service teachers’ TIC and the influencing factors were analyzed through structural equation modeling (SEM). The results revealed that, among the seven factors describing an educator’s TIC, pre-service TCSL teachers reached the highest level of technology competency in the Analyst role and the lowest in the Designer role. Furthermore, pre-service TCSL teachers’ TIC was significantly affected by their technology course completion and grade level in their teacher education programs, though the strength of both relationships was small. This study makes several recommendations for strategically enhancing TCSL teacher education programs, emphasizing continuous development of TIC across various educator roles and academic levels.
{"title":"Investigating pre-service TCSL teachers’ technology integration competency through a content-based AI-inclusive framework","authors":"Guanyao Xu, Aiqing Yu, Cong Xu, Xianquan Liu, Guy Trainin","doi":"10.1007/s10639-024-12982-8","DOIUrl":"https://doi.org/10.1007/s10639-024-12982-8","url":null,"abstract":"<p>Students’ behavior and academic achievement improve when technology is integrated into instruction in ways that support learning. Technology integration competency (TIC) has become a critical capacity for second language (L2) pre-service teachers in the digital era. However, the significance of such competency has been neglected in the field of teaching Chinese as a second language (TCSL). This study developed a framework to investigate pre-service TCSL teachers’ TIC. The framework was developed by modifying the existing International Society for Technology in Education Standards for Educators (ISTE-SE) and was tailored to the context of L2 teaching. The framework also incorporated artificial intelligence (AI) concepts in addition to being content based. The structural relationships between pre-service teachers’ TIC and the influencing factors were analyzed through structural equation modeling (SEM). The results revealed that, among the seven factors describing an educator’s TIC, pre-service TCSL teachers reached the highest level of technology competency in the Analyst role and the lowest in the Designer role. Furthermore, pre-service TCSL teachers’ TIC was significantly affected by their technology course completion and grade level in their teacher education programs, though the strength of both relationships was small.
This study made several recommendations for a strategic enhancement of TCSL teacher education programs, emphasizing continuous development of TIC across various educator roles and academic levels.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"161 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}