Pub Date: 2024-07-31 | DOI: 10.1007/s10639-024-12878-7
Jinhee Kim, Seongryeong Yu, Rita Detrick, Na Li
Generative artificial intelligence (GenAI), including large language models (LLMs), has rapidly emerged to support students in their academic writing process. Keeping pace with the technical and educational landscape requires careful consideration of the opportunities and challenges that GenAI-assisted systems create within education. This serves as a useful and necessary starting point for fully leveraging its potential for learning and teaching. Hence, it is crucial to gather insights from diverse perspectives and use cases from actual users, particularly the unique voices and needs of student-users. Therefore, this study explored and examined students' perceptions of and experiences with GenAI-assisted academic writing by conducting in-depth interviews with 20 Chinese students in higher education after they completed academic writing tasks using a ChatGPT4-embedded writing system developed by the research team. The study found that students expected AI to serve multiple roles, including a multi-tasking writing assistant, a virtual tutor, and a digital peer, to support multifaceted writing processes and performance. Students perceived that GenAI-assisted writing could benefit them in three areas: the writing process, writing performance, and the affective domain. Meanwhile, they also identified AI-related, student-related, and task-related challenges experienced during the GenAI-assisted writing activity. These findings contribute to a more nuanced understanding of GenAI's impact on academic writing that is inclusive of student perspectives, offering implications for educational AI design and instructional design.
{"title":"Exploring students’ perspectives on Generative AI-assisted academic writing","authors":"Jinhee Kim, Seongryeong Yu, Rita Detrick, Na Li","doi":"10.1007/s10639-024-12878-7","DOIUrl":"https://doi.org/10.1007/s10639-024-12878-7","url":null,"abstract":"<p>The rapid development of generative artificial intelligence (GenAI), including large language models (LLM), has merged to support students in their academic writing process. Keeping pace with the technical and educational landscape requires careful consideration of the opportunities and challenges that GenAI-assisted systems create within education. This serves as a useful and necessary starting point for fully leveraging its potential for learning and teaching. Hence, it is crucial to gather insights from diverse perspectives and use cases from actual users, particularly the unique voices and needs of student-users. Therefore, this study explored and examined students' perceptions and experiences about GenAI-assisted academic writing by conducting in-depth interviews with 20 Chinese students in higher education after completing academic writing tasks using a ChatGPT4-embedded writing system developed by the research team. The study found that students expected AI to serve multiple roles, including multi-tasking writing assistant, virtual tutor, and digital peer to support multifaceted writing processes and performance. Students perceived that GenAI-assisted writing could benefit them in three areas including the writing process, performance, and their affective domain. Meanwhile, they also identified AI-related, student-related, and task-related challenges that were experienced during the GenAI-assisted writing activity. These findings contribute to a more nuanced understanding of GenAI's impact on academic writing that is inclusive of student perspectives, offering implications for educational AI design and instructional design.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"67 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141871408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-30 | DOI: 10.1007/s10639-024-12913-7
Christopher C. Y. Yang, Jiun-Yu Wu, Hiroaki Ogata
Blended learning (BL) combines traditional classroom activities with online learning resources, enabling students to achieve higher academic performance through well-defined interactive learning strategies. However, lacking the capacity to self-regulate their learning, many students might fail to comprehensively study the learning materials after face-to-face learning. In this study, a learning analytics dashboard (LAD)-based self-regulated learning (SRL) approach is proposed to enhance students’ practice of SRL in an e-book-based BL environment. The proposed approach aims to support students in precisely reflecting on their face-to-face e-book reading activities, effectively reviewing the e-book learning materials after the face-to-face learning sessions, and, finally, setting new goals for their next face-to-face learning session using the LAD. To evaluate the effects of the proposed approach, a quasi-experimental design was deployed in a university-level course that adopted a BL model. The experimental group learned through the proposed approach using an e-book and the LAD, whereas the control group learned through the conventional BL approach using only the e-book. The results of a one-way analysis of covariance (ANCOVA) and a Mann–Whitney U test demonstrate a statistically significant difference (p-value less than 0.01) between the groups in students’ learning outcomes, awareness of SRL, self-efficacy (SE), and e-book reading engagement. This provides educators with evidence of the effectiveness of an explicit SRL approach in BL, which not only improves students’ learning outcomes in the given course and their awareness of self-regulation and SE, but also increases course engagement compared with conventional BL approaches.
{"title":"Learning analytics dashboard-based self-regulated learning approach for enhancing students’ e-book-based blended learning","authors":"Christopher C. Y. Yang, Jiun-Yu Wu, Hiroaki Ogata","doi":"10.1007/s10639-024-12913-7","DOIUrl":"https://doi.org/10.1007/s10639-024-12913-7","url":null,"abstract":"<p>Blended learning (BL) combines traditional classroom activities with online learning resources, enabling students to obtain higher academic performance through well-defined interactive learning strategies. However, lacking the capacity to self-regulate their learning, many students might fail to comprehensively study the learning materials after face-to-face learning. In this study, a learning analytics dashboard (LAD)-based self-regulated learning (SRL) approach is proposed to enhance the students’ practices of SRL in an e-book-based BL environment. The proposed approach aims to support students to precisely reflect on their face-to-face e-book reading activities, effectively review the e-book learning materials after the face-to-face learning sessions, and, finally, set new goals for their next face-to-face learning session by using a LAD. To evaluate the effects of the proposed approach, a quasi-experimental design was deployed in a university-level course that adopted a BL model. The experimental group learned through the proposed approach using an e-book and the LAD, whereas the control group learned using the conventional BL approach using only the e-book. The results of the one-way analysis of covariance (ANCOVA) and Mann–Whitney U test demonstrate a statistically significant difference (<i>p</i>-value less than 0.01) between both groups in terms of students’ learning outcomes, awareness of SRL, self-efficacy (SE), and e-book reading engagements. This provides educators with evidence of the effectiveness of an explicit SRL approach in BL, which not only improves student learning outcomes from the given course and awareness of self-regulation and SE but also increases course engagement compared to students who learn with conventional BL approaches.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"1 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141871406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-30 | DOI: 10.1007/s10639-024-12896-5
Masoud Rahimi, Jalil Fathi, Di Zou
Grounded in activity theory, we adopted a sequential explanatory mixed-methods approach to explore the impact of automated written corrective feedback (AWCF) on English as a foreign language (EFL) learners’ academic writing skills (i.e. task achievement, coherence and cohesion, lexicon, and grammatical range and accuracy). To this end, two intact classes were selected and randomly assigned to an electronic class (30 EFL learners), receiving AWCF electronically, and a non-electronic class (26 EFL learners), receiving written corrective feedback (WCF) non-electronically. Both groups of learners engaged in interactive writing activities guided by the principles of activity theory, which capitalised on the roles of writing collaboration, the social environment, and the mediation of electronic/non-electronic artefacts in developing writing skills. The required quantitative and qualitative data were collected via IELTS academic writing Task 1 and Task 2, a stimulated recall technique, and an individual semi-structured interview. The results of a one-way ANCOVA indicated that the electronic learners outperformed their non-electronic counterparts in writing performance, task achievement, and grammatical range and accuracy, whilst no significant differences were found between the two groups in coherence and cohesion or lexicon. The stimulated recall technique, conducted with seven electronic EFL learners, confirmed the electronic learners’ behavioural, cognitive, and affective engagement with the AWCF. The individual semi-structured interviews, conducted with the same electronic learners, also revealed the electronic learners’ positive and negative attitudes and perceptions towards the AWCF, further corroborating the findings. Pedagogical implications are discussed within the framework of activity theory to clarify how instructional procedures and learning environments can be designed to contribute more effectively to EFL learners’ interactive writing activities and, hence, their writing skills development.
{"title":"Exploring the impact of automated written corrective feedback on the academic writing skills of EFL learners: An activity theory perspective","authors":"Masoud Rahimi, Jalil Fathi, Di Zou","doi":"10.1007/s10639-024-12896-5","DOIUrl":"https://doi.org/10.1007/s10639-024-12896-5","url":null,"abstract":"<p>Grounded in the activity theory, we adopted a sequential explanatory mixed-methods approach to explore the impact of automated written corrective feedback (AWCF) on English as a foreign language (EFL) learners’ academic writing skills (i.e. task achievement, coherence and cohesion, lexicon, and grammatical range and accuracy). To this end, two intact classes were selected and randomly assigned to an electronic class (30 EFL learners), receiving AWCF electronically, and a non-electronic class (26 EFL learners), receiving written corrective feedback (WCF) non-electronically. Both groups of learners engaged in interactive writing activities guided by the principles of the activity theory, which capitalised on the roles of writing collaboration, social environment, and the mediation of electronic/nonelectronic artefacts to develop the writing skills. The required quantitative and qualitative data were collected via IELTS academic writing Task 1 and Task 2, a stimulated recall technique, and an individual semi-structured interview. The results of one-way ANCOVA indicated that the electronic learners outperformed their non-electronic counterparts in writing performance, task achievement, and grammatical range and accuracy, whilst no significant differences were found between the two groups’ coherence and cohesion and lexicon. The stimulated recall technique, conducted with seven electronic EFL learners, confirmed the electronic learners’ behavioural, cognitive, and affective engagement with the AWCF. The individual semi-structured interview, conducted with the same electronic learners, also indicated the electronic learners’ positive and negative attitudes and perceptions towards the AWCF, further corroborating the findings. Pedagogical implications are discussed within the framework of the activity theory to clarify how instructional procedures and learning environments can be designed to more effectively contribute to EFL learners’ interactive writing activities and, hence, their writing skills development.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"45 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141871407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-30 | DOI: 10.1007/s10639-024-12907-5
Fengjuan Chen, Si Zhang, Qingtang Liu, Shufan Yu, Xiaojuan Li, Xinxin Zheng
Though online peer assessment is recognized as a critical factor in enhancing learning performance, pedagogical strategies and analysis of students’ peer assessment at the group level, rather than the individual level, are underexplored. Online group assessment (OGA) focuses on assessing peer-group work in an online environment. A total of 64 student teachers participated in this study, divided into multiple groups of four. Each group was required to collaborate on completing an instructional design and to engage in OGA activities. We utilized the Technological Pedagogical Content Knowledge (TPACK) scale to assess the instructional designs of the student teachers, evaluating their ability to integrate technology, pedagogy, and content knowledge. In this research, we consider the TPACK score of each group’s instructional design as its learning performance. The correlations between providing, receiving, and responding to comments and group learning performance were explored by adopting a mixed methods approach. The results indicated that OGA enhanced group learning performance. Providing comments was more associated with improved group learning performance than receiving and responding to them. Furthermore, providing informative comments was more associated with group learning performance than providing other types of comments. In addition, innovative responses were positively associated with group learning performance, while uptake responses were negatively associated with it. Finally, discussion and suggestions for interventions at different stages of OGA are provided to help design and implement OGA activities in the future.
{"title":"Supporting learning performance improvement: Role of online group assessment","authors":"Fengjuan Chen, Si Zhang, Qingtang Liu, Shufan Yu, Xiaojuan Li, Xinxin Zheng","doi":"10.1007/s10639-024-12907-5","DOIUrl":"https://doi.org/10.1007/s10639-024-12907-5","url":null,"abstract":"<p>Though online peer assessment is recognized as a critical factor in enhancing learning performance, pedagogical strategies and analysis of students’ peer assessment at the group level, rather than the individual level, are underexplored. Online group assessment (OGA) focuses on assessing peer-group work in an online environment. A total of 64 student teachers participated in this study, where they were divided into multiple groups of four. Each group was required to collaborate on completing an instructional design and engage in OGA activities. We utilized the Technological Pedagogical Content Knowledge (TPACK) scale to assess the instructional designs of student teachers, evaluating their ability to integrate technology, pedagogy, and content knowledge. In this research, we consider the TPACK scores of each group’s instructional design as their learning performance. The correlations between providing, receiving, and responding to comments and group learning performance were explored by adopting a mixed methods approach. The results indicated that OGA enhanced group learning performance. Providing comments was more associated with improved group learning performance than receiving and responding to them. Furthermore, providing informative comments was more associated with group learning performance than providing other types of comments. In addition, <i>innovative</i> responses were positively associated with group learning performance, while <i>uptake</i> responses were negatively associated with group learning performance. Finally, the discussion and suggestions of intervention for different stages of OGA are provided to help design and implement OGA activities in the future.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"78 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141873280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-29 | DOI: 10.1007/s10639-024-12914-6
László Horváth, Tibor M. Pintér, Helga Misley, Ida Dringó-Horváth
Digital competence is crucial for technology integration in education, with teacher educators playing a vital role in preparing student teachers for digitalized environments. In our conceptualization of teachers’ digital competence (TDC), we emphasize its embeddedness in a professional context. The Digital Competence for Educators (DigCompEdu) framework aligns with this understanding, yet research focusing on teacher educators is limited. To address this gap, we followed a quantitative research strategy to explore different sources of validity evidence for the DigCompEdu in a small, non-representative Hungarian teacher-educator sample (N = 183) via an online questionnaire. Our study, regarding the DigCompEdu as a measure of TDC, aims to (1) establish validity evidence based on internal structure via Partial Least Squares structural equation modelling to evaluate the validity and reliability of the tool, (2) compare TDC self-categorization with test results to provide validity evidence based on the consequences of testing, and (3) explore validity evidence based on relationships of TDC with other variables such as age, technological competence, and pedagogical competence. Our findings reveal a significant mediating effect of professional engagement on teacher educators’ ability to support student teachers’ digital competence development. Despite the sample’s limitations, this study contributes to refining the DigCompEdu framework and highlights the importance of professional engagement in fostering digital competence among teacher educators.
{"title":"Validity evidence regarding the use of DigCompEdu as a self-reflection tool: The case of Hungarian teacher educators","authors":"László Horváth, Tibor M. Pintér, Helga Misley, Ida Dringó-Horváth","doi":"10.1007/s10639-024-12914-6","DOIUrl":"https://doi.org/10.1007/s10639-024-12914-6","url":null,"abstract":"<p>Digital competence is crucial for technology integration in education, with teacher educators playing a vital role in preparing student teachers for digitalized environments. In our conceptualization of teachers’ digital competence (TDC), we emphasize its embeddedness in a professional context. The Digital Competence for Educators (DigCompEdu) framework aligns with this understanding, yet research focusing on teacher educators is limited. To address this gap, we followed a quantitative research strategy to explore different sources of validity evidence for the DigCompEdu in a small, non-representative Hungarian teacher-educator sample (<i>N</i> = 183) via an online questionnaire. Our study, regarding the DigCompEdu as a measure of TDC, aims to (1) establish validity evidence based on internal structure concerns via Partial Least Squares structural equation modelling to evaluate the validity and reliability of the tool, (2) compare TDC self-categorization with test results to provide validity evidence based on the consequences of testing, and (3) explore validity evidence based on relationships of TDC with other variables such as age, technological, and pedagogical competence. Our findings reveal a significant mediating effect of professional engagement on teacher educators’ ability to support student teachers’ digital competence development. Despite the sample’s limitation, this study contributes to refining the DigCompEdu framework and highlights the importance of professional engagement in fostering digital competence among teacher educators.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"1117 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141871409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-29 | DOI: 10.1007/s10639-024-12904-8
Olivier Habimana, Mathias Nduwingoma, Irénée Ndayambaje, Jean Francois Maniraho, Ali Kaleeba, Dany Kamuhanda, Evariste Mwumvaneza, Marie Claire Uwera, Albert Ngiruwonsanga, Evode Mukama, Celestin Ntivuguruzwa, Gerard Nizeyimana, Ezechiel Nsabayezu
This study explores the level of engagement with Information and Communication Technology (ICT)-supported content among students and teachers in science and basic computing subjects at Rwandan lower secondary schools. Data were collected from ten well-equipped smart classrooms across ten schools. The sample of 394 participants included ten deputy headteachers, 40 teachers, and 344 students. Interviews, classroom observations, and surveys were used for data collection. The findings revealed a significant digital divide among students due to limited ICT literacy, time constraints, and limited access to computer devices. The findings also indicate that teachers faced various challenges, including underutilisation of ICT in science lessons, primarily due to inadequate digital competence. The study recommends strategies to enhance students’ digital skills through training programs and to foster ICT-oriented teacher communities of practice via professional development, in order to improve digital competence and innovative teaching methods.
{"title":"Investigating ICT-led engagement with content in science and basic computing subjects of lower secondary schools in Rwanda","authors":"Olivier Habimana, Mathias Nduwingoma, Irénée Ndayambaje, Jean Francois Maniraho, Ali Kaleeba, Dany Kamuhanda, Evariste Mwumvaneza, Marie Claire Uwera, Albert Ngiruwonsanga, Evode Mukama, Celestin Ntivuguruzwa, Gerard Nizeyimana, Ezechiel Nsabayezu","doi":"10.1007/s10639-024-12904-8","DOIUrl":"https://doi.org/10.1007/s10639-024-12904-8","url":null,"abstract":"<p>This study explores the level of engagement with Information and Communication Technology (ICT) supported content among students and teachers in learning sciences and basic computing at Rwandan lower secondary schools. Data were collected from ten well-equipped smart classrooms across ten schools. A sample of 394 participants included ten deputy headteachers, 40 teachers, and 344 students. Interviews, classroom observations, and surveys were used for data collection. Findings revealed a significant digital divide among students due to limited ICT literacy, time constraints, and limited access to computer devices. Also, the findings indicate that teachers faced various challenges, including underutilisation of ICT in science lessons, primarily due to inadequate digital competence. The study recommends strategies to enhance students’ digital skills through training programs and foster ICT-oriented teacher communities of practice via professional development to improve digital competence and innovative teaching methods.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"4 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141871410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-27 | DOI: 10.1007/s10639-024-12911-9
Hatice Yildiz Durak
Feedback is critical for providing learners with personalized information about educational processes and for supporting their performance in online collaborative learning environments. However, giving effective feedback and monitoring its effects, which is especially important in online environments, is a complex issue. Although feedback can be generated by analyzing online learning behaviors, it is unclear how the effectiveness of such feedback translates into online learning experiences. The current study aims to compare the behavioral patterns of online system engagement of students who receive and do not receive machine learning-based temporal learning analytics (ML-LA) feedback, to identify differences between student groups in terms of learning performance, online engagement, and various system usage variables, and to examine how students' behavioral patterns of online system engagement change over time. The study was conducted with 49 undergraduate students and defined three engagement levels using system usage analytics and cluster analysis. A t-test and ANCOVA were applied to pre-test and post-test scores to evaluate students' learning performance and online engagement, lag sequential analysis was used to analyze behavioral patterns, and a Markov chain was used to examine the change in behavioral patterns over time. The group receiving ML-LA feedback showed higher behavioral and cognitive engagement than the control group, and its rate of completing learning tasks was higher. Temporal patterns of online engagement behaviors across student groups are described and compared. Both groups used all stages of the system's features, although there were some differences in navigation rankings. The most important behavioral transitions in the experimental group were between viewing and posting tasks and discussions, updating task posts, and viewing group performance. In the control group, the most important behavioral transition was between viewing a discussion and posting to a discussion, followed by the sequential relationship between viewing individual performance and viewing group performance. Students' engagement behaviors transitioned from light to medium and intense over the semester, especially in the experimental group. For learning designers and researchers, this study can help develop a deeper understanding of learning environment design.
{"title":"Impact of ML-LA feedback system on learners’ academic performance, engagement and behavioral patterns in online collaborative learning environments: A lag sequential analysis and Markov chain approach","authors":"Hatice Yildiz Durak","doi":"10.1007/s10639-024-12911-9","DOIUrl":"https://doi.org/10.1007/s10639-024-12911-9","url":null,"abstract":"<p>Feedback is critical in providing personalized information about educational processes and supporting their performance in online collaborative learning environments. However, giving effective feedback and monitoring its effects, which is especially important in online environments, is a complex issue. Although providing feedback by analyzing online learning behaviors, it is unclear how the effectiveness of this feedback translates into online learning experiences. The current study aims to compare the behavioral patterns of online system engagement of students who receive and do not receive machine learning-based temporal learning analytics (ML-LA) feedback, to identify the differences between student groups in terms of learning performance, online engagement, and various system usage variables, and to examine the behavioral patterns change over time of students regarding online system engagement. The current study was conducted with the participation of 49 undergraduate students. The study defined three engagement levels using system usage analytics and cluster analysis. While t-test and ANCOVA were applied to pre-test and post-test scores to evaluate students’ learning performance and online engagement, lag sequential analysis was used to analyze behavioral patterns, and the Markov chain was used to examine the change of behavioral patterns over time. The group receiving ML-LA feedback showed higher behavior and cognitive engagement than the control group. In addition, the rate of completing learning tasks was higher in the experimental group. Temporal patterns of online engagement behaviors across student groups are described and compared. The results showed that both groups used all stages of the system features. However, there were some differences in the navigation rankings. The most important behavioral transitions in the experimental group are task and discussion viewing and posting, task posting updating, and group performance viewing. In the control group, the most important behavioral transitions are the relationship between viewing a discussion and making a discussion, then this is followed by the sequential relationship between viewing individual performance and viewing group performance. The results showed that students’ engagement behaviors transitioned from light to medium and intense throughout the semester, especially in the experimental group. For learning designers and researchers, this study can help develop a deep understanding of environment design.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"35 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141780965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-27 | DOI: 10.1007/s10639-024-12899-2
Fatima Makda
Virtual teaching gained momentum for its ability to drive education continuity in times of disruption. As a result, the implementation of virtual teaching has piqued the attention of the higher education sector, which seeks to leverage the affordances of this mode of instructional delivery even in times of non-disruption. This study conducts a review of virtual teaching in the higher education sector to reveal the key research trends of previous publications and areas of focus for future research. A bibliometric analysis is used to identify the key topics, themes, authors, sources, articles, and existing collaborations. To achieve this, papers indexed in the Scopus database between 2012 and 2023 were examined and analysed using VOSviewer. The findings of the study are provided through a quantitative analysis that gives a high-level overview of virtual teaching in the higher education sector and highlights key indicators of article production and citation through tables, graphs, and visualisation maps. The research yielded a total of 5,663 publications, of which 2,635 published articles were included in the analysis. The findings reiterate virtual teaching as a move in the direction of sustainable education, as it assists in democratising knowledge. The analysis highlights the multifaceted nature of research on virtual teaching, revealing six distinct yet interconnected thematic clusters. This study provides a holistic picture of virtual teaching in the higher education sector by integrating the analysis results with pertinent reviews of the literature and makes recommendations for future research.
{"title":"Digital education: Mapping the landscape of virtual teaching in higher education – a bibliometric review","authors":"Fatima Makda","doi":"10.1007/s10639-024-12899-2","DOIUrl":"https://doi.org/10.1007/s10639-024-12899-2","url":null,"abstract":"<p>Virtual teaching gained momentum for its ability to drive education continuity in times of disruption. As a result, the implementation of virtual teaching has piqued the attention of the higher education sector to leverage the affordances of this mode of instructional delivery, even in times of non-disruption. This study aims to conduct a review of virtual teaching in the higher education sector to reveal the key research trends of previous publications and areas of focus for future research. A bibliometric analysis is used to identify the key topics, themes, authors, sources, articles, and existing collaborations. To achieve this, papers indexed in the Scopus database between 2012 and 2023 were examined and analysed using VOSviewer. The findings of the study are provided through a quantitative analysis that gives a high-level overview of virtual teaching in the higher education sector and highlights the key performance indicators for the creation of articles and their citation through tables, graphs, and visualisation maps. The research yielded a total of 5,663 publications, of which 2,635 published articles were included in the analysis. The findings reiterate virtual teaching as a move in the direction of sustainable education as its assists in democratising knowledge. The analysis highlights the multifaceted nature of the research topic on virtual teaching, revealing six distinct yet interconnected thematic clusters. This study provides a holistic picture of virtual teaching in the higher education sector by integrating the analysis results with pertinent reviews of literature and makes recommendations for future research.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"945 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141780966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-27 | DOI: 10.1007/s10639-024-12905-7
Chengming Zhang, Min Hu, Weidong Wu, Farrukh Kamran, Xining Wang
Artificial intelligence (AI) integration in education has grown significantly in recent years. However, the potential risks of AI have led educators to be wary of implementing AI systems. To discover whether AI systems can be effective in the classroom in the future, it is critical to understand how risk factors (e.g., perceived safety risks, perceived privacy risks, and urban/rural differences) affect pre-service teachers’ AI acceptance. Therefore, the study aimed to (1) explore the influence of perceived risks and AI trust on pre-service teachers’ intentions to use AI-based educational applications, and (2) investigate possible variations in the potential determinants of their intentions to use AI based on urban–rural differences. In this study, data from 483 pre-service teachers in China (262 from rural areas) were obtained by survey and analyzed using confirmatory factor analysis (CFA) and structural equation modeling-based multi-group analysis. The study’s findings demonstrated that while AI trust influenced pre-service teachers’ AI acceptance, its effect was less pronounced than that of perceived ease of use and perceived usefulness. Most notably, the findings showed that perceived privacy and safety risks negatively influence AI trust among pre-service teachers from rural areas, a trend not observed among pre-service teachers from urban areas. As a result, for AI-based applications to be integrated into educational settings, pre-service teachers believed that the AI system must be functionally robust, user-friendly, and transparent. In addition, urban–rural differences considerably affect pre-service teachers’ AI acceptance. This study provides further relevant recommendations for educators and policymakers.
{"title":"Unpacking perceived risks and AI trust influences pre-service teachers’ AI acceptance: A structural equation modeling-based multi-group analysis","authors":"Chengming Zhang, Min Hu, Weidong Wu, Farrukh Kamran, Xining Wang","doi":"10.1007/s10639-024-12905-7","DOIUrl":"https://doi.org/10.1007/s10639-024-12905-7","url":null,"abstract":"<p>Artificial intelligence (AI) integration in education has grown significantly recently. However, the potential risks of AI have led to educators being wary of implementing AI systems. To discover whether AI systems can be effective in the classroom in the future, it is critical to understand how risk factors (e.g., perceived safety risks, perceived privacy risks, and urban/rural differences) affect pre-service teachers’ AI acceptance. Therefore, the study aimed to (1) explore the influence of perceived risks and AI trust on pre-service teachers’ intentions to use AI-based educational applications, and (2) investigate possible variations in potential determinants of their intentions to use AI based on urban–rural differences. In this study, data from 483 pre-service teachers in China (262 from rural areas) were obtained by survey and analyzed using confirmatory factor analysis (CFA) and structural equation modeling-based multi-group analysis. The study’s findings demonstrated that while AI trust influenced pre-service teachers’ AI acceptance, the effect was less pronounced than perceived ease of use and perceived usefulness. Most notably, findings showed that perceived privacy and safety risks negatively influence AI trust among pre-service teachers from rural areas, which was a trend not observed in pre-service teachers from urban areas. As a result, to integrate AI-based applications into educational settings, pre-service teachers believed that the AI system must be functionally robust, user-friendly, and transparent. In addition, urban–rural differences considerably affect pre-service teachers’ AI acceptance. This study provides further relevant recommendations for educators and policymakers.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"63 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141780964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-27 | DOI: 10.1007/s10639-024-12912-8
Emmanuel Fokides, Eirini Peristeraki
This research analyzed the efficacy of ChatGPT as a tool for correcting and providing feedback on primary school students' short essays written in both English and Greek. The accuracy and qualitative aspects of ChatGPT-generated corrections and feedback were compared with those of educators. For the essays written in English, ChatGPT outperformed the educators in both quantity and quality: it detected more mistakes, provided more detailed feedback, its focus was similar to that of educators, its orientation was more balanced, and it was more positive, although more academic/formal in style/tone. For the essays written in Greek, ChatGPT did not perform as well as the educators. Although it provided more detailed feedback and detected roughly the same number of mistakes, it incorrectly flagged correctly written words and/or phrases as mistakes. Moreover, compared to educators, it focused less on language mechanics and delivered less balanced feedback in terms of orientation. In terms of style/tone, there were no significant differences. When comparing ChatGPT's performance on English and Greek short essays, it performed better in the former language on both the quantitative and qualitative parameters that were examined. The implications of these findings are also discussed.
{"title":"Comparing ChatGPT's correction and feedback comments with that of educators in the context of primary students' short essays written in English and Greek","authors":"Emmanuel Fokides, Eirini Peristeraki","doi":"10.1007/s10639-024-12912-8","DOIUrl":"https://doi.org/10.1007/s10639-024-12912-8","url":null,"abstract":"<p>This research analyzed the efficacy of ChatGPT as a tool for the correction and provision of feedback on primary school students' short essays written in both the English and Greek languages. The accuracy and qualitative aspects of ChatGPT-generated corrections and feedback were compared to that of educators. For the essays written in English, it was found that ChatGPT outperformed the educators both in terms of quantity and quality. It detected more mistakes, provided more detailed feedback, its focus was similar to that of educators, its orientation was more balanced, and it was more positive although more academic/formal in terms of style/tone. For the essays written in Greek, ChatGPT did not perform as well as educators did. Although it provided more detailed feedback and detected roughly the same number of mistakes, it incorrectly flagged as mistakes correctly written words and/or phrases. Moreover, compared to educators, it focused less on language mechanics and delivered less balanced feedback in terms of orientation. In terms of style/tone, there were no significant differences. When comparing ChatGPT's performance in English and Greek short essays, it was found that it performed better in the former language in both the quantitative and qualitative parameters that were examined. The implications of the above findings are also discussed.</p>","PeriodicalId":51494,"journal":{"name":"Education and Information Technologies","volume":"70 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141780967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}