Navigating the future: Exploring in-service teachers' preparedness for artificial intelligence integration into South African schools
Pub Date: 2024-11-06 | DOI: 10.1016/j.caeai.2024.100330
Musa Adekunle Ayanwale, Sibusiso D. Ntshangase, Owolabi Paul Adelana, Kunle Waheed Afolabi, Umar A. Adam, Stella Oluwakemi Olatunbosun
This study contributes to existing research on how to integrate artificial intelligence (AI) into school systems globally. It explores in-service teachers' preparedness to integrate AI into schools, within the context of the South African school system and with teachers of various specializations, including the sciences, social sciences, mathematics, and languages. Drawing on the extended Unified Theory of Acceptance and Use of Technology (UTAUT2), we gathered teachers' perspectives on eight variables: technology integration, social influence, AI ethics, attitudes, TPACK, perceived self-efficacy, AI professional development, and AI preparedness. We analyzed data from the 430 participating teachers using a structural equation modeling approach in SmartPLS software (version 4.1.0.0). Our results indicate that technology integration, social influence, attitudes, and perceived self-efficacy influence teachers' preparedness for AI, whereas TPACK and AI ethics do not. The study further presents insights from mediation and moderation analyses of these variables. We discuss our findings and highlight their implications for practice and policy.
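The mediation and moderation analyses reported here were run in SmartPLS; as a rough illustration of the mediation logic only, the sketch below estimates a single indirect effect with a percentile bootstrap. The path (perceived self-efficacy → attitudes → AI preparedness), the simulated data, and the variable names are assumptions for illustration, not the authors' model.

```python
# Illustrative bootstrap test of one indirect (mediation) effect, a*b.
# Hypothetical path: self_efficacy -> attitude -> ai_preparedness.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 430  # matches the reported sample size
df = pd.DataFrame({"self_efficacy": rng.normal(size=n)})
df["attitude"] = 0.5 * df["self_efficacy"] + rng.normal(size=n)
df["ai_preparedness"] = 0.4 * df["attitude"] + 0.2 * df["self_efficacy"] + rng.normal(size=n)

def indirect_effect(data):
    # a-path: predictor -> mediator; b-path: mediator -> outcome (controlling for predictor)
    a = smf.ols("attitude ~ self_efficacy", data=data).fit().params["self_efficacy"]
    b = smf.ols("ai_preparedness ~ attitude + self_efficacy", data=data).fit().params["attitude"]
    return a * b

boot = [indirect_effect(df.sample(frac=1.0, replace=True)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```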
{"title":"Navigating the future: Exploring in-service teachers' preparedness for artificial intelligence integration into South African schools","authors":"Musa Adekunle Ayanwale , Sibusiso D. Ntshangase , Owolabi Paul Adelana , Kunle Waheed Afolabi , Umar A. Adam , Stella Oluwakemi Olatunbosun","doi":"10.1016/j.caeai.2024.100330","DOIUrl":"10.1016/j.caeai.2024.100330","url":null,"abstract":"<div><div>This study contributes to existing research on how to integrate Artificial intelligence (AI) into school systems globally. This research explores in-service teachers' preparedness for integrating artificial intelligence into schools. We conducted this research within the context of the South African school system with teachers of various specializations, including sciences, social Sciences, mathematics, and languages. Drawing on the extended Unified Theory of Acceptance and Use of Technology (UTAUT2), we gathered teachers' perspectives through eight variables of technology integration, social influence, AI ethics, attitudes, TPACK, perceived self-efficacy, AI professional development, and AI preparedness. To analyze the 430 teachers' data involved in this study, we used a structural equation modeling analytical approach with SmartPLS software version 4.1.0.0. Our results indicate that technology integration, social influence, attitudes, and perceived self-efficacy influence teachers’ preparedness for AI. However, TPACK and ethics do not influence preparing teachers to integrate AI into schools. This study further presents interesting insight based on the mediation and moderation analysis of the variables. We discuss our findings and highlight their implications for practice and policy.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100330"},"PeriodicalIF":0.0,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142658317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the paradoxical use of ChatGPT in education: Analyzing benefits, risks, and coping strategies through integrated UTAUT and PMT theories using a hybrid approach of SEM and fsQCA
Pub Date: 2024-11-05 | DOI: 10.1016/j.caeai.2024.100329
Wen-Ling Hsu, Andri Dayarana K. Silalahi
ChatGPT's impact on education is both significant and inevitable, presenting a paradox of threats and benefits. While studies have explored ChatGPT's usage, intention, and motivation, few have addressed its paradoxical use from a benefit-risk-coping perspective, leaving gaps in practical and theoretical solutions. This study integrates the Unified Theory of Acceptance and Use of Technology (UTAUT) and Protection Motivation Theory (PMT) to elucidate ChatGPT's paradoxical use in education. Using Structural Equation Modeling (SEM) and fuzzy-set Qualitative Comparative Analysis (fsQCA), it aims to provide insights and solutions for managing the benefits, risks, and coping mechanisms associated with ChatGPT in education. Hypotheses and propositions were tested on Taiwanese higher education users (N = 351). Findings from SEM confirm that the perceived severity of ChatGPT's threat decreases the intention to use it, while coping factors such as self-efficacy and response efficacy strongly predict the intention to use ChatGPT. Additionally, the benefits of performance expectancy and task efficiency significantly increase the intention to use ChatGPT, which in turn significantly increases actual usage behavior. Findings from fsQCA reveal three configurations each for the use and non-use of ChatGPT in education, and identify three constructs as necessary conditions. This research makes a significant theoretical contribution by integrating UTAUT and PMT into a unified framework to elucidate the benefit-risk-coping paradox in the use of ChatGPT. Practical implications for higher education institutions and their members (e.g., students, lecturers, researchers) are also provided through the confirmed solutions from the fsQCA model. These solutions offer strategic insights to leverage the benefits, identify the risks, and cope with the threats associated with using ChatGPT in education.
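The fsQCA step assesses whether configurations of conditions are sufficient for the outcome using consistency and coverage scores. Below is a minimal sketch of those two computations for one hypothetical configuration (high performance expectancy AND high response efficacy leading to intention to use); the fuzzy-set memberships are simulated and the condition choice is an assumption, not one of the study's three configurations.

```python
# Minimal fsQCA-style consistency/coverage for one configuration (X = fuzzy AND of conditions).
import numpy as np

rng = np.random.default_rng(1)
n = 351  # matches the reported sample size
# Hypothetical fuzzy-set memberships in [0, 1] after calibration.
performance_expectancy = rng.uniform(size=n)
response_efficacy = rng.uniform(size=n)
intention_to_use = np.clip(0.6 * performance_expectancy + 0.3 * response_efficacy
                           + rng.normal(0, 0.1, n), 0, 1)

X = np.minimum(performance_expectancy, response_efficacy)  # fuzzy AND of the two conditions
Y = intention_to_use

consistency = np.sum(np.minimum(X, Y)) / np.sum(X)  # degree to which X implies Y
coverage = np.sum(np.minimum(X, Y)) / np.sum(Y)     # share of Y accounted for by this path
print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
```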
{"title":"Exploring the paradoxical use of ChatGPT in education: Analyzing benefits, risks, and coping strategies through integrated UTAUT and PMT theories using a hybrid approach of SEM and fsQCA","authors":"Wen-Ling Hsu, Andri Dayarana K. Silalahi","doi":"10.1016/j.caeai.2024.100329","DOIUrl":"10.1016/j.caeai.2024.100329","url":null,"abstract":"<div><div>ChatGPT's impact on education is both significant and inevitable, presenting a paradox of threats and benefits. While studies have explored ChatGPT's usage, intention, and motivation, few have addressed its paradoxical use from a benefit-risk-coping perspective, leaving gaps in practical and theoretical solutions. This study integrates the Unified Theory of Acceptance and Use of Technology (UTAUT) and Protection Motivation Theory (PMT) to elucidate ChatGPT's paradoxical use in education. Using Structural Equation Modeling (SEM) and fuzzy sets Qualitative Comparative Analysis (fsQCA), it aims to provide insights and solutions for managing the benefits, risks, and coping mechanisms associated with ChatGPT in education. Hypotheses and propositions were tested on Taiwanese higher education users (N = 351). Findings from SEM confirm that the perceived threat of ChatGPT's severity decreases the intention to use it, while coping strategies such as self-efficacy and response efficacy strongly predict the intention to use ChatGPT. Additionally, the benefits of performance expectancy and task efficiency significantly increase the intention to use ChatGPT, which in turn significantly increases actual usage behavior. Findings from fsQCA reveal three configurations for both the usage and disusage of ChatGPT in education. The study identifies three constructs with necessary conditions. This research makes a significant theoretical contribution by integrating UTAUT and PMT into a unified framework to elucidate the paradoxical aspects of benefit-risk-coping in the use of ChatGPT. Practical implications for higher educational institutions and their scholars (e.g., students, lecturers, researchers) are also provided through confirmed solutions from the QCA model. These solutions offer strategic insights to leverage the benefits, identify the risks, and cope with the threats associated with using ChatGPT in education.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100329"},"PeriodicalIF":0.0,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142658164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining teachers' behavioural intention of using generative artificial intelligence tools for teaching and learning based on the extended technology acceptance model
Pub Date: 2024-11-01 | DOI: 10.1016/j.caeai.2024.100328
Siu Cheung Kong, Yin Yang, Chunyu Hou
The rapid development of generative artificial intelligence (GenAI) tools has given rise to a growing discussion of the potential challenges and benefits that the use of these technologies may present in the field of education. This study examines the acceptance of the use of GenAI tools for teaching and learning among primary and secondary school teachers in Hong Kong. It uses an extension of the technology acceptance model (TAM) with a modified framework that incorporates two key factors: self-efficacy and subjective norm. Data were collected from a sample of 367 primary and secondary school teachers in Hong Kong using questionnaires containing items for six constructs: self-efficacy, perceived usefulness, perceived ease of use, attitude towards using, subjective norm, and behavioural intention. The results show that fostering teachers' self-efficacy, perceived usefulness, and attitude is essential for successfully increasing their behavioural intention to use GenAI tools. Subjective norm was also found to influence teachers' behavioural intention. To enhance teachers' effective use of GenAI for teaching, teacher development programmes should focus on equipping teachers with comprehensive conceptual knowledge and skills and an understanding of the application of these tools to teaching and learning. Policy support to create a conducive environment for the use of GenAI in teaching and learning would also be beneficial. The study has theoretical implications in its extension of the TAM model as well as implications for enhancing teachers’ AI literacy and developing pedagogies for the meaningful use of GenAI tools for teaching and learning in K–12 settings.
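Before estimating paths in a TAM-style model, questionnaire constructs are usually checked for internal consistency. The following is a minimal sketch of Cronbach's alpha for one hypothetical four-item construct (e.g., self-efficacy); the simulated responses and item count are assumptions, not the study's instrument.

```python
# Cronbach's alpha for one multi-item questionnaire construct (illustrative data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(367, 1))             # matches the reported sample size
items = latent + rng.normal(0, 0.8, (367, 4))  # four hypothetical self-efficacy items
print(f"alpha = {cronbach_alpha(items):.2f}")
```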
{"title":"Examining teachers’ behavioural intention of using generative artificial intelligence tools for teaching and learning based on the extended technology acceptance model","authors":"Siu Cheung Kong, Yin Yang, Chunyu Hou","doi":"10.1016/j.caeai.2024.100328","DOIUrl":"10.1016/j.caeai.2024.100328","url":null,"abstract":"<div><div>The rapid development of generative artificial intelligence (GenAI) tools has given rise to a growing discussion of the potential challenges and benefits that the use of these technologies may present in the field of education. This study examines the acceptance of the use of GenAI tools for teaching and learning among primary and secondary school teachers in Hong Kong. It uses an extension of the technology acceptance model (TAM) with a modified framework that incorporates two key factors: self-efficacy and subjective norm. Data were collected from a sample of 367 primary and secondary school teachers in Hong Kong using questionnaires containing items for six constructs: self-efficacy, perceived usefulness, perceived ease of use, attitude towards using, subjective norm, and behavioural intention. The results show that fostering teachers' self-efficacy, perceived usefulness, and attitude is essential for successfully increasing their behavioural intention to use GenAI tools. Subjective norm was also found to influence teachers' behavioural intention. To enhance teachers' effective use of GenAI for teaching, teacher development programmes should focus on equipping teachers with comprehensive conceptual knowledge and skills and an understanding of the application of these tools to teaching and learning. Policy support to create a conducive environment for the use of GenAI in teaching and learning would also be beneficial. The study has theoretical implications in its extension of the TAM model as well as implications for enhancing teachers’ AI literacy and developing pedagogies for the meaningful use of GenAI tools for teaching and learning in K–12 settings.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100328"},"PeriodicalIF":0.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142658161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating AI-based and conventional cybersecurity measures into online higher education settings: Challenges, opportunities, and prospects
Pub Date: 2024-10-31 | DOI: 10.1016/j.caeai.2024.100327
Medha Mohan Ambali Parambil, Jaloliddin Rustamov, Soha Galalaldin Ahmed, Zahiriddin Rustamov, Ali Ismail Awad, Nazar Zaki, Fady Alnajjar
The rapid adoption of online learning in higher education has resulted in significant cybersecurity challenges. As educational institutions increasingly rely on digital platforms, they are facing cyber threats that can compromise sensitive data and disrupt operations. This systematic literature review explores the integration of artificial intelligence (AI) into traditional methods to address cybersecurity risks in online higher education. The review integrates a qualitative synthesis of relevant literature and a quantitative meta-analysis using PRISMA guidelines, ensuring comprehensive insights into the integration process. The most prevalent cybersecurity threats are examined, and the effectiveness of AI-based and conventional approaches in mitigating these challenges is compared. Additionally, the most effective AI techniques in cybersecurity solutions are analyzed, and their performance is compared across different contexts. Furthermore, the review considers the key ethical and technical considerations associated with integrating AI into traditional cybersecurity methods. The findings reveal that while AI-based techniques offer promising solutions for threat detection, authentication, and privacy preservation, their successful implementation requires careful consideration of data privacy, fairness, transparency, and robustness. The importance of interdisciplinary collaboration, continuous monitoring of AI models—by automated systems and humans—and the need for comprehensive guidelines to ensure responsible and ethical use of AI in cybersecurity are highlighted. The findings of this review provide actionable insights for educational institutions, educators, and students, helping to facilitate the development of secure and resilient online learning environments. The identified ethical and technical considerations can serve as a foundation for the responsible integration of AI into cybersecurity within the online higher-education sector.
{"title":"Integrating AI-based and conventional cybersecurity measures into online higher education settings: Challenges, opportunities, and prospects","authors":"Medha Mohan Ambali Parambil , Jaloliddin Rustamov , Soha Galalaldin Ahmed , Zahiriddin Rustamov , Ali Ismail Awad, Nazar Zaki, Fady Alnajjar","doi":"10.1016/j.caeai.2024.100327","DOIUrl":"10.1016/j.caeai.2024.100327","url":null,"abstract":"<div><div>The rapid adoption of online learning in higher education has resulted in significant cybersecurity challenges. As educational institutions increasingly rely on digital platforms, they are facing cyber threats that can compromise sensitive data and disrupt operations. This systematic literature review explores the integration of artificial intelligence (AI) into traditional methods to address cybersecurity risks in online higher education. The review integrates a qualitative synthesis of relevant literature and a quantitative meta-analysis using PRISMA guidelines, ensuring comprehensive insights into the integration process. The most prevalent cybersecurity threats are examined, and the effectiveness of AI-based and conventional approaches in mitigating these challenges is compared. Additionally, the most effective AI techniques in cybersecurity solutions are analyzed, and their performance is compared across different contexts. Furthermore, the review considers the key ethical and technical considerations associated with integrating AI into traditional cybersecurity methods. The findings reveal that while AI-based techniques offer promising solutions for threat detection, authentication, and privacy preservation, their successful implementation requires careful consideration of data privacy, fairness, transparency, and robustness. The importance of interdisciplinary collaboration, continuous monitoring of AI models—by automated systems and humans—and the need for comprehensive guidelines to ensure responsible and ethical use of AI in cybersecurity are highlighted. The findings of this review provide actionable insights for educational institutions, educators, and students, helping to facilitate the development of secure and resilient online learning environments. The identified ethical and technical considerations can serve as a foundation for the responsible integration of AI into cybersecurity within the online higher-education sector.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100327"},"PeriodicalIF":0.0,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142577926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Getting to know ChatGPT: How business students feel, what they think about personal morality, and how their academic outcomes affect Oman's higher education
Pub Date: 2024-10-30 | DOI: 10.1016/j.caeai.2024.100324
Ahmed Mohamed Elbaz, Islam Elbayoumi Salem, Alyaa Darwish, Nasser Alhamr Alkathiri, Viju Mathew, Hajer Ahmed Al-Kaaf
The present study develops an integrative framework that investigates the relationship between the acceptability of ChatGPT among business students and their attitude, intention to adopt, and performance. This is achieved by examining the moderating role of business students' moral values and religious ethics. Using data collected from 312 university business students in Oman, we show that perceived usefulness (PU), perceived ease of use (PEOU), and perceived convenience (PC) have a positive effect on students' attitudes toward ChatGPT. Business students' attitudes toward ChatGPT have a strong positive influence on their adoption intentions. Notably, students' ChatGPT adoption intentions increased their academic performance. Remarkably, business students' personal morality and religion-related ethics lead them to experience regret or a sense of responsibility for actions that violate academic integrity or ethical standards in their studies. Establishing explicit ethical standards and procedures for the use of artificial intelligence (AI) tools such as ChatGPT in educational settings is vital for higher education institutions (HEIs). This research contributes theoretically by investigating how personal morality and religion-related ethics interact with AI tools such as ChatGPT, which can inform ethical decision-making theories. The study has significant implications for both theory and practice.
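A moderating role such as that of personal morality is commonly tested with an interaction term between the predictor and the moderator. The sketch below illustrates that idea with ordinary least squares on simulated data; the variable names and effect sizes are assumptions, and the study's actual analysis used a structural equation model rather than this simplified regression.

```python
# Illustrative moderation test: does personal morality moderate attitude -> adoption intention?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 312  # matches the reported sample size
df = pd.DataFrame({
    "attitude": rng.normal(size=n),
    "morality": rng.normal(size=n),
})
df["intention"] = (0.5 * df["attitude"] + 0.2 * df["morality"]
                   - 0.3 * df["attitude"] * df["morality"] + rng.normal(size=n))

# The formula attitude * morality expands to both main effects plus their product;
# a significant attitude:morality coefficient indicates moderation.
model = smf.ols("intention ~ attitude * morality", data=df).fit()
print(model.params[["attitude", "morality", "attitude:morality"]])
```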
{"title":"Getting to know ChatGPT: How business students feel, what they think about personal morality, and how their academic outcomes affect Oman's higher education","authors":"Ahmed Mohamed Elbaz , Islam Elbayoumi Salem , Alyaa Darwish , Nasser Alhamr Alkathiri , Viju Mathew , Hajer Ahmed Al-Kaaf","doi":"10.1016/j.caeai.2024.100324","DOIUrl":"10.1016/j.caeai.2024.100324","url":null,"abstract":"<div><div>The present study develops an integrative framework that investigates the relationship between the acceptability of ChatGPT among business students and their attitude, intention to adopt, and performance. This is achieved by examining the moderating role of business students' moral values and religious ethics. Using data collected from 312 university business students in Oman, we show that perceived usefulness (PU), perceived ease of use (PEOU), and perceived convenience (PC) have a positive effect on their attitudes toward ChatGPT. Business students' attitudes toward ChatGPT have a strong positive influence on their adoption intentions. Notably, university business students’ ChatGPT adoption intentions increased their academic performance. Remarkably, business students' Personal Morality and religion-related ethics trigger them to experience regret or a sense of responsibility for their actions that violate academic integrity or ethical standards in their studies. Establishing explicit ethical standards and procedures for the usage of artificial intelligence (AI) tools such as ChatGPT in educational settings is vital for higher educational institutions (HEIs). This research adds to the theoretical intervention of investigating how personal morality and religion-related ethics interact with AI tools such as ChatGPT, which can add to ethical decision-making theories. The current study has significant implications for theory as well as for practice.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100324"},"PeriodicalIF":0.0,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142658162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generative AI in higher education: Seeing ChatGPT through universities' policies, resources, and guidelines
Pub Date: 2024-10-30 | DOI: 10.1016/j.caeai.2024.100326
Hui Wang, Anh Dang, Zihao Wu, Son Mac
The advancements in Generative Artificial Intelligence (GenAI) can provide opportunities for enriching educational experiences, but at the same time raise concerns regarding academic integrity. Many educators have expressed anxiety and hesitation when it comes to integrating GenAI into their teaching practices. Thus, recommendations and guidance from institutions are needed to support instructors in this new and emerging GenAI era. In response to this need, this study explores U.S. universities' academic policies and guidelines regarding the use of GenAI tools (e.g., ChatGPT) for teaching and learning, and from there gains an understanding of how these universities respond and adapt to the development of GenAI in their academic contexts. Data sources include academic policies, statements, guidelines, and relevant resources provided by the top 100 universities in the U.S. Results show that the majority of these universities adopt an open but cautious approach towards GenAI. Primary concerns lie in ethical usage, accuracy, and data privacy. Most universities actively respond and provide diverse types of resources, such as syllabus templates, workshops, shared articles, and one-on-one consultations, covering a range of topics: general technical introductions, ethical concerns, pedagogical applications, preventive strategies, data privacy, limitations, and detection tools. The findings provide four practical pedagogical implications for educators considering GenAI in their teaching practices: 1) accepting GenAI's presence, 2) aligning GenAI use with learning objectives, 3) evolving the curriculum to prevent misuse of GenAI, and 4) adopting multifaceted evaluation strategies. For policy making, the article suggests two possible directions for the use of GenAI tools: 1) establishing discipline-specific policies and guidelines, and 2) managing students' sensitive information in a transparent and careful manner.
{"title":"Generative AI in higher education: Seeing ChatGPT through universities' policies, resources, and guidelines","authors":"Hui Wang , Anh Dang , Zihao Wu , Son Mac","doi":"10.1016/j.caeai.2024.100326","DOIUrl":"10.1016/j.caeai.2024.100326","url":null,"abstract":"<div><div>The advancements in Generative Artificial Intelligence (GenAI) can provide opportunities for enriching educational experiences, but at the same time raise concerns regarding academic integrity. Many educators have expressed anxiety and hesitation when it comes to integrating GenAI in their teaching practices. Thus, recommendations and guidance from institutions are needed to support instructors in this new and emerging GenAI era. In response to this need, this study explores different U.S. universities' academic policies and guidelines regarding the use of GenAI tools (e.g., ChatGPT) for teaching and learning, and from there, gains understanding of how these universities respond and adapt to the development of GenAI in their academic contexts. Data sources include academic policies, statements, guidelines, and relevant resources provided by the top 100 universities in the U.S. Results show that the majority of these universities adopt an open but cautious approach towards GenAI. Primary concerns lie in ethical usage, accuracy, and data privacy. Most universities actively respond and provide diverse types of resources, such as syllabus templates, workshops, shared articles, and one-on-one consultations; focusing on a range of topics, namely general technical introduction, ethical concerns, pedagogical applications, preventive strategies, data privacy, limitations, and detective tools. The findings provide four practical pedagogical implications for educators when considering GenAI in teaching practices: 1) accepting GenAI presence, 2) aligning GenAI use with learning objectives, 3) evolving curriculum to prevent misuse of GenAI, and 4) adopting multifaceted evaluation strategies. For recommendations toward policy making, the article suggests two possible directions for the use of GenAI tools: 1) establishing discipline-specific policies and guidelines, and 2) managing students' sensitive information in a transparent and careful manner.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100326"},"PeriodicalIF":0.0,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142586522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI in informal digital English learning: A meta-analysis of its effectiveness on proficiency, motivation, and self-regulation
Pub Date: 2024-10-29 | DOI: 10.1016/j.caeai.2024.100323
Lihang Guan, Shaofeng Li, Mingyue Michelle Gu
This meta-analysis examines the efficacy of generative artificial intelligence (GenAI) in second language acquisition within self-directed, out-of-classroom informal contexts. A total of 15 studies meeting the inclusion criteria were identified that examined the impact of GenAI on second-language proficiency, motivation, and self-regulation. GenAI was shown to have significant effects on English proficiency and self-regulation, demonstrating its versatility in enhancing language learning outcomes. However, GenAI failed to show significant effects on learning motivation, and based on this finding we highlight the need to develop measures of motivation that are suitable for GenAI in education. Possible ways to apply GenAI in the informal language learning environment are also discussed based on the included literature.
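A meta-analysis of this kind pools study-level effect sizes, typically under a random-effects model. The sketch below implements the standard DerSimonian-Laird estimator on a handful of made-up effect sizes; the numbers are purely illustrative and do not correspond to the 15 included studies.

```python
# DerSimonian-Laird random-effects pooling of study effect sizes (illustrative data).
import numpy as np

g = np.array([0.45, 0.62, 0.30, 0.80, 0.55])   # hypothetical Hedges' g per study
v = np.array([0.04, 0.06, 0.05, 0.09, 0.03])   # hypothetical sampling variances

w = 1 / v                                      # fixed-effect (inverse-variance) weights
q = np.sum(w * (g - np.sum(w * g) / np.sum(w)) ** 2)   # Cochran's Q
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(g) - 1)) / c)        # between-study variance estimate

w_star = 1 / (v + tau2)                        # random-effects weights
pooled = np.sum(w_star * g) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
print(f"pooled g = {pooled:.2f}, 95% CI [{pooled - 1.96*se:.2f}, {pooled + 1.96*se:.2f}]")
```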
{"title":"AI in informal digital English learning: A meta-analysis of its effectiveness on proficiency, motivation, and self-regulation","authors":"Lihang Guan , Shaofeng Li , Mingyue Michelle Gu","doi":"10.1016/j.caeai.2024.100323","DOIUrl":"10.1016/j.caeai.2024.100323","url":null,"abstract":"<div><div>This meta-analysis examines the efficacy of generative artificial intelligence (GenAI) in second language acquisition within self-directed, out-of-classroom informal contexts. A total of 15 studies meeting the inclusion criteria were identified that examined the impact of GenAI on second-language proficiency, motivation, and self-regulation. GenAI was shown to have significant effects on English proficiency and self-regulation, demonstrating its versatility in enhancing language learning outcomes. However, GenAI failed to show significant effects on learning motivation, and based on this finding we highlight the need to develop measures of motivation that are suitable for GenAI in education. Possible ways to apply GenAI in the informal language learning environment are also discussed based on the included literature.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100323"},"PeriodicalIF":0.0,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142571772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the performance of ChatGPT and GPT-4o in coding classroom discourse data: A study of synchronous online mathematics instruction
Pub Date: 2024-10-28 | DOI: 10.1016/j.caeai.2024.100325
Simin Xu, Xiaowei Huang, Chung Kwan Lo, Gaowei Chen, Morris Siu-yung Jong
High-quality instruction is essential to facilitating student learning, prompting many professional development (PD) programmes for teachers to focus on improving classroom dialogue. However, during PD programmes, analysing discourse data is time-consuming, delaying feedback on teachers' performance and potentially impairing the programmes' effectiveness. We therefore explored the use of ChatGPT (a fine-tuned GPT-3.5 series model) and GPT-4o to automate the coding of classroom discourse data. We equipped these AI tools with a codebook designed for mathematics discourse and academically productive talk. Our dataset consisted of over 400 authentic talk turns in Chinese from synchronous online mathematics lessons. The coding outcomes of ChatGPT and GPT-4o were quantitatively compared against a human standard. Qualitative analysis was conducted to understand their coding decisions. The overall agreement between the human standard, ChatGPT output, and GPT-4o output was moderate (Fleiss's Kappa = 0.46) when classifying talk turns into major categories. Pairwise comparisons indicated that GPT-4o (Cohen's Kappa = 0.69) had better performance than ChatGPT (Cohen's Kappa = 0.33). However, at the code level, the performance of both AI tools was unsatisfactory. Based on the identified competences and weaknesses, we propose a two-stage approach to classroom discourse analysis. Specifically, GPT-4o can be employed for the initial category-level analysis, following which teacher educators can conduct a more detailed code-level analysis and refine the coding outcomes. This approach can facilitate timely provision of analytical resources for teachers to reflect on their teaching practices.
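The agreement statistics reported here (Cohen's and Fleiss's kappa) can be reproduced in a few lines once the coded data are available. A minimal sketch of the pairwise human-versus-AI comparison using scikit-learn is shown below; the talk-turn labels are hypothetical placeholders, not the study's codebook categories.

```python
# Pairwise agreement between a human coder and two AI coders on talk-turn categories.
from sklearn.metrics import cohen_kappa_score

# Hypothetical major-category labels for a few talk turns.
human = ["question", "explain", "revoice", "explain",  "question", "other"]
gpt4o = ["question", "explain", "revoice", "explain",  "revoice",  "other"]
gpt35 = ["question", "other",   "revoice", "question", "revoice",  "other"]

print("GPT-4o vs human :", round(cohen_kappa_score(human, gpt4o), 2))
print("ChatGPT vs human:", round(cohen_kappa_score(human, gpt35), 2))
```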
{"title":"Evaluating the performance of ChatGPT and GPT-4o in coding classroom discourse data: A study of synchronous online mathematics instruction","authors":"Simin Xu , Xiaowei Huang , Chung Kwan Lo , Gaowei Chen , Morris Siu-yung Jong","doi":"10.1016/j.caeai.2024.100325","DOIUrl":"10.1016/j.caeai.2024.100325","url":null,"abstract":"<div><div>High-quality instruction is essential to facilitating student learning, prompting many professional development (PD) programmes for teachers to focus on improving classroom dialogue. However, during PD programmes, analysing discourse data is time-consuming, delaying feedback on teachers' performance and potentially impairing the programmes' effectiveness. We therefore explored the use of ChatGPT (a fine-tuned GPT-3.5 series model) and GPT-4o to automate the coding of classroom discourse data. We equipped these AI tools with a codebook designed for mathematics discourse and academically productive talk. Our dataset consisted of over 400 authentic talk turns in Chinese from synchronous online mathematics lessons. The coding outcomes of ChatGPT and GPT-4o were quantitatively compared against a human standard. Qualitative analysis was conducted to understand their coding decisions. The overall agreement between the human standard, ChatGPT output, and GPT-4o output was moderate (Fleiss's Kappa = 0.46) when classifying talk turns into major categories. Pairwise comparisons indicated that GPT-4o (Cohen's Kappa = 0.69) had better performance than ChatGPT (Cohen's Kappa = 0.33). However, at the code level, the performance of both AI tools was unsatisfactory. Based on the identified competences and weaknesses, we propose a two-stage approach to classroom discourse analysis. Specifically, GPT-4o can be employed for the initial category-level analysis, following which teacher educators can conduct a more detailed code-level analysis and refine the coding outcomes. This approach can facilitate timely provision of analytical resources for teachers to reflect on their teaching practices.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100325"},"PeriodicalIF":0.0,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142571771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining the effect of ChatGPT usage on students' academic learning and achievement: A survey-based study in Ajman, UAE
Pub Date: 2024-10-24 | DOI: 10.1016/j.caeai.2024.100316
Enaam Youssef, Mervat Medhat, Soumaya Abdellatif, Mahra Al Malek
This research examines the use of ChatGPT among university-level students in the United Arab Emirates (UAE) and its effects on their learning experiences. The focus is on the effects of ChatGPT usage on Student Engagement, Critical Thinking Abilities, and Academic Achievement. The research employs a cross-sectional design and is grounded in Constructivism Learning Theory. Data gathered from 353 structured questionnaires were analyzed using Partial Least Squares Structural Equation Modelling (PLS-SEM). Results showed that ChatGPT usage positively affects student engagement in the learning process. The effect of ChatGPT usage on Critical Thinking Abilities was also significant. Finally, the findings indicated a positive effect of ChatGPT usage on the Academic Achievement of Emirati students. These results imply a robust, positive, and constructive role for AI technology, particularly ChatGPT, in the education and learning journey of university students in the UAE. It is concluded that ChatGPT is a useful tool that helps students by providing resources and suggestions throughout their learning process. It increases engagement, effort, and ambition in academic tasks, enhancing academic achievement. ChatGPT supports educational progress and motivates students to obtain knowledge by improving their interest in learning. Finally, the study's implications and limitations are discussed, and recommendations for future studies are proposed.
{"title":"Examining the effect of ChatGPT usage on students’ academic learning and achievement: A survey-based study in Ajman, UAE","authors":"Enaam Youssef , Mervat Medhat , Soumaya Abdellatif , Mahra Al Malek","doi":"10.1016/j.caeai.2024.100316","DOIUrl":"10.1016/j.caeai.2024.100316","url":null,"abstract":"<div><div>This research examines the use of ChatGPT among university-level students in the United Arab Emirates (UAE) and its effects on their learning experiences. The precise focus remains on the effects of ChatGPT usage on Student Engagement, Critical Thinking Abilities, and Academic Achievement. Using the cross-sectional design, the Constructivism Learning Theory supports this research. Data gathered using 353 structured questionnaires is analyzed using Partial Least Square-Structural Equation Modelling (PLS-SEM). Results showed that ChatGPT usage positively affects student engagement in the learning process. The effect of ChatGPT usage on Critical Thinking Abilities also remained significant. Finally, the findings indicated the positive effect of ChatGPT usage on the Academic Achievement of Emirati students. These results imply a robust, positive, and constructive role of AI technology, particularly ChatGPT, in the education and learning journey of university students in the UAE. It is concluded that ChatGPT is a useful tool that helps students by providing resources and suggestions throughout their learning process. It increases engagement, effort, and ambition in academic tasks, enhancing academic achievement. ChatGPT supports educational progress and motivates students to obtain knowledge by improving their interest in learning. Finally, the study's implications and limitations are discussed. Also, recommendations for future studies are proposed.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100316"},"PeriodicalIF":0.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142658160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The digital fingerprint of learner behavior: Empirical evidence for individuality in learning using deep learning
Pub Date: 2024-10-21 | DOI: 10.1016/j.caeai.2024.100322
Asaf Salman, Giora Alexandron
Personalized learning builds upon the fundamental assumption of uniqueness in learning behavior, often taken for granted. Quite surprisingly, however, the literature provides little to no empirical evidence backing the existence of individual learning behaviors. Driven by curiosity, we challenge this axiom. Our operationalization of a unique learning behavior draws an analogy to a fingerprint – a distinctive trait that sets individuals apart, which we correspondingly termed the ‘Digital Fingerprint of Learner Behavior’ (DFL). If such a thing as DFL truly exists, then given enough fine-grained behavioral data, we argue that it should be possible to model a DFL to a level of discriminability that enables training machine learning models to associate (map) between the (de-identified) digital traces of the same learner in diverse contexts. To test our hypothesis, we experimented with data from 24 MITx massive open online courses (MOOCs) offered via edX between 2014 and 2017. We focused our investigation on contexts where both the content and platform remain constant. A learner's DFL was computed from the learner's activity data within a specific course chapter, as stored in the system's logs. The results show that the mean level of accuracy (across courses) in identifying unseen DFLs is 0.582 (SD=0.173). Using Shapley Additive exPlanations (SHAP), we rank 686 features for their importance in differentiating between DFLs. To the best of our knowledge, this study is the first to provide empirical evidence that learners' behavior is unique to a degree that can distinguish between them on an individual level, similar to the level of identification provided by a fingerprint, and sets a benchmark for the task of DFL identification.
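Ranking a large feature set by mean absolute SHAP value, as done here for the 686 behavioral features, can be sketched as follows; the synthetic data and the tree-based classifier are assumptions for illustration and stand in for the study's deep-learning pipeline.

```python
# Rank behavioral features by mean |SHAP value| for a classifier (illustrative setup).
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for fine-grained behavioral features per learner.
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # (n_samples, n_features) for a binary GBM

importance = np.abs(shap_values).mean(axis=0)   # mean |SHAP| per feature
ranking = np.argsort(importance)[::-1]          # most discriminative features first
print("top features:", ranking[:5])
```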
{"title":"The digital fingerprint of learner behavior: Empirical evidence for individuality in learning using deep learning","authors":"Asaf Salman, Giora Alexandron","doi":"10.1016/j.caeai.2024.100322","DOIUrl":"10.1016/j.caeai.2024.100322","url":null,"abstract":"<div><div>Personalized learning builds upon the fundamental assumption of uniqueness in learning behavior, often taken for granted. Quite surprisingly, however, the literature provides little to no empirical evidence backing the existence of individual learning behaviors. Driven by curiosity, we challenge this axiom. Our operationalization of a unique learning behavior draws an analogy to a fingerprint – a distinctive trait that sets individuals apart, which we correspondingly termed the ‘Digital Fingerprint of Learner Behavior’ (DFL). If such a thing as DFL truly exists, then given enough fine-grained behavioral data, we argue that it should be possible to model a DFL to a level of discriminability that enables training machine learning models to associate (map) between the (de-identified) digital traces of the same learner in diverse contexts. To test our hypothesis, we experimented with data from 24 MITx massive open online courses (MOOCs) offered via edX between 2014 and 2017. We focused our investigation on contexts where both the content and platform remain constant. A learner's DFL was computed from the learner's activity data within a specific course chapter, as stored in the system's logs. The results show that the mean level of accuracy (across courses) in identifying unseen DFLs is 0.582 (<em>SD</em>=0.173). Using Shapley Additive exPlanations (SHAP), we rank 686 features for their importance in differentiating between DFLs. To the best of our knowledge, this study is the first to provide empirical evidence that learners' behavior is unique to a degree that can distinguish between them on an individual level, similar to the level of identification provided by a fingerprint, and sets a benchmark for the task of DFL identification.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100322"},"PeriodicalIF":0.0,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142533197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}