In the context of information and communication technology (ICT)-enhanced teaching, teacher well-being plays a crucial role in promoting teaching effectiveness and students’ learning achievement. Drawing on the interactionist model of teacher well-being, this study investigated university teachers’ well-being (e.g., emotional exhaustion and teacher engagement) in ICT-enhanced teaching and its associations with their self-efficacy (e.g., classroom management, instructional strategy and course design) and teaching support (e.g., autonomy support, teaching resources and peer support). The results of an online questionnaire survey conducted among 836 university teachers in China indicated that the enhanced integration of ICT into teaching practices neither impaired teacher engagement nor caused them significant emotional exhaustion. Instead, adequate teaching resources and autonomy support contributed positively to both teacher self-efficacy and engagement. Increased efficacy in course design and classroom management alleviated their emotional exhaustion. Moreover, teacher self-efficacy significantly mediated the effects of autonomy support on emotional exhaustion and teacher engagement. These results have practical implications for understanding and promoting university teachers’ well-being as well as teaching effectiveness in ICT-enhanced teaching environments.

Implications for practice or policy:
- Administrators may consider providing adequate resources geared towards enhancing university teachers’ confidence and engagement in ICT-enhanced teaching.
- Administrators may avoid introducing excessive and burdensome initiatives to university teachers to prevent teacher emotional exhaustion.
- University teachers may be granted significant autonomy in selecting their preferred teaching platforms, methods and materials to meet their specific needs and preferences in ICT-enhanced teaching.
Jiying Han & Chao Gao (2023, December 22). University teachers’ well-being in ICT-enhanced teaching: The roles of teacher self-efficacy and teaching support. Australasian Journal of Educational Technology. https://doi.org/10.14742/ajet.8868
Binh Nguyen Thanh, Diem Thi-Ngoc Vo, Minh Nguyen Nhat, Thi Thu Tra Pham, Hieu Thai Trung, Son Ha Xuan
In this study, we introduce a framework designed to help educators assess the effectiveness of popular generative artificial intelligence (AI) tools in solving authentic assessments. We employed Bloom’s taxonomy as a guiding principle to create authentic assessments that evaluate the capabilities of generative AI tools. We applied this framework to assess the abilities of ChatGPT-4, ChatGPT-3.5, Google Bard and Microsoft Bing in solving authentic assessments in economics. We found that generative AI tools perform very well at the lower levels of Bloom’s taxonomy while still maintaining a decent level of performance at the higher levels, with “create” being the weakest level of performance. Interestingly, these tools are better able to address numeric-based questions than text-based ones. Moreover, all the generative AI tools exhibit weaknesses in building arguments based on theoretical frameworks, maintaining the coherence of different arguments and providing appropriate references. Our study provides educators with a framework to assess the capabilities of generative AI tools, enabling them to make more informed decisions regarding assessments and learning activities. Our findings demand a strategic reimagining of educational goals and assessments, emphasising higher cognitive skills and calling for a concerted effort to enhance the capabilities of educators in preparing students for a rapidly transforming professional environment.

Implications for practice or policy:
- Our proposed framework enables educators to systematically evaluate the capabilities of widely used generative AI tools in assessments and assists them in the assessment design process.
- Tertiary institutions should re-evaluate and redesign programmes and course learning outcomes. The new focus on learning outcomes should address the higher levels of educational goals of Bloom’s taxonomy, specifically the “create” level.
Binh Nguyen Thanh, Diem Thi-Ngoc Vo, Minh Nguyen Nhat, Thi Thu Tra Pham, Hieu Thai Trung & Son Ha Xuan (2023, December 22). Race with the machines: Assessing the capability of generative AI in solving authentic assessments. Australasian Journal of Educational Technology. https://doi.org/10.14742/ajet.8902
Generative artificial intelligence (AI) has had a significant impact in tertiary education for practitioners and researchers during 2023. We review the way in which academics have made sense of generative AI, revisit our proposed research agenda and reflect on our changing roles as academics in relation to learning, teaching, design and policy.
Kate Thompson, L. Corrin & J. Lodge (2023, December 22). AI in tertiary education: Progress on research and practice. Australasian Journal of Educational Technology. https://doi.org/10.14742/ajet.9251
Bünyamin Kayali, Mehmet Yavuz, Şener Balat, M. Çalişan
The purpose of this study was to determine university students’ experiences with the use of ChatGPT in online courses. The sample consisted of 84 associate degree students from a state university in Turkey. A multi-method approach was used: quantitative data were collected using the Chatbot Usability Scale, and qualitative data were collected using a semi-structured interview form that we developed. The data were analysed using descriptive and content analysis methods. According to the findings, ChatGPT exhibits advantages such as a user-friendly interface and fast, concise, relevant responses. Moreover, the information it provided was sufficient and topic-oriented, contributing to the learning process. Students emphasised the understandability of the chatbot’s functions and the clarity of its communication. However, there are disadvantages such as performance issues, frequency of errors and the risk of providing misleading information. Concerns were also raised about the potential difficulties chatbots may face in ambiguous conversations and about insufficient information on privacy issues. In conclusion, ChatGPT is recognised as a potentially valuable tool in education based on positive usability impressions; however, more research is needed for its safe use.

Implications for practice or policy:
- Based on positive usability impressions, students and instructors can use ChatGPT to support educational activities.
- ChatGPT can promote and enhance students’ personalised learning experiences.
- ChatGPT can be used in all higher education courses.
- Users should be cautious about the accuracy and reliability of the answers provided by ChatGPT.
- Decision-makers should take precautions against risks such as privacy, ethics, confidentiality and security that may arise from using artificial intelligence in education.
Bünyamin Kayali, Mehmet Yavuz, Şener Balat & M. Çalişan (2023, December 22). Investigation of student experiences with ChatGPT-supported online learning applications in higher education. Australasian Journal of Educational Technology. https://doi.org/10.14742/ajet.8915
Simon Knight, Camille Dickson-Deane, Keith Heggart, Kirsty Kitto, Dilek Çetindamar Kozanoğlu, Damian Maher, Bhuva Narayan, Forooq Zarrabi
The launch of new tools in late 2022 heralded significant growth in attention to the impacts of generative AI (GenAI) in education. Claims of the potential impact on education are contested, but there are clear risks of inappropriate use, particularly where GenAI aligns poorly with learning aims. In response, in mid-2023, the Australian Federal Government held an inquiry, calling for public submissions. This inquiry offers a lens onto the policy framing of GenAI in education and provides the object of investigation for this paper. We use the inquiry submissions, extracting structured claims from each. This extraction is provided as an open data set for further research, while this paper focuses on our analysis of the policy recommendations made.

Implications for practice or policy:
- For practitioners, policymakers and researchers, the paper provides an overview and synthesis of submission recommendations and their themes, by source type.
- For respondents to the inquiry (sources), the paper supports reflection regarding synergies and gaps in recommendations, pointing to opportunities for collaboration and policy development.
- For stakeholders with responsibility for aspects of policy delivery and/or those applying a critical lens to the inquiry and recommendation framing(s), the paper offers actionable insight.
Simon Knight, Camille Dickson-Deane, Keith Heggart, Kirsty Kitto, Dilek Çetindamar Kozanoğlu, Damian Maher, Bhuva Narayan & Forooq Zarrabi (2023, December 22). Generative AI in the Australian education system: An open data set of stakeholder recommendations and emerging analysis from a public inquiry. Australasian Journal of Educational Technology. https://doi.org/10.14742/ajet.8922
Artificial intelligence (AI) technology, such as Chat Generative Pre-trained Transformer (ChatGPT), is evolving quickly and having a significant impact on the higher education sector. Although the impact of ChatGPT on academic integrity processes is a key concern, little is known about whether academics can reliably recognise texts that have been generated by AI. This qualitative study applies Turing’s Imitation Game to investigate 16 education academics’ perceptions of two pairs of texts written by either ChatGPT or a human. Pairs of texts, written in response to the same task, were used as the stimulus for interviews that probed academics’ perceptions of text authorship and the textual features that were important in their decision-making. Results indicated academics were only able to identify AI-generated texts half of the time, highlighting the sophistication of contemporary generative AI technology. Academics perceived the following categories as important for their decision-making: voice, word usage, structure, task achievement and flow. All five categories of decision-making were variously used to rationalise both accurate and inaccurate decisions about text authorship. The implications of these results are discussed with a particular focus on what strategies can be applied to support academics more effectively as they manage the ongoing challenge of AI in higher education.

Implications for practice or policy:
- Experienced academics may be unable to distinguish between texts written by contemporary generative AI technology and humans.
- Academics are uncertain about the current capabilities of generative AI and need support in redesigning assessments that succeed in providing robust evidence of student achievement of learning outcomes.
- Institutions must assess the adequacy of their assessment designs, AI use policies and AI-related procedures to enhance students’ capacity for effective and ethical use of generative AI technology.
Joshua A. Matthews & Catherine Rita Volpe (2023, December 22). Academics’ perceptions of ChatGPT-generated written outputs: A practical application of Turing’s Imitation Game. Australasian Journal of Educational Technology. https://doi.org/10.14742/ajet.8896
Thanh Pham, Thanh Binh Nguyen, Son Ha, Ngoc Thanh Nguyen Ngoc
This research explored the potential of artificial intelligence (AI)-assisted learning using ChatGPT in an engineering course at a university in South-east Asia. The study investigated the benefits and challenges that students may encounter when utilising ChatGPT-3.5 as a learning tool. This research developed an AI-assisted learning flow that empowers learners and lecturers to integrate ChatGPT into their teaching and learning processes. The flow was subsequently used to validate and assess a variety of exercises, tutorial tasks and assessment-like questions for the course under study. A self-rating system enabled users to assess the generated responses. The findings indicate that ChatGPT has significant potential to assist students; however, students need training and guidance on effective interactions with ChatGPT. The study contributes to the evidence of the potential of AI-assisted learning and identifies areas for future research in refining the use of AI tools to better support students’ educational journey.

Implications for practice or policy:
- Educators and administrators could review the usage of ChatGPT in an engineering technology course and study the implications of generative AI tools in higher education.
- Academics could adapt and modify the proposed AI-assisted learning flow in this paper to suit their classrooms.
- Students can review and adopt the proposed AI-assisted learning flow in this paper for their studies.
- Researchers could follow up on the application of ChatGPT in teaching and learning: teaching quality and student experience, academic integrity and assessment design.
Thanh Pham, Thanh Binh Nguyen, Son Ha & Ngoc Thanh Nguyen Ngoc (2023, December 22). Digital transformation in engineering education: Exploring the potential of AI-assisted learning. Australasian Journal of Educational Technology. https://doi.org/10.14742/ajet.8825
In recent decades, flipped learning has been adopted by teachers to improve learning achievement. However, it is challenging to provide all students with instant personalised guidance at the same time. To address this gap, based on Chat Generative Pre-trained Transformer (ChatGPT) and learning scaffolding theory, I developed a ChatGPT-based flipped learning guiding approach (ChatGPT-FLGA) according to the analysis, design, development, implementation and evaluation model. To investigate the effectiveness of ChatGPT-FLGA, a quasi-experiment was conducted in the learning activities of a courseware project. One of two classes was randomly assigned to the experimental group and the other to the control group. The students in both classes received flipped classroom instruction and conducted discussions through Tencent QQ, but only those in the experimental group learned with ChatGPT-FLGA. The results revealed that ChatGPT-FLGA significantly improved students’ performance, self-efficacy, learning attitudes, intrinsic motivation and creative thinking. The research findings enrich the literature on ChatGPT in flipped classrooms by addressing the influence of ChatGPT-FLGA on students’ performance and perceptions.

Implications for practice or policy:
- Teachers and universities should utilise ChatGPT as a tool for supporting students’ learning and promoting their problem-solving skills.
- Course designers and academic staff can leverage ChatGPT-FLGA to enact student-centred pedagogical transformation in massive open online courses or flipped learning.
- Course designers should master ChatGPT-FLGA and its learning system to foster learners’ self-regulated learning, build their online self-efficacy and help them overcome difficulties with learning motivation and creative thinking.
{"title":"Effects of a ChatGPT-based flipped learning guiding approach on learners’ courseware project performances and perceptions","authors":"Haifeng Li","doi":"10.14742/ajet.8923","DOIUrl":"https://doi.org/10.14742/ajet.8923","url":null,"abstract":"In recent decades, flipped learning has been adopted by teachers to improve learning achievement. However, it is challenging to provide all students with instant personalised guidance at the same time. To address this gap, based on Chat Generative Pre-trained Transformer (ChatGPT) and the learning scaffolding theory, I developed a ChatGPT-based flipped learning guiding approach (ChatGPT-FLGA) according to the analysis, design, development, implementation and evaluation model. To investigate the effectiveness of ChatGPT-FLGA, a quasi-experiment was conducted in the learning activities of a courseware project. One of two classes was randomly assigned to the experimental group, while the other was assigned to the control group. The students in both classes received flipped classroom instruction and conducted discussions through Tencent QQ applications, but only those in the experimental group learned with ChatGPT-FLGA. The results revealed that the ChatGPT-FLGA significantly improved students’ performance, self-efficacy, learning attitudes, intrinsic motivation and creative thinking. 
The research findings enrich the literature on ChatGPT in flipped classrooms by addressing the influence of ChatGPT-FLGA on students' performance and perceptions. Implications for practice or policy: Teachers and universities should utilise ChatGPT as a tool for supporting students’ learning and promoting their problem-solving skills. Course designers and academic staff can leverage ChatGPT-FLGA to enact student-centred pedagogical transformation in massive open online courses or flipped learning. Course designers should master how to use ChatGPT-FLGA and its learning system to foster learners’ self-regulated learning, build their online self-efficacy and help them overcome difficulties with learning motivation and creative thinking.","PeriodicalId":47812,"journal":{"name":"Australasian Journal of Educational Technology","volume":"1 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138944232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
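The quasi-experiment above rests on comparing post-test outcomes between an experimental and a control class. As a minimal sketch of that kind of two-group comparison, the following computes Welch's t statistic on hypothetical score data; the study does not publish its raw scores or name its exact test, so every number and the choice of Welch's test here are illustrative assumptions.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and approximate degrees of freedom
    for two independent samples with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n - 1)
    se2_a, se2_b = va / na, vb / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2_a + se2_b)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = (se2_a + se2_b) ** 2 / (se2_a ** 2 / (na - 1) + se2_b ** 2 / (nb - 1))
    return t, df

# Hypothetical post-test scores for the two classes (not the study's data)
experimental = [82, 88, 79, 91, 85, 90, 87, 84]
control = [75, 80, 72, 78, 81, 74, 77, 79]

t, df = welch_t(experimental, control)
print(f"t = {t:.2f}, df = {df:.1f}")
```

In practice the resulting t and df would be passed to a t distribution to obtain a p value (e.g., via `scipy.stats`); the sketch stops at the statistic to stay dependency-free.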
Ariel Ortiz Beltrán, D. Hernández‐Leo, Ishari Amarasinghe
This paper leverages analytics methods to investigate the impact of changes in teaching modalities shaped by the COVID-19 pandemic on undergraduate students’ satisfaction within a Spanish brick-and-mortar higher education institution. Unlike research that has focused on faculty- or programme-level data, this study offers a comprehensive institutional perspective by analysing large-scale data (N = 83,532) gathered from satisfaction surveys across all undergraduate courses in eight faculties from 2018 to 2021. The longitudinal analysis revealed significant changes (p < 0.05) in satisfaction indicators, particularly overall satisfaction and perceived workload. During the emergency remote teaching period, there was a significant decrease in satisfaction and high levels of variability across courses. However, a year after emergency remote teaching, with increased implementations of technology-supported online and mixed teaching modalities, satisfaction measures not only recovered but exceeded pre-COVID levels in the aforementioned indicators when the teaching modality was fully co-located. The variability of answers also reached historical lows, reflecting more uniform student experiences. These findings highlight the resilience of educators and the current higher education system and suggest a capacity to learn and improve from disruptive pedagogical changes. The study also provides insights into how data analytics can help monitor and inform the evolution of teaching practices. Implications for practice or policy: Higher education institution administrators should improve their understanding of the effects of changes in their teaching and learning models, for example in teaching modalities and related technology support. Student satisfaction data analytics offer useful indicators to study the impact of those effects. 
Higher education institutions should provide support for educators to ensure minimal deviations from expected averages of educational quality indicators regardless of the educators’ capacity to adapt to changes in the teaching models.
{"title":"Surviving and thriving: How changes in teaching modalities influenced student satisfaction before, during and after COVID-19","authors":"Ariel Ortiz Beltrán, D. Hernández‐Leo, Ishari Amarasinghe","doi":"10.14742/ajet.8958","DOIUrl":"https://doi.org/10.14742/ajet.8958","url":null,"abstract":"This paper leverages analytics methods to investigate the impact of changes in teaching modalities shaped by the COVID-19 pandemic on undergraduate students’ satisfaction within a Spanish brick-and-mortar higher education institution. Unlike research that has focused on faculty- or programme-level data, this study offers a comprehensive institutional perspective by analysing large-scale data (N = 83,532) gathered from satisfaction surveys across all undergraduate courses in eight faculties from 2018 to 2021. The longitudinal analysis revealed significant changes (p < 0.05) in satisfaction indicators, particularly overall satisfaction and perceived workload. During the emergency remote teaching period, there was a significant decrease in satisfaction and high levels of variability across courses. However, a year after emergency remote teaching, with increased implementations of technology-supported online and mixed teaching modalities, satisfaction measures not only recovered but exceeded pre-COVID levels in the aforementioned indicators when the teaching modality was fully co-located. The variability of answers also reached historical lows, reflecting more uniform student experiences. These findings highlight the resilience of educators and the current higher education system and suggest a capacity to learn and improve from disruptive pedagogical changes. 
The study also provides insights into how data analytics can help monitor and inform the evolution of teaching practices. Implications for practice or policy: Higher education institution administrators should improve their understanding of the effects of changes in their teaching and learning models, for example in teaching modalities and related technology support. Student satisfaction data analytics offer useful indicators to study the impact of those effects. Higher education institutions should provide support for educators to ensure minimal deviations from expected averages of educational quality indicators regardless of the educators’ capacity to adapt to changes in the teaching models.","PeriodicalId":47812,"journal":{"name":"Australasian Journal of Educational Technology","volume":" 2","pages":""},"PeriodicalIF":4.1,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138961597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
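The longitudinal analysis described above tracks both the level and the spread of satisfaction across teaching periods: mean satisfaction dipped during emergency remote teaching and cross-course variability later reached historical lows. A minimal sketch of that style of monitoring, grouping ratings by period and reporting mean and standard deviation; the period labels and 0-10 ratings below are fabricated for illustration, not the study's data.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical (period, rating) pairs from course satisfaction surveys
responses = [
    ("2019 pre-COVID", 7.8), ("2019 pre-COVID", 8.1), ("2019 pre-COVID", 7.5),
    ("2020 emergency remote", 6.2), ("2020 emergency remote", 7.9), ("2020 emergency remote", 5.8),
    ("2021 recovery", 8.3), ("2021 recovery", 8.5), ("2021 recovery", 8.2),
]

def satisfaction_by_period(pairs):
    """Mean and spread of satisfaction ratings per teaching period."""
    grouped = defaultdict(list)
    for period, rating in pairs:
        grouped[period].append(rating)
    return {p: (mean(r), stdev(r)) for p, r in grouped.items()}

for period, (m, s) in satisfaction_by_period(responses).items():
    print(f"{period}: mean={m:.2f}, sd={s:.2f}")
```

A falling standard deviation alongside a recovering mean is exactly the "more uniform student experiences" pattern the abstract reports; a real analysis would add significance testing across periods.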
In the “new normal”, online teaching has become a necessity rather than a choice for continuing learning activities. During the COVID-19 period, virtual teaching platforms played an important role in the success of online teaching in various higher education institutions. Thus, the current study attempted to predict faculty adoption of online platforms by introducing a set of essential drivers for engaging in online teaching. Following the theory of reasoned action, the study extended the technology acceptance model with security and trust as extrinsic determinants and included resistance to change as a moderator to invigorate the research model. Data were collected through an online survey with a sample size of 418 Indian respondents. Our results show that perceived ease of use, usefulness, security and trust positively influence faculty's intentions to adopt online platforms. In addition, the study also reported that positive intention leads to the actual use of virtual platforms. Furthermore, the research found that resistance to change moderates the association between intention and actual use of virtual teaching platforms. The findings provide both theoretical and practical implications for educational technology. Implications for practice or policy: The first step in fostering acceptance of virtual teaching platforms is to help faculty reduce their resistance to change for effective online teaching. Higher education institutions should have a policy assuring faculty that online teaching using virtual teaching platforms offers a safe and trustworthy environment. Higher education institutions should undertake intense organisational renewal and implement bottom-up processes for synchronous learning. Regulators could frame policy that includes virtual teaching platforms to provide interactive professional development opportunities.
{"title":"Faculty acceptance of virtual teaching platforms for online teaching: Moderating role of resistance to change","authors":"Harmandeep Singh, Parminder Singh, Dharna Sharma","doi":"10.14742/ajet.7529","DOIUrl":"https://doi.org/10.14742/ajet.7529","url":null,"abstract":"Under this “new normal” world scenario, online teaching has been essential rather than a choice in continuing learning activities. During the COVID-19 period, virtual teaching platforms played an important role in the success of online teaching in various higher educational institutions. Thus, the current study attempted to predict faculty adoption of online platforms by introducing a set of essential drivers for engaging in online teaching. Following the theory of reasoned action, the study broadened the technology acceptance model variables and security and trust as extrinsic determinants and included resistance to change as moderators to invigorate the research model. Data were collected through an online survey with a sample size of 418 Indian respondents. Our results posit that perceived ease of use, usefulness, security and trust positively influence the faculty's intentions to adopt online platforms. In addition, the study also reported that positive intention leads to the actual use of virtual platforms. Furthermore, the research found the moderating role of the resistance to change dimension in the association of intention and actual use of virtual teaching platforms. 
The findings provide both theoretical and practical implications for educational technology. Implications for practice or policy: The first step in fostering acceptance of virtual teaching platforms is to help faculty reduce their resistance to change for effective online teaching. Higher education institutions should have a policy assuring faculty that online teaching using virtual teaching platforms offers a safe and trustworthy environment. Higher education institutions should undertake intense organisational renewal and implement bottom-up processes for synchronous learning. Regulators could frame policy that includes virtual teaching platforms to provide interactive professional development opportunities.","PeriodicalId":47812,"journal":{"name":"Australasian Journal of Educational Technology","volume":"31 12","pages":""},"PeriodicalIF":4.1,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139004705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
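The moderation finding above means the strength of the intention-to-use link depends on a faculty member's resistance to change. One simple way to see such an effect is to split respondents into low- and high-resistance subgroups and compare the intention-use correlation in each; the study itself used a model-based moderation analysis, and all records below are fabricated 7-point-scale data, so this sketch only illustrates the concept.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical records: (intention, actual_use, resistance_to_change), 1-7 scales
records = [
    (6, 6, 2), (5, 5, 1), (7, 7, 2), (4, 4, 2), (6, 5, 1),
    (6, 3, 6), (5, 2, 7), (7, 4, 6), (4, 4, 5), (6, 2, 7),
]

# Median-style split: resistance <= 3 is "low", above is "high"
low = [(i, u) for i, u, r in records if r <= 3]
high = [(i, u) for i, u, r in records if r > 3]

r_low = pearson_r(*zip(*low))
r_high = pearson_r(*zip(*high))
print(f"intention-use correlation: low resistance r={r_low:.2f}, high resistance r={r_high:.2f}")
```

A markedly weaker correlation in the high-resistance subgroup is the signature of moderation; a full analysis would instead test an intention × resistance interaction term in a regression or structural equation model.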