
Latest publications in Computers and Education Artificial Intelligence

Artificial intelligence in teaching and teacher professional development: A systematic review
Q1 Social Sciences Pub Date: 2024-12-23 DOI: 10.1016/j.caeai.2024.100355
Xiao Tan, Gary Cheng, Man Ho Ling
The application of Artificial Intelligence (AI) technology in education is increasingly recognized as a key driver of educational innovation. While extensive literature exists on the integration of AI technologies in educational settings, less emphasis has been placed on the critical role of teachers and their professional development needs. This study systematically reviews research conducted between 2015 and 2024 on teachers' use of AI technology in their teaching and professional development, focusing on the relationship between the supply of professional development opportunities and the demand for AI integration among teachers. Using PRISMA principles and protocols, this review identified and synthesized 95 relevant research articles. The findings reveal a significant imbalance in research focus. Specifically, 65% of the studies examined the application of AI in teaching, including conversational AI and related technologies, AI-driven learning and assessment systems, immersive technologies, visual and auditory computing, and teaching and learning analytics. In contrast, only 35% of the studies explored AI's role in enhancing teacher professional development. This review highlights a gap in research addressing the development needs of teachers as they integrate AI technologies into their teaching practices. It emphasizes the need for future research to focus more on the potential of AI in teacher professional development and to investigate how AI technologies can be applied in education from the perspectives of both student learning and teacher instruction. Furthermore, research on AI in professional development should prioritize addressing technological and ethical challenges to ensure the responsible and effective integration of AI in education.
{"title":"Artificial intelligence in teaching and teacher professional development: A systematic review","authors":"Xiao Tan,&nbsp;Gary Cheng,&nbsp;Man Ho Ling","doi":"10.1016/j.caeai.2024.100355","DOIUrl":"10.1016/j.caeai.2024.100355","url":null,"abstract":"<div><div>The application of Artificial Intelligence (AI) technology in education is increasingly recognized as a key driver of educational innovation. While extensive literature exists on the integration of AI technologies in educational settings, less emphasis has been placed on the critical role of teachers and their professional development needs. This study systematically reviews research conducted between 2015 and 2024 on teachers' use of AI technology in their teaching and professional development, focusing on the relationship between the supply of professional development opportunities and the demand for AI integration among teachers. Using PRISMA principles and protocols, this review identified and synthesized 95 relevant research articles. The findings reveal a significant imbalance in research focus. Specifically, 65% of the studies examined the application of AI in teaching, including technologies such as conversational AI and related technologies, AI-driven learning and assessment systems, immersive technologies, visual and auditory computing, and teaching and learning analytics. In contrast, only 35% of the studies explored AI's role in enhancing teacher professional development. This review highlights a gap in research addressing the development needs of teachers as they integrate AI technologies into their teaching practices. It emphasizes the need for future research to focus more on the potential of AI in teacher professional development and to investigate how AI technologies can be applied in education from both the perspectives of student learning and teacher instruction. Furthermore, research on AI in professional development should prioritize addressing technological and ethical challenges to ensure the responsible and effective integration of AI in education.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"8 ","pages":"Article 100355"},"PeriodicalIF":0.0,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143145348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Leveraging LLMs for optimised feature selection and embedding in structured data: A case study on graduate employment classification
Q1 Social Sciences Pub Date: 2024-12-22 DOI: 10.1016/j.caeai.2024.100356
Radiah Haque, Hui-Ngo Goh, Choo-Yee Ting, Albert Quek, M.D. Rakibul Hasan
The application of Machine Learning (ML) for predicting graduate student employability is a growing area of research, driven by the need to align educational outcomes with job market requirements. In this context, this paper investigates the application of Large Language Models (LLMs) for tabular data transformation and embedding, specifically using Bidirectional Encoder Representations from Transformers (BERT), to enhance the performance of ML models in binary classification tasks for student employability prediction. The primary objective is to determine whether converting structured data into text format improves model accuracy. The study involves several ML models, including Artificial Neural Networks (ANN), CatBoost, and a BERT classifier. The focus is on predicting the employment status of graduate students based on demographic, academic, and graduate tracer study data collected from over 4000 university graduates. Feature selection methods, including Boruta and the Extra Tree Classifier (ETC), are employed to identify the optimal feature set, guided by a sliding window algorithm for automatic feature selection. The models are trained in four stages: 1) the original dataset without feature selection or word embedding, 2) the dataset with selected optimal features, 3) the transformed data with word embedding, and 4) the transformed data with feature selection applied both before and after word embedding. The baseline model (without feature selection or embedding) achieved its highest accuracy with the ANN model (79%). Applying ETC for feature selection improved accuracy, with CatBoost achieving 83%. Further transformation with BERT-based embeddings raised the highest accuracy to 85% using the BERT classifier. Finally, the best accuracy of 88% was obtained by applying feature selection before and after embedding with the BERT-Boruta model. These findings demonstrate that the dual-stage feature selection approach combined with BERT embedding significantly increases classification accuracy, highlighting the potential of LLMs in transforming tabular data for enhanced graduate employment prediction.
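The core pipeline described here, serialising each structured record into text and embedding it with BERT before classification, can be sketched in a few lines. The column names, example values, and the logistic-regression stand-in below are illustrative assumptions (the paper's actual features come from its graduate tracer study data, and its dual-stage Boruta/ETC feature selection is omitted here):

```python
# Hedged sketch: tabular rows -> text -> BERT [CLS] embeddings -> classifier.
# Feature names, values, and labels are hypothetical, not the paper's dataset.
import torch
from transformers import BertModel, BertTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

def row_to_text(row: dict) -> str:
    # Serialise one structured record as a natural-language sentence.
    return ", ".join(f"{k} is {v}" for k, v in row.items())

def embed(text: str) -> torch.Tensor:
    # Use the [CLS] token's final hidden state as a fixed-size embedding.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        out = bert(**inputs)
    return out.last_hidden_state[:, 0, :].squeeze(0)

rows = [
    {"cgpa": 3.4, "faculty": "computing", "internship": "yes"},  # hypothetical
    {"cgpa": 2.1, "faculty": "business", "internship": "no"},
]
labels = [1, 0]  # 1 = employed, 0 = not employed (illustrative)

X = torch.stack([embed(row_to_text(r)) for r in rows]).numpy()
clf = LogisticRegression().fit(X, labels)  # stand-in for ANN/CatBoost/BERT heads
```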
{"title":"Leveraging LLMs for optimised feature selection and embedding in structured data: A case study on graduate employment classification","authors":"Radiah Haque,&nbsp;Hui-Ngo Goh,&nbsp;Choo-Yee Ting,&nbsp;Albert Quek,&nbsp;M.D. Rakibul Hasan","doi":"10.1016/j.caeai.2024.100356","DOIUrl":"10.1016/j.caeai.2024.100356","url":null,"abstract":"<div><div>The application of Machine Learning (ML) for predicting graduate student employability is a growing area of research, driven by the need to align educational outcomes with job market requirements. In this context, this paper investigates the application of Large Language Models (LLMs) for tabular data transformation and embedding, specifically using Bidirectional Encoder Representations from Transformers (BERT), to enhance the performance of ML models in binary classification tasks for student employability prediction. The primary objective is to determine whether converting structured data into text format improves model accuracy. The study involves several ML models including Artificial Neural Networks (ANN), CatBoost, and BERT classifier. The focus is on predicting the employment status of graduate students based on demographic, academic, and graduate tracer study data, collected from over 4000 university graduates. Feature selection methods, including Boruta and Extra Tree Classifier (ETC) are employed to identify the optimal feature set, guided by a sliding window algorithm for automatic feature selection. The models are trained in four stages: 1) original dataset without feature selection or word embedding, 2) dataset with selected optimal features, 3) transformed data with word embedding, and 4) transformed data with feature selection applied both before and after word embedding. The baseline model (without feature selection and embedding) achieved the highest accuracy with the ANN model (79%). Subsequently, applying ETC for feature selection improved accuracy, with CatBoost achieving 83%. Further transformation with BERT-based embeddings raised the highest accuracy to 85% using the BERT classifier. Finally, the optimal accuracy of 88% was obtained by applying feature selection before and after embedding, with the BERT-Boruta model. The findings from this study demonstrate that using the dual-stage feature selection approach in combination with BERT embedding significantly increases the classification accuracy. This highlights the potential of LLMs in transforming tabular data for enhanced graduate employment prediction.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"8 ","pages":"Article 100356"},"PeriodicalIF":0.0,"publicationDate":"2024-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143145484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generative AI in higher education: A global perspective of institutional adoption policies and guidelines
Q1 Social Sciences Pub Date: 2024-12-19 DOI: 10.1016/j.caeai.2024.100348
Yueqiao Jin, Lixiang Yan, Vanessa Echeverria, Dragan Gašević, Roberto Martinez-Maldonado
Integrating generative AI (GAI) into higher education is crucial for preparing a future generation of GAI-literate students. However, a comprehensive understanding of global institutional adoption policies remains absent, with most prior studies focusing on the Global North and lacking a theoretical lens. This study utilizes the Diffusion of Innovations Theory to examine GAI adoption strategies in higher education across 40 universities from six global regions. It explores the characteristics of GAI innovation, including compatibility, trialability, and observability, and analyses the communication channels as well as the roles and responsibilities outlined in university policies and guidelines. The findings reveal that universities are proactively addressing GAI integration by emphasising academic integrity, enhancing teaching and learning practices, and promoting equity. Key policy measures include the development of guidelines for ethical GAI use, the design of authentic assessments to mitigate misuse, and the provision of training programs for faculty and students to foster GAI literacy. Despite these efforts, gaps remain in comprehensive policy frameworks, particularly in addressing data privacy concerns and ensuring equitable access to GAI tools. The study underscores the importance of clear communication channels, stakeholder collaboration, and ongoing evaluation to support effective GAI adoption. These findings offer actionable insights for policymakers to craft inclusive, transparent, and adaptive strategies for integrating GAI into higher education.
{"title":"Generative AI in higher education: A global perspective of institutional adoption policies and guidelines","authors":"Yueqiao Jin ,&nbsp;Lixiang Yan ,&nbsp;Vanessa Echeverria ,&nbsp;Dragan Gašević ,&nbsp;Roberto Martinez-Maldonado","doi":"10.1016/j.caeai.2024.100348","DOIUrl":"10.1016/j.caeai.2024.100348","url":null,"abstract":"<div><div>Integrating generative AI (GAI) into higher education is crucial for preparing a future generation of GAI-literate students. However, a comprehensive understanding of global institutional adoption policies remains absent, with most prior studies focusing on the Global North and lacking a theoretical lens. This study utilizes the Diffusion of Innovations Theory to examine GAI adoption strategies in higher education across 40 universities from six global regions. It explores the characteristics of GAI innovation, including compatibility, trialability, and observability, and analyses the communication channels and roles and responsibilities outlined in university policies and guidelines. The findings reveal that universities are proactively addressing GAI integration by emphasising academic integrity, enhancing teaching and learning practices, and promoting equity. Key policy measures include the development of guidelines for ethical GAI use, the design of authentic assessments to mitigate misuse, and the provision of training programs for faculty and students to foster GAI literacy. Despite these efforts, gaps remain in comprehensive policy frameworks, particularly in addressing data privacy concerns and ensuring equitable access to GAI tools. The study underscores the importance of clear communication channels, stakeholder collaboration, and ongoing evaluation to support effective GAI adoption. These insights provide actionable insights for policymakers to craft inclusive, transparent, and adaptive strategies for integrating GAI into higher education.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"8 ","pages":"Article 100348"},"PeriodicalIF":0.0,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143145486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Effects of adaptive feedback generated by a large language model: A case study in teacher education
Q1 Social Sciences Pub Date: 2024-12-17 DOI: 10.1016/j.caeai.2024.100349
Annette Kinder, Fiona J. Briese, Marius Jacobs, Niclas Dern, Niels Glodny, Simon Jacobs, Samuel Leßmann
This study investigates the effects of adaptive feedback generated by large language models (LLMs), specifically ChatGPT, on performance in a written diagnostic reasoning task among German pre-service teachers (n = 269). Additionally, the study analyzed user evaluations of the feedback and feedback processing time. Diagnostic reasoning, a critical skill for making informed pedagogical decisions, was assessed through a writing task integrated into a teacher preparation course. Participants were randomly assigned to receive either adaptive feedback generated by ChatGPT or static feedback prepared in advance by a human expert, which was identical for all participants in that condition, before completing a second writing task. The findings reveal that ChatGPT-generated adaptive feedback significantly improved the quality of justification in the students’ writing compared to the static feedback written by an expert. However, no significant difference was observed in decision accuracy between the two groups, suggesting that the type and source of feedback did not impact decision-making processes. Additionally, students who had received LLM-generated adaptive feedback spent more time processing the feedback and subsequently wrote longer texts, indicating longer engagement with the feedback and the task. Participants also rated adaptive feedback as more useful and interesting than static feedback, aligning with previous research on the motivational benefits of adaptive feedback. The study highlights the potential of LLMs like ChatGPT as valuable tools in educational settings, particularly in large courses where providing adaptive feedback is challenging.
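As a rough illustration of how LLM-generated adaptive feedback of this kind can be produced, the sketch below calls the OpenAI chat API with a feedback rubric as the system prompt. The rubric wording and model name are assumptions for illustration; the study's exact prompt and configuration are not given in the abstract:

```python
# Hedged sketch of adaptive feedback generation; the rubric text and model
# choice are illustrative assumptions, not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "You are a teacher educator. Give this pre-service teacher specific, "
    "constructive feedback on the quality of justification in their written "
    "diagnostic reasoning: point to concrete passages, name missing evidence, "
    "and suggest one improvement."
)

def adaptive_feedback(student_text: str) -> str:
    # One call per submission, so the feedback adapts to the individual text,
    # unlike the static expert feedback shared by all participants.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; the study used ChatGPT
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": student_text},
        ],
    )
    return response.choices[0].message.content
```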
{"title":"Effects of adaptive feedback generated by a large language model: A case study in teacher education","authors":"Annette Kinder ,&nbsp;Fiona J. Briese ,&nbsp;Marius Jacobs ,&nbsp;Niclas Dern ,&nbsp;Niels Glodny ,&nbsp;Simon Jacobs ,&nbsp;Samuel Leßmann","doi":"10.1016/j.caeai.2024.100349","DOIUrl":"10.1016/j.caeai.2024.100349","url":null,"abstract":"<div><div>This study investigates the effects of adaptive feedback generated by large language models (LLMs), specifically ChatGPT, on performance in a written diagnostic reasoning task among German pre-service teachers (<em>n</em> = 269). Additionally, the study analyzed user evaluations of the feedback and feedback processing time. Diagnostic reasoning, a critical skill for making informed pedagogical decisions, was assessed through a writing task integrated into a teacher preparation course. Participants were randomly assigned to receive either adaptive feedback generated by ChatGPT or static feedback prepared in advance by a human expert, which was identical for all participants in that condition, before completing a second writing task. The findings reveal that ChatGPT-generated adaptive feedback significantly improved the quality of justification in the students’ writing compared to the static feedback written by an expert. However, no significant difference was observed in decision accuracy between the two groups, suggesting that the type and source of feedback did not impact decision-making processes. Additionally, students who had received LLM-generated adaptive feedback spent more time processing the feedback and subsequently wrote longer texts, indicating longer engagement with the feedback and the task. Participants also rated adaptive feedback as more useful and interesting than static feedback, aligning with previous research on the motivational benefits of adaptive feedback. The study highlights the potential of LLMs like ChatGPT as valuable tools in educational settings, particularly in large courses where providing adaptive feedback is challenging.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"8 ","pages":"Article 100349"},"PeriodicalIF":0.0,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143145840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generative AI in academic writing: Does information on authorship impact learners’ revision behavior?
Q1 Social Sciences Pub Date: 2024-12-17 DOI: 10.1016/j.caeai.2024.100350
Anna Radtke, Nikol Rummel
The role of generative artificial intelligence (AI) in education has expanded significantly over recent years. AI-based text generators such as ChatGPT provide an accessible and effective tool for learners, particularly in academic writing. While revision is considered an essential part of both individual and collaborative writing, research on the revision of AI-generated texts remains limited. However, with the growing adoption of generative AI in education, learners' ability to effectively revise AI-generated content is likely to become increasingly important in the future. The aim of this study was to investigate whether learners exhibit different revision behaviors when presented with different information about the author of a text (peer vs. AI). We further examined the impact of learners' prior experiences, attitudes, and gender on text revision. To this end, N = 303 learners revised two texts: one labeled as peer-written and the other as AI-generated. The results revealed that while learners invested less time in revising a text labeled as AI-generated, information about the author did not affect the number of areas identified as requiring improvement or the number of revisions made. Moreover, learners who indicated greater prior exposure to media reports about AI-based text generators, a higher level of trust in AI, and a tendency toward 'loafing' in AI-assisted writing spent less time revising a text labeled as AI-generated. Conversely, learners with more experience in academic writing identified more areas for improvement and made more extensive revisions, regardless of the labeled authorship.
{"title":"Generative AI in academic writing: Does information on authorship impact learners’ revision behavior?","authors":"Anna Radtke ,&nbsp;Nikol Rummel","doi":"10.1016/j.caeai.2024.100350","DOIUrl":"10.1016/j.caeai.2024.100350","url":null,"abstract":"<div><div>The role of generative artificial intelligence (AI) in education has expanded significantly over recent years. AI-based text generators such as <em>ChatGPT</em> provide an accessible and effective tool for learners, particularly in academic writing. While revision is considered an essential part of both individual and collaborative writing, research on the revision of AI-generated texts remains limited. However, with the growing adoption of generative AI in education, learners’ ability to effectively revise AI-generated content is likely to become increasingly important in the future. The aim of this study was to investigate whether learners exhibit different revision behaviors when presented with different information about the author of a text (peer vs. AI). We further examined the impact of learners’ prior experiences, attitudes, and gender on text revision. Therefore, <em>N</em> = 303 learners revised two different texts: one labeled as peer-written and the other as AI-generated. The results revealed that while learners invested less time in revising a text labeled as AI-generated, information about the author did not affect the number of areas identified as requiring improvement or the number of revisions made. Moreover, learners who indicated greater prior exposure to media reports about AI-based text generators, a higher level of trust in AI, and a tendency toward ‘loafing’ in AI-assisted writing spent less time revising a text labeled as AI-generated. Conversely, learners with more experience in academic writing identified more areas for improvement and made more extensive revisions, regardless of the labeled authorship.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"8 ","pages":"Article 100350"},"PeriodicalIF":0.0,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143145836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Systematically visualizing ChatGPT used in higher education: Publication trend, disciplinary domains, research themes, adoption and acceptance
Q1 Social Sciences Pub Date: 2024-12-14 DOI: 10.1016/j.caeai.2024.100336
Ting Ma
Since its release in November 2022, ChatGPT has exerted a revolutionary influence on higher education. To obtain a comprehensive understanding of the research landscape, we conducted a systematic literature review of studies on ChatGPT use in higher education. Both quantitative and qualitative methods were adopted to bibliometrically examine the included literature, selected from Web of Science and Scopus through the PRISMA protocol. VOSviewer and CitNetExplorer were employed to visualize the citation information. Our findings show that the past two years have witnessed ever-growing popularity of this research theme. Citation analysis reveals the most influential authors, countries, sources, and organizations, along with four focal topics. The disciplinary distribution of related research spans a wide range of categories. More importantly, ChatGPT was found to be versatile in assisting teachers, students, and researchers with a variety of tasks, and the factors influencing college students' acceptance of this technology can be investigated through models such as TAM, UTAUT, and their extensions. We suggest that future studies focus on ways to address the limitations and ethical issues of ChatGPT through AI literacy cultivation and the joint efforts of all stakeholders.
{"title":"Systematically visualizing ChatGPT used in higher education: Publication trend, disciplinary domains, research themes, adoption and acceptance","authors":"Ting Ma","doi":"10.1016/j.caeai.2024.100336","DOIUrl":"10.1016/j.caeai.2024.100336","url":null,"abstract":"<div><div>Since it was released in November 2022, ChatGPT has been exerting revolutionary influence on the realm of higher education. In order to obtain a comprehensive understanding of the research landscape, we conduct a systematic literature review on the studies of ChatGPT used in higher education. Both quantitative and qualitative methods were adopted to bibliometrically examine the included literature selected from Web of Science and Scopus through the PRISMA protocol. Tools of VOSviewer and CitNetExplorer were employed to visualize the citation information. Our findings showed that the recent two years witnessed an ever-growing popularity of this research theme. Citation information analysis reveals the most influential authors, countries, sources, organizations and four focused topics. The disciplinary distribution of related research indicates a wide range of categories. More importantly, ChatGPT was found to be versatile in assisting teachers, students and researchers with a variety of tasks, and the factors influencing the acceptance of this technology among college students could be investigated through models like TAM, UTAUT and their extensions. We suggest future studies to focus on the ways to address the limitations and ethical issues of ChatGPT through AI literacy cultivation and joint efforts of all stakeholders.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"8 ","pages":"Article 100336"},"PeriodicalIF":0.0,"publicationDate":"2024-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143145349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial intelligence in higher education: Modelling students’ motivation for continuous use of ChatGPT based on a modified self-determination theory
Q1 Social Sciences Pub Date: 2024-12-13 DOI: 10.1016/j.caeai.2024.100346
Nagaletchimee Annamalai, Brandford Bervell, Dickson Okoree Mireku, Raphael Papa Kweku Andoh
The purpose of this study was to investigate the determinants of higher education students' motivation towards continuous usage of ChatGPT for English language learning, based on a modified Self-Determination Theory (SDT). A quantitative approach hinged on a cross-sectional survey design was adopted, and an online questionnaire was used to collect data from 324 students studying English as a Foreign Language (EFL) and English as a Second Language (ESL). The data were analyzed using the Partial Least Squares-Structural Equation Modelling (PLS-SEM) technique. The study established that initial ChatGPT usage determined students' perceived autonomy, competence, relatedness, and challenges in ChatGPT usage. In addition, a novel finding was that both autonomy and relatedness predicted students' competence in using ChatGPT to learn. Further, the determinants of students' motivation for continuous usage of ChatGPT were autonomy and relatedness. Lastly, through Importance-Performance Map Analysis (IPMA), the study established autonomy as both the most important and the highest-performing factor determining students' motivation for continuous usage of ChatGPT. The validated SDT model explained a large total variance of 70.8% in students' motivation for continuous use of ChatGPT. Based on the results, recommendations were made for theory as well as for policy and practice regarding ChatGPT usage in higher education.
{"title":"Artificial intelligence in higher education: Modelling students’ motivation for continuous use of ChatGPT based on a modified self-determination theory","authors":"Nagaletchimee Annamalai ,&nbsp;Brandford Bervell ,&nbsp;Dickson Okoree Mireku ,&nbsp;Raphael Papa Kweku Andoh","doi":"10.1016/j.caeai.2024.100346","DOIUrl":"10.1016/j.caeai.2024.100346","url":null,"abstract":"<div><div>The purpose of this study was to investigate the determinants of higher education students' motivation towards continuous usage of ChatGPT for English language learning, based on a modified Self-Determination Theory (SDT). A quantitative approach hinged on a cross-sectional survey design was adopted, and an online questionnaire used to collect data from 324 students studying English as Foreign Language (EFL) and English as a Second Language (ESL). The data were analyzed using a Partial Least Squares-Structural Equation Modelling (PLS-SEM) technique. This study established that initial ChatGPT usage determined students' perceived autonomy, competence, relatedness and challenges in ChatGPT usage. In addition, a novel finding was that, both autonomy and relatedness predicted students' competence in using ChatGPT to learn. Further, determinants of students' motivation for continuous usage of ChatGPT were autonomy and relatedness. Lastly, the study through Important-Performance Map Analysis (IPMA), established autonomy as the most important as well as the highest performing factor determining students' motivation for continuous usage of ChatGPT. The validated SDT model explained a large total variance of 70.8% in students’ motivation for continuous use of ChatGPT. Based on the results, recommendations were made for both theory as well as policy and practice towards ChatGPT usage in higher education.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"8 ","pages":"Article 100346"},"PeriodicalIF":0.0,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143145485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assisting quality assurance of examination tasks: Using a GPT model and Bayesian testing for formative assessment
Q1 Social Sciences Pub Date: 2024-12-12 DOI: 10.1016/j.caeai.2024.100343
Nico Willert, Phi Katharina Würz
Formative quality assurance in the creation of examination tasks has always been an extremely time-consuming process. Especially due to the fast-changing and short-lived content of computer science, new questions have to be created regularly, which in turn requires quality assurance. With the emergence of artificial intelligence (AI) systems such as ChatGPT and their ability to solve a range of different tasks, the question arises as to what extent this ability can also be utilized as part of a quality assurance process. One aspect of the formative quality assurance of multiple-choice questions is checking that answer alternatives are correctly classified as correct or incorrect. As AI systems inherently lack transparency and predictability in their output, we present a simplified approach using Bayesian hypothesis testing to estimate an AI's tendencies towards a classification. To evaluate the approach, the process is implemented and connected to the OpenAI API, handling inconsistent responses and other aspects that contribute to robustness and reliability. The research concludes with an evaluation using the gpt-3.5-turbo model on the examination tasks of two programming courses, providing insights into the AI's response scheme in relation to the prompt pattern used and the usability of AI for the subsequent quality assurance process.
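The Bayesian idea can be made concrete with a small sketch: query the model several times on the same answer option, then summarise its classification tendency with a Beta posterior rather than trusting a single, possibly inconsistent response. The prior, vote counts, and decision rule below are illustrative assumptions, not the authors' exact protocol:

```python
# Hedged sketch: estimate an LLM's tendency to classify an answer option as
# correct from repeated queries, using a Beta-Binomial posterior.
from scipy.stats import beta

def classification_tendency(votes: list[bool], a0: float = 1.0, b0: float = 1.0):
    """Update a Beta(a0, b0) prior with the model's 'correct' votes."""
    k = sum(votes)   # times the model said "correct"
    n = len(votes)   # total repeated queries
    posterior = beta(a0 + k, b0 + n - k)
    return posterior.mean(), posterior.interval(0.95)

# Hypothetical data: 10 repeated gpt-3.5-turbo queries, 8 of which
# classified the alternative answer as correct.
votes = [True] * 8 + [False] * 2
mean, (lo, hi) = classification_tendency(votes)
print(f"P(correct) ~ {mean:.2f}, 95% credible interval ({lo:.2f}, {hi:.2f})")
# If the credible interval excludes 0.5, the model shows a clear tendency,
# and the item's classification can be flagged or cleared for human review.
```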
{"title":"Assisting quality assurance of examination tasks: Using a GPT model and Bayesian testing for formative assessment","authors":"Nico Willert,&nbsp;Phi Katharina Würz","doi":"10.1016/j.caeai.2024.100343","DOIUrl":"10.1016/j.caeai.2024.100343","url":null,"abstract":"<div><div>Formative quality assurance in the creation of examination tasks has always been an extremely time-consuming process. Especially due to the changing and short-lived content of computer science, new questions have to be created regularly, which in turn requires quality assurance. With the emergence of artificial intelligence (AI) systems such as ChatGPT and their ability to solve a range of different tasks, the question arises as to what extent this ability can also be utilized as part of a quality assurance process. One aspect of the formative quality assurance of multiple-choice questions involves checking the correct classification of alternative answers into correct and incorrect answers. As AI systems inherently lack transparency and predictability in their output, we present a simplified approach using Bayesian hypothesis testing to estimate the tendencies of an AI towards the classification. To evaluate the approach, the process is implemented and connected to the OpenAI API to handle inconsistent responses and other aspects that contribute to the robustness and reliability. This research is concluded by an evaluation carried out by means of the gpt-3.5-turbo model, using the examination tasks of two programming courses. This provides insights into the response scheme of the AI in relation to the prompt pattern used and the usability of AI for the subsequent quality assurance process.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"8 ","pages":"Article 100343"},"PeriodicalIF":0.0,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143145843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Addressing the use of generative AI in academic writing
Q1 Social Sciences Pub Date: 2024-12-12 DOI: 10.1016/j.caeai.2024.100342
Johan van Niekerk, Petrus M.J. Delport, Iain Sutherland
The rise of generative AI has been a major disruptive force in academia, and academics are concerned about its impact on student learning. Students can use generative AI technologies, such as ChatGPT, to complete many academic tasks on their behalf. This could lead to poor academic outcomes as students use ChatGPT to complete assessments rather than engaging with the learning material. One particularly vulnerable academic activity is academic writing. This paper reports the results of an active learning intervention in which students used ChatGPT to write an academic paper. The resulting papers were then analysed and critiqued by students to highlight the weaknesses of such AI-produced papers. The research used the Technology Acceptance Model to measure changing student perceptions of the usefulness and ease of use of ChatGPT in the creation of academic text. A statistical analysis indicates the intervention's impact on students' behavioural intentions to use ChatGPT for academic writing.
{"title":"Addressing the use of generative AI in academic writing","authors":"Johan van Niekerk ,&nbsp;Petrus M.J. Delport ,&nbsp;Iain Sutherland","doi":"10.1016/j.caeai.2024.100342","DOIUrl":"10.1016/j.caeai.2024.100342","url":null,"abstract":"<div><div>The rise of generative AI has been a major disruptive force in academia. Academics are concerned about its impact on student learning. Students can use generative AI technologies, such as ChatGPT, to complete many academic tasks on their behalf. This could lead to poor academic outcomes as students use ChatGPT to complete assessments, rather than engaging with the learning material. One particularly vulnerable academic activity is academic writing. This paper reports the results of an <em>active learning</em> intervention where ChatGPT was used by students to write an academic paper. The resultant papers were then analysed and critiqued by students to highlight the weaknesses of such AI-produced papers. The research used the Technology Acceptance Model to measure changing student perceptions about the usefulness and ease of use of ChatGPT in the creation of academic text. A statistical analysis indicates the intervention's impact on their behavioural intentions on using ChatGPT for academic writing.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"8 ","pages":"Article 100342"},"PeriodicalIF":0.0,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143145841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic item generation in various STEM subjects using large language model prompting
Q1 Social Sciences Pub Date: 2024-12-09 DOI: 10.1016/j.caeai.2024.100344
Kuang Wen Chan, Farhan Ali, Joonhyeong Park, Kah Shen Brandon Sham, Erdalyn Yeh Thong Tan, Francis Woon Chien Chong, Kun Qian, Guan Kheng Sze
Large language models (LLMs) that power chatbots such as ChatGPT have capabilities across numerous domains. Teachers and students have been increasingly using chatbots in science, technology, engineering, and mathematics (STEM) subjects in various ways, including for assessment purposes. However, there has been a lack of systematic investigation into LLMs' capabilities and limitations in automatically generating items for STEM subject assessments, especially given that LLMs can hallucinate and may risk promoting misconceptions and hindering conceptual understanding. To address this, we systematically investigated LLMs' conceptual understanding and quality of working in generating question and answer pairs across various STEM subjects. We used prompt engineering on GPT-3.5 and GPT-4 with three different approaches: standard prompting, standard prompting with added chain-of-thought prompting using worked examples with steps, and chain-of-thought prompting combined with a coding language. The question and answer pairs were generated at the pre-university level in the three STEM subjects of chemistry, physics, and mathematics and evaluated by subject-matter experts. We found that both GPT-3.5 and GPT-4 generated quality questions with chain-of-thought prompting, as did GPT-4 with chain-of-thought prompting combined with a coding language. However, there were varying patterns in generating multistep answers, with differences in final-answer and intermediate-step accuracy. Interestingly, chain-of-thought prompting combined with a coding language on GPT-4 significantly outperformed the other approaches in generating correct final answers while demonstrating moderate accuracy in generating multistep answers correctly. In addition, through qualitative analysis, we identified domain-specific prompting patterns across the three STEM subjects. We then discussed how our findings aligned with, contradicted, and contributed to the current body of knowledge on automatic item generation research using LLMs, and the implications for teachers using LLMs to generate STEM assessment items.
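A minimal sketch of the second prompting approach, chain-of-thought prompting with a worked example showing steps, is given below. The example item, prompt wording, and model name are illustrative assumptions; the paper's actual prompts are subject-specific and were evaluated by subject-matter experts:

```python
# Hedged sketch of chain-of-thought item generation; the worked example and
# prompt text are assumptions, not the paper's actual prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COT_PROMPT = """Generate one pre-university physics question with a fully \
worked answer. Follow this worked example and show every step:

Example question: A 2 kg mass accelerates at 3 m/s^2. What net force acts on it?
Step 1: Recall Newton's second law, F = m * a.
Step 2: Substitute m = 2 kg and a = 3 m/s^2.
Step 3: F = 2 * 3 = 6 N.
Final answer: 6 N.

Now generate a new question on projectile motion in the same step format."""

response = client.chat.completions.create(
    model="gpt-4",  # the study compared GPT-3.5 and GPT-4
    messages=[{"role": "user", "content": COT_PROMPT}],
)
print(response.choices[0].message.content)
```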
{"title":"Automatic item generation in various STEM subjects using large language model prompting","authors":"Kuang Wen Chan ,&nbsp;Farhan Ali ,&nbsp;Joonhyeong Park ,&nbsp;Kah Shen Brandon Sham ,&nbsp;Erdalyn Yeh Thong Tan ,&nbsp;Francis Woon Chien Chong ,&nbsp;Kun Qian ,&nbsp;Guan Kheng Sze","doi":"10.1016/j.caeai.2024.100344","DOIUrl":"10.1016/j.caeai.2024.100344","url":null,"abstract":"<div><div>Large language models (LLMs) that power chatbots such as ChatGPT have capabilities across numerous domains. Teachers and students have been increasingly using chatbots in science, technology, engineering, and mathematics (STEM) subjects in various ways, including for assessment purposes. However, there has been a lack of systematic investigation into LLMs’ capabilities and limitations in automatically generating items for STEM subject assessments, especially given that LLMs can hallucinate and may risk promoting misconceptions and hindering conceptual understanding. To address this, we systematically investigated LLMs' conceptual understanding and quality of working in generating question and answer pairs across various STEM subjects. We used prompt engineering on GPT-3.5 and GPT-4 with three different approaches: standard prompting, standard prompting with added chain-of-thought prompting using worked examples with steps, and the chain-of-thought prompting with coding language. The questions and answer pairs were generated at the pre-university level in the three STEM subjects of chemistry, physics, and mathematics and evaluated by subject-matter experts. We found that LLMs generated quality questions when using the chain-of-thought prompting for both GPT-3.5 and GPT-4 and when using the chain-of-thought prompting with coding language for GPT-4 overall. However, there were varying patterns in generating multistep answers, with differences in final answer and intermediate step accuracy. An interesting finding was that the chain-of-thought prompting with coding language for GPT-4 significantly outperformed the other approaches in generating correct final answers while demonstrating moderate accuracy in generating multistep answers correctly. In addition, through qualitative analysis, we identified domain-specific prompting patterns across the three STEM subjects. We then discussed how our findings aligned with, contradicted, and contributed to the current body of knowledge on automatic item generation research using LLMs, and the implications for teachers using LLMs to generate STEM assessment items.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"8 ","pages":"Article 100344"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143145842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0