
Computers and Education Artificial Intelligence: Latest Publications

Evaluation of early student performance prediction given concept drift
Q1 Social Sciences Pub Date: 2025-01-23 DOI: 10.1016/j.caeai.2025.100369
Benedikt Sonnleitner , Tom Madou , Matthias Deceuninck , Filotas Theodosiou , Yves R. Sagaert
Forecasting student performance can help identify at-risk students and aids in recommending actions to improve their learning outcomes. This often involves elaborate machine learning pipelines, which tend to use large feature sets including behavioral data from learning management systems or demographic information. However, this complexity can lead to inaccurate predictions when concept drift occurs, or when a large number of features is used with a limited sample size. We investigate the performance of different machine learning pipelines on a data set with a change in study behavior during the COVID-19 period. We demonstrate that (i) LASSO, a shrinkage estimator that reduces complexity and overfitting, outperforms several machine learning models under these circumstances, and (ii) a linear regression relying on only two handcrafted features achieves higher accuracy and substantially less predictive bias than commonly used, more complex models with large feature sets. Due to their simplicity, these models can serve as a benchmark for future studies and as a fallback when substantial concept or covariate drift is encountered.
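As a rough illustration of the abstract's core finding, the sketch below compares ordinary least squares over a large feature set with a LASSO fit on synthetic data. This is not the paper's dataset: the sample sizes, the regularization strength `alpha=0.3`, and the choice of two informative features are assumptions made purely for the demonstration.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.metrics import mean_absolute_error

# Synthetic setting: many candidate features, but only two actually
# drive the outcome (mirroring the two handcrafted features in the paper).
rng = np.random.default_rng(0)
n_train, n_test, p = 80, 200, 60
X_train = rng.normal(size=(n_train, p))
X_test = rng.normal(size=(n_test, p))
beta = np.zeros(p)
beta[0], beta[1] = 3.0, -2.0                    # the two informative features
y_train = X_train @ beta + rng.normal(scale=2.0, size=n_train)
y_test = X_test @ beta + rng.normal(scale=2.0, size=n_test)

ols = LinearRegression().fit(X_train, y_train)  # full feature set, no shrinkage
lasso = Lasso(alpha=0.3).fit(X_train, y_train)  # shrinkage estimator

mae_ols = mean_absolute_error(y_test, ols.predict(X_test))
mae_lasso = mean_absolute_error(y_test, lasso.predict(X_test))
print(f"OLS MAE: {mae_ols:.2f}  LASSO MAE: {mae_lasso:.2f}")
```

With far more features than the sample size can support, the unregularized fit overfits and the shrinkage estimator generalizes better, which is the mechanism the abstract points to.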
Citations: 0
“I just think it is the way of the future”: Teachers' use of ChatGPT to develop motivationally-supportive math lessons
Q1 Social Sciences Pub Date: 2025-01-22 DOI: 10.1016/j.caeai.2025.100367
Teomara Rutherford, Andrew Rodrigues, Santiago Duque-Baird, Sotheara Veng, Rosa Mykyta-Chomsky, Yiqin Cao, Kristin Chisholm, Ekaterina Bergwall
ChatGPT has quickly infiltrated the educational landscape, and teachers are engaging with this tool to support their teaching activities, including lesson development. In this study, eight elementary school teachers were guided in developing motivationally-supportive mathematics lessons, using ChatGPT as a tool to support students' positive mathematics emotions, motivation, and engagement as theorized within Control Value Theory (CVT). Results reveal that teachers found ChatGPT useful for this purpose and that the implemented lessons demonstrated some success in fostering motivationally-supportive math activities. Compared to non-ChatGPT lessons, in lessons developed with ChatGPT the same teachers used more utility value messages, more non-standard examples, and more specific feedback while engaging in less lesson-irrelevant chit-chat. Within these lessons, students also reported feeling less bored and provided fewer negatively-valenced comments compared to non-ChatGPT lessons. The results have implications for the use of ChatGPT as a lesson development tool and demonstrate the success of a CVT-framed intervention.
Citations: 0
Can large language models meet the challenge of generating school-level questions?
Q1 Social Sciences Pub Date: 2025-01-21 DOI: 10.1016/j.caeai.2025.100370
Subhankar Maity, Aniket Deroy, Sudeshna Sarkar
In the realm of education, crafting appropriate questions for examinations is a meticulous and time-consuming task that is crucial for assessing students' understanding of the subject matter. This paper explores the potential of leveraging large language models (LLMs) to automate question generation in the educational domain. Specifically, we focus on generating educational questions from contexts extracted from school-level textbooks. Our study prompts LLMs such as GPT-4 Turbo, GPT-3.5 Turbo, Llama-2-70B, Llama-3.1-405B, and Gemini Pro to generate a complete set of questions for each context, potentially streamlining the question generation process for educators. We performed a human evaluation of the generated questions, assessing their coverage, grammaticality, usefulness, answerability, and relevance. Additionally, we prompted LLMs to generate questions based on Bloom's revised taxonomy, categorizing and evaluating these questions according to their cognitive complexity and learning objectives. We applied both zero-shot and eight-shot prompting techniques. These efforts provide insights into the efficacy of LLMs in automated question generation and their potential for assessing students' cognitive abilities across various school-level subjects. The results show that the eight-shot technique improves performance on the human evaluation metrics for the generated question sets and helps produce questions that are better aligned with Bloom's revised taxonomy.
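The eight-shot setup described above amounts to assembling exemplar context-question pairs ahead of the new context. The sketch below shows one way such a prompt might be built; the exemplar format and instruction wording are illustrative assumptions (the paper's exact prompt is not reproduced here), and the actual call to an LLM API is omitted.

```python
def build_few_shot_prompt(exemplars, new_context, n_shots=8):
    """Assemble a k-shot prompt from (context, questions) exemplar dicts.

    For zero-shot prompting, pass n_shots=0 so only the instruction
    and the new context remain.
    """
    parts = ["Generate a complete set of exam questions for the given context."]
    for ex in exemplars[:n_shots]:
        parts.append(f"Context: {ex['context']}\nQuestions:\n{ex['questions']}")
    # The new context ends with an open "Questions:" cue for the model to complete.
    parts.append(f"Context: {new_context}\nQuestions:")
    return "\n\n".join(parts)

# Hypothetical exemplars standing in for textbook contexts with gold questions.
demo = [{"context": f"Passage {i}", "questions": f"Q{i}?"} for i in range(10)]
prompt = build_few_shot_prompt(
    demo, "Photosynthesis converts light energy into chemical energy."
)
```

The returned string would then be sent to whichever model is being evaluated; only the number of exemplars changes between the zero-shot and eight-shot conditions.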
Citations: 0
Safer interaction with IVAs: The impact of privacy literacy training on competent use of intelligent voice assistants
Q1 Social Sciences Pub Date: 2025-01-21 DOI: 10.1016/j.caeai.2025.100372
André Markus , Maximilian Baumann , Jan Pfister , Astrid Carolus , Andreas Hotho , Carolin Wienrich
Intelligent voice assistants (IVAs) are widely used in households but can compromise privacy by inadvertently recording or encouraging personal disclosures through social cues. Against this backdrop, interventions that promote privacy literacy, sensitize users to privacy risks, and empower them to self-determine IVA interactions are becoming increasingly important. This work aims to develop and evaluate two online training modules that promote privacy literacy in the context of IVAs by providing knowledge about the institutional practices of IVA providers and clarifying users' privacy rights when using IVAs. Results show that the training modules have distinct strengths. For example, Training Module 1 increases subjective privacy literacy, raises specific concerns about IVA companies, and fosters the intention to engage more reflectively with IVAs. In contrast, Training Module 2 increases users' perceptions of control over their privacy and raises concerns about devices. Both modules share common outcomes, including increased privacy awareness, decreased trust, and social anthropomorphic perceptions of IVAs. Overall, these modules represent a significant advance in promoting the competent use of speech-based technology and provide valuable insights for future research and education on privacy in AI applications.
Citations: 0
Exploring the potential of GenAI for personalised English teaching: Learners' experiences and perceptions
Q1 Social Sciences Pub Date: 2025-01-20 DOI: 10.1016/j.caeai.2025.100371
Lucas Kohnke , Di Zou , Fan Su
Artificial intelligence has become seamlessly integrated into personal, professional, and educational spheres. Generative AI (GenAI), in particular, is revolutionising content creation in second language (L2) writing instruction through advanced machine learning models. This study examines the influence of GenAI on L2 learners' language competencies, focusing on tools commonly used by first-year English for Academic Purposes students. Through qualitative and quantitative analysis, including surveys and interviews, this research explored students’ experiences and perceptions of GenAI tools, including Grammarly and Quillbot. The findings revealed that two-thirds of the students (66.7%) regularly used these tools, which they found particularly helpful for improving grammar, writing, vocabulary, and reading skills. Interview insights indicated that the students appreciated the personalised feedback and creative support provided by GenAI tools, although they also acknowledged risks such as irrelevant feedback and potential overreliance. We suggest that while GenAI tools enhance language learning by providing personalised and adaptive support, they should complement rather than replace traditional methods. Our results underscore the need for professional development for educators and the establishment of guidelines to address academic integrity and data privacy.
Citations: 0
Perceived MOOC satisfaction: A review mining approach using machine learning and fine-tuned BERTs
Q1 Social Sciences Pub Date: 2025-01-17 DOI: 10.1016/j.caeai.2025.100366
Xieling Chen , Haoran Xie , Di Zou , Gary Cheng , Xiaohui Tao , Fu Lee Wang
This study investigates the application of machine learning and BERT models to identify topic categories in helpful online course reviews and uncover factors that influence the overall satisfaction of learners in massive open online courses (MOOCs). The research has three main objectives: (1) to assess the effectiveness of machine learning models in classifying review helpfulness, (2) to evaluate the performance of fine-tuned BERT models in identifying review topics, and (3) to explore the factors that influence learner satisfaction across various disciplines. The study uses a MOOC corpus containing 102,184 course reviews from 401 courses across 13 disciplines. The methodology involves three approaches: (1) machine learning for automatic classification of review helpfulness, (2) BERT models for automatic classification of review topics, and (3) multiple linear regression analysis to explore the factors influencing learner satisfaction. The results show that most machine learning models achieve precision, recall, and F1 scores above 80%, 99%, and 89%, respectively, in identifying review helpfulness. The fine-tuned BERT model outperforms baseline models with precision, recall, and F1 scores of 78.4%, 74.4%, and 75.9%, respectively, in classifying review topics. Additionally, the regression analysis identifies key factors affecting learner satisfaction, such as the positive influence of “Instructor” frequency and the negative impact of “Platforms and tools” and “Process”. These insights offer valuable guidance for educators, course designers, and platform developers, contributing to the optimization of MOOC offerings to better meet the evolving needs of learners.
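As a minimal stand-in for the machine-learning arm of the pipeline (the BERT fine-tuning step is omitted, as it requires the full corpus and GPU resources), a review-helpfulness classifier could be sketched with TF-IDF features and logistic regression. The toy reviews and labels below are invented for illustration; the study itself used 102,184 real reviews.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training reviews: 1 = helpful, 0 = unhelpful.
reviews = [
    "detailed examples and clear explanations from the instructor",
    "the instructor gave detailed feedback on every assignment",
    "clear structure, detailed notes, well paced lectures",
    "detailed walkthrough of each concept with clear examples",
    "bad",
    "nothing useful here",
    "waste of time",
    "boring and vague",
]
helpful = [1, 1, 1, 1, 0, 0, 0, 0]

# TF-IDF vectorization feeding a linear classifier, as one of the
# "machine learning models" a pipeline like this might include.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reviews, helpful)

pred = clf.predict(["detailed and clear explanations with examples"])[0]
```

On real data, the predicted helpfulness label would gate which reviews feed the downstream topic classification and satisfaction regression.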
Citations: 0
The critical role of trust in adopting AI-powered educational technology for learning: An instrument for measuring student perceptions
Q1 Social Sciences Pub Date: 2025-01-17 DOI: 10.1016/j.caeai.2025.100368
Tanya Nazaretsky, Paola Mejia-Domenzain, Vinitra Swamy, Jibril Frej, Tanja Käser
In recent decades, we have witnessed the democratization of AI-powered Educational Technology (AI-EdTech). However, despite the increased accessibility and evolving technological capabilities, its adoption is accompanied by significant challenges, predominantly rooted in social and psychological aspects. At the same time, limited research has been conducted on the human factors, especially trust, that influence students' readiness and willingness to adopt AI-EdTech. This study aims to bridge this gap by addressing the multidimensional nature of trust and developing a new instrument for measuring students' perceptions of adopting AI-EdTech. With 665 student responses, we employ Exploratory and Confirmatory Factor Analysis to provide evidence of the instrument's internal validity and identify four key factors influencing students' trust in and readiness to adopt AI-EdTech. We then utilize Structural Equation Modeling to explore the causal relationships among these factors, confirming that students' trust in AI-EdTech positively influences its perceived usefulness both directly and indirectly through AI-readiness. Finally, we use our instrument to analyze the 665 student responses, covering eight courses across Bachelor's and Master's degree programs. Our contribution is twofold. First, by introducing the empirically validated instrument, we address the need for more consistent and reliable assessments of trust-related factors in student adoption of AI-EdTech. Second, our findings confirm that student demographics, specifically gender and educational background, correlate significantly with trust perceptions, emphasizing the importance of addressing the specific needs of students from various demographic groups.
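The exploratory-factor-analysis step can be sketched on synthetic Likert-style responses rather than the study's actual survey data; the item count, factor count, and loading matrix below are assumptions chosen only to make the mechanics visible.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic survey: 300 respondents answer 8 items that load on 2 latent
# factors (think "trust" and "perceived usefulness"); purely illustrative.
rng = np.random.default_rng(1)
n = 300
latent = rng.normal(size=(n, 2))
loadings = np.array([
    [0.9, 0.0], [0.8, 0.1], [0.85, 0.0], [0.7, 0.2],   # factor-1 items
    [0.0, 0.9], [0.1, 0.8], [0.0, 0.85], [0.2, 0.7],   # factor-2 items
])
items = latent @ loadings.T + 0.3 * rng.normal(size=(n, 8))

# Extract two rotated factors; rows of L are estimated loadings per factor.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
L = fa.components_  # shape: (2 factors, 8 items)
```

Inspecting which items load strongly on which factor is what lets an instrument's subscales be identified before confirmatory analysis and SEM.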
Citations: 0
A cross-national assessment of artificial intelligence (AI) Chatbot user perceptions in collegiate physics education
Q1 Social Sciences Pub Date: 2025-01-10 DOI: 10.1016/j.caeai.2025.100365
Benjamin Agyare , Joseph Asare , Amani Kraishan , Isaac Nkrumah , Daniel Kwasi Adjekum
This study explores the perception of artificial intelligence (AI)-based chatbots, specifically OpenAI's ChatGPT, among physics students at four universities in Ghana, Jordan, and the United States. We utilized a survey instrument adapted from the Technology Acceptance Model (TAM) to elicit responses from 804 students. The TAM constructs Perceived Usefulness (PU), Perceived Ease of Use (PEU), Subjective Norms (SN), Attitude Towards Technology Use (ATU), Behavioral Intention (BI), and User Behavior (UB) were assessed. We also assessed perceptions of ethical use (EU) and student learning outcomes (SLO) using a Structural Equation Modeling (SEM) approach. The measurement model had good fit indices and validated most hypotheses. A path analysis (PA) of the hypothesized relationships suggested that PEU and SN are significant predictors of BI and UB, whereas PU's influence on BI was indirect. Notably, EU concerns negatively moderated the relationship between BI and UB, suggesting that stronger ethical concerns can reduce ChatGPT usage. Cross-cultural analysis uncovered significant differences in perceptions and usage patterns influenced by institutional policies, academic levels, and demographic factors. Our findings affirm TAM's robustness in predicting technology use across various cultural and institutional settings. The findings also underscore the crucial role of social influence in fostering positive user behaviors toward ChatGPT.
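The reported moderation effect can be illustrated with a small simulation: synthetic TAM-style variables are generated so that ethical-use concerns (EU) weaken the path from behavioral intention (BI) to user behavior (UB), and a regression with an interaction term recovers the negative moderation. All coefficients below are invented for the sketch, not estimates from the study.

```python
import numpy as np

# Simulate TAM-style constructs; EU dampens the BI -> UB path.
rng = np.random.default_rng(7)
n = 2000
PEU = rng.normal(size=n)
SN = rng.normal(size=n)
EU = rng.normal(size=n)
BI = 0.5 * PEU + 0.4 * SN + rng.normal(scale=0.5, size=n)
UB = (0.6 - 0.3 * EU) * BI + rng.normal(scale=0.5, size=n)

def ols(y, *cols):
    """Least-squares fit with an intercept; returns coefficient vector."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Moderated regression: UB ~ BI + EU + BI*EU.
b = ols(UB, BI, EU, BI * EU)  # b[1] ~ BI effect, b[3] ~ interaction
```

A negative interaction coefficient is exactly the pattern the abstract describes: at higher levels of ethical concern, the same intention translates into less actual use.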
Citations: 0
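The moderation result reported above (EU concerns weakening the link between behavioral intention and user behavior) is commonly tested as an interaction term in a regression. The sketch below is a minimal, self-contained illustration on synthetic data, not the study's actual model or data: the variable names, sample size, and effect sizes are all assumed for demonstration, and ordinary least squares stands in for the full SEM.

```python
import random

def fit_ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    # Forward elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution on the upper-triangular system.
    coefs = [0.0] * k
    for r in range(k - 1, -1, -1):
        coefs[r] = (b[r] - sum(A[r][c] * coefs[c] for c in range(r + 1, k))) / A[r][r]
    return coefs

rng = random.Random(42)
rows, ys = [], []
for _ in range(500):
    bi = rng.gauss(0, 1)  # behavioral intention (standardized, synthetic)
    eu = rng.gauss(0, 1)  # ethical-use concern (standardized, synthetic)
    # Simulated negative moderation: the BI effect shrinks as EU rises.
    ub = 0.5 * bi - 0.3 * bi * eu + rng.gauss(0, 0.1)
    rows.append([1.0, bi, eu, bi * eu])
    ys.append(ub)

intercept, b_bi, b_eu, b_interaction = fit_ols(rows, ys)
print(f"interaction estimate: {b_interaction:.2f}")  # close to the simulated -0.3
```

A negative coefficient on the BI x EU product term is the regression signature of the moderation pattern the abstract describes; in practice this would be estimated within the SEM alongside the measurement model.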
Integrating generative artificial intelligence in K-12 education: Examining teachers’ preparedness, practices, and barriers
Q1 Social Sciences Pub Date : 2025-01-06 DOI: 10.1016/j.caeai.2025.100363
Yin Hong Cheah , Jingru Lu , Juhee Kim
Despite the growing body of research on developing K-12 teachers' generative AI (GenAI) knowledge and skills, its integration into daily teaching practices remains underexplored. Using a snowball sampling method, this study examined the preparedness, practices, and barriers encountered by 89 U.S. teachers in the state of Idaho. Participants were predominantly White, female teachers serving in rural schools. A mixed-methods analysis of survey responses revealed that teachers were generally underprepared for integrating GenAI, with fewer than half incorporating it into their educational practices. Unlike the widespread classroom integration patterns observed with general educational technologies, teachers in this study tended to use GenAI for out-of-classroom duties (i.e., lesson preparation, assessment, and administrative tasks) rather than for real-time teaching and learning. These preferences could be attributed to key barriers teachers faced, including doubts about GenAI's ability to manage risks (i.e., technology value beliefs), reduced human interaction in instruction (i.e., pedagogical beliefs), ethical considerations, and the absence of policies and guidance. This study highlights the need to develop support systems and targeted policies to facilitate teachers' GenAI integration, offering implications for Idaho's education system and the broader U.S. context.
Computers and Education Artificial Intelligence, Volume 8, Article 100363, published 2025-01-06 (Open Access).
Citations: 0
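Snowball sampling, the recruitment method named above, grows a sample by asking each respondent to refer further participants. A minimal sketch on a synthetic contact network is shown below; the network, seed count, and referral limits are all hypothetical and do not reflect the study's actual recruitment procedure.

```python
import random
from collections import deque

def snowball_sample(contacts, seeds, referrals_per_person, max_size, rng):
    """Breadth-first referral sampling: each recruit names up to
    `referrals_per_person` new contacts until `max_size` is reached."""
    sampled, queue = set(seeds), deque(seeds)
    while queue and len(sampled) < max_size:
        person = queue.popleft()
        candidates = [c for c in contacts.get(person, []) if c not in sampled]
        for referral in rng.sample(candidates, min(referrals_per_person, len(candidates))):
            if len(sampled) >= max_size:
                break
            sampled.add(referral)
            queue.append(referral)
    return sampled

rng = random.Random(7)
# Hypothetical contact network: 200 teachers, each knowing 6 colleagues.
teachers = list(range(200))
contacts = {t: rng.sample([x for x in teachers if x != t], 6) for t in teachers}
sample = snowball_sample(contacts, seeds=[0, 1, 2], referrals_per_person=3, max_size=89, rng=rng)
print(len(sample))
```

The design choice worth noting is the one the abstract's limitation implies: because recruitment follows existing social ties from a few seeds, the resulting sample mirrors those networks (here yielding the demographically homogeneous respondent pool the authors report) rather than a random draw from the population.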
Attitudes, perceptions and AI self-efficacy in K-12 education
Q1 Social Sciences Pub Date : 2025-01-03 DOI: 10.1016/j.caeai.2024.100358
Nina Bergdahl , Jeanette Sjöberg
Access to AI-driven chatbots is prompting schools to transform. Easy access and questions of cheating are balanced against potential upsides such as individual support and time savings, and against the risk of falling behind. Insights into teachers' AI self-efficacy and attitudes towards the integration of AI-driven chatbots in education therefore warrant research. This study examines teachers' readiness to use AI-driven chatbots. A survey and poll questions were administered, yielding 312 and 406 responses, respectively, focusing on AI self-efficacy, attitudes, and perceived usefulness in education. Preliminary findings show that while teachers are generally positive about the potential of AI in education, their AI self-efficacy varies significantly with prior use of the technology, perceived relevance, and the support available to them. The study highlights the need for internal support and targeted professional development interventions, and offers practical insights for policymakers, educators, and curriculum developers to foster teacher readiness and competence in using AI-driven chatbots in their professional tasks, in and outside of class.
Computers and Education Artificial Intelligence, Volume 8, Article 100358, published 2025-01-03 (Open Access).
Citations: 0