Pub Date: 2024-09-30 | DOI: 10.1016/j.caeai.2024.100305
Marco Lünich, Birte Keller, Frank Marcinkowski
The integration of Artificial Intelligence (AI) into higher education, particularly through Academic Performance Prediction (APP), promises enhanced educational outcomes. However, it simultaneously raises concerns regarding data privacy, potential biases, and broader socio-technical implications. Our study, focusing on Germany, a pivotal player in shaping the European Union's AI policies, seeks to understand prevailing perceptions of APP among students and the general public. Initial findings of a large standardized online survey suggest a divergence in perceptions: while students, in comparison to the general population, do not attribute a higher risk to APP in a general risk assessment, they do perceive higher societal and, in particular, individual damages from APP. Factors influencing these damage perceptions include trust in AI and personal experiences with discrimination. Students further emphasize the importance of preserving their autonomy by placing high value on self-determined data sharing and on having their individual APP explained. Recognizing these varied perceptions is crucial for educators, policy-makers, and higher education institutions as they navigate the intricate ethical landscape of AI in education. This understanding can inform strategies that accommodate both the potential benefits and the concerns associated with AI-driven educational tools.
Title: Diverging perceptions of artificial intelligence in higher education: A comparison of student and public assessments on risks and damages of academic performance prediction in Germany
Computers and Education: Artificial Intelligence, Vol. 7, Article 100305.
Pub Date: 2024-09-27 | DOI: 10.1016/j.caeai.2024.100310
Martin J. Koch, Carolin Wienrich, Samantha Straka, Marc Erich Latoschik, Astrid Carolus
Comprehensive concepts of AI literacy (AIL) and valid measures are essential for research (e.g., intervention studies) and practice (e.g., personnel selection/development) alike. To date, several scales have been published, sharing standard features but differing in some aspects. We first give a brief overview of instruments identified through an unsystematic literature search in February 2023: four scales and one collection of items. We describe and compare the instruments, identifying common themes and overlaps as well as differences in scale development procedures and latent dimensions. From this review, we concluded that the literature on AI literacy measurement is fragmented, with little effort undertaken to integrate different AI literacy conceptualizations. The second focus of this study is to test the factorial structures of existing AIL measurement instruments and to identify latent dimensions of AIL across all instruments. We used robust maximum-likelihood confirmatory factor analysis to test factorial structures in a joint survey of all AIL items in an English-speaking online sample (N = 219). We found general support for all instruments' factorial structures, with minor deviations from the original structures for some instruments. In a second analysis step, to address the fragmentation of AI literacy conceptualization and measurement, we used principal axis exploratory factor analysis with oblique rotation to identify latent dimensions across all items. We found four correlated latent dimensions of AIL, mostly interpretable as the abilities to use and interact with AI, to design/program AI (incl. in-depth technical knowledge), to perform complex cognitive operations regarding AI (e.g., ethical considerations), and a common factor for the abilities to detect AI/differentiate between AI and humans and to manage persuasive influences of AI (i.e., persuasion literacy). Our findings sort the multitude of AIL instruments and reveal four latent core dimensions of AIL, contributing to a conceptual understanding of AIL that has so far been fragmented.
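The exploratory step, principal axis factoring, can be illustrated with a minimal extraction-stage sketch (rotation omitted for brevity). This is a generic textbook algorithm, not the authors' exact analysis pipeline; function and variable names are illustrative.

```python
import numpy as np

def principal_axis_factoring(R, n_factors, n_iter=100, tol=1e-6):
    """Extract factor loadings from an item correlation matrix R by
    iterated principal axis factoring (no rotation applied)."""
    # Initial communality estimates: squared multiple correlations
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        R_reduced = R.copy()
        np.fill_diagonal(R_reduced, h2)          # reduced correlation matrix
        eigvals, eigvecs = np.linalg.eigh(R_reduced)
        order = np.argsort(eigvals)[::-1][:n_factors]
        loadings = eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0.0, None))
        h2_new = (loadings ** 2).sum(axis=1)     # updated communalities
        if np.max(np.abs(h2_new - h2)) < tol:
            break
        h2 = h2_new
    return loadings
```

An oblique rotation (e.g., oblimin or promax) would then be applied to the extracted loadings to obtain the correlated factors the study reports.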
Title: Overview and confirmatory and exploratory factor analysis of AI literacy scale
Computers and Education: Artificial Intelligence, Vol. 7, Article 100310.
Pub Date: 2024-09-27 | DOI: 10.1016/j.caeai.2024.100300
Ali Al-Zawqari, Dries Peumans, Gerd Vandersteen
Researchers have long observed relationships between educational achievement and students' demographic characteristics in physical classroom-based learning. In online education, recent studies have explored the leading factors of successful online courses and investigated how demographic features affect student achievement in the online learning environment. This motivates the use of demographic information alongside other features to predict students' academic performance. Since demographic features include protected attributes, such as gender and age, evaluating predictive models must go beyond minimizing the overall error. In this work, we investigate the use of neural networks to predict underperforming students in online courses. Our goal is not only to enhance accuracy but also to evaluate the fairness of the predictive models, a central concern in applying machine learning to education. The paper starts by analyzing the available approaches to fairness in predictive models: bias mitigation with pre-processing and in-processing methods. We show that current evaluations miss the case of partial awareness of protected features, in which the model is aware of bias on some protected attributes but not all. The in-processing method, specifically adversarial bias mitigation, shows that debiasing on some protected features can exacerbate bias on other protected features. This observation motivates our proposal of an alternative approach that enhances bias mitigation even in the partial-awareness scenario by working in the latent space. We implement the proposed solution using denoising autoencoders. The quantitative analysis used three distributions from the Open University Learning Analytics Dataset (OULAD). The results show that the latent-space method offers the best solution, maintaining accuracy while mitigating the bias of the prediction models. These results indicate that, under partial awareness, the latent-space method is superior to the adversarial bias mitigation approach.
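Evaluating fairness "beyond overall error" typically means comparing prediction rates across protected groups. A minimal sketch of one common group-fairness metric, the demographic parity gap, alongside plain accuracy; this illustrates the kind of comparison involved, not the paper's exact evaluation protocol.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rate between any two
    protected groups; 0 means the model flags all groups at equal rates."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def accuracy(y_pred, y_true):
    """Overall accuracy, the quantity fairness evaluation must go beyond."""
    return (y_pred == y_true).mean()
```

A debiased model should shrink the parity gap (here for one attribute; the partial-awareness problem arises when the gap is measured only on some attributes) without a large drop in accuracy.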
Title: Latent space bias mitigation for predicting at-risk students
Computers and Education: Artificial Intelligence, Vol. 7, Article 100300.
Pub Date: 2024-09-25 | DOI: 10.1016/j.caeai.2024.100309
Albert C.M. Yang, Ji-Yang Lin, Cheng-Yan Lin, Hiroaki Ogata
Programming has become a focal point in today's rapidly evolving educational landscape. To aid learning in this domain, we developed PyTutor, an intelligent tutoring system (ITS) designed to assist beginners in Python programming. PyTutor uses the ChatGPT model to offer continuous guidance, problem-solving hints, and detailed code explanations. It features a structured hint system for each question, covering pseudocode, cloze, basic, and advanced coding solutions. In an 11-week experiment, we compared 35 students who used PyTutor with 36 who did not. The results indicated that PyTutor is effective, particularly for students with weak programming foundations: those with lower initial knowledge exhibited higher engagement, completion rates, and success rates in in-class and after-class programming exercises. Nevertheless, we observed a potential risk of overreliance on PyTutor, which may impede the development of independent problem-solving skills; we therefore recommend balanced usage. In conclusion, PyTutor is a valuable ITS for programming education that considerably improves beginners' learning outcomes. Its tailored approach makes it a promising tool for bridging knowledge gaps and enhancing overall educational experiences in programming.
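The tiered hint structure (pseudocode, then cloze, then basic, then advanced solutions) could be organized as a simple escalation ladder. The sketch below is hypothetical, not PyTutor's actual implementation; the data layout and names are illustrative.

```python
# Hypothetical hint-escalation ladder, not PyTutor's actual code.
HINT_LEVELS = ("pseudocode", "cloze", "basic", "advanced")

def next_hint(hints, requests_so_far):
    """Return the (level, hint) pair for the current escalation step,
    capping at the most detailed level once all tiers are exhausted."""
    level = HINT_LEVELS[min(requests_so_far, len(HINT_LEVELS) - 1)]
    return level, hints[level]
```

Capping at the last tier rather than cycling keeps repeated requests from re-revealing less detailed hints; a system worried about overreliance might instead rate-limit the later tiers.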
Title: Enhancing python learning with PyTutor: Efficacy of a ChatGPT-Based intelligent tutoring system in programming education
Computers and Education: Artificial Intelligence, Vol. 7, Article 100309.
Pub Date: 2024-09-25 | DOI: 10.1016/j.caeai.2024.100307
Bernard Yaw Sekyi Acquah, Francis Arthur, Iddrisu Salifu, Emmanuel Quayson, Sharon Abam Nortey
In the ever-changing landscape of education, the integration of technology has become an inevitable force reshaping the foundations of teaching and learning. Amidst this transformative wave, Artificial Intelligence (AI) has taken center stage, promising innovative approaches and increased efficiency. Within this context, preservice teachers' behavioural intention to employ AI in lesson planning has emerged as a critical issue for examination. This study used a descriptive cross-sectional survey design and a purposive sampling technique to recruit 783 preservice teachers. Employing a dual-staged partial least squares structural equation modelling-artificial neural network (PLS-SEM-ANN) approach, the study investigated the influence of six variables on preservice teachers' intentions to incorporate AI into their lesson planning: performance expectancy, effort expectancy, habit, hedonic motivation, social influence, and facilitating conditions. Social influence emerged as the most significant positive predictor of behavioural intention; habit, performance expectancy, effort expectancy, and facilitating conditions also had substantial positive effects. Conversely, hedonic motivation had no significant effect. This study not only enhances our understanding of technology integration in pedagogy from a theoretical standpoint but also provides practical recommendations for refining educational curricula and instructional strategies that promote effective AI integration.
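In dual-staged PLS-SEM-ANN studies, the ANN stage commonly ranks predictors by normalized importance derived from the trained network's weights, for example via Garson's algorithm. The sketch below illustrates that weight-based ranking under the assumption of a single-hidden-layer network; the paper's exact procedure may differ.

```python
import numpy as np

def garson_importance(W_hidden, w_out):
    """Normalized relative importance of each input in a single-hidden-layer
    network (Garson's algorithm). W_hidden: (n_inputs, n_hidden) input-to-hidden
    weights; w_out: (n_hidden,) hidden-to-output weights."""
    contrib = np.abs(W_hidden) * np.abs(w_out)   # input contribution per hidden unit
    contrib = contrib / contrib.sum(axis=0)      # share of each input within a unit
    importance = contrib.sum(axis=1)             # aggregate shares across units
    return importance / importance.sum()         # normalize to sum to 1
```

The resulting vector orders predictors (e.g., social influence vs. habit) by their relative contribution to the predicted intention.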
Title: Preservice teachers' behavioural intention to use artificial intelligence in lesson planning: A dual-staged PLS-SEM-ANN approach
Computers and Education: Artificial Intelligence, Vol. 7, Article 100307.
Pub Date: 2024-09-24 | DOI: 10.1016/j.caeai.2024.100308
Sasithorn Chookaew, Pornchai Kitcharoen, Suppachai Howimanporn, Patcharin Panjaburee
The growing demand for artificial intelligence (AI) skills across various sectors has boosted AI-focused careers and shaped academic exploration in educational institutions. These institutions have been actively developing teaching methods that enhance practical AI applications, particularly through integrating AI with the Internet of Things (IoT), leading to the emergence of the Artificial Intelligence of Things (AIoT). This convergence promises significant advancements in AI education, addressing gaps in structured learning methods for AIoT. This study explored AIoT's application in Smart Farming (SF) and its potential to enrich AI education and sectoral advancements. The AIoT platform was designed for SF simulations, integrating environmental sensing, AI processing, and user-friendly outputs. The platform was implemented with 40 first-year computer science university students in Thailand using a one-group pre-posttest design. This approach transformed theoretical AI concepts into experiential learning through interactive activities, demonstrating AIoT's capability to increase conceptual understanding of AI, trigger AI competencies, and promote positive learning perceptions. The study therefore presents its results as indicative of the AIoT platform's potential benefits and emphasizes the need for further robust experimental research. It contributes to educational technology discussions by suggesting improvements in AIoT platform effectiveness and highlighting areas for future investigation.
Title: Fostering student competencies and perceptions through artificial intelligence of things educational platform
Computers and Education: Artificial Intelligence, Vol. 7, Article 100308.
Pub Date: 2024-09-24 | DOI: 10.1016/j.caeai.2024.100311
Jörg von Garrel, Jana Mayer
AI-based language tools such as ChatGPT have the potential to fundamentally change studying and teaching at universities. Since there have been few empirical studies on students' use of AI systems, this article analyzes the use of AI in higher education, focusing on identifying the features that matter most to students when using AI during their studies. For this purpose, a choice-based conjoint experiment was conducted through a survey of over 6300 participants from German universities. The results show that students attach particular importance to the degree of scientific rigor. The optimal package for an AI-based study tool includes citation of reliable, truthful sources; detection and correction of errors in the input; comprehensive and detailed formulation of the output; no AI-caused hallucinations; and transparency about the underlying database. These characteristics differ only slightly across group-specific analyses.
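In choice-based conjoint analysis, each attribute's relative importance is typically derived from the range of its estimated part-worth utilities. A minimal sketch of that final aggregation step; the attribute names and utility values below are hypothetical placeholders, not the study's estimates.

```python
def attribute_importance(partworths):
    """Relative importance per attribute: the range of its part-worth
    utilities divided by the sum of ranges across all attributes."""
    ranges = {attr: max(u) - min(u) for attr, u in partworths.items()}
    total = sum(ranges.values())
    return {attr: r / total for attr, r in ranges.items()}
```

An attribute whose levels swing choices the most (largest utility range) receives the largest importance share, which is how a finding like "scientific rigor matters most" is quantified.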
Title: Which features of AI-based tools are important for students? A choice-based conjoint analysis
Computers and Education: Artificial Intelligence, Vol. 7, Article 100311.
Pub Date: 2024-09-20 | DOI: 10.1016/j.caeai.2024.100304
Anouschka van Leeuwen, Marije Goudriaan, Ünal Aksu
Most higher education institutions employ study advisors to support their students. To perform this task adequately, study advisors have access to information about their students' studies. An AI-based tool that analyzes that information and predicts whether a student is at risk of study delay could be valuable in study advisors' practice. In this paper, we present a use case of how such a tool was developed (in the form of a dashboard) and which steps and considerations played a role in its responsible deployment. Three aspects are described. First, we present the timeline of the case study and zoom in on how the macro-level of the institution (where the groundwork is laid to facilitate AI systems in education) and the micro-level of the implementation of the system influenced each other. Second, we describe which stakeholders were involved and what their ethical considerations were concerning data management, algorithms, and pedagogy. Third, we describe an initial evaluation of the dashboard in terms of study advisors' experiences and offer suggestions on how to stimulate the responsible and useful implementation of a predictive modelling tool.
Title: How to responsibly deploy a predictive modelling dashboard for study advisors? A use case illustrating various stakeholder perspectives
Computers and Education: Artificial Intelligence, Vol. 7, Article 100304.
Pub Date : 2024-09-19DOI: 10.1016/j.caeai.2024.100306
Yao Fu , Zhenjie Weng
With the rapid development of artificial intelligence (AI) in recent years, a growing number of studies have examined the integration of AI into educational contexts ranging from early childhood to higher education. Although systematic reviews have widely reported the effects of AI on teaching and learning, few reviews have examined and defined responsible AI in education (AIED). To fill this gap, we conducted a convergent systematic mixed studies review to analyze key themes emerging from primary research. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we searched Scopus and Web of Science and identified 40 empirical studies that satisfied our inclusion criteria. Specifically, we used four criteria for the screening process: (1) the study's full text was available in English; (2) the study was published before April 10th, 2024 in a peer-reviewed journal or conference proceedings; (3) the study was primary research that collected original data and applied a qualitative, quantitative, or mixed-methods methodology; and (4) the study had a clear focus on ethical and/or responsible AI in one or multiple educational context(s). Our findings identified essential stakeholders and characteristics of responsible AI in K-20 educational contexts and expanded understanding of responsible human-centered AI (HCAI). We identified characteristics vital to HCAI: Fairness and Equity, Privacy and Security, Non-maleficence and Beneficence, Agency and Autonomy, and Transparency and Intelligibility. In addition, we provided suggestions on how to achieve responsible HCAI via collaborative efforts of stakeholders, including users (e.g., students and educators), developers, researchers, and policy- and decision-makers.
{"title":"Navigating the ethical terrain of AI in education: A systematic review on framing responsible human-centered AI practices","authors":"Yao Fu , Zhenjie Weng","doi":"10.1016/j.caeai.2024.100306","DOIUrl":"10.1016/j.caeai.2024.100306","url":null,"abstract":"<div><div>With the rapid development of artificial intelligence (AI) in recent years, there has been an increasing number of studies on integrating AI in various educational contexts, ranging from early childhood to higher education. Although systematic reviews have widely reported the effects of AI on teaching and learning, limited reviews have examined and defined responsible AI in education (AIED). To fill this gap, we conducted a convergent systematic mixed studies review to analyze key themes emerging from primary research. Following the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines, we searched Scopus and Web of Science and identified 40 empirical studies that satisfied our inclusion criteria. Specifically, we used four criteria for the screening process: (1) the study's full text was available in English; (2) the study was published before April 10th, 2024 in peer-reviewed journals or conference proceedings; (3) the study was primary research that collected original data and applied qualitative, quantitative, or mixed-methods as the study methodology; and (4) the study had a clear focus on ethical and/or responsible AI in one or multiple educational context(s). Our findings identified essential stakeholders and characteristics of responsible AI in K-20 educational contexts and expanded understanding of responsible human-centered AI (HCAI). We unveiled characteristics vital to HCAI, encompassing Fairness and Equity, Privacy and Security, Non-maleficence and Beneficence, Agency and Autonomy, and Transparency and Intelligibility. 
In addition, we provided suggestions on how to achieve responsible HCAI via collaborative efforts of stakeholders, including roles of users (e.g., students and educators), developers, researchers, and policy and decision-makers.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100306"},"PeriodicalIF":0.0,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666920X24001097/pdfft?md5=c9dd430227b9715d8d56e7a8bbfc0e2a&pid=1-s2.0-S2666920X24001097-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142314947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
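The four-step screening process described in the abstract above can be expressed as a simple filter over study records. This is a hypothetical sketch for illustration only: the field names, record structure, and example data are assumptions, not part of the review's actual tooling.

```python
# Hypothetical sketch of the review's four PRISMA inclusion criteria,
# applied as a filter over candidate study records.
# Field names and example records are illustrative assumptions.
from datetime import date

CUTOFF = date(2024, 4, 10)  # studies published before April 10th, 2024
METHODS = {"qualitative", "quantitative", "mixed-methods"}

def meets_criteria(study):
    """Return True if a study record passes all four inclusion criteria."""
    return (
        study["full_text_english"]                      # (1) full text available in English
        and study["pub_date"] < CUTOFF                  # (2) published before the cutoff ...
        and study["peer_reviewed"]                      #     ... in a peer-reviewed venue
        and study["primary_research"]                   # (3) primary research with original data
        and study["method"] in METHODS                  #     using an accepted methodology
        and study["focus_responsible_aied"]             # (4) clear focus on ethical/responsible AIED
    )

candidates = [
    {"full_text_english": True, "pub_date": date(2023, 6, 1), "peer_reviewed": True,
     "primary_research": True, "method": "mixed-methods", "focus_responsible_aied": True},
    {"full_text_english": True, "pub_date": date(2024, 6, 1), "peer_reviewed": True,
     "primary_research": True, "method": "quantitative", "focus_responsible_aied": True},
]
included = [s for s in candidates if meets_criteria(s)]
print(len(included))  # → 1 (the second study fails the publication-date cutoff)
```

In the actual review this filtering was of course performed by the authors during screening; the sketch only makes the conjunction of the four criteria explicit.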
Pub Date : 2024-09-19DOI: 10.1016/j.caeai.2024.100303
Lisa Herrmann, Jonas Weigert
The use of computational tools to predict academic success has become increasingly popular. Machine learning algorithms trained on past study histories have been shown to provide valid predictions. However, given known biases and unfairness in algorithms, these predictions deserve closer scrutiny. This paper explores the extent to which the predictive accuracy of academic success varies between specific groups of students, focusing on traditional students and non-traditional students (NTS), who have not acquired a higher education entrance qualification at school. In a case study, we compare several popular algorithms and their prediction quality and investigate whether misclassified NTS show positive or negative biases. Results revealed that the accuracy of predicting academic success for NTS was significantly lower than when considering all students as a whole. Due to small case numbers, the direction of the bias cannot be determined precisely. The study emphasizes that the possibility of bias must always be considered when predicting study success, and that the use of such tools must ensure there are no undesirable biases that could disadvantage certain students.
{"title":"AI-based prediction of academic success: Support for many, disadvantage for some?","authors":"Lisa Herrmann, Jonas Weigert","doi":"10.1016/j.caeai.2024.100303","DOIUrl":"10.1016/j.caeai.2024.100303","url":null,"abstract":"<div><div>The use of computational tools to predict academic success has become increasingly popular. Machine learning algorithms, trained on past study histories, have been shown to provide valid predictions. However, knowing about biases and unfairness in algorithms, one should take a closer look at these predictions. This paper explores the extent to which the predictive accuracy of academic success varies between specific groups of students, focusing on traditional and non-traditional students (NTS), who have not acquired a higher education entrance qualification at school. In a case study the study compares several popular algorithms and their prediction quality, and investigates whether misclassified NTS show positive or negative biases. Results revealed that the accuracy of predicting academic success for NTS was significantly lower than when considering all students as a whole. The direction of the distortion cannot be determined exactly due to small case numbers. 
The study emphasizes that the possibility of bias always has to be considered when predicting study success, and the use of such tools must ensure there are no undesirable biases that could affect certain students.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100303"},"PeriodicalIF":0.0,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666920X24001061/pdfft?md5=586423a6b9f5f5318134629961734dad&pid=1-s2.0-S2666920X24001061-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142310457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
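The core fairness check in the study above (comparing overall prediction accuracy against accuracy for a specific subgroup such as NTS) can be sketched as follows. The labels, predictions, and group assignments are synthetic and purely illustrative; the paper's actual algorithms, dataset, and results are not reproduced here.

```python
# Illustrative sketch: disaggregating prediction accuracy by student subgroup,
# as in a fairness check for non-traditional students (NTS).
# All data below is synthetic; group names and values are assumptions.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy_by_group(y_true, y_pred, groups):
    """Accuracy computed separately for each group label."""
    result = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        result[g] = accuracy([y_true[i] for i in idx], [y_pred[i] for i in idx])
    return result

# Synthetic example: 1 = academic success, 0 = at risk / dropout.
y_true = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 1, 0, 1, 1, 0, 0, 1, 1, 1]
groups = ["trad", "trad", "trad", "trad", "trad",
          "nts", "nts", "nts", "nts", "nts"]

print(accuracy(y_true, y_pred))                    # overall accuracy (0.7 here)
print(accuracy_by_group(y_true, y_pred, groups))   # per-group accuracies
```

In this toy example the subgroup accuracy for "nts" comes out lower than for "trad", mirroring the kind of disparity the study reports; with real data one would additionally inspect the direction of the misclassifications (false positives vs. false negatives), which the paper notes could not be determined reliably due to small case numbers.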