Diverging perceptions of artificial intelligence in higher education: A comparison of student and public assessments on risks and damages of academic performance prediction in Germany
Authors: Marco Lünich, Birte Keller, Frank Marcinkowski
Journal: Computers and Education: Artificial Intelligence, Volume 7, Article 100305
DOI: 10.1016/j.caeai.2024.100305
Published: 2024-09-30
URL: https://www.sciencedirect.com/science/article/pii/S2666920X24001085
Citations: 0
Abstract
The integration of Artificial Intelligence (AI) into higher education, particularly through Academic Performance Prediction (APP), promises enhanced educational outcomes. However, it simultaneously raises concerns regarding data privacy, potential biases, and broader socio-technical implications. Our study, focusing on Germany, a pivotal player in shaping the European Union's AI policies, seeks to understand prevailing perceptions of APP among students and the general public. Initial findings of a large standardized online survey suggest a divergence in perceptions: while students, in comparison to the general population, do not attribute a higher risk to APP in a general risk assessment, they do perceive higher societal and, in particular, individual damages from APP. Factors influencing these damage perceptions include trust in AI and personal experiences with discrimination. Students further emphasize the importance of preserving their autonomy by placing high value on self-determined data sharing and on explanations of their individual APP results. Recognizing these varied perceptions is crucial for educators, policy-makers, and higher education institutions as they navigate the intricate ethical landscape of AI in education. This understanding can inform strategies that accommodate both the potential benefits and the concerns associated with AI-driven educational tools.