Overview and confirmatory and exploratory factor analysis of AI literacy scale

Martin J. Koch, Carolin Wienrich, Samantha Straka, Marc Erich Latoschik, Astrid Carolus
{"title":"人工智能素养量表概述及确认性和探索性因素分析","authors":"Martin J. Koch ,&nbsp;Carolin Wienrich ,&nbsp;Samantha Straka ,&nbsp;Marc Erich Latoschik ,&nbsp;Astrid Carolus","doi":"10.1016/j.caeai.2024.100310","DOIUrl":null,"url":null,"abstract":"<div><div>Comprehensive concepts of AI literacy (AIL) and valid measures are essential for research (e.g., intervention studies) and practice (e.g., personnel selection/development) alike. To date, several scales have been published, sharing standard features but differing in some aspects. We first aim to briefly overview instruments identified from unsystematic literature research in February 2023. We identified four scales and one collection of items. We describe the instruments and compare them. We identified common themes and overlaps in the instruments and developmental procedure. We also found differences regarding scale development procedures and latent dimensions. Following this literature research, we came to the conclusion that the literature on AI literacy measurement was fragmented, and little effort was undertaken to integrate different AI literacy conceptualizations. The second focus of this study is to test the factorial structures of existing AIL measurement instruments and identify latent dimensions of AIL across all instruments. We used robust maximum-likelihood confirmatory factor analysis to test factorial structures in a joint survey of all AIL items in an English-speaking online sample (<em>N</em>=219). We found general support for all instruments' factorial structures with minor deviations from the original factorial structures for some of the instruments. In a second analysis step, to address the issue of fragmented research on AI literacy conceptualization and measurement, we used principal axis exploratory factor analysis with oblique rotation to identify latent dimensions across all items. We found four correlated latent dimensions of AIL, which were mostly interpretable as the abilities to use and interact with AI, to design/program AI (incl. in-depth technical knowledge), to perform complex cognitive operations regarding AI (e.g., ethical considerations), and a common factor for the abilities to detect AI/differentiate between AI and humans and manage persuasive influences of AI (i.e., persuasion literacy). Our findings sort the multitude of AIL instruments and reveal four latent core dimensions of AIL. Thus, they contribute importantly to the conceptual understanding of AIL that has been fragmented so far.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100310"},"PeriodicalIF":0.0000,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Overview and confirmatory and exploratory factor analysis of AI literacy scale\",\"authors\":\"Martin J. Koch ,&nbsp;Carolin Wienrich ,&nbsp;Samantha Straka ,&nbsp;Marc Erich Latoschik ,&nbsp;Astrid Carolus\",\"doi\":\"10.1016/j.caeai.2024.100310\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Comprehensive concepts of AI literacy (AIL) and valid measures are essential for research (e.g., intervention studies) and practice (e.g., personnel selection/development) alike. To date, several scales have been published, sharing standard features but differing in some aspects. We first aim to briefly overview instruments identified from unsystematic literature research in February 2023. 
We identified four scales and one collection of items. We describe the instruments and compare them. We identified common themes and overlaps in the instruments and developmental procedure. We also found differences regarding scale development procedures and latent dimensions. Following this literature research, we came to the conclusion that the literature on AI literacy measurement was fragmented, and little effort was undertaken to integrate different AI literacy conceptualizations. The second focus of this study is to test the factorial structures of existing AIL measurement instruments and identify latent dimensions of AIL across all instruments. We used robust maximum-likelihood confirmatory factor analysis to test factorial structures in a joint survey of all AIL items in an English-speaking online sample (<em>N</em>=219). We found general support for all instruments' factorial structures with minor deviations from the original factorial structures for some of the instruments. In a second analysis step, to address the issue of fragmented research on AI literacy conceptualization and measurement, we used principal axis exploratory factor analysis with oblique rotation to identify latent dimensions across all items. We found four correlated latent dimensions of AIL, which were mostly interpretable as the abilities to use and interact with AI, to design/program AI (incl. in-depth technical knowledge), to perform complex cognitive operations regarding AI (e.g., ethical considerations), and a common factor for the abilities to detect AI/differentiate between AI and humans and manage persuasive influences of AI (i.e., persuasion literacy). Our findings sort the multitude of AIL instruments and reveal four latent core dimensions of AIL. Thus, they contribute importantly to the conceptual understanding of AIL that has been fragmented so far.</div></div>\",\"PeriodicalId\":34469,\"journal\":{\"name\":\"Computers and Education Artificial Intelligence\",\"volume\":\"7 \",\"pages\":\"Article 100310\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers and Education Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666920X24001139\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers and Education Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666920X24001139","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
Citations: 0

Abstract

Comprehensive concepts of AI literacy (AIL) and valid measures are essential for research (e.g., intervention studies) and practice (e.g., personnel selection/development) alike. To date, several scales have been published that share common features but differ in some aspects. We first provide a brief overview of the instruments identified through an unsystematic literature search in February 2023. We identified four scales and one collection of items. We describe and compare these instruments, identifying common themes and overlaps in their content and development, as well as differences regarding scale development procedures and latent dimensions. Following this literature research, we concluded that the literature on AI literacy measurement was fragmented and that little effort had been made to integrate different AI literacy conceptualizations. The second focus of this study is to test the factorial structures of the existing AIL measurement instruments and to identify latent dimensions of AIL across all instruments. We used robust maximum-likelihood confirmatory factor analysis to test the factorial structures in a joint survey of all AIL items in an English-speaking online sample (N=219). We found general support for all instruments' factorial structures, with minor deviations from the original structures for some instruments. In a second analysis step, to address the fragmentation of research on AI literacy conceptualization and measurement, we used principal-axis exploratory factor analysis with oblique rotation to identify latent dimensions across all items. We found four correlated latent dimensions of AIL, largely interpretable as the ability to use and interact with AI, the ability to design/program AI (incl. in-depth technical knowledge), the ability to perform complex cognitive operations regarding AI (e.g., ethical considerations), and a common factor for the abilities to detect AI/differentiate between AI and humans and to manage persuasive influences of AI (i.e., persuasion literacy). Our findings organize the multitude of AIL instruments and reveal four latent core dimensions of AIL, thereby making an important contribution to a conceptual understanding of AIL that has so far been fragmented.
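The exploratory step described in the abstract (principal-axis factoring with oblique rotation across the pooled item set) can be illustrated with a short sketch. The snippet below is not the authors' analysis code: it uses the Python package factor_analyzer on simulated placeholder data (the item names, loading pattern, and item count are assumptions; only N=219 mirrors the reported sample size) to show how a four-factor principal-axis EFA with oblimin rotation might be run and inspected.

```python
# Hypothetical sketch: principal-axis EFA with oblique (oblimin) rotation,
# analogous to the pooled-item analysis described in the abstract.
# Data and item names are simulated placeholders, not the authors' dataset.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

rng = np.random.default_rng(42)

# Simulate Likert-like responses to a pooled item set (placeholder for the
# joint survey of all AI-literacy items; N=219 matches the reported sample).
n_respondents, n_items = 219, 30
latent = rng.normal(size=(n_respondents, 4))              # four latent factors
loadings = rng.uniform(0.4, 0.8, size=(4, n_items))       # arbitrary loading pattern
noise = rng.normal(scale=0.6, size=(n_respondents, n_items))
responses = pd.DataFrame(latent @ loadings + noise,
                         columns=[f"item_{i+1}" for i in range(n_items)])

# Check sampling adequacy before factoring.
_, kmo_overall = calculate_kmo(responses)
print(f"KMO overall: {kmo_overall:.2f}")

# Principal-axis factoring with oblimin rotation, extracting four factors
# as suggested by the exploratory analysis reported in the paper.
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="oblimin")
fa.fit(responses)

# Rotated pattern loadings per item.
loadings_df = pd.DataFrame(fa.loadings_, index=responses.columns,
                           columns=[f"F{k+1}" for k in range(4)])
print(loadings_df.round(2))

# Variance explained per factor (SS loadings, proportion, cumulative).
print(pd.DataFrame(fa.get_factor_variance(),
                   index=["ss_loadings", "prop_var", "cum_var"]).round(2))

# With an oblique rotation the factor intercorrelations are of interest;
# recent factor_analyzer versions expose them as `phi_`.
phi = getattr(fa, "phi_", None)
if phi is not None:
    print(pd.DataFrame(phi).round(2))
```

The confirmatory step reported in the abstract (robust maximum-likelihood CFA of each instrument's original factorial structure) would typically be run in a dedicated SEM package, for example lavaan in R with a robust ML estimator; that part is omitted from this sketch.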