As artificial intelligence (AI) and generative AI (GenAI) technologies become increasingly integrated into everyday life, the need for validated tools that measure people's knowledge about AI grows. Here, we present the development and validation of a theoretically driven, performance-based scale for assessing AI and GenAI knowledge. The scale is grounded in a two-axis framework. One axis captures three knowledge types: content knowledge (what AI is and where it is encountered), procedural knowledge (how AI systems operate and are used), and epistemic knowledge (what features and construction processes characterize AI outputs). The other axis encompasses three knowledge domains: technology-related knowledge (AI systems), user-related knowledge (users' interaction with AI), and society-related knowledge (the social and ethical implications of AI). Based on an online survey of 800 internet-using adults from Israel, the 26-item scale was evaluated using confirmatory factor analysis, which demonstrated an acceptable model fit. It was further validated through two-stage structural equation modeling and group comparisons. Overall, the scale was found to be both valid and practically insightful: while it reproduces the expected relationships with additional constructs (e.g., trust in GenAI, attitudes toward AI) and expected differences between demographic groups, it also provides nuanced insights into the intricacies of AI knowledge. For example, the scale indicates that the relationship between trust in GenAI and knowledge about AI is grounded in both epistemic and societal knowledge. Thus, this novel tool affords more precise investigations into how different types and domains of AI knowledge relate to perceptions, behaviors, and decision-making in an AI-mediated world.
