Introduction: The rapid growth of AI-generated content (AIGC) on social media has led to the introduction of AI disclosure labels to enhance transparency; however, emerging technologies such as Sora2 make it difficult for users to discern synthetic from human-created content, presenting challenges for both users and platform designers.
Methods: This study investigates how different AI disclosure labels (clear, ambiguous, and no label) affect user behavior, focusing on information avoidance. We conducted two online experiments (N = 760) examining these effects in simulated social media scenarios modeled on Bilibili and TikTok.
Results: We found that ambiguous AI labels functioned as heuristic barriers, significantly increasing information avoidance compared with clear labels or no labels. Cognitive dissonance was identified as a key mediator: conflicting information produced discomfort and subsequent disengagement. Furthermore, factors such as label-content congruence and thematic relevance moderated these effects.
Discussion: These findings suggest that while AI disclosure labels are intended to improve transparency, ambiguous labels may inadvertently hinder user engagement, offering important implications for the design of transparency tools in AI-driven social media environments.
