In today’s rapidly evolving digital media landscape, video recommender systems have become central to enhancing user experiences by delivering personalized content. However, they also raise significant concerns about dataveillance—the continuous monitoring of user behavior. This study examines the relationships among dataveillance awareness, privacy concerns, the perceived value of information disclosure, protective intentions, and self-censorship in video recommender systems. Using structural equation modeling on data from an online scenario-based experiment (N = 385), our findings reveal that heightened dataveillance awareness significantly increases privacy concerns and diminishes the perceived value of sharing information. These privacy concerns, in turn, drive users toward protective behaviors such as self-censorship. Notably, the study uncovers a selective disclosure paradox: users are more likely to self-censor when they perceive their shared information as valuable, yet greater awareness of being monitored lowers that perceived value, which in turn reduces self-censorship. Grounded in privacy calculus and protection motivation theories, this research underscores the chilling effect of dataveillance and presents a comprehensive model of how perceived privacy risks shape user engagement. By shedding light on unconscious behaviors that may hinder recommender systems’ ability to optimize their algorithms, the findings offer both theoretical insights into digital user behavior and practical recommendations for designing systems that balance personalization with careful management of perceived disclosure value, ultimately reducing self-censorship.
