The growing adoption of emotionally adaptive Artificial Intelligence (AI) companionship applications raises critical concerns about privacy, emotional dependency, and behavioural susceptibility. These systems provide affective gratification while relying on continuous data tracking, generating tension between intimacy and surveillance. This study investigates how users' understanding of tracking mechanisms, perceived risks, and perceived benefits jointly shape behavioural susceptibility to AI influence. Integrating cognitive dissonance theory, cognitive adaptation theory, and the privacy paradox, the research develops and validates an affective-override privacy calculus that explains how emotional rationalisation mediates privacy decision-making. The study compares users and non-users of AI companionship apps using cross-sectional survey data (n = 698) and partial least squares structural equation modelling with multi-group analysis. Results show that tracking awareness acts as a cognitive safeguard for non-users but as an emotional rationalisation tool for users, amplifying perceived benefits and engagement despite recognised risks. The model demonstrates that emotional attachment can invert conventional risk–behaviour relationships, reframing awareness as context-dependent. Findings inform the ethical design of emotional AI, highlighting the need for emotional transparency, privacy literacy, and regulatory attention to affective coercion.