Augmented Human Agents (AHAs)—hybrid systems that combine AI with human agents—are increasingly used in online customer service. In response to regulatory demands for AI transparency, this paper examines how disclosing an agent as an Augmented Human Agent (AHA), compared to a Human, influences customer engagement in online customer service encounters. Despite its practical relevance, little is known about the psychological effects of such disclosure. To address this gap, we explore the fundamental psychological mechanisms underlying the link between AHA disclosure and customer engagement, as well as the boundary conditions shaping this relationship. Grounded in the Human-AI Interaction Theory of Interactive Media Effects (HAII-TIME) framework, we conducted a 3 (identity disclosure of the agent: AHA vs. AI vs. Human) × 2 (communication style: social-oriented vs. task-oriented) online experiment on Prolific and developed a moderated mediation model. Our findings reveal a paradox: although the net effect of AHA disclosure on engagement is positive, the disclosure itself also acts as a cue that activates substantial negative cognitive heuristics. Specifically, AHA disclosure reduces social presence heuristics and heightens negative machine heuristics. However, a social-oriented communication style completely neutralizes these negative heuristics, acting as a protective buffer that allows the full positive effect of the AHA to be realized. In contrast, a task-oriented style exacerbates the negative mediating pathways, suppressing engagement. These findings shift the theoretical narrative from a simple "best of both worlds" model to a more nuanced compensatory model. Theoretical and practical implications for AHA disclosure and implementation are discussed in the context of online customer service encounters.