Anthropomorphic design has received increasing interest in research on conversational agents (CAs) and artificial intelligence (AI). Prior research suggests that the design of an agent's language affects trust and cognitive load by making the agent more “human-like”. This research seeks to understand the impacts and limits of two dimensions of language-focused anthropomorphism: the agent's ability to empathize, signaling engagement with the user's feelings through language structure, and the agent's ability to systemize, taking agency to drive the conversation through logic. We advance existing theories of mind (ToM) with linguistic empathy theory to explain how the language structure and logic used during a conversation affect two dimensions of system trust, as well as cognitive load, through systemizing and empathizing. We conducted a behavioral online experiment in which 277 residents interacted with one of three online systems varying in their systemizing and empathizing capabilities: a menu-based interface (MUI) with no systemizing ability, a non-empathetic chatbot with systemizing ability only, and an empathetic chatbot with both systemizing and empathizing ability. For half of the participants, we induced anger to examine its moderating effects. Our results reveal that systemizing, exhibited by both chatbots, lowers cognitive effort, and that the ability to empathize through language increases perceived helpfulness. While the empathetic chatbot was generally perceived as more trustworthy, this effect reversed when users experienced anger, pointing to an uncanny-valley effect: empathizing through words has its limits. These findings advance research on anthropomorphic design and trust in CAs.