The ability of ChatGPT to produce grammatically accurate and coherent texts has generated considerable anxiety among those concerned that students might use such large language models (LLMs) to write their assignments. The extent to which LLMs can mimic human writers is beginning to be explored, but we know little about their ability to use nominal resources to create effective academic texts. This study investigates metadiscursive nouns in argumentative essays, comparing how ChatGPT and university students employ these devices to organise text, express stance, and construct persuasive arguments. Analysing 145 essays from each source, we examine the syntactic patterns, interactive functions, and interactional uses of metadiscursive nouns. The analysis reveals that while overall frequencies are similar, ChatGPT shows a marked preference for simpler syntactic constructions (particularly the determiner + N pattern) and relies heavily on anaphoric reference, whereas students display a more balanced syntactic distribution and make greater use of cataphoric reference. Interactionally, ChatGPT favours manner nouns for descriptive precision, while students prefer status nouns for evaluative reasoning and evidential nouns for empirical grounding. These findings suggest that, while structurally coherent, LLM-generated texts often lack the rhetorical flexibility and evaluative sophistication of human academic writing, offering valuable insights for EAP pedagogy.