David M. Markowitz, Jeffrey T. Hancock, Jeremy N. Bailenson
Journal of Language and Social Psychology (Q2, Communication)
Published: 2023-09-11
DOI: https://doi.org/10.1177/0261927x231200201
Linguistic Markers of Inherently False AI Communication and Intentionally False Human Communication: Evidence From Hotel Reviews
To the human eye, AI-generated outputs of large language models have increasingly become indistinguishable from human-generated outputs. Therefore, to determine the linguistic properties that separate AI-generated text from human-generated text, we used a state-of-the-art chatbot, ChatGPT, and compared how it wrote hotel reviews to human-generated counterparts across content (emotion), style (analytic writing, adjectives), and structural features (readability). Results suggested AI-generated text had a more analytic style and was more affective, more descriptive, and less readable than human-generated text. Classification accuracies of AI-generated versus human-generated texts were over 80%, far exceeding chance (∼50%). Here, we argue AI-generated text is inherently false when communicating about personal experiences that are typical of humans and differs from intentionally false human-generated text at the language level. Implications for AI-mediated communication and deception research are discussed.
Journal Introduction:
The Journal of Language and Social Psychology explores the social dimensions of language and the linguistic implications of social life. Articles are drawn from a wide range of disciplines, including linguistics, cognitive science, sociology, communication, psychology, education, and anthropology. The journal provides complete and balanced coverage of the latest developments and advances through original, full-length articles, short research notes, and special features such as Debates, Courses and Conferences, and Book Reviews.