{"title":"The Turing test of online reviews: Can we tell the difference between human-written and GPT-4-written online reviews?","authors":"Balázs Kovács","doi":"10.1007/s11002-024-09729-3","DOIUrl":null,"url":null,"abstract":"<p>Online reviews serve as a guide for consumer choice. With advancements in large language models (LLMs) and generative AI, the fast and inexpensive creation of human-like text may threaten the feedback function of online reviews if neither readers nor platforms can differentiate between human-written and AI-generated content. In two experiments, we found that humans cannot recognize AI-written reviews. Even with monetary incentives for accuracy, both Type I and Type II errors were common: human reviews were often mistaken for AI-generated reviews, and even more frequently, AI-generated reviews were mistaken for human reviews. This held true across various ratings, emotional tones, review lengths, and participants’ genders, education levels, and AI expertise. Younger participants were somewhat better at distinguishing between human and AI reviews. An additional study revealed that current AI detectors were also fooled by AI-generated reviews. We discuss the implications of our findings on trust erosion, manipulation, regulation, consumer behavior, AI detection, market structure, innovation, and review platforms.</p>","PeriodicalId":48068,"journal":{"name":"Marketing Letters","volume":"17 1","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Marketing Letters","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1007/s11002-024-09729-3","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"BUSINESS","Score":null,"Total":0}
Citations: 0
Abstract
Online reviews serve as a guide for consumer choice. With advancements in large language models (LLMs) and generative AI, the fast and inexpensive creation of human-like text may threaten the feedback function of online reviews if neither readers nor platforms can differentiate between human-written and AI-generated content. In two experiments, we found that humans cannot recognize AI-written reviews. Even with monetary incentives for accuracy, both Type I and Type II errors were common: human reviews were often mistaken for AI-generated reviews, and even more frequently, AI-generated reviews were mistaken for human reviews. This held true across various ratings, emotional tones, review lengths, and participants’ genders, education levels, and AI expertise. Younger participants were somewhat better at distinguishing between human and AI reviews. An additional study revealed that current AI detectors were also fooled by AI-generated reviews. We discuss the implications of our findings for trust erosion, manipulation, regulation, consumer behavior, AI detection, market structure, innovation, and review platforms.
Journal description:
Marketing Letters: A Journal of Research in Marketing publishes high-quality, shorter papers (under 5,000 words including abstract, main text, and references, equivalent to about 20 double-spaced pages in 12-point Times New Roman) on marketing, with an emphasis on immediacy and current interest. The journal offers a medium for the truly rapid publication of research results.
The focus of Marketing Letters is on empirical findings, methodological papers, and theoretical and conceptual insights across areas of research in marketing.
Marketing Letters is required reading for anyone working in marketing science, consumer research, methodology, and marketing strategy and management.
The key subject areas and topics covered in Marketing Letters are: choice models, consumer behavior, consumer research, management science, market research, sales and advertising, marketing management, marketing research, marketing science, psychology, and statistics.
Officially cited as: Mark Lett