The Bot Delusion. Large language models and anticipated consequences for academics’ publication and citation behavior
Oliver Wieczorek, Isabel Steinhardt, Rebecca Schmidt, Sylvi Mauermeister, Christian Schneijderberg
Futures, Volume 166, Article 103537, 1 February 2025
DOI: 10.1016/j.futures.2024.103537
https://www.sciencedirect.com/science/article/pii/S0016328724002209
Citations: 0
Abstract
The present paper discusses the extent to which Large Language Models (LLMs) may affect the scientific enterprise, reinforcing or mitigating existing structural inequalities expressed by the Matthew Effect and introducing a “bot delusion” in academia. In a theory-led thought experiment, we first focus on the academic publication and citation system and develop three scenarios of the anticipated consequences of using LLMs: reproducing content and status quo (Scenario 1), enabling content coherence evaluation (Scenario 2) and content evaluation (Scenario 3). Second, we discuss the interaction between the use of LLMs and academic (counter)norms for citation selection and their impact on the publication and citation system. Finally, we introduce communal counter-norms to capture academics’ loyal citation behavior and develop three future scenarios that academia may face when LLMs are widely used in the research process, namely status quo future of science, mixed-access future, and open science future.
Journal description:
Futures is an international, refereed, multidisciplinary journal concerned with the medium- and long-term futures of cultures and societies; science and technology; economics and politics; the environment and the planet; and individuals and humanity. Covering the methods and practices of futures studies, the journal seeks to examine possible and alternative futures of all human endeavours. Futures seeks to promote divergent and pluralistic visions, ideas and opinions about the future. The editors do not necessarily agree with the views expressed in the pages of Futures.