Risks of abuse of large language models, like ChatGPT, in scientific publishing: Authorship, predatory publishing, and paper mills

IF 2.2 · CAS Tier 3 (Management) · JCR Q2, INFORMATION SCIENCE & LIBRARY SCIENCE · Learned Publishing · Pub Date: 2023-09-08 · DOI: 10.1002/leap.1578
Graham Kendall, Jaime A. Teixeira da Silva
{"title":"Risks of abuse of large language models, like ChatGPT, in scientific publishing: Authorship, predatory publishing, and paper mills","authors":"Graham Kendall,&nbsp;Jaime A. Teixeira da Silva","doi":"10.1002/leap.1578","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>\n </p><ul>\n \n <li>Academia is already witnessing the abuse of authorship in papers with text generated by large language models (LLMs) such as ChatGPT.</li>\n \n <li>LLM-generated text is testing the limits of publishing ethics as we traditionally know it.</li>\n \n <li>We alert the community to imminent risks of LLM technologies, like ChatGPT, for amplifying the predatory publishing ‘industry’.</li>\n \n <li>The abuse of ChatGPT for the paper mill industry cannot be over-emphasized.</li>\n \n <li>Detection of LLM-generated text is the responsibility of editors and journals/publishers.</li>\n </ul>\n </div>","PeriodicalId":51636,"journal":{"name":"Learned Publishing","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2023-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/leap.1578","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Learned Publishing","FirstCategoryId":"91","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/leap.1578","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Citations: 0

Abstract

  • Academia is already witnessing the abuse of authorship in papers with text generated by large language models (LLMs) such as ChatGPT.
  • LLM-generated text is testing the limits of publishing ethics as we traditionally know it.
  • We alert the community to imminent risks of LLM technologies, like ChatGPT, for amplifying the predatory publishing ‘industry’.
  • The abuse of ChatGPT for the paper mill industry cannot be over-emphasized.
  • Detection of LLM-generated text is the responsibility of editors and journals/publishers.
Source journal

Learned Publishing (INFORMATION SCIENCE & LIBRARY SCIENCE)
CiteScore: 4.40
Self-citation rate: 17.90%
Articles published: 72
Latest articles from this journal

  • Purchase and publish: Early career researchers and open access publishing costs
  • Issue Information
  • The promotion and implementation of open science measures among high-performing journals from Brazil, Mexico, Portugal, and Spain
  • The stock characters in the editorial boards of journals run by predatory publishers
  • Exploring named-entity recognition techniques for academic books