Automated citation recommendation tools encourage questionable citations

IF 2.9, CAS Quartile 4 (Management), JCR Q1 (Information Science & Library Science). Research Evaluation. Pub Date: 2022-06-02. DOI: 10.1093/reseval/rvac016
S. Horbach, Freek Oude Maatman, W. Halffman, Wytske M. Hepkema
{"title":"Automated citation recommendation tools encourage questionable citations","authors":"S. Horbach, Freek Oude Maatman, W. Halffman, Wytske M. Hepkema","doi":"10.1093/reseval/rvac016","DOIUrl":null,"url":null,"abstract":"\n Citing practices have long been at the heart of scientific reporting, playing both socially and epistemically important functions in science. While such practices have been relatively stable over time, recent attempts to develop automated citation recommendation tools have the potential to drastically impact citing practices. We claim that, even though such tools may come with tempting advantages, their development and implementation should be conducted with caution. Describing the role of citations in science’s current publishing and social reward structures, we argue that automated citation tools encourage questionable citing practices. More specifically, we describe how such tools may lead to an increase in: perfunctory citation and sloppy argumentation; affirmation biases; and Matthew effects. In addition, a lack of transparency of the tools’ underlying algorithmic structure renders their usage problematic. Hence, we urge that the consequences of citation recommendation tools should at least be understood and assessed before any attempts to implementation or broad distribution are undertaken.","PeriodicalId":47668,"journal":{"name":"Research Evaluation","volume":"1 1","pages":""},"PeriodicalIF":2.9000,"publicationDate":"2022-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Research Evaluation","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1093/reseval/rvac016","RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Citations: 3

Abstract

Citing practices have long been at the heart of scientific reporting, playing both socially and epistemically important functions in science. While such practices have been relatively stable over time, recent attempts to develop automated citation recommendation tools have the potential to drastically impact citing practices. We claim that, even though such tools may come with tempting advantages, their development and implementation should be conducted with caution. Describing the role of citations in science’s current publishing and social reward structures, we argue that automated citation tools encourage questionable citing practices. More specifically, we describe how such tools may lead to an increase in: perfunctory citation and sloppy argumentation; affirmation biases; and Matthew effects. In addition, a lack of transparency of the tools’ underlying algorithmic structure renders their usage problematic. Hence, we urge that the consequences of citation recommendation tools should at least be understood and assessed before any attempts to implementation or broad distribution are undertaken.
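To make the Matthew-effect concern concrete, below is a minimal, hypothetical sketch (not taken from the article or any real tool) of a recommender that scores candidate references by mixing textual similarity with a citation-count prior. The scoring function, weights, and data are illustrative assumptions; the point is simply that even a moderate popularity weight lets an already highly cited but loosely related paper outrank a closer but less-cited one.

```python
# Illustrative sketch only: a toy citation recommender with a popularity prior.
# The blend of similarity and log-citations is a hypothetical design choice,
# used here to show how such a prior can amplify Matthew effects.
import math
from dataclasses import dataclass


@dataclass
class Candidate:
    title: str
    citation_count: int
    similarity: float  # assumed precomputed, e.g. cosine similarity in [0, 1]


def rank_candidates(candidates, popularity_weight=0.5):
    """Score = (1 - w) * similarity + w * normalised log-citation count."""
    max_log = max(math.log1p(c.citation_count) for c in candidates) or 1.0

    def score(c):
        popularity = math.log1p(c.citation_count) / max_log
        return (1 - popularity_weight) * c.similarity + popularity_weight * popularity

    return sorted(candidates, key=score, reverse=True)


if __name__ == "__main__":
    pool = [
        Candidate("Highly cited classic, loosely related", 5000, 0.55),
        Candidate("Recent niche paper, closely related", 12, 0.80),
    ]
    for c in rank_candidates(pool):
        print(f"{c.title}: citations={c.citation_count}, similarity={c.similarity}")
```

Under these assumed weights, the loosely related but heavily cited paper is ranked first, which is the kind of feedback loop the authors warn about.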
Source Journal

Research Evaluation (Information Science & Library Science)
CiteScore: 6.00
Self-citation rate: 18.20%
Articles published: 42
Journal description: Research Evaluation is a peer-reviewed, international journal. It ranges from the individual research project up to inter-country comparisons of research performance. Research projects, researchers, research centres, and the types of research output are all relevant. It includes public and private sectors, natural and social sciences. The term "evaluation" applies to all stages from priorities and proposals, through the monitoring of on-going projects and programmes, to the use of the results of research.
Latest articles in this journal

Correction to: Methods for measuring social and conceptual dimensions of convergence science
Correction to: Stated preference methods and STI policy studies: a foreground approach
A tribute to our dearly departed colleague and friend: An introduction to the Special Issue in memory of Prof. Paul Benneworth
The legal foundation of responsible research assessment: An overview on European Union and Italy
The conflict of impact for early career researchers planning for a future in the academy