S. Horbach, Freek Oude Maatman, W. Halffman, Wytske M. Hepkema
Research Evaluation (Q1, Information Science & Library Science), published 2022-06-02
DOI: 10.1093/reseval/rvac016
Citations: 3
Abstract
Citing practices have long been at the heart of scientific reporting, playing both socially and epistemically important functions in science. While such practices have been relatively stable over time, recent attempts to develop automated citation recommendation tools have the potential to drastically impact citing practices. We claim that, even though such tools may come with tempting advantages, their development and implementation should be conducted with caution. Describing the role of citations in science’s current publishing and social reward structures, we argue that automated citation tools encourage questionable citing practices. More specifically, we describe how such tools may lead to an increase in perfunctory citation and sloppy argumentation, affirmation biases, and Matthew effects. In addition, the lack of transparency of the tools’ underlying algorithmic structure renders their usage problematic. Hence, we urge that the consequences of citation recommendation tools should at least be understood and assessed before any attempt at implementation or broad distribution is undertaken.
About the journal:
Research Evaluation is a peer-reviewed, international journal. Its scope ranges from individual research projects up to inter-country comparisons of research performance. Research projects, researchers, research centres, and all types of research output are relevant, across both public and private sectors and the natural and social sciences. The term "evaluation" applies to all stages, from setting priorities and assessing proposals, through monitoring ongoing projects and programmes, to the use of the results of research.