Evaluating the Accuracy of scite, a Smart Citation Index

Caitlin Bakker, Nicole Theis-Mahon, Sarah Jane Brown
{"title":"Evaluating the Accuracy of scite, a Smart Citation Index","authors":"Caitlin Bakker, Nicole Theis-Mahon, Sarah Jane Brown","doi":"10.18060/26528","DOIUrl":null,"url":null,"abstract":"Objectives: Citations do not always equate endorsement, therefore it is important to understand the context of a citation. Researchers may heavily rely on a paper they cite, they may refute it entirely, or they may mention it only in passing, so an accurate classification of a citation is valuable for researchers and users. While AI solutions have emerged to provide a more nuanced meaning, the accuracy of these tools has yet to be determined. This project seeks to assess the accuracy of scite in assessing the meaning of citations in a sample of publications. Methods: Using a previously established sample of systematic reviews that cited retracted publications, we conducted known item searching in scite, a tool that uses machine learning to categorize the meaning of citations. scite's interpretation of the citation's meaning was recorded, as was our assessment of the citation’s meaning. Citations were classified as mentioning, supporting or contrasting. Recall, precision, and f-measure were calculated to describe the accuracy of scite's assessment in comparison to human assessment. Results: From the original sample of 324 citations, 98 citations were classified in scite. Of these, scite found that 2 were supporting and 96 were mentioning, while we determined that 42 were supporting, 39 were mentioning, and 17 were contrasting. Supporting citations had high precision and low recall, while mentioning citations had high recall and low precision. F-measures ranged between 0.0 and 0.58, representing low classification accuracy. Conclusions: In our sample, the overall accuracy of scite's assessments was low. scite was less able to classify supporting and contrasting citations, and instead labeled them as mentioning. Although there is potential and enthusiasm for AI to make engagement with literature easier and more immediate, the results generated from AI differed significantly from the human interpretation.","PeriodicalId":90517,"journal":{"name":"Hypothesis : the newsletter of the Research Section of MLA","volume":"360 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Hypothesis : the newsletter of the Research Section of MLA","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18060/26528","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Objectives: Citations do not always equate to endorsement, so it is important to understand the context in which a citation is made. Researchers may rely heavily on a paper they cite, refute it entirely, or mention it only in passing, so an accurate classification of a citation is valuable to researchers and other users. While AI tools have emerged that promise a more nuanced reading of citations, their accuracy has yet to be determined. This project assesses the accuracy of scite in classifying the meaning of citations in a sample of publications.

Methods: Using a previously established sample of systematic reviews that cited retracted publications, we conducted known-item searching in scite, a tool that uses machine learning to categorize the meaning of citations. We recorded scite's classification of each citation alongside our own assessment of its meaning. Citations were classified as mentioning, supporting, or contrasting. Recall, precision, and F-measure were calculated to describe the accuracy of scite's classifications relative to human assessment.

Results: Of the original sample of 324 citations, 98 were classified in scite. Of these, scite classified 2 as supporting and 96 as mentioning, while we determined that 42 were supporting, 39 were mentioning, and 17 were contrasting. Supporting citations had high precision and low recall, while mentioning citations had high recall and low precision. F-measures ranged between 0.0 and 0.58, indicating low classification accuracy.

Conclusions: In our sample, the overall accuracy of scite's classifications was low. scite was less able to identify supporting and contrasting citations, instead labeling them as mentioning. Although there is potential and enthusiasm for AI to make engagement with the literature easier and more immediate, the results generated by AI differed significantly from human interpretation.
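The per-class recall, precision, and F-measure described in the Methods can be computed directly from the paired human and scite labels. The sketch below is a minimal illustration of those calculations; the label sequences are made-up examples chosen to mirror the reported pattern (supporting: high precision, low recall; mentioning: high recall, low precision; contrasting: F = 0.0), not the study's actual annotations.

```python
def per_class_metrics(human, tool,
                      labels=("supporting", "mentioning", "contrasting")):
    """Precision, recall, and F-measure for each citation class,
    treating the human labels as ground truth."""
    results = {}
    for label in labels:
        # Count agreements and disagreements for this class.
        tp = sum(1 for h, t in zip(human, tool) if h == label and t == label)
        fp = sum(1 for h, t in zip(human, tool) if h != label and t == label)
        fn = sum(1 for h, t in zip(human, tool) if h == label and t != label)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        # F-measure is the harmonic mean of precision and recall.
        f = (2 * precision * recall / (precision + recall)
             if (precision + recall) else 0.0)
        results[label] = {"precision": precision, "recall": recall, "f": f}
    return results

# Hypothetical label sequences for illustration only (not the study data).
human = ["supporting", "mentioning", "contrasting", "supporting", "mentioning"]
tool  = ["mentioning", "mentioning", "mentioning", "supporting", "mentioning"]
print(per_class_metrics(human, tool))
```

In this toy example, "supporting" yields precision 1.0 and recall 0.5, "mentioning" yields precision 0.5 and recall 1.0, and "contrasting" yields 0.0 across the board, reproducing the shape of the study's findings on fabricated inputs.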