Quantitative Methods in Research Evaluation: Citation Indicators, Altmetrics, and Artificial Intelligence

Mike Thelwall
arXiv: 2407.00135 (arXiv - CS - Digital Libraries)
Published: 2024-06-28
URL: https://doi.org/arxiv-2407.00135
Citations: 0

Abstract

This book critically analyses the value of citation data, altmetrics, and artificial intelligence for supporting the research evaluation of articles, scholars, departments, universities, countries, and funders. It introduces and discusses indicators that can support research evaluation and analyses their strengths and weaknesses, as well as the generic strengths and weaknesses of using indicators for research assessment. The book includes evidence of the comparative value of citations and altmetrics in all broad academic fields, primarily through comparisons against article-level human expert judgements from the UK Research Excellence Framework 2021. It also discusses the potential applications of traditional artificial intelligence and large language models for research evaluation, with large-scale evidence for the former. The book concludes that citation data can be informative and helpful in some research fields for some research evaluation purposes, but that indicators are never accurate enough to be described as research quality measures. It also argues that AI may be helpful in limited circumstances for some types of research evaluation.
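The comparison described in the abstract — benchmarking indicators against expert quality judgements — is typically reported with rank correlations, since citation counts are highly skewed. The sketch below is purely illustrative and not from the book: the `expert_scores` (REF-style 1–4 quality ratings) and `citation_counts` values are invented, and the Spearman correlation is computed in plain Python.

```python
from statistics import mean

def rank(values):
    """Return 1-based average ranks; ties share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over any run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: expert quality scores (1-4) and citation counts
# for eight articles in one field.
expert_scores = [4, 3, 4, 2, 1, 3, 2, 4]
citation_counts = [120, 35, 80, 10, 2, 40, 8, 95]
rho = spearman(expert_scores, citation_counts)
```

A high rho on data like this would suggest the indicator tracks expert judgement in that field; in practice, the strength of this association varies greatly between fields, which is one reason the book argues indicators cannot serve as quality measures.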