Quality to Impact, Text to Metadata: Publication and Evaluation in the Age of Metrics

M. Biagioli
DOI: 10.1086/699152
Journal: KNOW: A Journal on the Formation of Knowledge
Publication date: September 1, 2018
Publication type: Journal Article
Citation count: 16

Abstract

The evaluation of scholarly works used to be interpretively complex but technologically simple. One read and evaluated an author’s publication, manuscript, or grant proposal together with the evidence it contained or referred to. Scholars have been doing this for centuries, by themselves, from their desks, best if in the proximity of a good library. Peer review — the epitome of academic judgment and its independence — slowly grew from this model of scholarly evaluation by scholars.

Things have dramatically changed in recent years. The assessment of scholars and their work may now start and end with a simple Google Scholar search or other quantitative, auditing-like techniques that make reading publications superfluous. This is a world of evaluation not populated by scholars practicing peer review, but by a variety of methods and actors dispersed across academic institutions, data analytics companies, and media outlets tracking anything from citation counts (of books, journals, and conference abstracts) and journal impact factors, to a variety of indicators like H-index, Eigenfactor, CiteScore, SCImago Journal Rank, as well as altmetrics. We have moved from descriptive metrics used by scientists and scholars, to evaluative metrics used by outsiders who typically do not have technical knowledge of the field they seek to evaluate. This is a shift that reflects a fundamental and increasingly naturalized assumption that the number or frequency of citations received by a publication is, somehow, an index of its quality or value.