G. Alibekova, S. Özçelik, A. Satybaldin, M. Bapiyeva, T. Medeni. "Research Performance Assessment Issues: The Case of Kazakhstan." Scholarly Assessment Reports, 2021-10-19. https://doi.org/10.29024/sar.37
Although large citation databases such as Web of Science and Scopus are widely used in bibliometric research, they have several disadvantages, including limited availability, poor coverage of books and conference proceedings, and inadequate mechanisms for distinguishing among authors. We discuss these issues, then examine the comparative advantages and disadvantages of other bibliographic databases, with emphasis on (a) discipline-centered article databases such as EconLit, MEDLINE, PsycINFO, and SocINDEX, and (b) book databases such as Amazon.com, Books in Print, Google Books, and OCLC WorldCat. Finally, we document the methods used to compile a freely available data set that includes five-year publication counts from SocINDEX and Amazon along with a range of individual and institutional characteristics for 2,132 faculty in 426 U.S. departments of sociology. Although our methods are time-consuming, they can be readily adopted in other subject areas by investigators without access to Web of Science or Scopus (i.e., by faculty at institutions other than the top research universities). Data sets that combine bibliographic, individual, and institutional information may be especially useful for bibliometric studies grounded in disciplines such as labor economics and the sociology of professions.

Policy highlights:
- While nearly all research universities provide access to Web of Science or Scopus, these databases are available at only a small minority of undergraduate colleges. Systematic restrictions on access may result in systematic biases in the literature of scholarly communication and assessment.
- The limitations of the largest citation databases influence the kinds of research that can be most readily pursued. In particular, research problems that use exclusively bibliometric data may be preferred over those that draw on a wider range of information sources.
- Because books, conference papers, and other research outputs remain important in many fields of study, journal databases cover just one component of scholarly accomplishment. Likewise, data on publications and citation impact cannot fully account for the influence of scholarly work on teaching, practice, and public knowledge.
- The automation of data compilation processes removes opportunities for investigators to gain first-hand, in-depth understanding of the patterns and relationships among variables. In contrast, manual processes may stimulate the kind of associative thinking that can lead to new insights and perspectives.
{"title":"Using Conventional Bibliographic Databases for Social Science Research: Web of Science and Scopus are not the Only Options","authors":"E. I. Wilder, W. H. Walters","doi":"10.29024/sar.36","DOIUrl":"https://doi.org/10.29024/sar.36","url":null,"abstract":"Although large citation databases such as Web of Science and Scopus are widely used in bibliometric research, they have several disadvantages, including limited availability, poor coverage of books and conference proceedings, and inadequate mechanisms for distinguishing among authors. We discuss these issues, then examine the comparative advantages and disadvantages of other bibliographic databases, with emphasis on (a) discipline-centered article databases such as EconLit, MEDLINE, PsycINFO, and SocINDEX, and (b) book databases such as Amazon.com , Books in Print, Google Books, and OCLC WorldCat. Finally, we document the methods used to compile a freely available data set that includes five-year publication counts from SocINDEX and Amazon along with a range of individual and institutional characteristics for 2,132 faculty in 426 U.S. departments of sociology. Although our methods are time-consuming, they can be readily adopted in other subject areas by investigators without access to Web of Science or Scopus (i.e., by faculty at institutions other than the top research universities). Data sets that combine bibliographic, individual, and institutional information may be especially useful for bibliometric studies grounded in disciplines such as labor economics and the sociology of professions. Policy highlights While nearly all research universities provide access to Web of Science or Scopus, these databases are available at only a small minority of undergraduate colleges. Systematic restrictions on access may result in systematic biases in the literature of scholarly communication and assessment. The limitations of the largest citation databases influence the kinds of research that can be most readily pursued. In particular, research problems that use exclusively bibliometric data may be preferred over those that draw on a wider range of information sources. Because books, conference papers, and other research outputs remain important in many fields of study, journal databases cover just one component of scholarly accomplishment. Likewise, data on publications and citation impact cannot fully account for the influence of scholarly work on teaching, practice, and public knowledge. The automation of data compilation processes removes opportunities for investigators to gain first-hand, in-depth understanding of the patterns and relationships among variables. In contrast, manual processes may stimulate the kind of associative thinking that can lead to new insights and perspectives.","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43732120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Describes a method to provide an independent, community-sourced set of best-practice criteria with which to assess global university rankings, and to identify the extent to which a sample of six rankings, the Academic Ranking of World Universities (ARWU), CWTS Leiden, QS World University Rankings (QS WUR), Times Higher Education World University Rankings (THE WUR), U-Multirank, and US News & World Report Best Global Universities, met those criteria. The criteria fell into four categories: good governance, transparency, measure what matters, and rigour. The relative strengths and weaknesses of each ranking were compared. Overall, the rankings assessed fell short of all the criteria, with the greatest strengths in the area of transparency and the greatest weaknesses in measuring what matters to the communities being ranked. The ranking that most closely met the criteria was CWTS Leiden, while the THE WUR and US News rankings scored poorly across all the criteria. Suggestions for developing the ranker rating method are described.
{"title":"Developing a Method for Evaluating Global University Rankings","authors":"Elizabeth Gadd, Richard Holmes, J. Shearer","doi":"10.29024/SAR.31","DOIUrl":"https://doi.org/10.29024/SAR.31","url":null,"abstract":"Describes a method to provide an independent, community-sourced set of best practice criteria with which to assess global university rankings and to identify the extent to which a sample of six rankings, Academic Ranking of World Universities (ARWU), CWTS Leiden, QS World University Rankings (QS WUR), Times Higher Education World University Rankings (THE WUR), U-Multirank, and US News & World Report Best Global Universities, met those criteria. The criteria fell into four categories: good governance, transparency, measure what matters, and rigour. The relative strengths and weaknesses of each ranking were compared. Overall, the rankings assessed fell short of all criteria, with greatest strengths in the area of transparency and greatest weaknesses in the area of measuring what matters to the communities they were ranking. The ranking that most closely met the criteria was CWTS Leiden. Scoring poorly across all the criteria were the THE WUR and US News rankings. Suggestions for developing the ranker rating method are described.","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45315254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research data in all its diversity—instrument readouts, observations, images, texts, video and audio files, and so on—is the basis for most advancement in the sciences. Yet the assessment of most research programmes happens at the publication level, and data has yet to be treated as a first-class research object. How can and should the research community use indicators to understand the quality and many potential impacts of research data? In this article, we discuss the research into research data metrics, these metrics' strengths and limitations with regard to formal evaluation practices, and the possible meanings of such indicators. We acknowledge the dearth of guidance for using altmetrics and other indicators when assessing the impact and quality of research data, and suggest heuristics for policymakers and evaluators interested in doing so, in the absence of formal governmental or disciplinary policies.

Policy highlights:
- Research data is an important building block of scientific production, but efforts to develop a framework for assessing data's impacts have had limited success to date.
- Indicators like citations, altmetrics, usage statistics, and reuse metrics highlight the influence of research data upon other researchers and the public, to varying degrees. In the absence of a shared definition of "quality", varying metrics may be used to measure a dataset's accuracy, currency, completeness, and consistency.
- Policymakers interested in setting standards for assessing research data using indicators should take into account indicator availability and disciplinary variations in the data when creating guidelines for explaining and interpreting research data's impact.
- Quality metrics are context dependent: they may vary based upon discipline, data structure, and repository. For this reason, there is no agreed-upon set of indicators that can be used to measure quality.
- Citations are well suited to showcase research impact and are the most widely understood indicator. However, efforts to standardize and promote data citation practices have seen limited success, leading to varying rates of citation data availability across disciplines.
- Altmetrics can help illustrate public interest in research, but the availability of altmetrics for research data is very limited.
- Usage statistics are typically understood to showcase interest in research data, but the infrastructure to standardize these measures has only recently been introduced, and not all repositories report their usage metrics to centralized data brokers like DataCite.
- Reuse metrics vary widely in terms of what kinds of reuse they measure (e.g., educational, scholarly). This category of indicator has the fewest heuristics for collection and use associated with it; consider explaining and interpreting reuse with qualitative data wherever possible.
- All research data impact indicators should be interpreted in line with the Leiden Manifesto's principles, including accounting for disciplinary variation.
{"title":"Assessing the Impact and Quality of Research Data Using Altmetrics and Other Indicators","authors":"Stacy Konkiel","doi":"10.29024/SAR.13","DOIUrl":"https://doi.org/10.29024/SAR.13","url":null,"abstract":"Research data in all its diversity—instrument readouts, observations, images, texts, video and audio files, and so on—is the basis for most advancement in the sciences. Yet the assessment of most research programmes happens at the publication level, and data has yet to be treated like a first-class research object. How can and should the research community use indicators to understand the quality and many potential impacts of research data? In this article, we discuss the research into research data metrics, these metrics’ strengths and limitations with regard to formal evaluation practices, and the possible meanings of such indicators. We acknowledge the dearth of guidance for using altmetrics and other indicators when assessing the impact and quality of research data, and suggest heuristics for policymakers and evaluators interested in doing so, in the absence of formal governmental or disciplinary policies. Policy highlights Research data is an important building block of scientific production, but efforts to develop a framework for assessing data’s impacts have had limited success to date. Indicators like citations, altmetrics, usage statistics, and reuse metrics highlight the influence of research data upon other researchers and the public, to varying degrees. In the absence of a shared definition of “quality”, varying metrics may be used to measure a dataset’s accuracy, currency, completeness, and consistency. Policymakers interested in setting standards for assessing research data using indicators should take into account indicator availability and disciplinary variations in the data when creating guidelines for explaining and interpreting research data’s impact. Quality metrics are context dependent: they may vary based upon discipline, data structure, and repository. For this reason, there is no agreed upon set of indicators that can be used to measure quality. Citations are well-suited to showcase research impact and are the most widely understood indicator. However, efforts to standardize and promote data citation practices have seen limited success, leading to varying rates of citation data availability across disciplines. Altmetrics can help illustrate public interest in research, but availability of altmetrics for research data is very limited. Usage statistics are typically understood to showcase interest in research data, but infrastructure to standardize these measures have only recently been introduced, and not all repositories report their usage metrics to centralized data brokers like DataCite. Reuse metrics vary widely in terms of what kinds of reuse they measure (e.g. educational, scholarly, etc). This category of indicator has the fewest heuristics for collection and use associated with it; think about explaining and interpreting reuse with qualitative data, wherever possible. 
All research data impact indicators should be interpreted in line with the Leiden Manifesto’s principles, including accounting for disciplinary varia","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43676130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
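Several of the indicators discussed above (citations, views, downloads) are exposed for dataset DOIs through DataCite's public REST API. The sketch below shows one way such counts might be retrieved. The endpoint and attribute names (citationCount, viewCount, downloadCount) reflect my understanding of that API and should be checked against DataCite's current documentation; coverage also depends on which repositories report usage to DataCite.

```python
# Minimal sketch: pull citation and usage counts for a dataset DOI from the
# DataCite REST API. Attribute names are assumptions and may change over time.
import requests

def dataset_indicators(doi: str) -> dict:
    """Return citation, view, and download counts reported by DataCite for a DOI."""
    resp = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=30)
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    return {
        "citations": attrs.get("citationCount"),
        "views": attrs.get("viewCount"),        # usage is reported only by participating repositories
        "downloads": attrs.get("downloadCount"),
    }

if __name__ == "__main__":
    # Placeholder DOI; substitute any DataCite-registered dataset DOI before running.
    print(dataset_indicators("10.5061/dryad.example"))
```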
We live in a world awash in numbers. Tables, graphs, charts, Fitbit readouts, spreadsheets that overflow our screens no matter how large, economic forecasts, climate modeling, weather predictions, journal impact factors, H-indices, and the list could go on and on, still barely scratching the surface. We are measured, surveyed, and subject to constant surveillance, largely through the quantification of a dizzying array of features of ourselves and the world around us. This article draws on work in the history of the quantification and measurement of intelligence and other examples from the history of quantification to suggest that quantification and measurement should be seen not just as technical pursuits, but also as normative ones. Every act of seeing, whether through sight or numbers, is also an act of occlusion, of not-seeing. And every move to make decisions more orderly and rational by translating a question into numerical comparisons is also a move to render irrelevant and often invisible the factors that were not included. The reductions and simplifications quantifications rely on can without question bring great and important clarity, but always at a cost. Among the moral questions for the practitioner is not just whether that cost is justified, but, even more critically, who is being asked to pay it?
{"title":"Quantification – Affordances and Limits","authors":"John Carson","doi":"10.29024/sar.24","DOIUrl":"https://doi.org/10.29024/sar.24","url":null,"abstract":"We live in a world awash in numbers. Tables, graphs, charts, Fitbit readouts, spreadsheets that overflow our screens no matter how large, economic forecasts, climate modeling, weather predictions, journal impact factors, H-indices, and the list could go on and on, still barely scratching the surface. We are measured, surveyed, and subject to constant surveillance, largely through the quantification of a dizzying array of features of ourselves and the world around us. This article draws on work in the history of the quantification and measurement of intelligence and other examples from the history of quantification to suggest that quantification and measurement should be seen not just as technical pursuits, but also as normative ones. Every act of seeing, whether through sight or numbers, is also an act of occlusion, of not-seeing. And every move to make decisions more orderly and rational by translating a question into numerical comparisons is also a move to render irrelevant and often invisible the factors that were not included. The reductions and simplifications quantifications rely on can without question bring great and important clarity, but always at a cost. Among the moral questions for the practitioner is not just whether that cost is justified, but, even more critically, who is being asked to pay it?","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42452542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A radical reform of research assessment was recently launched in China. It seeks to replace a focus on Web of Science-based indicators with a balanced combination of qualitative and quantitative research evaluation, and to strengthen the local relevance of research in China. The policy trusts institutions to implement it within a few months, but it does not provide the national platforms needed for coordination, influence, and collaboration: platforms for developing shared tools and information resources, and for reaching agreement on definitions, criteria, and protocols for the procedures. Based on international experiences, this article provides constructive ideas for the implementation of the new policy.
{"title":"The new research assessment reform in China and its implementation","authors":"Lin Zhang, G. Sivertsen","doi":"10.31235/osf.io/9mqzd","DOIUrl":"https://doi.org/10.31235/osf.io/9mqzd","url":null,"abstract":"A radical reform of research assessment was recently launched in China. It seeks to replace a focus on Web of Science-based indicators with a balanced combination of qualitative and quantitative research evaluation, and to strengthen the local relevance of research in China. It trusts the institutions to implement the policy within a few months but does not provide the necessary national platforms for coordination, influence and collaboration on developing shared tools and information resources and for agreement on definitions, criteria and protocols for the procedures. Based on international experiences, this article provides constructive ideas for the implementation of the new policy.","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49086706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This editorial gives an outline of the scope and mission of the journal Scholarly Assessment Reports.
{"title":"The Launch of the Journal Scholarly Assessment\u0000 Reports","authors":"H. Moed","doi":"10.29024/sar.1","DOIUrl":"https://doi.org/10.29024/sar.1","url":null,"abstract":"This editorial gives an outline of the scope an mission of the journal Scholarly Assessment Reports.","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48661442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}