
Scholarly Assessment Reports: Latest Publications

Research Performance Assessment Issues: The Case of Kazakhstan
Q1 Social Sciences · Pub Date: 2021-10-19 · DOI: 10.29024/sar.37
G. Alibekova, S. Özçelik, A. Satybaldin, M. Bapiyeva, T. Medeni
{"title":"Research Performance Assessment Issues: The Case of Kazakhstan","authors":"G. Alibekova, S. Özçelik, A. Satybaldin, M. Bapiyeva, T. Medeni","doi":"10.29024/sar.37","DOIUrl":"https://doi.org/10.29024/sar.37","url":null,"abstract":"","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48136375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Using Conventional Bibliographic Databases for Social Science Research: Web of Science and Scopus are not the Only Options
Q1 Social Sciences · Pub Date: 2021-08-03 · DOI: 10.29024/sar.36
E. I. Wilder, W. H. Walters
Although large citation databases such as Web of Science and Scopus are widely used in bibliometric research, they have several disadvantages, including limited availability, poor coverage of books and conference proceedings, and inadequate mechanisms for distinguishing among authors. We discuss these issues, then examine the comparative advantages and disadvantages of other bibliographic databases, with emphasis on (a) discipline-centered article databases such as EconLit, MEDLINE, PsycINFO, and SocINDEX, and (b) book databases such as Amazon.com, Books in Print, Google Books, and OCLC WorldCat. Finally, we document the methods used to compile a freely available data set that includes five-year publication counts from SocINDEX and Amazon along with a range of individual and institutional characteristics for 2,132 faculty in 426 U.S. departments of sociology. Although our methods are time-consuming, they can be readily adopted in other subject areas by investigators without access to Web of Science or Scopus (i.e., by faculty at institutions other than the top research universities). Data sets that combine bibliographic, individual, and institutional information may be especially useful for bibliometric studies grounded in disciplines such as labor economics and the sociology of professions.

Policy highlights:
- While nearly all research universities provide access to Web of Science or Scopus, these databases are available at only a small minority of undergraduate colleges. Systematic restrictions on access may result in systematic biases in the literature of scholarly communication and assessment.
- The limitations of the largest citation databases influence the kinds of research that can be most readily pursued. In particular, research problems that use exclusively bibliometric data may be preferred over those that draw on a wider range of information sources.
- Because books, conference papers, and other research outputs remain important in many fields of study, journal databases cover just one component of scholarly accomplishment. Likewise, data on publications and citation impact cannot fully account for the influence of scholarly work on teaching, practice, and public knowledge.
- The automation of data compilation processes removes opportunities for investigators to gain first-hand, in-depth understanding of the patterns and relationships among variables. In contrast, manual processes may stimulate the kind of associative thinking that can lead to new insights and perspectives.
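A data set of this kind has a simple tabular shape. Below is a minimal sketch, in Python with pandas, of how per-faculty publication counts from two sources might be merged with individual and institutional characteristics. The file names and column names (faculty.csv, socindex_counts.csv, amazon_counts.csv, faculty_id, articles_5yr, books_5yr) are hypothetical placeholders; the article's own compilation was deliberately manual rather than automated.

```python
# Minimal sketch of the shape of such a combined data set. All file and
# column names here are hypothetical, not the authors' actual schema.
import pandas as pd

# One row per faculty member: rank, PhD year, department, institution type, etc.
faculty = pd.read_csv("faculty.csv")

# Five-year publication counts gathered separately from each source.
socindex = pd.read_csv("socindex_counts.csv")  # columns: faculty_id, articles_5yr
amazon = pd.read_csv("amazon_counts.csv")      # columns: faculty_id, books_5yr

combined = (
    faculty
    .merge(socindex, on="faculty_id", how="left")
    .merge(amazon, on="faculty_id", how="left")
    .fillna({"articles_5yr": 0, "books_5yr": 0})  # no record means no counted output
)

# Department-level averages support the kind of institutional comparisons
# the authors describe.
by_dept = combined.groupby("department")[["articles_5yr", "books_5yr"]].mean()
print(by_dept.sort_values("articles_5yr", ascending=False).head())
```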
{"title":"Using Conventional Bibliographic Databases for Social Science Research: Web of Science and Scopus are not the Only Options","authors":"E. I. Wilder, W. H. Walters","doi":"10.29024/sar.36","DOIUrl":"https://doi.org/10.29024/sar.36","url":null,"abstract":"Although large citation databases such as Web of Science and Scopus are widely used in bibliometric research, they have several disadvantages, including limited availability, poor coverage of books and conference proceedings, and inadequate mechanisms for distinguishing among authors. We discuss these issues, then examine the comparative advantages and disadvantages of other bibliographic databases, with emphasis on (a) discipline-centered article databases such as EconLit, MEDLINE, PsycINFO, and SocINDEX, and (b) book databases such as Amazon.com , Books in Print, Google Books, and OCLC WorldCat. Finally, we document the methods used to compile a freely available data set that includes five-year publication counts from SocINDEX and Amazon along with a range of individual and institutional characteristics for 2,132 faculty in 426 U.S. departments of sociology. Although our methods are time-consuming, they can be readily adopted in other subject areas by investigators without access to Web of Science or Scopus (i.e., by faculty at institutions other than the top research universities). Data sets that combine bibliographic, individual, and institutional information may be especially useful for bibliometric studies grounded in disciplines such as labor economics and the sociology of professions. Policy highlights While nearly all research universities provide access to Web of Science or Scopus, these databases are available at only a small minority of undergraduate colleges. Systematic restrictions on access may result in systematic biases in the literature of scholarly communication and assessment. The limitations of the largest citation databases influence the kinds of research that can be most readily pursued. In particular, research problems that use exclusively bibliometric data may be preferred over those that draw on a wider range of information sources. Because books, conference papers, and other research outputs remain important in many fields of study, journal databases cover just one component of scholarly accomplishment. Likewise, data on publications and citation impact cannot fully account for the influence of scholarly work on teaching, practice, and public knowledge. The automation of data compilation processes removes opportunities for investigators to gain first-hand, in-depth understanding of the patterns and relationships among variables. In contrast, manual processes may stimulate the kind of associative thinking that can lead to new insights and perspectives.","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43732120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Developing a Method for Evaluating Global University Rankings
Q1 Social Sciences · Pub Date: 2021-04-28 · DOI: 10.29024/SAR.31
Elizabeth Gadd, Richard Holmes, J. Shearer
This article describes a method for providing an independent, community-sourced set of best-practice criteria with which to assess global university rankings, and identifies the extent to which a sample of six rankings met those criteria: Academic Ranking of World Universities (ARWU), CWTS Leiden, QS World University Rankings (QS WUR), Times Higher Education World University Rankings (THE WUR), U-Multirank, and US News & World Report Best Global Universities. The criteria fell into four categories: good governance, transparency, measure what matters, and rigour. The relative strengths and weaknesses of each ranking were compared. Overall, the rankings assessed fell short of all the criteria, with the greatest strengths in the area of transparency and the greatest weaknesses in measuring what matters to the communities being ranked. The ranking that most closely met the criteria was CWTS Leiden, while the THE WUR and US News rankings scored poorly across all the criteria. Suggestions for developing the ranker rating method are also described.
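To make the ranker-rating idea concrete, here is a minimal sketch of scoring rankings against criteria grouped into the article's four categories. The numeric scores are illustrative placeholders, not the article's actual ratings, and the simple unweighted average is an assumption about how category scores might be combined.

```python
# Minimal sketch of the ranker-rating idea: score each ranking against
# best-practice criteria grouped into the article's four categories.
# All numeric scores below are illustrative placeholders.
CATEGORIES = ["good governance", "transparency", "measure what matters", "rigour"]

ratings = {
    # ranking name -> score per category on a 0-1 scale (hypothetical values)
    "CWTS Leiden": {
        "good governance": 0.7,
        "transparency": 0.9,
        "measure what matters": 0.6,
        "rigour": 0.8,
    },
    "THE WUR": {
        "good governance": 0.3,
        "transparency": 0.5,
        "measure what matters": 0.2,
        "rigour": 0.3,
    },
}

for ranking, scores in ratings.items():
    overall = sum(scores[c] for c in CATEGORIES) / len(CATEGORIES)
    weakest = min(CATEGORIES, key=scores.get)
    print(f"{ranking}: overall {overall:.2f}, weakest category: {weakest}")
```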
{"title":"Developing a Method for Evaluating Global University Rankings","authors":"Elizabeth Gadd, Richard Holmes, J. Shearer","doi":"10.29024/SAR.31","DOIUrl":"https://doi.org/10.29024/SAR.31","url":null,"abstract":"Describes a method to provide an independent, community-sourced set of best practice criteria with which to assess global university rankings and to identify the extent to which a sample of six rankings, Academic Ranking of World Universities (ARWU), CWTS Leiden, QS World University Rankings (QS WUR), Times Higher Education World University Rankings (THE WUR), U-Multirank, and US News & World Report Best Global Universities, met those criteria. The criteria fell into four categories: good governance, transparency, measure what matters, and rigour. The relative strengths and weaknesses of each ranking were compared. Overall, the rankings assessed fell short of all criteria, with greatest strengths in the area of transparency and greatest weaknesses in the area of measuring what matters to the communities they were ranking. The ranking that most closely met the criteria was CWTS Leiden. Scoring poorly across all the criteria were the THE WUR and US News rankings. Suggestions for developing the ranker rating method are described.","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45315254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Assessing the Impact and Quality of Research Data Using Altmetrics and Other Indicators
Q1 Social Sciences · Pub Date: 2020-09-29 · DOI: 10.29024/SAR.13
Stacy Konkiel
Research data in all its diversity—instrument readouts, observations, images, texts, video and audio files, and so on—is the basis for most advancement in the sciences. Yet the assessment of most research programmes happens at the publication level, and data has yet to be treated like a first-class research object. How can and should the research community use indicators to understand the quality and many potential impacts of research data? In this article, we discuss the research into research data metrics, these metrics' strengths and limitations with regard to formal evaluation practices, and the possible meanings of such indicators. We acknowledge the dearth of guidance for using altmetrics and other indicators when assessing the impact and quality of research data, and suggest heuristics for policymakers and evaluators interested in doing so, in the absence of formal governmental or disciplinary policies.

Policy highlights:
- Research data is an important building block of scientific production, but efforts to develop a framework for assessing data's impacts have had limited success to date.
- Indicators like citations, altmetrics, usage statistics, and reuse metrics highlight the influence of research data upon other researchers and the public, to varying degrees. In the absence of a shared definition of "quality", varying metrics may be used to measure a dataset's accuracy, currency, completeness, and consistency.
- Policymakers interested in setting standards for assessing research data using indicators should take into account indicator availability and disciplinary variations in the data when creating guidelines for explaining and interpreting research data's impact.
- Quality metrics are context dependent: they may vary based upon discipline, data structure, and repository. For this reason, there is no agreed-upon set of indicators that can be used to measure quality.
- Citations are well suited to showcase research impact and are the most widely understood indicator. However, efforts to standardize and promote data citation practices have seen limited success, leading to varying rates of citation data availability across disciplines.
- Altmetrics can help illustrate public interest in research, but availability of altmetrics for research data is very limited.
- Usage statistics are typically understood to showcase interest in research data, but the infrastructure to standardize these measures has only recently been introduced, and not all repositories report their usage metrics to centralized data brokers like DataCite.
- Reuse metrics vary widely in terms of what kinds of reuse they measure (e.g. educational, scholarly, etc.). This category of indicator has the fewest heuristics for collection and use associated with it; think about explaining and interpreting reuse with qualitative data, wherever possible.
- All research data impact indicators should be interpreted in line with the Leiden Manifesto's principles, including accounting for disciplinary variations and data availability.
- Assessing the impact and quality of research data with quantitative indicators is not yet widely practiced, although researchers are generally supportive of the approach.
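As one concrete way to gather several of these indicators at once, the sketch below queries the DataCite REST API (named in the abstract as a centralized data broker) for a dataset DOI. The attribute names citationCount, viewCount, and downloadCount are an assumption based on DataCite's documented metadata fields and should be verified against the live API; the example DOI is hypothetical.

```python
# Minimal sketch: pulling citation and usage indicators for a dataset DOI
# from the DataCite REST API. Attribute names are an assumption based on
# DataCite's documented metadata fields; verify before relying on them.
import requests

def dataset_indicators(doi: str) -> dict:
    """Return citation, view, and download counts for a dataset DOI."""
    resp = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=10)
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    return {
        "citations": attrs.get("citationCount"),
        "views": attrs.get("viewCount"),
        "downloads": attrs.get("downloadCount"),
    }

print(dataset_indicators("10.5061/dryad.example"))  # hypothetical DOI
```

Null values here are informative in themselves: as the abstract notes, not all repositories report usage metrics to DataCite, so coverage gaps are expected.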
{"title":"Assessing the Impact and Quality of Research Data Using Altmetrics and Other Indicators","authors":"Stacy Konkiel","doi":"10.29024/SAR.13","DOIUrl":"https://doi.org/10.29024/SAR.13","url":null,"abstract":"Research data in all its diversity—instrument readouts, observations, images, texts, video and audio files, and so on—is the basis for most advancement in the sciences. Yet the assessment of most research programmes happens at the publication level, and data has yet to be treated like a first-class research object. How can and should the research community use indicators to understand the quality and many potential impacts of research data? In this article, we discuss the research into research data metrics, these metrics’ strengths and limitations with regard to formal evaluation practices, and the possible meanings of such indicators. We acknowledge the dearth of guidance for using altmetrics and other indicators when assessing the impact and quality of research data, and suggest heuristics for policymakers and evaluators interested in doing so, in the absence of formal governmental or disciplinary policies. Policy highlights Research data is an important building block of scientific production, but efforts to develop a framework for assessing data’s impacts have had limited success to date. Indicators like citations, altmetrics, usage statistics, and reuse metrics highlight the influence of research data upon other researchers and the public, to varying degrees. In the absence of a shared definition of “quality”, varying metrics may be used to measure a dataset’s accuracy, currency, completeness, and consistency. Policymakers interested in setting standards for assessing research data using indicators should take into account indicator availability and disciplinary variations in the data when creating guidelines for explaining and interpreting research data’s impact. Quality metrics are context dependent: they may vary based upon discipline, data structure, and repository. For this reason, there is no agreed upon set of indicators that can be used to measure quality. Citations are well-suited to showcase research impact and are the most widely understood indicator. However, efforts to standardize and promote data citation practices have seen limited success, leading to varying rates of citation data availability across disciplines. Altmetrics can help illustrate public interest in research, but availability of altmetrics for research data is very limited. Usage statistics are typically understood to showcase interest in research data, but infrastructure to standardize these measures have only recently been introduced, and not all repositories report their usage metrics to centralized data brokers like DataCite. Reuse metrics vary widely in terms of what kinds of reuse they measure (e.g. educational, scholarly, etc). This category of indicator has the fewest heuristics for collection and use associated with it; think about explaining and interpreting reuse with qualitative data, wherever possible. 
All research data impact indicators should be interpreted in line with the Leiden Manifesto’s principles, including accounting for disciplinary varia","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43676130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Quantification – Affordances and Limits
Q1 Social Sciences · Pub Date: 2020-08-07 · DOI: 10.29024/sar.24
John Carson
We live in a world awash in numbers. Tables, graphs, charts, Fitbit readouts, spreadsheets that overflow our screens no matter how large, economic forecasts, climate modeling, weather predictions, journal impact factors, H-indices, and the list could go on and on, still barely scratching the surface. We are measured, surveyed, and subject to constant surveillance, largely through the quantification of a dizzying array of features of ourselves and the world around us. This article draws on work in the history of the quantification and measurement of intelligence and other examples from the history of quantification to suggest that quantification and measurement should be seen not just as technical pursuits, but also as normative ones. Every act of seeing, whether through sight or numbers, is also an act of occlusion, of not-seeing. And every move to make decisions more orderly and rational by translating a question into numerical comparisons is also a move to render irrelevant and often invisible the factors that were not included. The reductions and simplifications quantifications rely on can without question bring great and important clarity, but always at a cost. Among the moral questions for the practitioner is not just whether that cost is justified, but, even more critically, who is being asked to pay it?
{"title":"Quantification – Affordances and Limits","authors":"John Carson","doi":"10.29024/sar.24","DOIUrl":"https://doi.org/10.29024/sar.24","url":null,"abstract":"We live in a world awash in numbers. Tables, graphs, charts, Fitbit readouts, spreadsheets that overflow our screens no matter how large, economic forecasts, climate modeling, weather predictions, journal impact factors, H-indices, and the list could go on and on, still barely scratching the surface. We are measured, surveyed, and subject to constant surveillance, largely through the quantification of a dizzying array of features of ourselves and the world around us. This article draws on work in the history of the quantification and measurement of intelligence and other examples from the history of quantification to suggest that quantification and measurement should be seen not just as technical pursuits, but also as normative ones. Every act of seeing, whether through sight or numbers, is also an act of occlusion, of not-seeing. And every move to make decisions more orderly and rational by translating a question into numerical comparisons is also a move to render irrelevant and often invisible the factors that were not included. The reductions and simplifications quantifications rely on can without question bring great and important clarity, but always at a cost. Among the moral questions for the practitioner is not just whether that cost is justified, but, even more critically, who is being asked to pay it?","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42452542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
The new research assessment reform in China and its implementation
Q1 Social Sciences · Pub Date: 2020-04-29 · DOI: 10.31235/osf.io/9mqzd
Lin Zhang, G. Sivertsen
A radical reform of research assessment was recently launched in China. It seeks to replace a focus on Web of Science-based indicators with a balanced combination of qualitative and quantitative research evaluation, and to strengthen the local relevance of research in China. The reform trusts institutions to implement the policy within a few months, but it does not provide the necessary national platforms for coordination, influence, and collaboration in developing shared tools and information resources, nor for agreement on the definitions, criteria, and protocols for the procedures. Based on international experiences, this article offers constructive ideas for implementing the new policy.
{"title":"The new research assessment reform in China and its implementation","authors":"Lin Zhang, G. Sivertsen","doi":"10.31235/osf.io/9mqzd","DOIUrl":"https://doi.org/10.31235/osf.io/9mqzd","url":null,"abstract":"A radical reform of research assessment was recently launched in China. It seeks to replace a focus on Web of Science-based indicators with a balanced combination of qualitative and quantitative research evaluation, and to strengthen the local relevance of research in China. It trusts the institutions to implement the policy within a few months but does not provide the necessary national platforms for coordination, influence and collaboration on developing shared tools and information resources and for agreement on definitions, criteria and protocols for the procedures. Based on international experiences, this article provides constructive ideas for the implementation of the new policy.","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49086706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 48
The Launch of the Journal Scholarly Assessment Reports
Q1 Social Sciences · Pub Date: 2019-10-31 · DOI: 10.29024/sar.1
H. Moed
This editorial outlines the scope and mission of the journal Scholarly Assessment Reports.
{"title":"The Launch of the Journal Scholarly Assessment\u0000 Reports","authors":"H. Moed","doi":"10.29024/sar.1","DOIUrl":"https://doi.org/10.29024/sar.1","url":null,"abstract":"This editorial gives an outline of the scope an mission of the journal Scholarly Assessment Reports.","PeriodicalId":52687,"journal":{"name":"Scholarly Assessment Reports","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48661442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0