M. Thelwall, K. Kousha, E. Stuart, Meiko Makita, Mahshid Abdoli, Paul Wilson, Jonathan M. Levitt
{"title":"跨学科研究的感知质量是否因领域而异?","authors":"M. Thelwall, K. Kousha, E. Stuart, Meiko Makita, Mahshid Abdoli, Paul Wilson, Jonathan M. Levitt","doi":"10.1108/jd-01-2023-0012","DOIUrl":null,"url":null,"abstract":"PurposeTo assess whether interdisciplinary research evaluation scores vary between fields.Design/methodology/approachThe authors investigate whether published refereed journal articles were scored differently by expert assessors (two per output, agreeing a score and norm referencing) from multiple subject-based Units of Assessment (UoAs) in the REF2021 UK national research assessment exercise. The primary raw data was 8,015 journal articles published 2014–2020 and evaluated by multiple UoAs, and the agreement rates were compared to the estimated agreement rates for articles multiply-evaluated within a single UoA.FindingsThe authors estimated a 53% agreement rate on a four-point quality scale between UoAs for the same article and a within-UoA agreement rate of 70%. This suggests that quality scores vary more between fields than within fields for interdisciplinary research. There were also some hierarchies between fields, in the sense of UoAs that tended to give higher scores for the same article than others.Research limitations/implicationsThe results apply to one country and type of research evaluation. 
The agreement rate percentage estimates are both based on untested assumptions about the extent of cross-checking scores for the same articles in the REF, so the inferences about the agreement rates are tenuous.Practical implicationsThe results underline the importance of choosing relevant fields for any type of research evaluation.Originality/valueThis is the first evaluation of the extent to which a careful peer-review exercise generates different scores for the same articles between disciplines.","PeriodicalId":47969,"journal":{"name":"Journal of Documentation","volume":" ","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2023-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Does the perceived quality of interdisciplinary research vary between fields?\",\"authors\":\"M. Thelwall, K. Kousha, E. Stuart, Meiko Makita, Mahshid Abdoli, Paul Wilson, Jonathan M. Levitt\",\"doi\":\"10.1108/jd-01-2023-0012\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"PurposeTo assess whether interdisciplinary research evaluation scores vary between fields.Design/methodology/approachThe authors investigate whether published refereed journal articles were scored differently by expert assessors (two per output, agreeing a score and norm referencing) from multiple subject-based Units of Assessment (UoAs) in the REF2021 UK national research assessment exercise. The primary raw data was 8,015 journal articles published 2014–2020 and evaluated by multiple UoAs, and the agreement rates were compared to the estimated agreement rates for articles multiply-evaluated within a single UoA.FindingsThe authors estimated a 53% agreement rate on a four-point quality scale between UoAs for the same article and a within-UoA agreement rate of 70%. This suggests that quality scores vary more between fields than within fields for interdisciplinary research. 
There were also some hierarchies between fields, in the sense of UoAs that tended to give higher scores for the same article than others.Research limitations/implicationsThe results apply to one country and type of research evaluation. The agreement rate percentage estimates are both based on untested assumptions about the extent of cross-checking scores for the same articles in the REF, so the inferences about the agreement rates are tenuous.Practical implicationsThe results underline the importance of choosing relevant fields for any type of research evaluation.Originality/valueThis is the first evaluation of the extent to which a careful peer-review exercise generates different scores for the same articles between disciplines.\",\"PeriodicalId\":47969,\"journal\":{\"name\":\"Journal of Documentation\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2023-04-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Documentation\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.1108/jd-01-2023-0012\",\"RegionNum\":3,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"INFORMATION SCIENCE & LIBRARY SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Documentation","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1108/jd-01-2023-0012","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
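The agreement rate reported in the findings is the fraction of multiply-evaluated articles that received the same quality score from both assessing units. A minimal sketch of that calculation, using hypothetical scores rather than the study's actual data:

```python
# Illustrative sketch: agreement rate between two sets of quality scores
# (1-4 scale, as in REF2021) for the same articles. All scores below are
# hypothetical, not taken from the study.

def agreement_rate(scores_a, scores_b):
    """Fraction of articles given the same score by both assessing units."""
    assert len(scores_a) == len(scores_b), "need one score pair per article"
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)

# Hypothetical scores for ten articles each evaluated by two UoAs.
uoa1 = [4, 3, 3, 2, 4, 3, 1, 2, 3, 4]
uoa2 = [4, 3, 2, 2, 4, 4, 1, 3, 3, 3]
print(agreement_rate(uoa1, uoa2))  # prints 0.6
```

A between-UoA rate well below the within-UoA baseline (53% vs 70% in the study) is what signals that field membership, not just assessor noise, shifts the scores.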
About the journal:
The scope of the Journal of Documentation is broadly information sciences, encompassing all of the academic and professional disciplines which deal with recorded information. These include, but are certainly not limited to:
■ Information science, librarianship and related disciplines
■ Information and knowledge management
■ Information and knowledge organisation
■ Information seeking and retrieval, and human information behaviour
■ Information and digital literacies