Expression of concern: Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI)
M. Ángeles Oviedo-García
Research Evaluation, 2021-09-01. doi:10.1093/reseval/rvab030

The extent to which predatory journals can harm scientific practice increases as the number of such journals expands, in so far as they undermine scientific integrity, quality, and credibility, especially if those journals leak into prestigious databases. Journal Citation Reports (JCR), a reference for the assessment of researchers and for grant-making decisions, is used as a standard whitelist, in so far as the selectivity of a JCR-indexed journal adds a legitimacy of sorts to the articles that the journal publishes. The Multidisciplinary Digital Publishing Institute (MDPI), once included on Beall's list of potential, possible or probable predatory scholarly open-access publishers, had 53 journals ranked in the 2018 JCR annual report. These journals are analysed not only against the formal criteria for the identification of predatory journals but, taking a step further, also with regard to their self-citations and the sources of those self-citations in 2018 and 2019. The results showed that self-citation rates increased and were far higher than those of the leading journals in the corresponding JCR categories. Moreover, an increasingly high rate of citations from other MDPI journals was observed. The formal criteria, together with the analysis of the citation patterns of the 53 journals under analysis, singled them out as predatory journals. Hence, specific recommendations are given to researchers, educational institutions and prestigious databases, advising them to review their working relations with those sorts of journals.
Looking for evidence of research impact and use: A qualitative study of an Australian research-policy system
Robyn S Newson, L. Rychetnik, L. King, A. Milat, A. Bauman
Research Evaluation, 2021-08-20. doi:10.1093/reseval/rvab017

Current assessments of research impact have been criticized for capturing what can be easily counted, not what actually counts. To empirically examine this issue, we approached measuring research impact from two directions, tracing forwards from research and backwards from policy, within a defined research-policy system (childhood obesity prevention research and policy in New South Wales, Australia, from 2000 to 2015). The forward tracing component traced a sample of 148 local research projects forward to examine their policy impacts. Of the projects considered, 16% had an impact on local policy, and for a further 19%, decision-makers were aware of the research but there was no evidence it influenced policy decisions. The backward tracing component of the study included an analysis of research use across three policy initiatives and provided a more nuanced understanding of the relative influence of research on policy. Both direct uses of specific research and indirect uses of research incorporated into broader bodies of knowledge were evident. Measuring research impact from both directions captured the diverse ways in which research was used in decision-making. Our findings illustrate complexities in the assessment process and in real-life policymaking trajectories. They highlight the role that the timing of assessment plays in the perception of impacts and the difficulty of attributing longer-term impacts to specific research. This study supports the use of models in which politics and complex system dynamics shape knowledge and its influence on decision-making, rather than research being the primary driver of policy change.
Exploring research impact models: A systematic scoping review
Maryam Razmgir, Sirous Panahi, L. Ghalichi, S. Mousavi, Shahram Sedghi
Research Evaluation, 2021-08-14. doi:10.1093/reseval/rvab009

This article explores the models and frameworks developed on 'research impact'. We aim to provide a comprehensive overview of the related literature through a scoping study method. The present research investigates the nature, objectives, approaches, and other main attributes of research impact models, and analyses and classifies the models based on their characteristics. Forty-seven studies and 10 reviews published between 1996 and 2020 were included in the analysis. The majority of models were developed for impact assessment and evaluation purposes. We identified three approaches in the models, namely outcome-based, process-based, and those utilizing both, among which the outcome-based approach was the most frequently used by impact models, and evaluation was considered the main objective of this group. The process-based models were mainly adapted from the W.K. Kellogg Foundation logic model and were potentially eligible for impact improvement. We highlighted the scope of processes and other specific features of the recent models. Given the benefits of the process-based approach in enhancing and accelerating research impact, it is important to consider such an approach in the development of impact models. Effective interaction between researchers and stakeholders, knowledge translation, and evidence synthesis are other possible driving forces contributing to achieving and improving impact.
A formative approach to the evaluation of Transformative Innovation Policies
J. Molas-Gallart, A. Boni, S. Giachi, J. Schot
Research Evaluation, 2021-08-05. doi:10.1093/reseval/rvab016

Transformative Innovation Policies (TIPs) assert that addressing the key challenges currently facing our societies requires profound changes in current socio-technical systems. Leveraging such 'socio-technical transitions' calls for a different, broad mix of research and innovation policies, with particular attention paid to policy experiments. As TIPs diffuse and gain legitimacy, they pose a substantial evaluation challenge: how can we evaluate policy experiments with a narrow geographical and temporal scope when the final objective is ambitiously systemic? How can we know whether a specific set of policy experiments is contributing to systemic transformation? Drawing on TIPs principles as developed and applied in the activities of the Transformative Innovation Policy Consortium, and on the concept of transformative outcomes, this article develops an approach to the evaluation of TIPs that is operational and adaptable to different contexts.
Say my name, say my name: Academic authorship conventions between editorial policies and disciplinary practices
Felicitas Hesselmann, Cornelia Schendzielorz, Nikita Sorgatz
Research Evaluation, 2021-06-02. doi:10.1093/RESEVAL/RVAB003

Academic publishing is undergoing profound changes that shape the conditions of knowledge production and the way research is communicated, prompting a lively debate on how the various activities of those involved can be adequately acknowledged in publications. This contribution aims to empirically examine the relationship between authorship regulations in journal policies, the disciplinary variance in authorship practice, and larger concepts of academic authorship. Analyzing (1) editorial policies and (2) data from an interdisciplinary survey of scientists, we examine to what extent disciplinary variances are reflected in the policies as well as in researchers' individual understandings. We find that the regulation of authorship via policies is primarily effected at the level of the publishers. Although considerable disciplinary variations in journal policies are sometimes suggested in the literature, we find only minor differences in authorship criteria. The survey data, however, show that researchers' understandings of authorship exhibit significant, discipline-specific differences, as well as differences related to the characteristics of the research practice. It hence becomes clear that discipline-specific conditions of knowledge production, with the resulting differences in authorship practices, are hardly reflected in authorship policies. We conclude that the regulatory ambitions of authorship policies mostly focus on the prevention and elimination of deficits in the quality and integrity of scientific publications. Thus, it seems questionable whether authorship policies in their current form are suitable instruments for mediating between diverse authorship practices and normative ideals of legitimate authorship.
'Scaling' the academia: Perspectives of academics on the impact of their practices
Yaşar Kondakçı, Merve Zayim-Kurtay, Sevgi Kaya-Kasikci, Hanife Hilal Senay, Busra Kulakoglu
Research Evaluation, 2021-05-24. doi:10.1093/RESEVAL/RVAB015

The pressure on universities to take a visible place in the rankings has led to anachronistic policies and practices for evaluating university performance. The value attributed to the rankings results in policies that prioritize the criteria imposed by rankings when evaluating the performance of academics, which in turn causes several issues in assessing the real impact of academic practices. Considering these criticisms and concerns about impact assessment, this study aimed to explore academics' perceptions of the impact of their practices. Adopting an interpretive phenomenological design, data were collected through semi-structured interviews with 20 participants from the field of education at five flagship universities in Turkey. The findings revealed that, although the impact assessment understanding of academics and their institutions covers practices around the three basic missions of the university, many activities go unrecognized by those same impact assessment practices. Interestingly, the academics exhibited commitment to institutional policies on impact assessment; however, they expressed resentment towards the same policies for failing to recognize the localized mission of the university, threatening the deeply rooted values of the academy, fouling the academy with ethical violations, and causing further detachment between academic practices and societal needs. The concerns and criticism surrounding current impact assessment are likely to alter the priorities of universities and push them to adopt an impact assessment that is less relevant to the local needs of their societies.
Researcher experiences in practice-based interdisciplinary research
J. Leigh, N. Brown
Research Evaluation, 2021-05-17. doi:10.1093/reseval/rvab018

This article reports on a study that followed up on an initial interdisciplinary project and focused specifically on the experiences of researchers involved in practice-based interdisciplinary research. We share an approach to research evaluation that focuses on the experiences of those conducting the research rather than the outputs. The study allowed those involved in the initial successful project to reflect post hoc on their experiences. We show that neglecting fundamental conceptions about how the research is conceptualized can lead to challenges with the research itself. In addition to alternative understandings of research and concepts, practical and logistical issues, whilst seeming trivial, feed into communication issues such as misunderstanding of terms and language. We argue that tensions and confusions around the very nature of the research—what was being researched, what was valued as research, and the epistemological differences between the disciplinary perspectives—need to be explored and interrogated in order to maximize the benefits of interdisciplinary research. We conclude with considerations of the relationship between interdisciplinary research in a team and the identity work of team members, and the implications this may have for research design, an area of research evaluation that certainly needs further exploration.
How far does an emphasis on stakeholder engagement and co-production in research present a threat to academic identity and autonomy? A prospective study across five European countries
A. Boaz, R. Borst, M. Kok, A. O'Shea
Research Evaluation, 2021-05-08. doi:10.1093/RESEVAL/RVAB013

There is a growing recognition that more needs to be done to ensure that research contributes to better health services and patient outcomes. Stakeholder engagement in research, including co-production, has been identified as a promising mechanism for improving the value, relevance and utilization of research. This article presents findings from a prospective study which explored the impact of stakeholder engagement in a 3-year European tobacco control research project. That research project aimed to engage stakeholders in the development, testing and dissemination of a return-on-investment tool across five EU countries (the Netherlands, Spain, Hungary, Germany and the UK). The prospective study comprised interviews, observations and document review. The analysis focused on the extent to which the project team recognized, conceptualized and operationalized stakeholder engagement over the course of the research project. Stakeholder engagement in the European research project was conceptualized as a key feature of pre-designated spaces within the work programme. Over the course of the project, however, the tool development work and the stakeholder engagement activities decoupled. While the modelling and tool development became more secluded, stakeholder engagement activities subtly transformed from co-production, to consultation, to something more recognizable as research participation. The contribution of this article is not to argue against the potential contribution of stakeholder engagement and co-production, but to show how even well-planned engagement activities can be diverted within the existing research funding and research production systems, where non-research stakeholders remain at the margins and can even be seen as a threat to academic identity and autonomy.
Does reviewing experience reduce disagreement in proposals evaluation? Insights from Marie Skłodowska-Curie and COST Actions
M. Seeber, Jef Vlegels, Elwin Reimink, A. Marušić, David G. Pina
Research Evaluation, 2021-04-28. doi:10.1093/RESEVAL/RVAB011

We have limited understanding of why reviewers tend to strongly disagree when scoring the same research proposal. Thus far, research that has explored disagreement has focused on the characteristics of the proposal or the applicants, while ignoring the characteristics of the reviewers themselves. This article aims to address this gap by exploring which reviewer characteristics most affect disagreement among reviewers. We present hypotheses regarding the effect of a reviewer's level of experience in evaluating research proposals for a specific granting scheme, that is, scheme reviewing experience. We test our hypotheses by studying two of the most important research funding programmes in the European Union from 2014 to 2018, namely 52,488 proposals evaluated under three funding schemes of the Horizon 2020 Marie Skłodowska-Curie Actions (MSCA) and 1,939 proposals evaluated under the European Cooperation in Science and Technology (COST) Actions. We find that reviewing experience on previous calls of a specific scheme significantly reduces disagreement, while experience of evaluating proposals in other schemes—namely, general reviewing experience—does not have any effect. Moreover, in MSCA Individual Fellowships, we observe an inverted-U relationship between the number of proposals a reviewer evaluates in a given call and disagreement, with a remarkable decrease in disagreement above 13 evaluated proposals. Our results indicate that reviewing experience in a specific scheme improves reliability, curbing unwarranted disagreement by fine-tuning reviewers' evaluations.
The systemic approach as an instrument to evaluate higher education systems: Opportunities and challenges
J. Aparicio, D. Rodríguez, J. Zabala‐Iturriagagoitia
Research Evaluation, 2021-04-14. doi:10.1093/RESEVAL/RVAB012

This article aims to provide a systemic instrument to evaluate the functioning of higher education systems. Although systemic instruments have had a strong impact on the management of public policy systems in fields such as health and innovation, the application of this type of instrument to higher education has not been widely discussed. Herein lies the main gap that we want to close. The ultimate purpose of the evaluation instrument introduced here is thus to provide information for decision-makers, so that they can identify the strengths and weaknesses in the functioning of their respective higher education systems from a systemic perspective. To achieve this goal, we apply the methodological guidelines of the integrative literature review. An integrative review was chosen because it guides the extraction of quantitative evidence from the literature and its classification, with the purpose of integrating the results into an analytical framework. The resulting analytical framework is what we have labelled the systemic evaluation instrument. The article makes three contributions to the literature. First, it evidences the different types of higher education institutions considered in the literature and the scales at which higher education systems are analysed. Second, we identify the capacities and functions examined in the literature that allow higher education institutions and systems to fulfil their missions. Third, a systemic evaluation framework for higher education institutions and systems is presented. The article concludes with a discussion of the opportunities and challenges associated with the implementation of such a systemic framework for policymaking.