Constructing an effective evaluation system to identify doctors' research capabilities

Xiaojing Hu

Health Care Science, vol. 3, no. 1, pp. 67–72 (published 1 February 2024). DOI: 10.1002/hcs2.82. Available at https://onlinelibrary.wiley.com/doi/10.1002/hcs2.82

The events of the coronavirus disease 2019 pandemic have emphasized the indispensable role of doctors in promoting public health and well-being [1]. Although medicine and health care are being transformed by technological advances such as artificial intelligence, big data, genomics, precision medicine, and telemedicine, doctors continue to play a critical role in providing health care. A key challenge today, however, is how doctors are recognized and evaluated by society at large. Hospitals, patients, and public opinion all play a role in evaluating doctors; this study focuses on hospitals' evaluations of doctors.

At the macro level, doctor evaluations influence doctors' value orientation, research directions, and resource allocation. At the micro level, assessment also shapes individual doctors' research and behavior, as it is a crucial element in their professional development. Building a suitable doctor evaluation system is challenging, and doctor evaluation is therefore a common research subject in the global academic community. Various stakeholders have paid attention to this issue, which is still being debated in the literature.

The global academic community considers an evaluation system based purely on merit and performance to be the most suitable for doctor evaluations [2, 3], with a primary focus on clinical care and scientific research. In addition, doctors working at large academic medical centers are also expected to teach. Among these three components, scientific research carries the greatest weight in evaluation [4]. A survey of 170 universities randomly selected from the CWTS Leiden Ranking revealed that, among the 92 universities with a school of biomedical sciences and publicly accessible evaluation criteria, the stated policies included peer-reviewed publications (95% of cases), funding (67%), national or international reputation (48%), author order (37%), and journal impact factors (28%). Furthermore, most institutions clearly state their expectations for the minimum number of papers to be published annually [5]. Alawi et al. have shown that in many countries, the evaluation of medical professionals is based primarily on their ability to publish papers and secure research funding [6]. The recognition of these achievements under the existing evaluation system has a significant impact on key evaluation factors such as performance, publications, work roles, and research awards.

Doctors in Chinese hospitals are assessed primarily on the inclusion of their scientific publications in the Science Citation Index (SCI). The number of SCI-indexed papers they publish significantly influences their professional ranking and likelihood of promotion. Hence, many young Chinese doctors feel pressure to publish academic papers in addition to performing their clinical duties [7]. According to the National Science and Technology Workers Survey Group [8], nearly half (45.9%) of Chinese science and technology workers perceived the overreliance on paper counts as a significant problem in talent assessment. The majority of authors (93.7%) acknowledged that professional advancement was their main reason for publishing papers, and 90.4% published to fulfill various evaluation requirements. Among the top three evaluation methods for hospital rankings in China, the number of published SCI papers is the most significant criterion for measuring a hospital's research level. Because this criterion plays a key role in enhancing hospitals' rankings, most medical staff hired by Chinese hospitals must have publications in SCI-indexed journals. Newly published papers also directly influence medical staff's promotions and bonuses, which are often linked to the journals' impact factors. Doctors are therefore motivated to publish more papers in journals with high impact factors [9, 10].

This quantitative evaluation system has undeniably played a vital role in the Chinese scientific community over the past 30 years and has driven the rapid growth of Chinese scientific publishing. The number of Chinese papers indexed in SCI rose from fifteenth place worldwide in 1991 to second place in 2021 [11], demonstrating the considerable recent growth of Chinese scientific publishing. An upsurge in medical publications in particular contributed significantly to this growth.

One clear benefit of this quantitative evaluation system is its objectivity: all individuals are assessed against the same set of easily measurable standards. However, the worldwide academic community has increasingly reflected on its drawbacks, including its harmful impact on scientific progress. In particular, the current doctor evaluation system, which overemphasizes the publication of academic papers, is widely believed to cause several problems: an excessive emphasis on publications, the prioritization of quantity over quality, and incentives for swift publication. The evaluation of doctors' research abilities should instead prioritize research quality, optimize classification systems, and develop more appropriate assessment criteria. These issues are discussed in the following sections.

The requirements of competitive evaluation lead doctors to pursue research publications at the expense of their scientific curiosity and independence. Moreover, the quantitative evaluation system has been shown to be a useful but insufficient method that does not fully reflect scientific development and progress [12]. A primary goal of medical research is to achieve a comprehensive understanding of disease, and in this pursuit of knowledge, the process of caring for patients gives doctors a unique research perspective [13]. Studies that incorporate distinctive clinical questions can effectively enhance our knowledge of diseases [14]. The independence of doctors' research depends on their curiosity [15], but the current evaluation system curbs both curiosity and independence, because competition presupposes similarity: research projects can only compete if they are comparable, and without similarity there is no competition. Yet genuinely ingenious and pioneering research must tackle distinct questions, and distinctiveness makes it hard to compete on the same terms. Park et al. [16] examined how newly published papers build on the prior literature and found a steady decline in the proportion of "breakthrough" (disruptive) research since 1945, despite recent significant scientific and technological advances. Park et al. [16] also analyzed the verbs used most frequently in papers and found that researchers in the 1950s tended to use words such as "produce" or "determine" when discussing the creation or discovery of concepts or objects, whereas studies from the 2010s used terms such as "improve" or "enhance," indicating gradual progress. Present-day research can thus be said to be less revolutionary than research in the past. Chu and Evans [17] analyzed 1.8 billion citations of 90 million papers published between 1960 and 2014 and found that newly published papers tended to build upon and refine existing perspectives rather than introduce groundbreaking ideas that disrupt the status quo. These findings suggest that the quantitative evaluation system for doctors leads only to the publication of "ordinary" papers that may advance and refine current knowledge but cannot generate truly revolutionary and innovative research outcomes.

Focusing on publishing a large number of academic papers rather than prioritizing their quality is not effective for enhancing clinical practice, which is one of the primary aims of medical research. Clinical research is the foundation of evidence-based medicine, and landmark clinical trials, particularly innovative ones [19], have contributed remarkably to progress in disease prevention and treatment [18]. Despite criticism that the majority of doctors should prioritize clinical practice over conducting research for the purpose of publishing papers [20], it cannot be ignored that many significant strides in modern medicine have resulted from doctors' efforts to cure diseases [21]. Furthermore, doctors who conduct clinical research may communicate clinical and translational research findings to patients and the public more effectively than doctors who do not [22]. Different research strategies can enhance medical practice, such as promoting high-volume or high-quality research productivity [23]; the existing doctor evaluation system embodies the former. Regrettably, empirical investigations have shown that advancing medical practice through high-caliber research is not accomplished by increasing the quantity of studies [24, 25]. Moreover, clinical studies published in journals with an average impact factor of ≥3 were associated with lower readmission rates for both physicians and surgeons [20]. Therefore, the current focus on publishing more papers at the expense of research quality does not advance clinical practice.

Using a single evaluation index incentivizes doctors to prioritize swift and effortless publication, even at the cost of disregarding research ethics. Medicine is a primarily practice-based field, and doctors may excel at diagnosing and treating illness while lacking academic interest or research skills. Nevertheless, the current evaluation system requires doctors to publish papers to achieve career promotion. Consequently, numerous doctors take risks for personal gain and pursue unethical publication avenues [5, 26].

Chawla [27] discovered over 400 counterfeit papers that potentially originated from the same paper mill and covered several medical fields, including pediatrics, cardiology, endocrinology, nephrology, and vascular surgery; the authors were all affiliated with Chinese hospitals. In 2021, the Royal Society of Chemistry journal RSC Advances retracted 70 papers from Chinese hospitals because of their strikingly similar graphics and titles [28]. This trend persists: Sabel et al. [29] estimated that 34% of neuroscience papers and 24% of medical papers published in 2020 may be fake or plagiarized, with China, Russia, Turkey, Egypt, and India producing the highest proportions of such papers. Other countries, such as Brazil, South Korea, Mexico, Serbia, Iran, Argentina, and Israel, were also studied; the lowest percentages of fraudulent papers were found in Japan, Europe, Canada, Australia, the United States, and Ukraine.

The current doctor evaluation system excessively emphasizes the quantity of papers published, because hospitals often require doctors to have published a minimum number of studies to be promoted. Doctors may also receive large bonuses incentivizing their pursuit of publications, which leads to the rapid and voluminous publication of papers. Unfortunately, this also promotes predatory publishing practices and the wasteful use of research funds [30], and predatory journals become an attractive option for doctors seeking to publish a large quantity of work quickly. According to Shamseer et al. [31], over half of the authors who published papers in predatory journals came from upper-middle- or high-income countries. Seventeen percent of the papers received external funding, with the US National Institutes of Health being the most common funding agency. In particular, Shamseer et al. [31] highlighted the adverse impact of academic awards based on research publications, which encourage researchers with limited publishing experience to publish in predatory journals. The InterAcademy Partnership [32] found that 9% of 1872 researchers surveyed in over 100 countries had unintentionally published in predatory journals, whereas 8% were uncertain whether they had. It is estimated that more than one million researchers are affected, with over $4 billion in research funding at risk of being squandered and predatory journals receiving at least $178 million in article-processing charges. Another alarming discovery is that some scholars deliberately published in deceitful journals to meet research requirements. Shamseer et al. [31] posit that inappropriate researcher assessment, relying solely on vague publication metrics, promotes this misconduct.

These drawbacks of the quantitative evaluation system have garnered increasing attention and discourse within the global scientific community, alongside growing calls for reform of the current doctor evaluation system [7]. The Chinese government has recently acknowledged these issues within its talent evaluation system. In May 2021, President Xi Jinping emphasized the need to improve the evaluation system by implementing the "break the four onlys" principle (i.e., ending the tendency to evaluate talent solely on papers, titles, academic qualifications, and awards) and establishing new standards [33]. However, there is as yet no practical consensus on how to create a scientific, objective, and accurate evaluation system for medical professionals; I therefore propose the following suggestions for the academic community and colleagues to discuss.

From an evaluation standpoint, doctors should conduct innovative and diverse research and prioritize research quality over the quantity of published papers. Medical history has repeatedly demonstrated that genuine medical breakthroughs and innovations are often solitary and not immediately evident. Therefore, doctors in the early stages of their careers should be encouraged to explore unknown areas of medicine driven by their own curiosity and interests. Specifically, hospitals should allocate a portion of their research funds to supporting unpopular research topics. Rewarding long-term success, tolerating early failures, and providing researchers with greater experimental freedom can enable the pursuit of innovative scientific projects that lead to breakthroughs [24]. Increasing the number of young reviewers would counter the tendency of senior experts to support the existing knowledge system during peer review [4]. When evaluating and rewarding academic performance, hospitals should give more weight to "unpopular" research. Some may argue that this shift in focus would make currently unpopular research subjects popular in the future, and this is indeed a possibility. A practical solution is to create a national research database with weights allocated to each research subcategory; these weights should not be static but should be updated dynamically based on research engagement and the number of correlated research findings.

As advocated by the New England Journal of Medicine, researchers should be evaluated based on the quality and quantity of their scientific contributions rather than the number of published papers [34]. Furthermore, most doctors, especially surgeons, should conduct research that is highly relevant to clinical problems rather than pursuing basic research merely to increase their publication count; a focus on basic science may reduce the time available for clinical practice without providing proportional benefits [20].

From an objective standpoint, a classification system for evaluating doctors should apply different criteria to different types of doctors. In the medical field, researchers have proposed classifying doctors by their primary responsibilities, such as clinical practice, research, and teaching, and then using different indicators to evaluate the doctors in each subcategory [35]. This is a sound proposal, because individual doctors have varying levels of expertise, and their professional intentions, time, and energy are finite; expecting one person to excel across all areas is unrealistic [36]. Many doctors face significant barriers to engaging in clinical research, such as intensive clinical workloads, time and energy constraints, and limited training in scientific writing. Evaluations of such doctors should therefore prioritize their clinical outcomes rather than the number of funded projects or published articles [37, 38]. However, if a doctor assumes a top-level editorial role (e.g., editor-in-chief, chapter editor, or editor of a medical textbook), becomes a key member of a national academic committee, or is granted a patent, these achievements could provide noteworthy references for promotion [35].

An increasing number of clinicians are assuming managerial roles in medical institutions [4]. To acknowledge individuals who possess the willingness and skill to engage in various aspects of medical care, two new categories, management and multidisciplinary studies, should be added to the existing three (clinical practice, academic research, and teaching) to ensure the fair and effective evaluation of all types of doctors.

From an academic integrity perspective, appropriate assessment criteria are needed to guide doctors in conducting open, transparent, and ethical research. Inadequate academic inquiry and reporting remain widespread in the contemporary research landscape [39]. In the medical field especially, researchers who fail to uphold ethical standards may cause significant harm, given the importance of those standards to clinical decision-making. A survey by Hammarfelt showed that researchers adjust their publication practices to suit their institution's evaluation criteria [40]. Therefore, special attention should be given to academic integrity when assessing doctors' research results. Nevertheless, the current evaluation system tends to prioritize the quantity of publications over their reliability, accuracy, reproducibility, and transparency [41]. Rice et al. [5] have shown that only a minute percentage of medical schools share data, publish open-access articles, register clinical research before it is conducted, or align their evaluation mechanisms with international research reporting guidelines such as the Consolidated Standards of Reporting Trials (CONSORT) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Because academic integrity and ethical standards are excluded from the mainstream doctor evaluation system, this phenomenon deserves great attention. As we already know, the promotions and rewards associated with evaluations can influence doctors' behavior [42]. It is therefore reasonable to expect that including more indicators of academic integrity and research ethics in the evaluation system would lead to more credible, open, and transparent clinical research for the benefit of the public, the academic community, and patients. However, the value of these standards for science and the public, as well as their feasibility as promotion criteria, must be assessed carefully.

In summary, although the academic-oriented doctor evaluation system, which pushes doctors to produce academic papers and project applications, has been in use worldwide for a long time, stakeholders increasingly recognize its flaws: the focus on publishing impedes the advancement of medical research, provides little assistance to clinical practice, incentivizes doctors to engage in academic dishonesty to get published, and drains significant research funds into predatory journals of little value. Consequently, reform of the current doctor evaluation system is urgently needed. We advocate the creation of an evaluation system that promotes innovation and high-quality research. Such a system should encompass various criteria while stressing research ethics and integrity. Although a new doctor evaluation system may be challenging to implement in the short term, this research direction deserves attention and effort from the global academic research community.

Xiaojing Hu: Conceptualization, writing—original draft, writing—review and editing.

The author declares no conflict of interest.
引用次数: 0
Abstract
The events of the coronavirus disease 2019 pandemic have emphasized the indispensable role of doctors in promoting public health and well-being [1]. Although medicine and health care are being transformed by technological advances, such as artificial intelligence, big data, genomics, precision medicine, and telemedicine, doctors continue to play a critical role in providing health care. However, a key challenge today is the lack of recognition of doctors by society at large. Hospitals, patients, and public opinion all play a role in evaluating doctors. However, this study will focus on hospitals’ doctor evaluations.
At the macro level, doctor evaluations influence their value orientation, research directions, and resource allocation. Assessing doctors also impacts their research and behavior at the micro level, as it is a crucial element in their development. It is challenging to build a suitable doctor evaluation system; therefore, doctor evaluations are a common research subject among the global academic community. Various stakeholders have paid attention to this issue, which is still being debated in the literature.
The global academic community considers an evaluation system based purely on merit and performance to be the most suitable for doctor evaluations [2, 3] with a primary focus on clinical care and scientific research. In addition, doctors are expected to also teach when working at large academic medical centers. Among these three sections, the index for scientific research evaluation accounts for the highest proportion [4]. A survey of 170 universities randomly selected from the CWTS Leiden Ranking revealed that among the 92 universities offering a School of Biomedical Sciences and promoting the accessibility of evaluation criteria, the mentioned policies included peer-reviewed publications, funding, national or international reputations, author order, and journal impact factors in 95%, 67%, 48%, 37%, and 28% of cases, respectively. Furthermore, most institutions clearly indicate their expectations for the minimum number of papers to be published annually [5]. Alawi et al. have shown that in many countries, the evaluation of medical professionals is primarily based on their ability to publish papers and secure research funding [6]. The recognition of these achievements under the existing evaluation system has a significant impact on key evaluation factors, such as performance, publications, work roles, and research awards.
Doctors in Chinese hospitals are assessed primarily on the inclusion of their scientific publications in the Science Citation Index (SCI). The number of published papers indexed in SCI significantly influences their professional ranking and likelihood of promotion. Hence, many young Chinese doctors feel under pressure to publish academic papers in addition to performing their clinical duties [7]. According to the National Science and Technology Workers Survey Group [8], nearly half (45.9%) of Chinese science and technology workers perceived the overreliance on paper evaluations as a significant issue when assessing talent. The majority of authors (93.7%) acknowledged that professional advancement was their main reason for publishing papers, with 90.4% publishing to fulfill diverse evaluation requirements. Among the top three evaluation methods for hospital rankings in China, the number of published SCI papers is the most significant criterion for measuring the level of hospital research. This criterion plays a key role in enhancing the hospitals’ rankings; therefore, most medical staff hired by Chinese hospitals must have publications included in SCI journals. Newly published papers also directly influence medical staff's promotions and bonuses, which are often linked to the journals’ impact factor. Hence, doctors become motivated to publish more papers in journals with high impact factors [9, 10].
This quantitative evaluation system has undeniably played a vital role in the Chinese scientific community in the last 30 years and has driven the rapid growth of Chinese scientific papers in the literature. The number of Chinese papers indexed in SCI have increased from fifteenth place worldwide in 1991 to second place in 2021 [11], which demonstrates the recent considerable growth in Chinese scientific publishing. In particular, an upsurge in medical paper publications made significant contributions to this growth.
One of the clear benefits of this quantitative evaluation system is its objectivity, as all individuals are assessed based on a set of easily measurable standards. However, the worldwide academic community has increasingly reflected on the drawbacks of this quantitative evaluation system, such as its harmful impact on scientific progress, among other related issues. In particular, the current doctor evaluation system, which overemphasizes the publication of academic papers, is widely believed to cause several problems, such as emphasis on publications, prioritizing quantity over quality, incentives for swift publication. The evaluation of doctors’ research abilities should prioritize the quality of research, optimize classification systems, and develop more appropriate assessment criteria. These issues will be discussed in the following related sections.
The requirements for competitive evaluation leads to doctors pursuing research publications and sacrificing their scientific curiosity and independence as a consequence. In addition, the quantitative evaluation system has been shown to be a critical but insufficient method that does not fully reflect scientific development and progress [12]. A primary goal of medical research is to achieve a comprehensive understanding of disease and in the pursuit of knowledge, the process of caring for patients gives doctors a unique research perspective [13]. Studies that incorporate distinctive clinical queries can effectively enhance our knowledge of diseases [14]. The independence of doctors’ research depends on their curiosity [15], but the current evaluation system curbs their curiosity and independence because the basis of competition is that competing research is similar, and without similarity there is no competition. Nevertheless, it is important to acknowledge that genuinely ingenious and pioneering research must tackle distinct issues, and being distinct implies that it is arduous to compete on the same level. Park et al. [16] investigated the impact of newly published papers on the interpretation of historical documents and found a steady decline in the proportion of “breakthroughs” in scientific research since 1945, despite the recent significant scientific and technological advancements. Park et al. [16] also analyzed the most frequently used verbs in the papers and found that 1950s researchers tended to use words such as “produce” or “determine” when discussing the creation or discovery of concepts or objects. However, recent studies conducted during the 2010s used terms such as “improve” or “enhance” to indicate gradual progress. Hence, present-day research can be said to be less revolutionary than research in the past. 
Chu and Evans [17] analyzed 1.8 billion citations of 90 million papers published between 1960 and 2014, and found that newly published papers tended to build upon and refine existing perspectives rather than introduce groundbreaking ideas that disrupt the normative status quo. These findings demonstrate that the quantitative evaluation system for doctors only leads to the publication of “ordinary” papers that may advance and enhance current knowledge but are unable to generate truly revolutionary and innovative research outcomes.
Focusing on publishing a large number of academic papers rather than prioritizing their quality is not effective for enhancing clinical practice, which is one of the primary aims of medical research. Clinical research is the foundation of evidence-based medicine and landmark clinical trials have contributed remarkably to making considerable progress in improving disease prevention and treatment [18], particularly innovative clinical trials [19]. Despite criticisms indicating that the majority of doctors should prioritize their clinical practice over conducting research with the purpose of publishing papers [20], it cannot be ignored that many significant strides in modern medicine have been the result of doctors’ efforts to cure diseases [21]. Furthermore, conducting clinical research may enable doctors to effectively communicate their clinical and translational research findings to both patients and the public compared with doctors who do not conduct clinical research [22]. Diverse research strategies can enhance medical practices, such as promoting high-volume or -quality research productivity [23]. The first strategy is represented by the existing doctor evaluation system. Regrettably, empirical investigations have shown that the advancement of medical practice through high-caliber research is not accomplished by increasing the quantity of studies [24, 25]. Moreover, clinical studies published in journals with an average impact factor of ≥3 were related to lower readmission rates among both doctors and surgeons [20]. Therefore, the current focus on publishing more papers at the expense of research quality does not promote the advancement of clinical practices.
Using a single evaluation index incentivizes doctors to prioritize swift and effortless publications, even if it means disregarding scientific research ethics. Medicine is a primarily practice-based field where doctors may excel at diagnosing and treating illnesses, but lack academic interest or research skills. Nevertheless, the current evaluation system requires doctors to publish papers to achieve career promotion. Consequently, numerous doctors undertake risks for personal gain and pursue unethical publication avenues [5, 26].
Chawla [27] discovered over 400 counterfeit papers that potentially originated from the same paper mill and covered several medical fields, including pediatrics, cardiology, endocrinology, nephrology, and vascular surgery; the authors were all affiliated with Chinese hospitals. In 2021, the Royal Society of Chemistry journal RSC Advances retracted 70 papers from Chinese hospitals because of their strikingly similar graphics and titles [28]. This trend persists: Sabel et al. [29] estimated that 34% of neuroscience papers and 24% of medical papers published in 2020 may have been fabricated or plagiarized, with China, Russia, Turkey, Egypt, and India producing the highest proportions of such papers. Other countries, such as Brazil, South Korea, Mexico, Serbia, Iran, Argentina, and Israel, were also studied; the lowest percentages of fraudulent papers were found in Japan, Europe, Canada, Australia, the United States, and Ukraine.
The current doctor evaluation system excessively emphasizes the quantity of papers published, because hospitals often require doctors to have published a minimum number of studies to be promoted. Doctors may also receive large bonuses that incentivize their pursuit of publications, which leads to the rapid and voluminous publication of papers. Unfortunately, this also promotes predatory publishing practices and the wasteful use of scientific research funds [30]. Predatory journals thus become an ideal option for doctors seeking to publish a large quantity of work quickly. According to Shamseer et al. [31], over half of the authors who published papers in predatory journals originated from upper middle- or high-income countries. Seventeen percent of papers received external funding, with the US National Institutes of Health being the most common funding agency. In particular, Shamseer et al. [31] highlighted the adverse impact of academic awards based on research publications, which encourage researchers with limited publishing experience to publish in predatory journals. The InterAcademy Partnership [32] found that 9% of the 1872 researchers surveyed from over 100 countries had unintentionally published in predatory journals, whereas 8% were uncertain whether they had. It is estimated that more than one million researchers are affected, with over $4 billion in research funding at risk of being squandered and predatory journals collecting a minimum of $178 million in article-processing charges. Another alarming discovery is that some scholars deliberately published papers in deceitful journals to meet scientific research demands. Shamseer et al. [31] posit that the unsuitable assessment of researchers, which relies solely on vague publication metrics, promotes this misconduct.
These drawbacks of the quantitative evaluation system have garnered increasing attention and discourse within the global scientific community, alongside growing calls for reform of the current doctor evaluation system [7]. The Chinese government has recently acknowledged these issues within its talent evaluation system. In May 2021, President Xi Jinping emphasized the need to improve the evaluation system by implementing the “break the four onlys” principle (i.e., ending the tendency to evaluate talent only by papers, titles, academic qualifications, and awards) and establishing new standards [33]. However, a scientific, objective, and accurate evaluation system for medical professionals may be difficult to agree upon in practice; I therefore propose the following suggestions for the academic community and colleagues to discuss.
From an evaluation standpoint, doctors should conduct innovative and diverse research and prioritize research quality over the quantity of published papers. Medical history has repeatedly demonstrated that genuine medical breakthroughs and innovations are often solitary pursuits whose value is not immediately evident. Therefore, doctors in the early stages of their careers should be encouraged to set aside preconceptions and explore unknown areas of medicine driven by their own curiosity and interest. Specifically, hospitals should allocate a portion of their scientific research funds toward supporting unpopular research topics. Rewarding long-term success, tolerating early failures, and providing researchers with greater experimental freedom can enable the pursuit of innovative scientific projects that lead to scientific breakthroughs [24]. Increasing the number of young reviewers would counter the tendency of senior experts to favor the existing knowledge system during peer review [4]. When evaluating and rewarding academic performance, hospitals should give more weight to “unpopular” research. Some may argue that this shift in focus would make currently unpopular research subjects popular in the future, and this is indeed a possibility. A practical solution is to create a national research database with weights allocated to each research subcategory. These weights should not be static; they should be updated dynamically based on research engagement and the number of correlated research findings.
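As a purely illustrative sketch of the dynamically weighted database proposed above, the idea can be expressed as weighting each research subcategory inversely to its current engagement, so that "unpopular" topics earn more evaluation credit. All category names, counts, and the scoring formula below are hypothetical, not part of any existing system:

```python
from collections import Counter

# Hypothetical subcategory engagement counts (e.g., number of active projects);
# in the proposed national database these would be updated continuously.
engagement = Counter({"oncology": 950, "rare_metabolic_disease": 40, "geriatric_rehab": 110})

def dynamic_weights(engagement):
    """Weight each subcategory inversely to its current research engagement,
    so less-crowded topics earn more credit, then normalize to sum to 1."""
    inverse = {topic: 1.0 / count for topic, count in engagement.items()}
    total = sum(inverse.values())
    return {topic: value / total for topic, value in inverse.items()}

def score_contribution(topic, quality_rating, engagement):
    """Combine a peer-reviewed quality rating (0-10 scale, illustrative)
    with the topic's dynamic weight."""
    return quality_rating * dynamic_weights(engagement)[topic]

weights = dynamic_weights(engagement)
# Under this scheme, rare-disease work receives the largest weight.
assert weights["rare_metabolic_disease"] == max(weights.values())
```

Because the weights are recomputed from live engagement counts, a subcategory that becomes popular automatically loses its bonus, addressing the concern that rewarding unpopular topics would simply make them the new fashion.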
As advocated by the New England Journal of Medicine, researchers should be evaluated based on the quality of their scientific contributions rather than the quantity of published papers [34]. Furthermore, it is crucial to emphasize that most doctors, especially surgeons, should conduct research that is highly relevant to clinical problems rather than pursuing basic research merely to increase their publication counts. A focus on basic science may reduce the time doctors have available for clinical practice without providing proportional benefits [20].
From an objective standpoint, a classification system for evaluating doctors should apply different criteria depending on the type of doctor. In the medical field, researchers have proposed that doctors be classified by their primary responsibilities, such as clinical practice, research, and teaching, and then evaluated with indicators specific to each subcategory [35]. This is a sound proposal because individual doctors have varying levels of expertise, and their professional intentions, time, and energy are finite; expecting one person to excel across all areas is unrealistic [36]. Many doctors face significant challenges in engaging in clinical research, such as intensive clinical workloads, time and energy constraints, and limited training in scientific writing. Therefore, doctor evaluations should prioritize clinical outcomes rather than the number of funded projects or published articles [37, 38]. However, if doctors assume top-level editorial roles, such as editor-in-chief, chapter editor, or editor of a medical textbook, become key members of national academic committees, or are granted patents, these achievements could provide noteworthy references for promotion [35].
An increasing number of clinicians are assuming managerial roles in medical institutions [4]. Therefore, to acknowledge individuals who possess the willingness and skill to engage in various aspects of medical care, two new categories, namely management and multidisciplinary studies, should be added to the existing three (clinical practice, academic research, and teaching) to ensure the fair and effective evaluation of all types of doctors.
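The five-category classification discussed above can be sketched as a simple mapping from a doctor's primary role to the criteria used to evaluate it. Every criterion name here is illustrative, not an established standard:

```python
# Hypothetical mapping from doctor category to primary evaluation criteria,
# reflecting the five-category classification (three existing plus the two
# proposed additions, management and multidisciplinary studies).
EVALUATION_CRITERIA = {
    "clinical_practice": ["patient_outcomes", "case_complexity", "safety_record"],
    "research": ["study_quality", "reproducibility", "ethics_compliance"],
    "teaching": ["trainee_feedback", "curriculum_contribution"],
    "management": ["department_performance", "resource_stewardship"],
    "multidisciplinary": ["cross_team_projects", "translational_impact"],
}

def criteria_for(category):
    """Return the evaluation criteria for a doctor's primary role;
    raises KeyError for categories outside the classification."""
    return EVALUATION_CRITERIA[category]

# A clinically focused doctor is judged on outcomes, not paper counts.
assert "patient_outcomes" in criteria_for("clinical_practice")
```

The point of the structure is that paper counts appear in none of the clinical criteria: a surgeon classified under clinical practice is never compared against a full-time researcher on publication metrics.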
From an academic integrity perspective, it is necessary to establish appropriate assessment criteria that guide doctors toward open, transparent, and ethical research. Inadequate research conduct and reporting remain widespread in the contemporary scientific landscape [39]. In the medical field especially, lapses in integrity can have serious negative consequences, given the importance of ethical standards to clinical decision-making. A survey by Hammarfelt showed that researchers adjust their publication practices to suit their institution's evaluation criteria [40]. Therefore, special attention should be given to academic integrity when assessing doctors’ research results. Nevertheless, the current evaluation system tends to prioritize the quantity of publications over their reliability, accuracy, reproducibility, and transparency [41]. Rice et al. [5] have shown that only a small percentage of medical schools share data openly, publish open-access articles, register clinical research before it is conducted, or align their evaluation mechanisms with worldwide research reporting guidelines, such as the Consolidated Standards of Reporting Trials (CONSORT) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). As academic integrity and ethical standards are excluded from the mainstream doctor evaluation system, this phenomenon deserves great attention. As noted above, the promotions and rewards associated with evaluations can influence doctors’ behavior [42]. Therefore, it is reasonable to expect that including more indicators of academic integrity and scientific research ethics in the evaluation system would lead to more credible, open, and transparent clinical research for the benefit of the public, the academic community, and patients.
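One minimal way to operationalize such integrity indicators, sketched here purely as an illustration, is to score each publication record against a checklist of the practices mentioned above (open data, open access, preregistration, reporting-guideline compliance). The equal weighting and flag names are assumptions, not a validated instrument:

```python
# Hypothetical integrity indicators drawn from the reporting practices
# discussed above; real criteria and weights would need expert consensus.
INTEGRITY_FLAGS = [
    "shares_data",                     # underlying data made openly available
    "open_access",                     # article published open access
    "preregistered",                   # clinical study registered before conduct
    "reporting_guideline_compliant",   # e.g., follows CONSORT or PRISMA
]

def integrity_score(record):
    """Fraction of integrity indicators satisfied by a publication record;
    missing flags count as unsatisfied."""
    return sum(bool(record.get(flag)) for flag in INTEGRITY_FLAGS) / len(INTEGRITY_FLAGS)

paper = {"shares_data": True, "open_access": True,
         "preregistered": False, "reporting_guideline_compliant": True}
assert integrity_score(paper) == 0.75
```

Folding such a score into promotion decisions, alongside clinical and quality criteria, is one concrete way to make integrity count rather than remain an afterthought.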
However, it is important to carefully assess the value of these standards for science and the public in addition to their feasibility as promotion standards.
In summary, although the academic-oriented doctor evaluation system, which pushes doctors to produce academic papers and project applications, has long been in use worldwide, stakeholders are increasingly recognizing its flaws. The focus on publishing impedes the advancement of medical research, provides little assistance to clinical practice, incentivizes doctors to engage in academic dishonesty, and drains significant research funds into predatory journals of little value. Consequently, reform of the current doctor evaluation system is urgently necessary. We advocate the creation of a doctor evaluation system that promotes innovation and produces high-quality research. Such a system should encompass various criteria while stressing scientific research ethics and integrity. Although the new doctor evaluation system may be challenging to implement in the short term, this research direction deserves attention and effort from the global academic research community.
Xiaojing Hu: Conceptualization, writing—original draft, writing—review and editing.