Constructing an effective evaluation system to identify doctors’ research capabilities

Xiaojing Hu
{"title":"构建有效的评估系统,确定医生的研究能力","authors":"Xiaojing Hu","doi":"10.1002/hcs2.82","DOIUrl":null,"url":null,"abstract":"<p>The events of the coronavirus disease 2019 pandemic have emphasized the indispensable role of doctors in promoting public health and well-being [<span>1</span>]. Although medicine and health care are being transformed by technological advances, such as artificial intelligence, big data, genomics, precision medicine, and telemedicine, doctors continue to play a critical role in providing health care. However, a key challenge today is the lack of recognition of doctors by society at large. Hospitals, patients, and public opinion all play a role in evaluating doctors. However, this study will focus on hospitals’ doctor evaluations.</p><p>At the macro level, doctor evaluations influence their value orientation, research directions, and resource allocation. Assessing doctors also impacts their research and behavior at the micro level, as it is a crucial element in their development. It is challenging to build a suitable doctor evaluation system; therefore, doctor evaluations are a common research subject among the global academic community. Various stakeholders have paid attention to this issue, which is still being debated in the literature.</p><p>The global academic community considers an evaluation system based purely on merit and performance to be the most suitable for doctor evaluations [<span>2, 3</span>] with a primary focus on clinical care and scientific research. In addition, doctors are expected to also teach when working at large academic medical centers. Among these three sections, the index for scientific research evaluation accounts for the highest proportion [<span>4</span>]. A survey of 170 universities randomly selected from the CWTS Leiden Ranking revealed that among the 92 universities offering a School of Biomedical Sciences and promoting the accessibility of evaluation criteria, the mentioned policies included peer-reviewed publications, funding, national or international reputations, author order, and journal impact factors in 95%, 67%, 48%, 37%, and 28% of cases, respectively. Furthermore, most institutions clearly indicate their expectations for the minimum number of papers to be published annually [<span>5</span>]. Alawi et al. have shown that in many countries, the evaluation of medical professionals is primarily based on their ability to publish papers and secure research funding [<span>6</span>]. The recognition of these achievements under the existing evaluation system has a significant impact on key evaluation factors, such as performance, publications, work roles, and research awards.</p><p>Doctors in Chinese hospitals are assessed primarily on the inclusion of their scientific publications in the Science Citation Index (SCI). The number of published papers indexed in SCI significantly influences their professional ranking and likelihood of promotion. Hence, many young Chinese doctors feel under pressure to publish academic papers in addition to performing their clinical duties [<span>7</span>]. According to the National Science and Technology Workers Survey Group [<span>8</span>], nearly half (45.9%) of Chinese science and technology workers perceived the overreliance on paper evaluations as a significant issue when assessing talent. The majority of authors (93.7%) acknowledged that professional advancement was their main reason for publishing papers, with 90.4% publishing to fulfill diverse evaluation requirements. 
Among the top three evaluation methods for hospital rankings in China, the number of published SCI papers is the most significant criterion for measuring the level of hospital research. This criterion plays a key role in enhancing the hospitals’ rankings; therefore, most medical staff hired by Chinese hospitals must have publications included in SCI journals. Newly published papers also directly influence medical staff's promotions and bonuses, which are often linked to the journals’ impact factor. Hence, doctors become motivated to publish more papers in journals with high impact factors [<span>9, 10</span>].</p><p>This quantitative evaluation system has undeniably played a vital role in the Chinese scientific community in the last 30 years and has driven the rapid growth of Chinese scientific papers in the literature. The number of Chinese papers indexed in SCI have increased from fifteenth place worldwide in 1991 to second place in 2021 [<span>11</span>], which demonstrates the recent considerable growth in Chinese scientific publishing. In particular, an upsurge in medical paper publications made significant contributions to this growth.</p><p>One of the clear benefits of this quantitative evaluation system is its objectivity, as all individuals are assessed based on a set of easily measurable standards. However, the worldwide academic community has increasingly reflected on the drawbacks of this quantitative evaluation system, such as its harmful impact on scientific progress, among other related issues. In particular, the current doctor evaluation system, which overemphasizes the publication of academic papers, is widely believed to cause several problems, such as emphasis on publications, prioritizing quantity over quality, incentives for swift publication. The evaluation of doctors’ research abilities should prioritize the quality of research, optimize classification systems, and develop more appropriate assessment criteria. These issues will be discussed in the following related sections.</p><p>The requirements for competitive evaluation leads to doctors pursuing research publications and sacrificing their scientific curiosity and independence as a consequence. In addition, the quantitative evaluation system has been shown to be a critical but insufficient method that does not fully reflect scientific development and progress [<span>12</span>]. A primary goal of medical research is to achieve a comprehensive understanding of disease and in the pursuit of knowledge, the process of caring for patients gives doctors a unique research perspective [<span>13</span>]. Studies that incorporate distinctive clinical queries can effectively enhance our knowledge of diseases [<span>14</span>]. The independence of doctors’ research depends on their curiosity [<span>15</span>], but the current evaluation system curbs their curiosity and independence because the basis of competition is that competing research is similar, and without similarity there is no competition. Nevertheless, it is important to acknowledge that genuinely ingenious and pioneering research must tackle distinct issues, and being distinct implies that it is arduous to compete on the same level. Park et al. [<span>16</span>] investigated the impact of newly published papers on the interpretation of historical documents and found a steady decline in the proportion of “breakthroughs” in scientific research since 1945, despite the recent significant scientific and technological advancements. Park et al. 
[<span>16</span>] also analyzed the most frequently used verbs in the papers and found that 1950s researchers tended to use words such as “produce” or “determine” when discussing the creation or discovery of concepts or objects. However, recent studies conducted during the 2010s used terms such as “improve” or “enhance” to indicate gradual progress. Hence, present-day research can be said to be less revolutionary than research in the past. Chu and Evans [<span>17</span>] analyzed 1.8 billion citations of 90 million papers published between 1960 and 2014, and found that newly published papers tended to build upon and refine existing perspectives rather than introduce groundbreaking ideas that disrupt the normative status quo. These findings demonstrate that the quantitative evaluation system for doctors only leads to the publication of “ordinary” papers that may advance and enhance current knowledge but are unable to generate truly revolutionary and innovative research outcomes.</p><p>Focusing on publishing a large number of academic papers rather than prioritizing their quality is not effective for enhancing clinical practice, which is one of the primary aims of medical research. Clinical research is the foundation of evidence-based medicine and landmark clinical trials have contributed remarkably to making considerable progress in improving disease prevention and treatment [<span>18</span>], particularly innovative clinical trials [<span>19</span>]. Despite criticisms indicating that the majority of doctors should prioritize their clinical practice over conducting research with the purpose of publishing papers [<span>20</span>], it cannot be ignored that many significant strides in modern medicine have been the result of doctors’ efforts to cure diseases [<span>21</span>]. Furthermore, conducting clinical research may enable doctors to effectively communicate their clinical and translational research findings to both patients and the public compared with doctors who do not conduct clinical research [<span>22</span>]. Diverse research strategies can enhance medical practices, such as promoting high-volume or -quality research productivity [<span>23</span>]. The first strategy is represented by the existing doctor evaluation system. Regrettably, empirical investigations have shown that the advancement of medical practice through high-caliber research is not accomplished by increasing the quantity of studies [<span>24, 25</span>]. Moreover, clinical studies published in journals with an average impact factor of ≥3 were related to lower readmission rates among both doctors and surgeons [<span>20</span>]. Therefore, the current focus on publishing more papers at the expense of research quality does not promote the advancement of clinical practices.</p><p>Using a single evaluation index incentivizes doctors to prioritize swift and effortless publications, even if it means disregarding scientific research ethics. Medicine is a primarily practice-based field where doctors may excel at diagnosing and treating illnesses, but lack academic interest or research skills. Nevertheless, the current evaluation system requires doctors to publish papers to achieve career promotion. 
Consequently, numerous doctors undertake risks for personal gain and pursue unethical publication avenues [<span>5, 26</span>].</p><p>Chawla [<span>27</span>] discovered over 400 counterfeit papers that potentially originated from the same paper factory and covered several medical fields, including pediatrics, cardiology, endocrinology, nephrology, and vascular surgery. The writers were all affiliated with Chinese hospitals. In 2021, the Royal Society of Chemistry Advances retracted 70 papers from Chinese hospitals due to their strikingly similar graphics and titles [<span>28</span>]. This trend is persistent. For example, Sabel et al. [<span>29</span>] estimated that 34% of neuroscience papers and 24% of medical papers published in 2020 may be fake or plagiarized, with China, Russia, Turkey, Egypt, and India producing the highest proportion of such papers. Other countries, such as Brazil, South Korea, Mexico, Serbia, Iran, Argentina, and Israel, were also studied, and the countries with the lowest percentage of fraudulent papers were Japan, Europe, Canada, Australia, the United States, and Ukraine.</p><p>The current doctor evaluation system excessively emphasizes the quantity of papers published, because hospitals often require doctors to have published a minimum number of studies to be promoted. Doctors may also receive large bonuses to incentivize their pursuit of publications, which leads to the rapid and voluminous publication of papers. Unfortunately, this also promotes predatory publishing practices and wasteful use of scientific research funds [<span>30</span>]. Hence, predatory journals become an ideal option for doctors seeking to publish a large quantity of their work quickly. According to Shamseer et al. [<span>31</span>], over half of the authors who published papers in predatory journals originated from upper middle- or high-income countries. Seventeen percent of papers received external funding, with the US National Institutes of Health being the most common funding agency. In particular, Shamseer et al. [<span>31</span>] highlighted the adverse impact of academic awards based on research publications, which encourage researchers with limited publishing experience to publish in predatory journals. The Inter Academy Partnership [<span>32</span>] found that 9% of the 1872 researchers from over 100 surveyed countries had unintentionally published in predatory journals, whereas 8% were uncertain if they had. It is estimated that more than one million researchers are impacted, with over $4 billion in research funding at risk of being squandered and predatory journals receiving a minimum of $178 million from article-processing charges. Another alarming discovery is that some scholars published papers in deceitful journals deliberately due to scientific research demands. Shamseer et al. [<span>31</span>] posits that the unsuitable assessment of researchers, which relies solely on very vague published metrics, promotes this misconduct.</p><p>These drawbacks of the quantitative evaluation system have garnered increasing attention and discourse within the global scientific community alongside growing calls for the reform of the current doctor evaluation system [<span>7</span>]. The Chinese government has recently acknowledged these issues within its current talent evaluation system. 
In May 2021, President Xi Jinping emphasized the need to improve the evaluation system by breaking through the “break the four unique” principle (It mainly refers to breaking the tendency of only papers, only titles, only academic qualifications, and only awards in talent evaluation) and establishing new standards [<span>33</span>]. However, creating a scientific, objective, and accurate evaluation system for medical professionals may lack consensus in practical terms; therefore, I propose the following suggestions for the academic community and colleagues to discuss and exchange.</p><p>From an evaluation standpoint, doctors should conduct innovative and diverse research and prioritize their research quality over the quantity of published papers. Medical history has repeatedly demonstrated that genuine medical breakthroughs and innovations are often solitary and not immediately evident. Therefore, doctors in the early stages of their careers should be encouraged to avoid subjectivity and explore unknown areas of medicine based on their own curiosity and interest. Specifically, hospitals should allocate a portion of their scientific research funds toward supporting unpopular research topics. Rewarding long-term success, tolerating early failures, and providing researchers with greater experimental freedom can enable the pursuit of innovative scientific projects leading to scientific breakthroughs [<span>24</span>]. Increasing the number of young reviewers would avoid the potential influence of senior experts’ tendency to support the existing knowledge system during the peer review process [<span>4</span>]. When evaluating and rewarding academic performance, hospitals should give more weight to “unpopular” research. Some may argue that this shift in focus would make currently unpopular research subjects popular in the future and this is indeed a possibility. A practical solution for the future is to create a national research database with weights allocated to each research subcategory. It should be noted that these weight distributions should not be static and should be updated dynamically based on research engagement and the number of correlated research findings.</p><p>As advocated by the <i>New England Journal of Medicine</i>, researchers should be evaluated based on the quality and quantity of their scientific contributions rather than the number of published papers [<span>34</span>]. Furthermore, it is crucial to emphasize that most doctors, especially surgeons, should conduct research that is highly relevant to clinical problems rather than pursuing basic research for the sake of increasing the quantity of publications. This focus on basic science may reduce the time doctors have available for clinical practice without providing proportional benefits [<span>20</span>].</p><p>From an objective standpoint, a classification system for evaluating doctors should use different criteria depending on the type of doctor. In the medical field, researchers propose that doctors be classified based on their primary responsibilities, such as clinical practice, research, and teaching, and then use various indicators to evaluate the doctors in each subcategory [<span>35</span>]. This is a sound proposal because individual doctors have varying levels of expertise and their professional intentions, time, and energy are finite. Therefore, expecting one person to excel across all areas is unrealistic [<span>36</span>]. 
Many doctors face significant challenges in engaging in clinical research, such as intensive clinical workloads, time and energy constraints, and limited training in scientific writing. Therefore, any doctor evaluations should prioritize their clinical outcomes rather than the quantity of projects for which they have secured funding successfully or articles published [<span>37, 38</span>]. However, if doctors assume top-level editorial roles, such as editor-in-chief, chapter editor, or editor of a medical textbook, becomes a key member of a national academic committee, or is granted a patent, these achievements could provide noteworthy references for promotions [<span>35</span>].</p><p>An increasing number of clinicians are assuming managerial roles in medical institutions [<span>4</span>]; therefore, to acknowledge these individuals who possess the willingness and skill to engage in various aspects of medical care, two new categories, namely, management and multidisciplinary studies, should be included in addition to the existing three categories (clinical practice, academic research and teaching) to ensure the fair and effective evaluation of all types of doctors.</p><p>From an academic integrity perspective, it is necessary to establish appropriate assessment criteria to guide doctors in conducting open, transparent, and ethical research. Insufficient academic inquiries and reporting are still widespread in the contemporary scientific research landscape [<span>39</span>]. Especially in the medical field, researchers who lack the ability to maintain their integrity in supporting ethical standards may experience significant negative outcomes given the importance of ethical standards to clinical decision-making. The results of a survey by Hammarfelt showed that researchers adjusted their publication practices to suit their institution's evaluation criteria [<span>40</span>]. Therefore, special attention should be given to the evaluation of academic integrity when assessing doctors’ research results. Nevertheless, the current evaluation system tends to prioritize the quantity of publications rather than their reliability, accuracy, reproducibility, and transparency [<span>41</span>]. Rice et al. [<span>5</span>] have shown that a minute percentage of medical schools share school data, publish open access articles, register clinical research before it is conducted, and their evaluation mechanisms comply with worldwide research reporting regulations, such as the Consolidated Standards of Reporting Trialsand Preferred Reporting Items for Systematic Reviews and Meta Analysis. As academic integrity and ethical standards are excluded from the mainstream doctor evaluation system, this phenomenon deserves great attention. As we already know, the promotions and rewards associated with evaluations can influence doctors’ behavior [<span>42</span>]. Therefore, it is reasonable to expect the inclusion of more indicators of academic integrity and scientific research ethics in the evaluation system, which can lead to more credible, open, and transparent clinical research for the benefit of the public, the academic community, and patients. 
However, it is important to carefully assess the value of these standards for science and the public in addition to their feasibility as promotion standards.</p><p>In summary, although the academic-oriented doctor evaluation system, which encourages doctors to produce academic papers and project applications, has been in use worldwide for a significant period, stakeholders are increasingly recognizing its flaws. For instance, the focus on publishing impedes the advancement of medical research, does not provide significant assistance to clinical practice, incentivizes doctors to engage in academic dishonesty to publish their work, and exhausts significant research funds in predatory journals that have little value. Consequently, it is urgently necessary to reform the current doctor evaluation system. We advocate the creation of a system for evaluating doctors that promotes innovation and produces high-quality research. The hypothetical doctor evaluation system should encompass various criteria while stressing scientific research ethics and integrity. Although the new doctor evaluation system may pose a challenge to implement in the short term, this research direction deserves attention and effort from the global academic research community.</p><p><b>Xiaojing Hu</b>: Conceptualization, writing—original draft, writing—review and editing.</p><p>The author declares no conflict of interest.</p><p>Not applicable.</p><p>Not applicable.</p>","PeriodicalId":100601,"journal":{"name":"Health Care Science","volume":"3 1","pages":"67-72"},"PeriodicalIF":0.0000,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/hcs2.82","citationCount":"0","resultStr":"{\"title\":\"Constructing an effective evaluation system to identify doctors’ research capabilities\",\"authors\":\"Xiaojing Hu\",\"doi\":\"10.1002/hcs2.82\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The events of the coronavirus disease 2019 pandemic have emphasized the indispensable role of doctors in promoting public health and well-being [<span>1</span>]. Although medicine and health care are being transformed by technological advances, such as artificial intelligence, big data, genomics, precision medicine, and telemedicine, doctors continue to play a critical role in providing health care. However, a key challenge today is the lack of recognition of doctors by society at large. Hospitals, patients, and public opinion all play a role in evaluating doctors. However, this study will focus on hospitals’ doctor evaluations.</p><p>At the macro level, doctor evaluations influence their value orientation, research directions, and resource allocation. Assessing doctors also impacts their research and behavior at the micro level, as it is a crucial element in their development. It is challenging to build a suitable doctor evaluation system; therefore, doctor evaluations are a common research subject among the global academic community. Various stakeholders have paid attention to this issue, which is still being debated in the literature.</p><p>The global academic community considers an evaluation system based purely on merit and performance to be the most suitable for doctor evaluations [<span>2, 3</span>] with a primary focus on clinical care and scientific research. In addition, doctors are expected to also teach when working at large academic medical centers. 
Among these three sections, the index for scientific research evaluation accounts for the highest proportion [<span>4</span>]. A survey of 170 universities randomly selected from the CWTS Leiden Ranking revealed that among the 92 universities offering a School of Biomedical Sciences and promoting the accessibility of evaluation criteria, the mentioned policies included peer-reviewed publications, funding, national or international reputations, author order, and journal impact factors in 95%, 67%, 48%, 37%, and 28% of cases, respectively. Furthermore, most institutions clearly indicate their expectations for the minimum number of papers to be published annually [<span>5</span>]. Alawi et al. have shown that in many countries, the evaluation of medical professionals is primarily based on their ability to publish papers and secure research funding [<span>6</span>]. The recognition of these achievements under the existing evaluation system has a significant impact on key evaluation factors, such as performance, publications, work roles, and research awards.</p><p>Doctors in Chinese hospitals are assessed primarily on the inclusion of their scientific publications in the Science Citation Index (SCI). The number of published papers indexed in SCI significantly influences their professional ranking and likelihood of promotion. Hence, many young Chinese doctors feel under pressure to publish academic papers in addition to performing their clinical duties [<span>7</span>]. According to the National Science and Technology Workers Survey Group [<span>8</span>], nearly half (45.9%) of Chinese science and technology workers perceived the overreliance on paper evaluations as a significant issue when assessing talent. The majority of authors (93.7%) acknowledged that professional advancement was their main reason for publishing papers, with 90.4% publishing to fulfill diverse evaluation requirements. Among the top three evaluation methods for hospital rankings in China, the number of published SCI papers is the most significant criterion for measuring the level of hospital research. This criterion plays a key role in enhancing the hospitals’ rankings; therefore, most medical staff hired by Chinese hospitals must have publications included in SCI journals. Newly published papers also directly influence medical staff's promotions and bonuses, which are often linked to the journals’ impact factor. Hence, doctors become motivated to publish more papers in journals with high impact factors [<span>9, 10</span>].</p><p>This quantitative evaluation system has undeniably played a vital role in the Chinese scientific community in the last 30 years and has driven the rapid growth of Chinese scientific papers in the literature. The number of Chinese papers indexed in SCI have increased from fifteenth place worldwide in 1991 to second place in 2021 [<span>11</span>], which demonstrates the recent considerable growth in Chinese scientific publishing. In particular, an upsurge in medical paper publications made significant contributions to this growth.</p><p>One of the clear benefits of this quantitative evaluation system is its objectivity, as all individuals are assessed based on a set of easily measurable standards. However, the worldwide academic community has increasingly reflected on the drawbacks of this quantitative evaluation system, such as its harmful impact on scientific progress, among other related issues. 
In particular, the current doctor evaluation system, which overemphasizes the publication of academic papers, is widely believed to cause several problems, such as emphasis on publications, prioritizing quantity over quality, incentives for swift publication. The evaluation of doctors’ research abilities should prioritize the quality of research, optimize classification systems, and develop more appropriate assessment criteria. These issues will be discussed in the following related sections.</p><p>The requirements for competitive evaluation leads to doctors pursuing research publications and sacrificing their scientific curiosity and independence as a consequence. In addition, the quantitative evaluation system has been shown to be a critical but insufficient method that does not fully reflect scientific development and progress [<span>12</span>]. A primary goal of medical research is to achieve a comprehensive understanding of disease and in the pursuit of knowledge, the process of caring for patients gives doctors a unique research perspective [<span>13</span>]. Studies that incorporate distinctive clinical queries can effectively enhance our knowledge of diseases [<span>14</span>]. The independence of doctors’ research depends on their curiosity [<span>15</span>], but the current evaluation system curbs their curiosity and independence because the basis of competition is that competing research is similar, and without similarity there is no competition. Nevertheless, it is important to acknowledge that genuinely ingenious and pioneering research must tackle distinct issues, and being distinct implies that it is arduous to compete on the same level. Park et al. [<span>16</span>] investigated the impact of newly published papers on the interpretation of historical documents and found a steady decline in the proportion of “breakthroughs” in scientific research since 1945, despite the recent significant scientific and technological advancements. Park et al. [<span>16</span>] also analyzed the most frequently used verbs in the papers and found that 1950s researchers tended to use words such as “produce” or “determine” when discussing the creation or discovery of concepts or objects. However, recent studies conducted during the 2010s used terms such as “improve” or “enhance” to indicate gradual progress. Hence, present-day research can be said to be less revolutionary than research in the past. Chu and Evans [<span>17</span>] analyzed 1.8 billion citations of 90 million papers published between 1960 and 2014, and found that newly published papers tended to build upon and refine existing perspectives rather than introduce groundbreaking ideas that disrupt the normative status quo. These findings demonstrate that the quantitative evaluation system for doctors only leads to the publication of “ordinary” papers that may advance and enhance current knowledge but are unable to generate truly revolutionary and innovative research outcomes.</p><p>Focusing on publishing a large number of academic papers rather than prioritizing their quality is not effective for enhancing clinical practice, which is one of the primary aims of medical research. Clinical research is the foundation of evidence-based medicine and landmark clinical trials have contributed remarkably to making considerable progress in improving disease prevention and treatment [<span>18</span>], particularly innovative clinical trials [<span>19</span>]. 
Despite criticisms indicating that the majority of doctors should prioritize their clinical practice over conducting research with the purpose of publishing papers [<span>20</span>], it cannot be ignored that many significant strides in modern medicine have been the result of doctors’ efforts to cure diseases [<span>21</span>]. Furthermore, conducting clinical research may enable doctors to effectively communicate their clinical and translational research findings to both patients and the public compared with doctors who do not conduct clinical research [<span>22</span>]. Diverse research strategies can enhance medical practices, such as promoting high-volume or -quality research productivity [<span>23</span>]. The first strategy is represented by the existing doctor evaluation system. Regrettably, empirical investigations have shown that the advancement of medical practice through high-caliber research is not accomplished by increasing the quantity of studies [<span>24, 25</span>]. Moreover, clinical studies published in journals with an average impact factor of ≥3 were related to lower readmission rates among both doctors and surgeons [<span>20</span>]. Therefore, the current focus on publishing more papers at the expense of research quality does not promote the advancement of clinical practices.</p><p>Using a single evaluation index incentivizes doctors to prioritize swift and effortless publications, even if it means disregarding scientific research ethics. Medicine is a primarily practice-based field where doctors may excel at diagnosing and treating illnesses, but lack academic interest or research skills. Nevertheless, the current evaluation system requires doctors to publish papers to achieve career promotion. Consequently, numerous doctors undertake risks for personal gain and pursue unethical publication avenues [<span>5, 26</span>].</p><p>Chawla [<span>27</span>] discovered over 400 counterfeit papers that potentially originated from the same paper factory and covered several medical fields, including pediatrics, cardiology, endocrinology, nephrology, and vascular surgery. The writers were all affiliated with Chinese hospitals. In 2021, the Royal Society of Chemistry Advances retracted 70 papers from Chinese hospitals due to their strikingly similar graphics and titles [<span>28</span>]. This trend is persistent. For example, Sabel et al. [<span>29</span>] estimated that 34% of neuroscience papers and 24% of medical papers published in 2020 may be fake or plagiarized, with China, Russia, Turkey, Egypt, and India producing the highest proportion of such papers. Other countries, such as Brazil, South Korea, Mexico, Serbia, Iran, Argentina, and Israel, were also studied, and the countries with the lowest percentage of fraudulent papers were Japan, Europe, Canada, Australia, the United States, and Ukraine.</p><p>The current doctor evaluation system excessively emphasizes the quantity of papers published, because hospitals often require doctors to have published a minimum number of studies to be promoted. Doctors may also receive large bonuses to incentivize their pursuit of publications, which leads to the rapid and voluminous publication of papers. Unfortunately, this also promotes predatory publishing practices and wasteful use of scientific research funds [<span>30</span>]. Hence, predatory journals become an ideal option for doctors seeking to publish a large quantity of their work quickly. According to Shamseer et al. 
[<span>31</span>], over half of the authors who published papers in predatory journals originated from upper middle- or high-income countries. Seventeen percent of papers received external funding, with the US National Institutes of Health being the most common funding agency. In particular, Shamseer et al. [<span>31</span>] highlighted the adverse impact of academic awards based on research publications, which encourage researchers with limited publishing experience to publish in predatory journals. The Inter Academy Partnership [<span>32</span>] found that 9% of the 1872 researchers from over 100 surveyed countries had unintentionally published in predatory journals, whereas 8% were uncertain if they had. It is estimated that more than one million researchers are impacted, with over $4 billion in research funding at risk of being squandered and predatory journals receiving a minimum of $178 million from article-processing charges. Another alarming discovery is that some scholars published papers in deceitful journals deliberately due to scientific research demands. Shamseer et al. [<span>31</span>] posits that the unsuitable assessment of researchers, which relies solely on very vague published metrics, promotes this misconduct.</p><p>These drawbacks of the quantitative evaluation system have garnered increasing attention and discourse within the global scientific community alongside growing calls for the reform of the current doctor evaluation system [<span>7</span>]. The Chinese government has recently acknowledged these issues within its current talent evaluation system. In May 2021, President Xi Jinping emphasized the need to improve the evaluation system by breaking through the “break the four unique” principle (It mainly refers to breaking the tendency of only papers, only titles, only academic qualifications, and only awards in talent evaluation) and establishing new standards [<span>33</span>]. However, creating a scientific, objective, and accurate evaluation system for medical professionals may lack consensus in practical terms; therefore, I propose the following suggestions for the academic community and colleagues to discuss and exchange.</p><p>From an evaluation standpoint, doctors should conduct innovative and diverse research and prioritize their research quality over the quantity of published papers. Medical history has repeatedly demonstrated that genuine medical breakthroughs and innovations are often solitary and not immediately evident. Therefore, doctors in the early stages of their careers should be encouraged to avoid subjectivity and explore unknown areas of medicine based on their own curiosity and interest. Specifically, hospitals should allocate a portion of their scientific research funds toward supporting unpopular research topics. Rewarding long-term success, tolerating early failures, and providing researchers with greater experimental freedom can enable the pursuit of innovative scientific projects leading to scientific breakthroughs [<span>24</span>]. Increasing the number of young reviewers would avoid the potential influence of senior experts’ tendency to support the existing knowledge system during the peer review process [<span>4</span>]. When evaluating and rewarding academic performance, hospitals should give more weight to “unpopular” research. Some may argue that this shift in focus would make currently unpopular research subjects popular in the future and this is indeed a possibility. 
A practical solution for the future is to create a national research database with weights allocated to each research subcategory. It should be noted that these weight distributions should not be static and should be updated dynamically based on research engagement and the number of correlated research findings.</p><p>As advocated by the <i>New England Journal of Medicine</i>, researchers should be evaluated based on the quality and quantity of their scientific contributions rather than the number of published papers [<span>34</span>]. Furthermore, it is crucial to emphasize that most doctors, especially surgeons, should conduct research that is highly relevant to clinical problems rather than pursuing basic research for the sake of increasing the quantity of publications. This focus on basic science may reduce the time doctors have available for clinical practice without providing proportional benefits [<span>20</span>].</p><p>From an objective standpoint, a classification system for evaluating doctors should use different criteria depending on the type of doctor. In the medical field, researchers propose that doctors be classified based on their primary responsibilities, such as clinical practice, research, and teaching, and then use various indicators to evaluate the doctors in each subcategory [<span>35</span>]. This is a sound proposal because individual doctors have varying levels of expertise and their professional intentions, time, and energy are finite. Therefore, expecting one person to excel across all areas is unrealistic [<span>36</span>]. Many doctors face significant challenges in engaging in clinical research, such as intensive clinical workloads, time and energy constraints, and limited training in scientific writing. Therefore, any doctor evaluations should prioritize their clinical outcomes rather than the quantity of projects for which they have secured funding successfully or articles published [<span>37, 38</span>]. However, if doctors assume top-level editorial roles, such as editor-in-chief, chapter editor, or editor of a medical textbook, becomes a key member of a national academic committee, or is granted a patent, these achievements could provide noteworthy references for promotions [<span>35</span>].</p><p>An increasing number of clinicians are assuming managerial roles in medical institutions [<span>4</span>]; therefore, to acknowledge these individuals who possess the willingness and skill to engage in various aspects of medical care, two new categories, namely, management and multidisciplinary studies, should be included in addition to the existing three categories (clinical practice, academic research and teaching) to ensure the fair and effective evaluation of all types of doctors.</p><p>From an academic integrity perspective, it is necessary to establish appropriate assessment criteria to guide doctors in conducting open, transparent, and ethical research. Insufficient academic inquiries and reporting are still widespread in the contemporary scientific research landscape [<span>39</span>]. Especially in the medical field, researchers who lack the ability to maintain their integrity in supporting ethical standards may experience significant negative outcomes given the importance of ethical standards to clinical decision-making. The results of a survey by Hammarfelt showed that researchers adjusted their publication practices to suit their institution's evaluation criteria [<span>40</span>]. 
Therefore, special attention should be given to the evaluation of academic integrity when assessing doctors’ research results. Nevertheless, the current evaluation system tends to prioritize the quantity of publications rather than their reliability, accuracy, reproducibility, and transparency [<span>41</span>]. Rice et al. [<span>5</span>] have shown that a minute percentage of medical schools share school data, publish open access articles, register clinical research before it is conducted, and their evaluation mechanisms comply with worldwide research reporting regulations, such as the Consolidated Standards of Reporting Trialsand Preferred Reporting Items for Systematic Reviews and Meta Analysis. As academic integrity and ethical standards are excluded from the mainstream doctor evaluation system, this phenomenon deserves great attention. As we already know, the promotions and rewards associated with evaluations can influence doctors’ behavior [<span>42</span>]. Therefore, it is reasonable to expect the inclusion of more indicators of academic integrity and scientific research ethics in the evaluation system, which can lead to more credible, open, and transparent clinical research for the benefit of the public, the academic community, and patients. However, it is important to carefully assess the value of these standards for science and the public in addition to their feasibility as promotion standards.</p><p>In summary, although the academic-oriented doctor evaluation system, which encourages doctors to produce academic papers and project applications, has been in use worldwide for a significant period, stakeholders are increasingly recognizing its flaws. For instance, the focus on publishing impedes the advancement of medical research, does not provide significant assistance to clinical practice, incentivizes doctors to engage in academic dishonesty to publish their work, and exhausts significant research funds in predatory journals that have little value. Consequently, it is urgently necessary to reform the current doctor evaluation system. We advocate the creation of a system for evaluating doctors that promotes innovation and produces high-quality research. The hypothetical doctor evaluation system should encompass various criteria while stressing scientific research ethics and integrity. 
Although the new doctor evaluation system may pose a challenge to implement in the short term, this research direction deserves attention and effort from the global academic research community.</p><p><b>Xiaojing Hu</b>: Conceptualization, writing—original draft, writing—review and editing.</p><p>The author declares no conflict of interest.</p><p>Not applicable.</p><p>Not applicable.</p>\",\"PeriodicalId\":100601,\"journal\":{\"name\":\"Health Care Science\",\"volume\":\"3 1\",\"pages\":\"67-72\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/hcs2.82\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Health Care Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/hcs2.82\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Health Care Science","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/hcs2.82","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

The events of the coronavirus disease 2019 pandemic have emphasized the indispensable role of doctors in promoting public health and well-being [1]. Although medicine and health care are being transformed by technological advances such as artificial intelligence, big data, genomics, precision medicine, and telemedicine, doctors continue to play a critical role in providing health care. However, a key challenge today is the lack of recognition of doctors by society at large. Hospitals, patients, and public opinion all play a role in evaluating doctors; this study focuses on hospitals’ evaluations of doctors.

At the macro level, doctor evaluations influence doctors’ value orientation, research directions, and resource allocation. At the micro level, assessment also shapes doctors’ research and behavior, as it is a crucial element in their professional development. Building a suitable doctor evaluation system is challenging; doctor evaluation is therefore a common research subject in the global academic community. Various stakeholders have paid attention to this issue, which is still being debated in the literature.

The global academic community considers an evaluation system based purely on merit and performance to be the most suitable for doctor evaluations [2, 3], with a primary focus on clinical care and scientific research. In addition, doctors working at large academic medical centers are also expected to teach. Among these three domains, scientific research carries the greatest weight in evaluation [4]. A survey of 170 universities randomly selected from the CWTS Leiden Ranking revealed that, among the 92 universities with a school of biomedical sciences and publicly accessible evaluation criteria, the stated policies included peer-reviewed publications, funding, national or international reputation, author order, and journal impact factors in 95%, 67%, 48%, 37%, and 28% of cases, respectively. Furthermore, most institutions clearly state the minimum number of papers they expect to be published annually [5]. Alawi et al. have shown that in many countries, the evaluation of medical professionals is based primarily on their ability to publish papers and secure research funding [6]. The recognition of these achievements under the existing evaluation system has a significant impact on key evaluation factors, such as performance, publications, work roles, and research awards.

Doctors in Chinese hospitals are assessed primarily on whether their scientific publications are indexed in the Science Citation Index (SCI). The number of SCI-indexed papers significantly influences their professional ranking and likelihood of promotion. Hence, many young Chinese doctors feel pressure to publish academic papers in addition to performing their clinical duties [7]. According to the National Science and Technology Workers Survey Group [8], nearly half (45.9%) of Chinese science and technology workers perceived the overreliance on papers as a significant problem in talent assessment. The majority of authors (93.7%) acknowledged that professional advancement was their main reason for publishing papers, and 90.4% published to fulfill various evaluation requirements. Among the top three evaluation methods used in Chinese hospital rankings, the number of published SCI papers is the most significant criterion for measuring a hospital’s research level. Because this criterion plays a key role in improving a hospital’s ranking, most medical staff hired by Chinese hospitals are required to have publications in SCI-indexed journals. Newly published papers also directly influence medical staff’s promotions and bonuses, which are often linked to the journals’ impact factors. Hence, doctors are motivated to publish more papers in journals with high impact factors [9, 10].
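To make this incentive structure concrete, the toy sketch below caricatures a purely count- and impact-factor-driven scheme in which promotion eligibility depends only on the number of SCI-indexed papers and the annual bonus scales with summed journal impact factors. This is an illustrative assumption for discussion only, not any hospital's actual policy; the threshold, bonus rate, and function names are hypothetical.

```python
# Hypothetical illustration of a metric-driven incentive scheme; the numbers
# and names below are assumptions, not an actual hospital policy.

def promotion_eligible(sci_paper_count: int, minimum_required: int = 3) -> bool:
    """Eligibility hinges on the count of SCI-indexed papers alone."""
    return sci_paper_count >= minimum_required


def annual_bonus(journal_impact_factors: list[float], rate_per_point: float = 1000.0) -> float:
    """Bonus scales with the summed impact factors of the publishing journals."""
    return sum(journal_impact_factors) * rate_per_point


# Two doctors with identical clinical outcomes can fare very differently
# under such rules, purely on the basis of publication metrics.
print(promotion_eligible(4))             # True
print(annual_bonus([2.0, 3.5, 10.5]))    # 16000.0
```

Nothing in such a rule refers to clinical outcomes, research quality, or integrity, which is precisely the imbalance discussed in the remainder of this article.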

This quantitative evaluation system has undeniably played a vital role in the Chinese scientific community over the last 30 years and has driven the rapid growth of Chinese scientific papers in the literature. China’s output of SCI-indexed papers rose from fifteenth place worldwide in 1991 to second place in 2021 [11], which demonstrates the considerable recent growth in Chinese scientific publishing. In particular, an upsurge in medical paper publications contributed significantly to this growth.

One of the clear benefits of this quantitative evaluation system is its objectivity, as all individuals are assessed against a set of easily measurable standards. However, the worldwide academic community has increasingly reflected on the drawbacks of this quantitative evaluation system, such as its harmful impact on scientific progress, among other related issues. In particular, the current doctor evaluation system, which overemphasizes the publication of academic papers, is widely believed to cause several problems, such as an overemphasis on publications, the prioritization of quantity over quality, and incentives for hasty publication. The evaluation of doctors’ research abilities should instead prioritize the quality of research, optimize classification systems, and develop more appropriate assessment criteria. These issues are discussed in the following sections.

The requirements of competitive evaluation lead doctors to pursue research publications and, as a consequence, to sacrifice their scientific curiosity and independence. In addition, the quantitative evaluation system has been shown to be a critical but insufficient method that does not fully reflect scientific development and progress [12]. A primary goal of medical research is to achieve a comprehensive understanding of disease, and in this pursuit of knowledge, the process of caring for patients gives doctors a unique research perspective [13]. Studies that incorporate distinctive clinical questions can effectively enhance our knowledge of diseases [14]. The independence of doctors’ research depends on their curiosity [15], but the current evaluation system curbs both curiosity and independence, because competition presupposes similarity: projects can only be compared when they address similar questions, and without similarity there is no competition. Genuinely ingenious and pioneering research, however, must tackle distinct issues, and being distinct makes it difficult to compete on the same terms. Park et al. [16] investigated the extent to which newly published papers build on or displace prior literature and found a steady decline in the proportion of “breakthroughs” in scientific research since 1945, despite recent significant scientific and technological advances. Park et al. [16] also analyzed the verbs used most frequently in papers and found that researchers in the 1950s tended to use words such as “produce” or “determine” when discussing the creation or discovery of concepts or objects, whereas studies from the 2010s used terms such as “improve” or “enhance” to indicate gradual progress. Hence, present-day research can be said to be less revolutionary than research in the past. Chu and Evans [17] analyzed 1.8 billion citations of 90 million papers published between 1960 and 2014 and found that newly published papers tended to build upon and refine existing perspectives rather than introduce groundbreaking ideas that disrupt the status quo. These findings suggest that the quantitative evaluation system for doctors mainly leads to the publication of “ordinary” papers that may advance and refine current knowledge but rarely generate truly revolutionary and innovative research outcomes.

Focusing on publishing a large number of academic papers rather than prioritizing their quality is not effective for enhancing clinical practice, which is one of the primary aims of medical research. Clinical research is the foundation of evidence-based medicine, and landmark clinical trials, particularly innovative ones [19], have contributed remarkably to progress in disease prevention and treatment [18]. Despite criticism that the majority of doctors should prioritize clinical practice over conducting research for the purpose of publishing papers [20], it cannot be ignored that many significant strides in modern medicine have resulted from doctors’ efforts to cure diseases [21]. Furthermore, doctors who conduct clinical research may communicate their clinical and translational research findings to patients and the public more effectively than doctors who do not [22]. Different research strategies can enhance medical practice, for example, promoting either high-volume or high-quality research output [23]; the existing doctor evaluation system embodies the first strategy. Regrettably, empirical investigations have shown that advancing medical practice through high-caliber research is not accomplished by simply increasing the quantity of studies [24, 25]. Moreover, clinical studies published in journals with an average impact factor of ≥3 were associated with lower readmission rates for both physicians and surgeons [20]. Therefore, the current focus on publishing more papers at the expense of research quality does not promote the advancement of clinical practice.

Using a single evaluation index incentivizes doctors to prioritize swift and effortless publications, even if it means disregarding scientific research ethics. Medicine is a primarily practice-based field where doctors may excel at diagnosing and treating illnesses, but lack academic interest or research skills. Nevertheless, the current evaluation system requires doctors to publish papers to achieve career promotion. Consequently, numerous doctors undertake risks for personal gain and pursue unethical publication avenues [5, 26].

Chawla [27] discovered over 400 counterfeit papers that potentially originated from the same paper mill and covered several medical fields, including pediatrics, cardiology, endocrinology, nephrology, and vascular surgery. The authors were all affiliated with Chinese hospitals. In 2021, the Royal Society of Chemistry journal RSC Advances retracted 70 papers from Chinese hospitals because of their strikingly similar figures and titles [28]. This trend has persisted. For example, Sabel et al. [29] estimated that 34% of neuroscience papers and 24% of medical papers published in 2020 may be fake or plagiarized, with China, Russia, Turkey, Egypt, and India producing the highest proportions of such papers. Other countries, such as Brazil, South Korea, Mexico, Serbia, Iran, Argentina, and Israel, were also studied; the lowest proportions of fraudulent papers were found in Japan, Europe, Canada, Australia, the United States, and Ukraine.

The current doctor evaluation system excessively emphasizes the quantity of papers published, because hospitals often require doctors to have published a minimum number of studies to be promoted. Doctors may also receive large bonuses that incentivize the pursuit of publications, leading to the rapid and voluminous publication of papers. Unfortunately, this also promotes predatory publishing practices and the wasteful use of research funds [30]. Predatory journals thus become an attractive option for doctors seeking to publish a large quantity of work quickly. According to Shamseer et al. [31], over half of the authors who published papers in predatory journals came from upper middle- or high-income countries, and 17% of these papers reported external funding, with the US National Institutes of Health being the most frequently named funder. In particular, Shamseer et al. [31] highlighted the adverse impact of academic rewards based on research publications, which encourage researchers with limited publishing experience to publish in predatory journals. The InterAcademy Partnership [32] found that 9% of the 1872 researchers surveyed from over 100 countries had unintentionally published in predatory journals, whereas 8% were uncertain whether they had. It is estimated that more than one million researchers are affected, with over $4 billion in research funding at risk of being squandered and predatory journals receiving at least $178 million in article-processing charges. Another alarming finding is that some scholars deliberately published papers in deceitful journals to meet research requirements. Shamseer et al. [31] posit that the unsuitable assessment of researchers, which relies solely on vague publication metrics, promotes this misconduct.

These drawbacks of the quantitative evaluation system have garnered increasing attention and discussion within the global scientific community, alongside growing calls for reform of the current doctor evaluation system [7]. The Chinese government has recently acknowledged these issues within its talent evaluation system. In May 2021, President Xi Jinping emphasized the need to improve the evaluation system by implementing the “break the four onlys” policy, which refers to abandoning the tendency to evaluate talent based only on papers, titles, academic qualifications, and awards, and by establishing new standards [33]. However, there is not yet a practical consensus on how to create a scientific, objective, and accurate evaluation system for medical professionals; therefore, I propose the following suggestions for the academic community and colleagues to discuss.

From an evaluation standpoint, doctors should be encouraged to conduct innovative and diverse research and to prioritize research quality over the quantity of published papers. Medical history has repeatedly demonstrated that genuine medical breakthroughs and innovations are often solitary pursuits whose value is not immediately evident. Therefore, doctors in the early stages of their careers should be encouraged to look beyond currently popular topics and explore unknown areas of medicine based on their own curiosity and interests. Specifically, hospitals should allocate a portion of their research funds to supporting unpopular research topics. Rewarding long-term success, tolerating early failures, and providing researchers with greater experimental freedom can enable the pursuit of innovative scientific projects that lead to breakthroughs [24]. Increasing the number of young reviewers would also reduce the potential influence of senior experts’ tendency to favor the existing knowledge system during peer review [4]. When evaluating and rewarding academic performance, hospitals should give more weight to “unpopular” research. Some may argue that this shift in focus would eventually make currently unpopular research subjects popular, and this is indeed a possibility. A practical long-term solution is to create a national research database with weights allocated to each research subcategory. These weights should not be static; they should be updated dynamically based on research engagement and the number of correlated research findings, as illustrated in the sketch below.
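To make the idea of dynamic weighting more concrete, the following is a minimal, purely hypothetical sketch; the subcategory names, the `update_weights` function, and the inverse-engagement update rule are illustrative assumptions of mine, not part of any existing database or policy. The intent is simply that crowded topics earn relatively less evaluation credit than neglected ones, with weights recalculated each review period.

```python
# Hypothetical sketch of dynamic subcategory weighting (illustrative only).
# Assumption: each subcategory reports how many new studies engaged with it
# during the current review period; weights shrink toward `floor` as a topic
# becomes crowded, so "unpopular" research earns relatively more credit.

from dataclasses import dataclass
from typing import List


@dataclass
class Subcategory:
    name: str
    new_studies: int      # studies registered in this review period (assumed input)
    weight: float = 1.0   # evaluation weight applied to publications in this topic


def update_weights(subcategories: List[Subcategory], floor: float = 0.2) -> List[Subcategory]:
    """Recompute weights inversely to research engagement.

    The most crowded topic is pushed toward `floor`; an untouched topic keeps
    a weight of 1.0. The rule itself is an assumption chosen for illustration.
    """
    max_engagement = max(s.new_studies for s in subcategories) or 1
    for s in subcategories:
        crowding = s.new_studies / max_engagement          # 0 (neglected) .. 1 (crowded)
        s.weight = round(floor + (1.0 - floor) * (1.0 - crowding), 3)
    return subcategories


if __name__ == "__main__":
    topics = [
        Subcategory("tumor immunotherapy", new_studies=900),
        Subcategory("rare metabolic disorders", new_studies=40),
        Subcategory("rehabilitation nursing", new_studies=120),
    ]
    for topic in update_weights(topics):
        print(f"{topic.name}: weight = {topic.weight}")
```

Any real implementation would, of course, require national-level data on research output and careful calibration of the update rule and the length of the review period.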

As advocated by the New England Journal of Medicine, researchers should be evaluated based on the quality and quantity of their scientific contributions rather than the number of published papers [34]. Furthermore, it is crucial to emphasize that most doctors, especially surgeons, should conduct research that is highly relevant to clinical problems rather than pursuing basic research for the sake of increasing the quantity of publications. This focus on basic science may reduce the time doctors have available for clinical practice without providing proportional benefits [20].

From an objective standpoint, a classification-based system should evaluate doctors using different criteria depending on their type of work. In the medical field, researchers have proposed that doctors be classified according to their primary responsibilities, such as clinical practice, research, and teaching, and then evaluated using indicators specific to each subcategory [35]. This is a sound proposal, because individual doctors have varying levels of expertise, and their professional intentions, time, and energy are finite; expecting one person to excel across all areas is unrealistic [36]. Many doctors face significant obstacles to engaging in clinical research, such as intensive clinical workloads, time and energy constraints, and limited training in scientific writing. Therefore, doctor evaluations should prioritize clinical outcomes rather than the number of funded projects secured or articles published [37, 38]. However, if a doctor assumes a top-level editorial role, such as editor-in-chief, chapter editor, or editor of a medical textbook, becomes a key member of a national academic committee, or is granted a patent, these achievements could serve as noteworthy references for promotion [35].

An increasing number of clinicians are assuming managerial roles in medical institutions [4]. To acknowledge individuals who have the willingness and skill to engage in multiple aspects of medical care, two new categories, management and multidisciplinary studies, should be added to the existing three (clinical practice, academic research, and teaching) to ensure the fair and effective evaluation of all types of doctors.

From an academic integrity perspective, appropriate assessment criteria are needed to guide doctors in conducting open, transparent, and ethical research. Inadequate research conduct and reporting remain widespread in the contemporary scientific landscape [39]. In the medical field especially, failures to uphold ethical standards can have significant negative consequences, given the importance of research integrity to clinical decision-making. A survey by Hammarfelt showed that researchers adjust their publication practices to suit their institution's evaluation criteria [40]. Therefore, special attention should be given to academic integrity when assessing doctors’ research results. Nevertheless, the current evaluation system tends to prioritize the quantity of publications rather than their reliability, accuracy, reproducibility, and transparency [41]. Rice et al. [5] have shown that only a small percentage of medical schools incentivize data sharing, open-access publication, or the registration of clinical research before it is conducted, or require that evaluation mechanisms comply with international research reporting guidelines such as the Consolidated Standards of Reporting Trials (CONSORT) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Because academic integrity and ethical standards are largely excluded from the mainstream doctor evaluation system, this phenomenon deserves serious attention. As noted above, the promotions and rewards associated with evaluations can influence doctors’ behavior [42]. It is therefore reasonable to expect that including more indicators of academic integrity and research ethics in the evaluation system would lead to more credible, open, and transparent clinical research for the benefit of the public, the academic community, and patients. However, the value of these standards for science and the public, as well as their feasibility as promotion criteria, must be assessed carefully.

In summary, although the academically oriented doctor evaluation system, which pushes doctors to produce papers and project applications, has been used worldwide for a long time, stakeholders are increasingly recognizing its flaws: the focus on publishing impedes the advancement of medical research, provides little benefit to clinical practice, incentivizes doctors to engage in academic dishonesty in order to publish, and drains significant research funds into predatory journals of little value. Consequently, reform of the current doctor evaluation system is urgently needed. I advocate the creation of an evaluation system that promotes innovation and high-quality research, encompasses diverse criteria, and stresses research ethics and integrity. Although such a system may be challenging to implement in the short term, this research direction deserves the attention and effort of the global academic community.

Xiaojing Hu: Conceptualization, writing—original draft, writing—review and editing.

The author declares no conflict of interest.

Not applicable.

Not applicable.
