Journal of Evidence-Based Medicine | DOI: 10.1111/jebm.12550 | Published September 19, 2023
Author: Clovis Mariano Faggion Jr
Methodological quality, risk of bias, and reporting quality: A confusion persists
Several types of tools are used by researchers to assess the risk of bias (RoB) and methodological quality of studies included in systematic reviews.1 This is the case with the Cochrane approach for randomized controlled trials (RCTs), which is based on domain assessment,2 and the Newcastle-Ottawa Scale (NOS) for assessing the methodological quality of nonrandomized studies in meta-analyses, including case-control and cohort studies.3 Other tools serve other purposes; for example, tools have been developed to assess how a study is reported in a scientific article. This is the case with the CONSORT4 and STROBE5 checklists, which guide the reporting of RCTs and observational studies, respectively. However, it appears that some researchers inappropriately use reporting checklists to assess the methodological quality and RoB of studies included in systematic reviews.6
The objective of this letter is to clarify different concepts related to the methodological assessment of the studies included in a systematic review. To support the arguments in this letter, the author also reports some examples of reporting checklists being used to assess the methodological quality and RoB of studies included in systematic reviews across different biomedical disciplines.
The terms methodological quality, RoB, and reporting quality still appear to cause confusion in how they are applied in the biomedical literature. Methodological quality involves the application of specific methodological safeguards in the planning and conduct of a study to avoid or reduce systematic errors.7 RoB is the chance of obtaining a biased estimate, in other words, an overestimation or underestimation of the true effect.2 Assessing RoB requires interpreting and judging how methodological flaws (or a lack of methodological safeguards) may affect a study's results. Methodological quality assessment typically checks whether safeguards were applied, with no emphasis on understanding whether those safeguards were in fact able to ensure that the study produced accurate estimates (i.e., values that are neither under- nor overestimated).7 Reporting quality (sometimes called completeness of reporting) is a different concept. Reporting checklists, as the name implies, evaluate whether a study is reported in detail and whether important information is provided to allow reproducibility.8 However, reporting checklists do not assess whether the reported procedure was, in fact, the correct one to use. Hence, a tool designed to assess reporting does not have adequate content validity to assess whether a study is of good or bad quality, or whether it has high or low RoB. Figure 1 summarizes the objectives of the different tools.
To understand whether researchers are applying the appropriate tool to their specific situation, the author of this letter searched the PubMed database on November 2, 2022 for relevant literature. The focus of the search was to identify literature on the potentially inadequate use of reporting guidelines in systematic reviews. It is important to note that this letter does not intend to be a systematic review of the topic but rather to provide some examples that illustrate the problem. The search included articles published between October 2020 and December 2022, identified with predefined keywords. The search strategy, the eligibility criteria, and the rationale for the assessment are reported in the Supplementary File.
The search yielded 217 potentially relevant articles; after assessment of 208 full texts, 100 publications with inappropriate use of reporting checklists and 108 with appropriate use were identified (Figure S1). The most frequently misused checklist was STROBE (n = 54, 47.37%), followed by CONSORT (n = 24, 21.05%). The reporting tools are described in Table S1. Dentistry was the most frequent background (n = 20, 20%) of the articles' corresponding authors, followed by nursing (n = 12, 12%) (Table S2).
Inappropriate use of the five selected reporting tools was identified in several medical disciplines (Table S2, Supplementary File). In dentistry, for example, one explanation for the high prevalence of inappropriate use of reporting tools is the lack of a proper tool for evaluating the methodological quality/RoB of basic research in the form of in vitro studies. The authors of eleven reviews declared that they had used a checklist that this author developed more than 10 years ago.9 That checklist adopted some items of the CONSORT checklist for RCTs and had the main objective of assessing the reporting of in vitro studies in dentistry. Interestingly, some authors of systematic reviews in the present sample claimed to have used this checklist to assess both methodological quality and RoB. Similarly, of the 13 systematic reviews in the nursing field, seven applied the STROBE checklist in an attempt to assess the methodological quality of included observational studies. In fact, a study published more than 10 years ago already identified incorrect use of the STROBE checklist in systematic reviews.6 In that study, the authors reported that 10 (53%) of 19 systematic reviews used STROBE inappropriately as a tool to evaluate methodological quality. These results are in agreement with the present findings, in which 47.4% of the selected systematic reviews used the STROBE checklist inappropriately. It therefore appears that little progress has been made in the last decade in raising awareness of the correct use of these tools among researchers in the biomedical fields. As with the checklist for in vitro dental studies,9 authors also seem to use STROBE in attempts to assess both methodological quality and RoB.
Another interesting finding was the modification of reporting tools by systematic review authors to assess the included primary studies. Ideally, modified tools should be tested and validated before they are applied, for example, by investigating their validity, reliability, and utility.10 The authors reported different forms of scoring for methodological quality and RoB, but presented no information on the validation of these changes. It is also unclear whether these modifications were preceded by any contact with, or permission from, the authors who originally produced the tools. Some authors also seem to be confused about the type of tool because of its structure; for example, some have reported the STROBE reporting checklist as a scale.11
The findings reported here suggest that there may be important gaps in authors' knowledge of the methodological aspects of research. It appears that some may simply have adopted an approach in their systematic reviews based on what has been done in the past, without questioning its validity. This interpretation seems to hold for the misapplication of the checklist developed by the author of this letter. A potential explanation for this behavior lies in the number of previous citations of the article describing the checklist9 and in its author's background, which might have influenced others to apply it,12 although for different purposes. Some evidence suggests that papers whose authors are prominent, prestigious, or well recognized may receive more citations.12 Similarly, early citations may be considered a predictor of future citations.12 Hence, this behavior can perpetuate inappropriate use of such tools in assessing the methodological quality and RoB of studies included in systematic reviews. It is suggested that undergraduate and graduate courses in the biomedical sciences emphasize the differences between the concepts presented here. Another action that would likely improve the current situation is the inclusion of a methodologist on every systematic review team: the assessment of primary studies included in a systematic review can be expected to be better planned when a methodologist is part of the research team. A similar rationale underlies the inclusion of a librarian to improve the quality of search strategies in systematic reviews.13 Finally, any change to a methodological tool's structure or scoring system should be accompanied by some form of robust validation.14 This procedure would help ensure that the modified tool in fact measures what it is supposed to measure.
Some limitations of the present letter should be reported. Only five reporting checklists were the targets of the search, which might have limited the number and characteristics of the selected studies; the reported results may therefore be representative only of these five checklists. In fact, the situation could be even worse than reported, which would have been revealed if more checklists had been taken into account. The review was also conducted by a single researcher, and some bias might have influenced the results. Finally, the search was limited to a certain period of time and conducted in only one database. The information reported on the frequency of tools and disciplines should therefore be interpreted with caution.
In conclusion, the present letter suggests that many researchers inappropriately apply approaches to assessing the methodological quality and RoB of the primary studies included in systematic reviews. The specific backgrounds of some of these researchers were identified, which may indicate poor knowledge of methodological concepts in those disciplines. Editors and reviewers of scientific journals should pay attention to the inadequate use of methodological tools when assessing systematic reviews for publication.
The author has no relevant financial or nonfinancial interests to disclose.
The author declares that no funds, grants, or other support were received during the preparation of this manuscript.