<div><h3>Objectives</h3><div>Systematic reviews (SRs) are pivotal to evidence-based medicine. Structured tools exist to guide their reporting and appraisal, such as Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and A Measurement Tool to Assess Systematic Reviews (AMSTAR). However, there are limited data on whether peer reviewers of SRs use such tools when assessing manuscripts. This study aimed to investigate the use of structured tools by peer reviewers when assessing SRs of interventions, identify which tools are used, and explore perceived needs for structured tools to support the peer-review process.</div></div><div><h3>Study Design and Setting</h3><div>In 2025, we conducted a cross-sectional study targeting individuals who peer-reviewed at least 1 SR of interventions in the past year. The online survey collected data on demographics, familiarity with and use of structured tools, and open-ended responses on potential needs.</div></div><div><h3>Results</h3><div>Two hundred seventeen peer reviewers took part in the study. PRISMA was the most familiar tool (99% familiar or very familiar) and most frequently used during peer review (53% always used). The use of other tools such as AMSTAR, Peer Review of Electronic Search Strategies (PRESS), A Risk of Bias Assessment Tool for Systematic Reviews (ROBIS), and the JBI checklist was infrequent. Seventeen percent reported using other structured tools beyond those listed. Most participants indicated that journals rarely required the use of structured tools, with the exception of PRISMA. A notable proportion (55%) expressed concerns about time constraints, and 25% noted the lack of a comprehensive tool. Nearly half (45%) expressed a need for a dedicated structured tool for SR peer review, with checklists in PDF or embedded formats preferred. Participants expressed both advantages and concerns related to such tools.</div></div><div><h3>Conclusion</h3><div>Most peer reviewers used PRISMA when assessing SRs, while other structured tools were seldom applied. Only a few journals provided or required such tools, revealing inconsistent editorial practices. Participants reported barriers, including time constraints and a lack of suitable instruments. These findings highlight the need for a practical, validated tool, built upon existing instruments and integrated into editorial workflows. Such a tool could make peer review of SRs more consistent and transparent.</div></div><div><h3>Plain Language Summary</h3><div>Systematic reviews (SRs) are a type of research that synthesizes results from primary studies. Several structured tools, such as PRISMA for reporting and AMSTAR 2 for methodological quality, exist to guide how SRs are written and appraised. When manuscripts that report SRs are submitted to scholarly journals, editors invite expert peer reviewers to assess these SRs. In this study, researchers aimed to analyze which tools peer reviewers actually use when evaluating SR manuscripts, their percep…
Puljak L, Pintur S, Rombey T, Lockwood C, Pieper D. Use of structured tools by peer reviewers of systematic reviews: a cross-sectional study reveals high familiarity with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) but limited use of other tools. Journal of Clinical Epidemiology. 2025;190: Article 112084. Published 2025-11-20. doi:10.1016/j.jclinepi.2025.112084
Pub Date: 2025-11-20 | DOI: 10.1016/j.jclinepi.2025.112056
K.M. Mondragon , C.S. Tan-Lim , R. Velasco Jr. , C.P. Cordero , H.M. Strebel , L. Palileo-Villanueva , J.V. Mantaring
<div><h3>Background</h3><div>Systematic reviews (SRs) with network meta-analyses (NMAs) are increasingly used to inform guidelines, health technology assessments (HTAs), and policy decisions. Their methodological complexity, as well as the difficulty of assessing the exchangeability assumption and the large volume of results, makes appraisal more challenging than for SRs with pairwise meta-analyses. Numerous SR- and NMA-specific appraisal tools exist, but they vary in scope, intended users, and methodological guidance, and few have been validated.</div></div><div><h3>Objectives</h3><div>To identify and describe appraisal instruments and interpretive guides specifically for SRs and NMAs, summarizing their characteristics, domain coverage, development methods, and measurement-property evaluations.</div></div><div><h3>Methods</h3><div>We conducted a methodological scoping review that included structured appraisal instruments or interpretive guides for SRs with or without NMA-specific domains, aimed at review authors, clinicians, guideline developers, or HTA assessors, drawn from published or gray literature in English. Searches (inception–August 2025) covered major databases, registries, organizational websites, and reference lists. Two reviewers independently screened records; data were extracted by one and checked by a second. We synthesized the findings narratively. First, we classified tools as either structured instruments or interpretive guides. Second, we grouped them according to their intended audience and scope. Third, we assessed available measurement-property data using relevant COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) items.</div></div><div><h3>Results</h3><div>Thirty-four articles described 22 instruments (11 NMA-specific, 9 specific to systematic reviews with meta-analysis, and 2 encompassing both systematic reviews with meta-analysis and NMA). NMA tools added domains such as network geometry, transitivity, and coherence, but guidance on transitivity evaluation, publication bias, and ranking was either limited or ineffective. Reviewer-focused tools were structured with explicit response options, whereas clinician-oriented guides posed appraisal questions with explanations but no prescribed response. Nine instruments reported measurement-property data, with validity and reliability varying widely.</div></div><div><h3>Conclusion</h3><div>This first comprehensive map of appraisal resources for systematic reviews with meta-analysis and NMA highlights the need for clearer operational criteria, structured decision rules, and integrated rater training to improve reliability and align foundational SR domains with NMA-specific content.</div></div><div><h3>Plain Language Summary</h3><div>NMA is a way to compare many treatments at once by combining results from multiple studies—even when some treatments have not been directly compared head-to-head. Because NMAs are complex, users need clear tools to judge whether an analysis is tru…
Mondragon KM, Tan-Lim CS, Velasco R Jr, Cordero CP, Strebel HM, Palileo-Villanueva L, Mantaring JV. A scoping review of critical appraisal tools and user guides for systematic reviews with network meta-analysis: methodological gaps and directions for tool development. Journal of Clinical Epidemiology. 2025;190: Article 112056. doi:10.1016/j.jclinepi.2025.112056
Pub Date: 2025-11-20 | DOI: 10.1016/j.jclinepi.2025.112085
Joanne Khabsa , Vanessa Helou , Hussein A. Noureldine , Reem Hoteit , Aya Hassoun , Ali H. Dakroub , Lea Assaf , Ahmed Mohamed , Tala Chehaitly , Leana Ellaham , Elie A. Akl
Background and Objectives
Interest-holder engagement is increasingly recognized as essential to the relevance and uptake of practice guidelines. “Interest-holders” are groups with legitimate interests in the health issue under consideration; their interests are legitimate because these groups are responsible for, or affected by, health-related decisions. The objective of this study was to characterize the interest-holder engagement approaches for practice guideline development that guideline-producing organizations describe in their guidance documents.
Methods
We compiled a list of guideline-producing organizations and searched for their guidance documents on guideline development. We abstracted data on interest-holder engagement details for each subtopic in the Guidelines International Network (GIN)-McMaster Guideline Development Checklist (a total of 23 subtopics following the division of some original checklist topics).
Results
Of the 133 identified organizations, 129 (97%) describe in their guidance documents engaging at least 1 interest-holder group in at least 1 GIN-McMaster checklist subtopic. The subtopics with the most engagement are “developing recommendations and determining their strength” (96%) and “peer review” (81%), while the subtopics with the least engagement are “establishing guideline group processes” (3%) and “training” (2%). The interest-holder groups with the highest engagement in at least one subtopic are providers (95%), principal investigators (78%), and patient representatives (64%), while those with lower engagement are program managers (3%) and peer-reviewed journal editors (1%). Across most subtopics, engagement occurs mostly through panel membership and at the decision-making level.
Conclusion
A high proportion of organizations engaged at least 1 interest-holder group in at least 1 subtopic of guideline development, with panel membership being the most common approach. However, this engagement was limited to a few interest-holder groups and concentrated in a few subtopics.
Khabsa J, Helou V, Noureldine HA, Hoteit R, Hassoun A, Dakroub AH, Assaf L, Mohamed A, Chehaitly T, Ellaham L, Akl EA. Guideline organizations’ guidance documents paper 4: interest-holder engagement. Journal of Clinical Epidemiology. 2025;189: Article 112085. doi:10.1016/j.jclinepi.2025.112085
Pub Date: 2025-11-19 | DOI: 10.1016/j.jclinepi.2025.112063
Joanne Khabsa , Mariam Nour Eldine , Sally Yaacoub , Rayane El-Khoury , Noha El Yaman , Wojtek Wiercioch , Holger J. Schünemann , Elie A. Akl
Background and Objectives
Given the role of practice guidelines in impacting practice and health outcomes, it is important that their development follows rigorous methodology. We present a series of papers exploring various aspects of practice guideline development based on a descriptive summary of guidance documents from guideline-producing organizations. The overall aim is to describe the methods employed by these organizations in developing practice guidelines. This first paper of the series aims to (1) describe the methodology followed in the descriptive summary, including the identification process of a sample of guideline-producing organizations with publicly available guidance documents on guideline development; (2) characterize the included guideline-producing organizations and their guidance documents; and (3) assess the extent to which these organizations cover the topics of the GIN-McMaster Guideline Development Checklist in their guidance documents.
Methods
We conducted a descriptive summary of guideline-producing organizations' publicly available guidance documents on guideline development (eg, guideline handbooks). We exhaustively sampled a list of guideline-producing organizations from multiple sources and searched their websites and the peer-reviewed literature for publicly available guidance documents on their guideline development process. We abstracted data independently and in duplicate, both on the general characteristics of the organizations and their documents and on whether the organizations covered the topics of the GIN-McMaster Guideline Development Checklist in their guidance documents. We subdivided some of the 18 main topics of the checklist to disaggregate key concepts. Based on a discussion between the lead authors, this resulted in 27 examined subtopics. We conducted descriptive statistical analyses.
Results
Our final sample consisted of 133 guideline-producing organizations. The majority were professional associations (59%), based in North America (51%), and from the clinical field (84%). Out of the 27 GIN-McMaster Guideline Development Checklist subtopics, the median number covered was 20 (interquartile range (IQR): 15–24). The subtopics most frequently covered were “consumer and stakeholder engagement” (97%), “conflict of interest considerations” (92%), and “guideline group membership” (92%). The subtopics least covered were “training” (40%) and “considering additional information” (42%).
Conclusion
The number of GIN-McMaster Guideline Development Checklist subtopics covered by a sample of guideline-producing organizations in their guidance documents is both variable and suboptimal.
Khabsa J, Nour Eldine M, Yaacoub S, El-Khoury R, El Yaman N, Wiercioch W, Schünemann HJ, Akl EA. Guideline organizations' guidance documents paper 1: Introduction. Journal of Clinical Epidemiology. 2025;189: Article 112063. doi:10.1016/j.jclinepi.2025.112063
Pub Date: 2025-11-19 | DOI: 10.1016/j.jclinepi.2025.112057
Birgitte Nørgaard , Karen E. Lie , Hans Lund
<div><h3>Objectives</h3><div>To systematically map the factors associated with citation rates, to categorize the types of studies evaluating these factors, and to obtain an overall status of citation bias in scientific health literature.</div></div><div><h3>Study Design and Setting</h3><div>A scoping review was reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses scoping review extension checklist. Four electronic databases were searched, and the reference lists of all included articles were screened. Empirical meta-research studies reporting any source of predictors of citation rates and/or citation bias within health care were included. Data are presented with descriptive statistics such as frequencies, proportions, and percentages.</div></div><div><h3>Results</h3><div>A total of 165 studies were included. Fifty-four distinct factors of citation rates were evaluated in 786 quantitative analyses. Although all studies used the same basic methodological approach to calculate citation rates, 78 studies (48%) aimed to examine citation bias, whereas 79 studies (48%) aimed to optimize article characteristics to enhance the authors' own citation rates. The remaining 7 studies (4%) analyzed infrastructural characteristics at the publication level to make all studies more accessible.</div></div><div><h3>Conclusion</h3><div>Seventy-nine of the 165 included studies (48%) explicitly recommended modifying paper characteristics—such as title length or author count—to boost citations rather than prioritizing scientific contribution. Such recommendations may conflict with principles of scientific integrity, which emphasize relevance and methodological rigor over strategic citation practices. Given the high proportion of analyses identifying a significant increase in citation rates, publication bias cannot be ruled out.</div></div><div><h3>Plain Language Summary</h3><div>Why was the study done? Within scientific research, it is important to cite previous research. This is done for specific reasons, including crediting earlier authors and providing a credible and trustworthy background for conducting the study. However, findings suggest that citations are not always chosen for their intended purpose. This is known as citation bias. What did the researchers do? The researchers searched for all existing studies evaluating predictors of citation rate, ie, how often a specific study is referred to by other researchers. They systematically mapped these studies to determine both the level and the types of citation bias present in scientific health literature. To find these studies, the researchers searched four electronic databases and screened the reference lists of all included studies to include as many studies as possible. What did the researchers find? The researchers found a total of 165 studies that evaluated predictors of citation rate in no less than 786 analyses. However, the researchers found that the studie…
Nørgaard B, Lie KE, Lund H. Predictors of citation rates and the problem of citation bias: a scoping review. Journal of Clinical Epidemiology. 2025;190: Article 112057. doi:10.1016/j.jclinepi.2025.112057
Pub Date: 2025-11-19, DOI: 10.1016/j.jclinepi.2025.112058
Lidwine B. Mokkink, Iris Eekhout
Reliability and measurement error are related but distinct measurement properties. They are connected because both can be evaluated using the same data, typically collected from studies involving repeated measurements in individuals who are stable on the outcome of interest. However, they are calculated using different statistical methods and refer to different quality aspects of measurement instruments. We explain that measurement error refers to the precision of a measurement, that is, how similar or close the scores are across repeated measurements in a stable individual (variation within individuals). In contrast, reliability indicates an instrument's ability to distinguish between individuals, which depends both on the variation between individuals (ie, heterogeneity in the outcome being measured in the population) and the precision of the score, ie, the measurement error. Evaluating reliability helps to understand if a particular source of variation (eg, occasion, type of machine, or rater) influences the score, and whether the measurement can be improved by better standardizing this source. Intraclass correlation coefficients, the standard error of measurement, and variance components are explained and illustrated with an example.
Title: "The measurement properties reliability and measurement error explained – a COSMIN perspective" (Journal of Clinical Epidemiology, vol. 190, Article 112058)
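The dependence described above, where reliability reflects both between-individual variation and measurement error, can be illustrated with a small numerical sketch. The following is a minimal one-way random-effects illustration, not the COSMIN authors' own example; the function name and data are hypothetical, and applied work often prefers two-way models:

```python
import numpy as np

def icc_and_sem(scores):
    """One-way random-effects ICC(1,1) and SEM from a subjects x occasions array.

    Illustrative sketch only: assumes stable individuals and a single source
    of error variance, as in a simple test-retest design.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    subj_means = scores.mean(axis=1)
    # One-way ANOVA variance components: between-subjects and within-subject
    # (error) mean squares.
    ms_between = k * np.sum((subj_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))
    # Reliability: proportion of total variance due to differences between
    # individuals. Measurement error (SEM): within-individual spread, on the
    # scale of the scores themselves.
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    sem = np.sqrt(ms_within)
    return icc, sem

# Perfectly repeatable scores: no within-individual variation, ICC = 1, SEM = 0.
print(icc_and_sem([[1, 1], [2, 2], [3, 3]]))
# The same within-individual error yields a lower ICC when individuals are
# more alike, even though the SEM is unchanged.
print(icc_and_sem([[1, 2], [2, 3], [3, 4]]))
```

Note how, in the second call, the SEM depends only on within-individual variation, while the ICC would shrink further if the sampled population were more homogeneous, which is exactly the distinction between measurement error and reliability drawn in the abstract.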
Pub Date: 2025-11-19, DOI: 10.1016/j.jclinepi.2025.112065
Joanne Khabsa, Mohamed M. Khamis, Rachad Ghazal, Noha El Yaman, Reem Hoteit, Elsa Hebbo, Sally Yaacoub, Wojtek Wiercioch, Elie A. Akl
Background and Objectives
Determining the types of contributions to guideline development, as well as acknowledging the groups making these contributions, are critical steps in the guideline development process. The objective of this study was to describe types of contributions to guideline development and authorship policies of guideline-producing organizations as described in their guidance documents on guideline development.
Methods
We conducted a descriptive summary of guidance documents on guideline development. Using multiple sources, we initially compiled a list of guideline-producing organizations and then searched for their publicly available guidance documents on guideline development (eg, guideline handbooks). Authors abstracted data in duplicate and independently on the organizations’ characteristics, types of contributions to guideline development, and authorship policies.
Results
We identified 133 guideline-producing organizations with publicly available guidance documents, of which the majority were professional associations (59%) from the clinical field (84%). Types of contributions to guideline development described by the organizations could be categorized as related to: management; content expertise; technical expertise; or dissemination, implementation, and quality measures. Commonly reported specific contributions included panel membership (99%), executive (83%), evidence synthesis (86%), and peer review (92%). A minority of organizations mentioned entities specifically dedicated to conflict-of-interest management (20%) and to dissemination, implementation, and quality measures (24%). For most organizations, panelists were involved in either supporting or conducting the evidence synthesis (73%). Sixty percent of organizations mentioned that panels should be multidisciplinary, and 44% mentioned that they should be balanced according to at least one characteristic (eg, geographical region). A minority of organizations had a guideline authorship policy (38%). Of those, a majority specified types of contributions eligible for authorship (76%), while minorities specified criteria for exclusion from authorship (18%) and rules for authorship order (27%).
Conclusion
Guidance documents of guideline-developing organizations consistently describe four types of contributions (panel membership, executive, evidence synthesis, and peer review), while others are less commonly described. They also lack important details on authorship policies.
Title: "Guideline organizations’ guidance documents paper 3: contributions and authorship" (Journal of Clinical Epidemiology, vol. 189, Article 112065)
Pub Date: 2025-11-19, DOI: 10.1016/j.jclinepi.2025.112068
Vanessa Helou, Lynn Basbous, Reem A. Mustafa, Joanne Khabsa, Elie A. Akl
Background and Objectives
Diagnostic tests are central to clinical decision-making, but the rigor and consistency of guidelines that inform their use remain unclear. This study aimed to describe processes for developing recommendations about diagnostic tests and strategies as outlined in guidance documents of guideline-producing organizations.
Methods
We conducted a descriptive summary of guidance documents from guideline-producing organizations. We first compiled a list of eligible organizations using different sources and retrieved their guidance documents on practice guideline development. Two authors screened organizations and their documents in duplicate and independently. We abstracted information on whether organizations provided guidance on developing recommendations about diagnostic tests and strategies for each of the Guideline International Network McMaster Guideline Development Checklist topics and their corresponding details.
Results
Out of 133 guideline-producing organizations identified, 44 (33%) described processes for developing recommendations about diagnostic tests and strategies in their guidance documents. The majority of these organizations were professional (52%) and operated at the national level (80%). The median number of topics for which guidance specific to diagnostic tests and strategies was provided was 1 (range = 1–5). The topics most frequently addressed were “judging the quality, strength, or certainty of evidence” (55%) and “question generation” (32%). Topics not addressed by any organization were “establishing guideline group processes,” “identifying target audience and topic selection,” “consumer and stakeholder involvement,” “conflict of interest considerations,” “dissemination and implementation,” and “updating.”
Conclusion
A minority of guideline-producing organizations’ guidance documents mentioned specific processes for developing recommendations for diagnostic tests and strategies. Further refinement of guidance tailored to diagnostic tests is needed to improve practice guideline development processes.
Title: "Guideline organizations’ guidance documents paper 10: developing recommendations about diagnostic tests and strategies" (Journal of Clinical Epidemiology, vol. 189, Article 112068)
Pub Date: 2025-11-19, DOI: 10.1016/j.jclinepi.2025.112066
Lili Zeidan, Maria Abou Mansour, Holger J. Schünemann, Murad Alam, Joanne Khabsa, Elie A. Akl
Background and Objectives
Collaboration allows participants to share and leverage their strengths, mitigating resource limitations and duplication of effort. Guideline-producing organizations interested in collaboration have developed policies for engaging in collaborative efforts. Our objective was to describe co-operative approaches to practice guideline development used by guideline-producing organizations, as described in their guidance documents.
Methods
We conducted a systematic search to identify publicly available guidance documents from guideline-producing organizations. Two authors assessed eligibility and abstracted data on the organizations' characteristics and their policies for co-operative approaches in guideline development. Regarding key concepts in guideline development, co-operative approaches refer to collaborative efforts among different guideline-producing organizations to ensure the production of comprehensive practice guidelines. Collaboration involves a co-operative and co-ordinated effort between guideline-producing organizations, wherein multiple entities contribute to and engage in guideline development. Endorsement refers to the formal approval of practice guidelines by external entities, indicating their agreement with or validation of the proposed guidelines. We distinguished between two perspectives on endorsement: i) endorser: organization X endorses guidelines established by an external organization; ii) endorsee: external organizations endorse guidelines established by organization X. We prespecified a list of co-operation components based on findings from a systematic review of health research collaboration by academic entities and accommodated any emerging components identified from the data. We then abstracted data related to these components. Our study excluded intraorganizational and interteam co-operation. We analyzed categorical variables using frequencies and percentages and summarized the findings in textual, graphical, and tabular formats.
Results
Of the 133 identified guideline-producing organizations that described their methods in a publicly available guidance document, 73 (55%) described at least one co-operative approach, classified as “collaboration” (N = 59) and/or “endorsement” (N = 41). The most frequently addressed components of collaboration were a dissemination plan (50%), team structure and governance (48%), conditions for co-operation and authority for the decision to co-operate (42%), and a conflict of interest policy (32%). The least addressed components were division of labor (2%), communication plan (2%), and writing (2%). Many components, such as sharing resources, shared benefits, and a protocol development plan, were not addressed at all. Components of the endorsement approach varied depending on whether the perspective was that of an endorser or an endorsee organization.
Title: "Guideline organizations' guidance documents paper 9: co-operative approaches" (Journal of Clinical Epidemiology, vol. 189, Article 112066)