Pub Date: 2025-11-19 | DOI: 10.1016/j.jclinepi.2025.112064
Rayane El-Khoury, Mariam Nour Eldine, Walid Abboud, Joanne Khabsa, Holger J. Schünemann, Elie A. Akl
Background and Objectives
Developing high-quality practice guidelines is resource-intensive, leading many guideline-producing organizations to adapt existing recommendations. The objective of this paper is to describe guideline-producing organizations' processes for adapting practice guidelines, as described in their guidance documents on guideline development.
Methods
We conducted a descriptive summary. Using multiple sources, we compiled a list of guideline-producing organizations and then searched for their publicly available guidance documents on guideline development (eg, handbooks). We included organizations addressing adaptation in their guidance documents. Teams of two authors assessed eligibility and abstracted data in duplicate and independently. We synthesized data in both textual and tabular formats.
Results
Of 133 identified guideline-producing organizations with guidance documents on guideline development, 23 (17%) addressed adaptation. The most frequently addressed aspects of the adaptation process were (1) developing an adaptation plan (91%); (2) the factors considered for modifying source recommendations (91%), including acceptability of the intervention (52%), resource considerations (48%), values and preferences (43%), and applicability of the intervention (40%); (3) assessing the source guidelines (83%), with the three main criteria used being quality (65%), currency (65%), and relevance (39%); and (4) using source recommendations (78%). None of the organizations described a detailed approach for handling discordance between recommendations from different source guidelines or for deciding which recommendation to use.
Conclusion
Although this study provides insight into different aspects of the adaptation process, most organizations do not address these aspects in a comprehensive or detailed way. Clearer guidance and checklists are needed to support organizations in conducting efficient and context-specific adaptation efforts.
Title: Guideline organizations’ guidance documents paper 6: adaptation of practice guidelines
Journal of Clinical Epidemiology, Volume 189, Article 112064
Pub Date: 2025-11-19 | DOI: 10.1016/j.jclinepi.2025.112070
May Mohamad, Joanne Khabsa, Mariam Nour Eldine, Sally Yaacoub, Fatimah Chamseddine, Zeina Itani, Rayane El-Khoury, Elie A. Akl
Background and Objectives
In the development of practice guidelines, priority setting of topics, questions, and outcomes ensures relevance and resource efficiency. The objective of this study was to describe priority setting processes as described in guidance documents by guideline-producing organizations.
Methods
We conducted a descriptive summary of guideline-producing organizations' publicly available guidance documents on practice guideline development (eg, guideline handbooks). We screened guideline-producing organizations' documents and abstracted data in duplicate and independently. We abstracted data on the elements of the priority setting process, including generation of initial list, method or tool used in the priority setting process, use of priority setting criteria, and refinement.
Results
Of the 133 identified organizations with publicly available guidance documents, 94 (71%) reported on a priority setting process for guideline development, and 16 (12%) also reported on a priority setting process for guideline updating. Most organizations addressed topic priority setting in their guidance documents (94%), whereas a minority addressed priority setting of questions (36%), outcomes (29%), implementation (12%), quality measures (15%), and future research (5%). In the guidance documents, generation of the initial list was the most addressed element for topics (88%), questions (65%), and outcomes (59%), followed by the use of criteria for topics (89%) and questions (59%), and refinement for outcomes (52%). A minority of organizations referred to a published priority setting method or tool, and only for topics (24%). The most frequently used criteria for priority setting of topics were the impact of the intervention on health outcomes (74%), variation/gaps in practice (69%), availability of evidence (69%), and disease health burden (68%); for questions, the top criteria were availability of evidence (60%), followed by interest at the health professional/organization level (50%), uncertainty or controversy about best practice (40%), and variation/gaps in practice (40%).
Conclusion
This analysis of guideline-producing organizations revealed that a majority reported a priority-setting process, which primarily focused on topic selection and less on aspects like questions and outcomes. Although generating an initial list and using priority-setting criteria are common, few organizations report in their guidance documents using formal priority-setting tools, addressing refinement, or providing guidance for guideline updating or adaptation. A standardized priority setting process for all aspects of guideline development is needed.
Title: Guideline organizations' guidance documents paper 2: priority setting
Journal of Clinical Epidemiology, Volume 189, Article 112070
Pub Date: 2025-11-19 | DOI: 10.1016/j.jclinepi.2025.112061
Joanne Khabsa, Zeina Itani, Hussein A. Noureldine, Francesco Nonino, Mohamed M. Khamis, Jose F. Meneses-Echavez, Joseph Bejjani, Sally Yaacoub, Holger J. Schünemann, Elie A. Akl
Background and Objectives
Bias in the development process of practice guidelines can be introduced through contributors' conflicts of interest (COIs) and the funding sources. The objective of this study was to describe policies of guideline-producing organizations on COIs of contributors and funding of practice guideline projects.
Methods
We conducted a descriptive summary of publicly available guidance documents of guideline-producing organizations. Two authors assessed eligibility and abstracted data on the organizations' characteristics, COI policies (declaration, verification, assessment of whether an interest qualifies as a COI, management, and reporting), and funding policies.
Results
Out of 133 identified guideline-producing organizations, 110 reported a COI and/or a funding policy. Most COI policies required the declaration of relevant interests only (60%). A minority of policies described a process to verify declarations (10%). Most policies mentioned the assessment of whether an interest qualifies as a COI (55%), but few provided specific criteria. Policies mostly specified discussions (43%) and voting (43%) as parts of the process from which conflicted individuals should be excluded. A minority of policies reported on the process by which COIs were evaluated and managed (25%). Most organizations accepted external funding (70%), either any external funding (44%) or any external funding except industry funding (26%), with 72% mentioning mitigation strategies.
Conclusion
Some, but not all, aspects of COI and funding were commonly addressed by policies of guideline-producing organizations. There were also inconsistencies across policies.
Title: Guideline organizations' guidance documents paper 5: conflict of interest and funding
Journal of Clinical Epidemiology, Volume 189, Article 112061
Pub Date: 2025-11-18 | DOI: 10.1016/j.jclinepi.2025.112052
Samuel J. White, Timothy H. Barker, Tracy Merlin, Grace Holland, Sharon Sanders, Aoife O'Mahony, Thanya Pathirana, Rebecca Theiss, Danielle Pollock, Natasha Reid, Zachary Munn
Background and Objectives
Diagnostic criteria play an important role in informing clinical decision-making, particularly for conditions lacking objective tests, biomarkers, or reference standards. Despite their importance, there is no established methodological guidance for developing diagnostic criteria. This scoping review aimed to identify and describe the methodological approaches used to develop diagnostic criteria in the absence of objective tests, biomarkers, or reference standards.
Study Design and Setting
We conducted a scoping review in accordance with JBI methodology and the PRISMA-ScR reporting guideline. Studies published between 2000 and 2024 that described methods used to develop diagnostic criteria for conditions without objective tests, biomarkers, or reference standards were included. A comprehensive search was performed across multiple databases and supplemented with gray literature searches and expert consultation. Data were extracted independently by two reviewers and synthesized using descriptive statistics and qualitative content analysis.
Results
We included 139 studies. Suboptimal reporting of methodology was a barrier to assessment of methodological credibility. Authors used one or more of three main approaches to develop diagnostic criteria: consensus-based, literature-based, and/or primary study–based. Consensus methods were used in 98/139 (71%) of studies, with Delphi or modified Delphi approaches being the most commonly adopted. The role of evidence in diagnostic criteria development was not described in 36/139 (26%) of the included studies. In studies using consensus methodology to develop diagnostic criteria, prospective approaches to ensuring appropriate diversity among the diagnostic criteria development panel were employed in only 5/98 (5%) of studies, and patient/advocate consultation was performed in 18/98 (18%) of studies.
Conclusion
Methodological approaches to developing diagnostic criteria for conditions without objective tests or standards are variable, inconsistently reported, and often lack a clear evidence base. This could be aided by the development of specific methodological guidance.
Plain Language Summary
When lab tests or scans are not available to confirm a diagnosis, doctors may use diagnostic criteria to help them decide what condition a patient may have. There is currently no clear way to create these criteria, which can lead to inconsistency and confusion. We looked at why this matters—because diagnostic criteria developed without transparency or methodological rigor may lead to incorrect diagnosis, patient harm or inequitable access to care—and explored how researchers develop diagnostic criteria when an objective test does not exist. Most studies relied on expert meetings, but many did not explain how they chose experts, gathered evidence or involved diverse perspectives.
Title: Methods for developing diagnostic criteria for conditions without objective tests, biomarkers, or reference standards: a scoping review
Journal of Clinical Epidemiology, Volume 190, Article 112052
Pub Date: 2025-11-17 | DOI: 10.1016/j.jclinepi.2025.112069
Lili Zeidan, Ahmad Najia, Elena Parmelli, Miranda Langendam, Joanne Khabsa, Elie A. Akl
Background and Objectives
The integration of quality assurance and improvement (QAI) into all steps of guideline development can promote the measurability and relevance of guidelines to real-world practice. Our objective was to describe QAI processes as described in guidance documents of guideline-producing organizations.
Methods
We conducted a comprehensive search in 2021, updated in 2024, to identify publicly available guidance documents of guideline-producing organizations. We abstracted data based on the items of the Guidelines International Network (GIN)-McMaster Checklist Extension for Quality Assurance and Quality Improvement (published in 2022), as well as any additional items that emerged from our data. Two authors independently assessed the eligibility of identified organizations and abstracted data on the organizations' characteristics, QAI processes in guideline development, and the terminology used to refer to QAI elements (eg, quality indicators, performance measures). We analyzed categorical variables using frequencies and percentages and summarized the findings in textual and tabular formats.
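The frequency-and-percentage tabulation the Methods describe can be sketched in a few lines. The data below are hypothetical, constructed only to match the reported QAI coverage breakdown (brief mention 57%, section 33%, dedicated document 10% of 69 organizations); they are not the study's actual dataset.

```python
from collections import Counter

# Hypothetical abstraction data: how each of 69 organizations addressed QAI.
# Counts chosen to illustrate the reported breakdown, not the real records.
qai_coverage = (
    ["brief mention"] * 39
    + ["section in a document"] * 23
    + ["dedicated document"] * 7
)

counts = Counter(qai_coverage)
total = len(qai_coverage)

# Summarize each category as (frequency, percentage of total).
summary = {k: (v, round(100 * v / total)) for k, v in counts.items()}
```

With these illustrative counts, `summary` reproduces the 57%/33%/10% split reported in the Results.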
Results
Sixty-nine of 133 guideline-producing organizations (52%) addressed QAI, whether as a brief mention (57%), a section in a document (33%), or a dedicated document (10%). Guideline-producing organizations used inconsistent terminology when referring to QAI elements. The most frequently addressed QAI items were predefining the process to select final quality measures (26%), the need for project subgroups to work on QAI (25%), and identifying the individuals of the subgroups (23%). The least addressed items were considering institutional conflicts of interest (1%), clarifying accountability for making changes to quality indicators (3%), and developing/adopting a standardized reporting format (4%). Some QAI items were not addressed at all, including determining the scope and perspective of the QAI scheme and pilot-testing indicators with target users.
Conclusion
The coverage of the items of the GIN-McMaster Guideline Development Checklist (GDC) QAI extension varies across the guidance documents of guideline-producing organizations. The organizations also used inconsistent terminology when referring to QAI elements.
Title: Guideline organizations' guidance documents paper 11: quality assurance and improvement in guideline development
Journal of Clinical Epidemiology, Volume 189, Article 112069
Pub Date: 2025-11-17 | DOI: 10.1016/j.jclinepi.2025.112067
Jana Khawandi, Noha El Yaman, Omar Dewidar, Tracy Faddoul, Lynn Lteif, Reem A. Mustafa, Elie A. Akl, Joanne Khabsa
Background and Objectives
To ensure practice guidelines contribute to improving equity, guideline developers need to consciously consider populations experiencing inequities throughout the process. Our aim was to describe whether and how guideline-producing organizations consider equity in the guideline development process as described in their guidance documents on guideline development.
Methods
We conducted a descriptive summary of guideline-producing organizations using different sources and retrieved their publicly available guidance documents on guideline development (eg, handbooks). We screened guidance documents and abstracted information about equity consideration within topics of the GIN-McMaster guideline development checklist.
Results
Of 133 identified guideline-producing organizations with guidance documents on guideline development, 52% considered equity in at least one of the 18 topics of the GIN-McMaster checklist. Most of these organizations were professional (55%) and national (77%). The median number of topics considered per organization was 2 (IQR = 1–4), with the World Health Organization (WHO) considering the highest number of topics. Equity was considered most often in the topic of “guideline group membership” (57%) and least often in “conflict of interest considerations” (1%). Terms used in relation to equity included “inequitable” and “equality,” and terms used for populations experiencing inequities included “minority” and “disadvantaged.”
Conclusion
More than half of guideline-producing organizations consider equity in guideline development, with considerations limited to a few groups of populations and a few topics of the guideline development process.
Title: Guideline organizations' guidance documents paper 8: considering equity
Journal of Clinical Epidemiology, Volume 189, Article 112067
Pub Date : 2025-11-17 DOI: 10.1016/j.jclinepi.2025.112053
Sarah Batson , Matthew J. Randell , Catherine Bane , Julia Geppert , Pranshu Mundada , Chris Stinton , Eleanor Cozens , Maggie Powell , Sian Taylor-Phillips
Objectives
To provide an initial estimate of the magnitude of potential research waste in the production of evidence-synthesis products for health screening.
Study Design and Setting
Evidence-synthesis products supporting screening recommendations for adult populations, published by the UK National Screening Committee (UK NSC) and the US Preventive Services Task Force (USPSTF) between 2014 and 2024, were identified as anchor reviews. For each anchor review, Embase, Medline, and national and international organization websites were searched for overlapping evidence reviews on the same topic, defined as addressing the same research questions with at least partial overlap in the population, interventions, comparisons, and outcomes.
Results
A total of 48 anchor reviews (covering 33 conditions) were identified from the UK NSC and USPSTF. Overlapping evidence reviews were identified for 92% (44/48) of these, with a median of 4 additional reviews per anchor review (range: 0–60; interquartile range [IQR]: 2–15). Of the overlapping reviews, 11% explicitly updated or built upon prior external work but introduced new elements and scope differences that kept them classified as overlapping. Focusing on a core subset of conditions of shared interest to both organizations, the median overlap increased to 13 (range: 2–47; IQR: 4–17), indicating substantial duplication in priority areas. Seventy percent of all reviews in the evidence base were conducted in North America (28%) or Western Europe (42%), with limited representation from low- and middle-income countries.
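The summary statistics above can be reproduced from a list of per-anchor overlap counts. The sketch below uses hypothetical counts (the abstract reports only summaries, not the raw per-review data) to show how the proportion with any overlap and the median and IQR are computed:

```python
import statistics

# Hypothetical per-anchor-review counts of overlapping reviews
# (illustrative only; the study's raw per-review data are not given here).
overlap_counts = [0, 0, 0, 0, 2, 2, 3, 3, 4, 4, 5, 7, 9, 15, 20, 60]

# Proportion of anchor reviews with at least one overlapping review
with_overlap = sum(1 for n in overlap_counts if n > 0)
proportion = with_overlap / len(overlap_counts)

# Median and interquartile range of the overlap counts
median = statistics.median(overlap_counts)
q1, _, q3 = statistics.quantiles(overlap_counts, n=4)  # exclusive method

print(f"{with_overlap}/{len(overlap_counts)} anchors had overlap ({proportion:.0%})")
print(f"median = {median}, IQR = {q1}-{q3}")
```

With real data the same three lines of computation would yield the 92%, median 4, and IQR 2–15 reported above.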
Conclusion
The results of this review highlight potential research waste due to duplication in evidence synthesis efforts. Coordinated action among organizations advising policymakers, such as NSCs, public health agencies, and evidence review bodies, may help establish more efficient, collaborative approaches that enable reuse and adaptation across contexts. Such action could include real-time sharing of ongoing reviews, multiregion comprehensive reviews, and the use of stratified analyses to tailor findings to country-specific needs. These strategies should be explored to determine whether organizations can reduce unnecessary duplication, enhance equity, improve the timeliness and relevance of guidance, and redirect resources toward unmet research priorities and other pressing public health challenges.
Potential waste in evidence synthesis for health screening: a scoping review and call for action. Journal of Clinical Epidemiology, Article 112053.
Pub Date : 2025-11-15 DOI: 10.1016/j.jclinepi.2025.112059
Eve Tomlinson , Jude Holmes , Anne W.S. Rutjes , Clare Davenport , Mariska Leeflang , Bada Yang , Sue Mallett , Penny Whiting
Objectives
Assessment of the applicability of primary studies is an essential but often challenging aspect of systematic reviews of diagnostic test accuracy studies (DTA reviews). We explored review authors’ applicability assessments for the QUADAS-2 reference standard domain within Cochrane DTA reviews. We highlight applicability concerns, identify potential issues with assessment, and develop a framework for assessing the applicability of the target condition as defined by the reference standard.
Study Design and Setting
Methodological review. DTA reviews in the Cochrane Library that used QUADAS-2 and judged applicability for the reference standard domain as “high concern” for at least one study were eligible. One reviewer extracted the rationale for the “high concern” judgment, and a second reviewer checked it. Two reviewers categorized the rationales inductively into themes, and a third reviewer verified these. Discussions regarding the extracted information informed framework development.
Results
We identified 50 eligible reviews. Five themes emerged: the study uses a different reference standard threshold to define the target condition (six reviews); misclassification by the reference standard such that the target condition in the study does not match the review question (11 reviews); the reference standard could not be applied to all participants, resulting in a different target condition (five reviews); misunderstanding of QUADAS-2 applicability (seven reviews); and insufficient information (21 reviews).
Our framework for researchers outlines four potential applicability concerns for the assessment of the target condition as defined by the reference standard: different subcategories of the target condition, a different threshold used to define the target condition, the reference standard not applied to the full study group, and misclassification of the target condition by the reference standard.
Conclusion
Clear sources of applicability concerns are identifiable, but several Cochrane review authors struggle to adequately identify and report them. We have developed an applicability framework to guide review authors in their assessment of applicability concerns for the QUADAS reference standard domain.
Plain Language Summary
What is the problem? Doctors use tests to help decide whether a person has a certain condition. They want to know how accurate the test is before they use it. This means how well it can tell people who have the condition from people who do not have it. This information can be found in “diagnostic systematic reviews”. Diagnostic systematic reviews start with a research question. They bring together findings from studies that have already been done to try to answer this question. It is important for researchers to check that the studies match the review question. This is called an “applicability assessment”.
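The four-item framework described above lends itself to a simple structured checklist. The sketch below is a hypothetical encoding (class and field names are ours, not the authors’), assuming a study is flagged when any of the four concerns is judged present:

```python
from dataclasses import dataclass

# Hypothetical labels for the four applicability concerns described above;
# identifiers are illustrative, not taken from the published framework.
CONCERNS = (
    "different_subcategories_of_target_condition",
    "different_threshold_defining_target_condition",
    "reference_standard_not_applied_to_full_group",
    "misclassification_by_reference_standard",
)

@dataclass
class ApplicabilityAssessment:
    study_id: str
    judgments: dict  # concern name -> True if the concern is present

    def high_concern(self) -> bool:
        # One flagged item is enough to raise high applicability concern.
        return any(self.judgments.get(c, False) for c in CONCERNS)

flagged = ApplicabilityAssessment(
    "study-01",
    {"different_threshold_defining_target_condition": True},
)
clean = ApplicabilityAssessment("study-02", {})
print(flagged.high_concern(), clean.high_concern())
```

A reviewer would fill in one such record per included study; the boolean roll-up mirrors how a single flagged item drives a “high concern” judgment for the domain.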
Developing a framework for assessing the applicability of the target condition in diagnostic research. Journal of Clinical Epidemiology, Article 112059.
Pub Date : 2025-11-15 DOI: 10.1016/j.jclinepi.2025.112055
Miranda W. Langendam, Ignacio Neumann, Holger J. Schünemann
Challenges in using GRADE by systematic review authors and how to overcome them: a response to Andric et al. Journal of Clinical Epidemiology, Article 112055.
Pub Date : 2025-11-15 DOI: 10.1016/j.jclinepi.2025.112054
Brennan C. Kahan , Declan Devane
In clinical trials, postrandomization events, such as treatment discontinuation or the use of rescue medication, can complicate the interpretation of results. An estimand is a precise description of the treatment effect that investigators wish to estimate. Estimands facilitate more straightforward interpretation of trial results by explicitly defining how postrandomization “intercurrent” events are incorporated into the research question. This article introduces the five key attributes of estimands (population, treatment conditions, endpoint, summary measure, and strategies for intercurrent events) and explains the five main strategies for managing intercurrent events (treatment policy, composite, while on treatment, hypothetical, and principal stratum). Using a practical example of a trial comparing cognitive behavioral therapy vs medication for mild anxiety, we demonstrate how different estimand choices lead to varying study designs, analyses, and interpretations. Understanding estimands helps researchers design better trials and enables stakeholders to determine if the results are relevant to their situation. We also explain how sensitivity analyses can be used to check the reliability of results by assessing how results change under different statistical assumptions.
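The difference between intercurrent-event strategies can be made concrete with a toy simulation. The sketch below uses entirely invented data (it does not reproduce the CBT-vs-medication example) to contrast a treatment-policy estimate, computed on all randomized participants regardless of discontinuation, with a naive while-on-treatment estimate restricted to participants who did not discontinue:

```python
import random
random.seed(0)

# Toy model: outcome is symptom-score improvement (higher = better).
# Discontinuation is the intercurrent event, and discontinuers improve
# less on average, so the two strategies answer different questions.
def simulate_arm(n, effect, discontinuation_rate):
    data = []
    for _ in range(n):
        discontinued = random.random() < discontinuation_rate
        outcome = random.gauss(effect - (2.0 if discontinued else 0.0), 1.0)
        data.append((outcome, discontinued))
    return data

treatment = simulate_arm(500, effect=5.0, discontinuation_rate=0.3)
control = simulate_arm(500, effect=3.0, discontinuation_rate=0.1)

def mean(xs):
    return sum(xs) / len(xs)

# Treatment-policy strategy: compare all randomized participants,
# regardless of whether the intercurrent event occurred.
tp_effect = mean([y for y, _ in treatment]) - mean([y for y, _ in control])

# Naive while-on-treatment strategy: restrict to participants without
# the intercurrent event; this targets a different estimand.
wot_effect = (mean([y for y, d in treatment if not d])
              - mean([y for y, d in control if not d]))

print(f"treatment policy: {tp_effect:.2f}")
print(f"while on treatment: {wot_effect:.2f}")
```

Because discontinuers fare worse in this toy model, the two strategies yield different numbers: they answer different questions, not the same question with different error.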
Estimands: what they are and why we should use them. Journal of Clinical Epidemiology, Article 112054.