{"title":"Comment on “GRADE concept paper 9: rationale and process for creating a GRADE ontology”","authors":"S.Dhanya Dedeepya, Vaishali Goel, Nivedita Nikhil Desai","doi":"10.1016/j.jclinepi.2025.112023","DOIUrl":"10.1016/j.jclinepi.2025.112023","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"189 ","pages":"Article 112023"},"PeriodicalIF":5.2,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145338056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01 | DOI: 10.1016/j.jclinepi.2025.112130
Editors’ Choice January 2026
Andrea C. Tricco, María Ximena Rojas, David Tovey
{"title":"Editors’ Choice January 2026","authors":"Andrea C. Tricco, María Ximena Rojas, David Tovey","doi":"10.1016/j.jclinepi.2025.112130","DOIUrl":"10.1016/j.jclinepi.2025.112130","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"189 ","pages":"Article 112130"},"PeriodicalIF":5.2,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145976274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01 | DOI: 10.1016/j.jclinepi.2025.112011
Reply: A needed evolution in GRADE to address dissemination (publication) bias
Holger J. Schünemann, Elie A. Akl, Ignacio Neumann, Joerg J. Meerpohl
{"title":"Reply: A needed evolution in GRADE to address dissemination (publication) bias","authors":"Holger J. Schünemann, Elie A. Akl, Ignacio Neumann, Joerg J. Meerpohl","doi":"10.1016/j.jclinepi.2025.112011","DOIUrl":"10.1016/j.jclinepi.2025.112011","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"189 ","pages":"Article 112011"},"PeriodicalIF":5.2,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145276572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01 | DOI: 10.1016/j.jclinepi.2025.112022
Results of finite mixture models remain inconsistent: Reply to Stone
Colin Xu, Florian Naudet, Thomas T. Kim, Michael P. Hengartner, Mark A. Horowitz, Joanna Moncrieff, Ed Pigott, Martin Plöderl
{"title":"Results of finite mixture models remain inconsistent: Reply to Stone","authors":"Colin Xu, Florian Naudet, Thomas T. Kim, Michael P. Hengartner, Mark A. Horowitz, Joanna Moncrieff, Ed Pigott, Martin Plöderl","doi":"10.1016/j.jclinepi.2025.112022","DOIUrl":"10.1016/j.jclinepi.2025.112022","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"189 ","pages":"Article 112022"},"PeriodicalIF":5.2,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-29 | DOI: 10.1016/j.jclinepi.2025.112119
A protocol for the development of a core outcome set for adults with depression
C. Veal , K.R. Krause , E.I. Fried , A. Cipriani , P. Cuijpers , J. Downs , T.A. Furukawa , G. Gartlehner , S.D. Hollon , H. Levy-Soussan , G. Sahlem , A. Tomlinson , S. Touboul , P. Ravaud , V.-T. Tran , A. Chevance
Background and Objective
Heterogeneous outcome measurement limits the comparison and combination of results from randomized controlled trials and observational studies aimed at evaluating therapeutic interventions for depression. We report here the protocol for the development of a Core Outcome Set (COS) for adults with depression.
Methods
Development will follow a multistep approach: (1) generating outcome domains that matter to people with lived experience of depression, health care professionals, and carers through a large international online survey using open-ended questions; (2) selecting domains based on the preferences of key interest holders through an international online preference elicitation survey; and (3) identifying relevant outcome measures with sufficient measurement properties through several systematic reviews conducted according to COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) standards.
Discussion
The protocol describes a proof-of-concept approach to include large numbers of individuals from all key interest holder groups in COS development, which could be replicated in other conditions and contexts.
{"title":"A protocol for the development of a core outcome set for adults with depression","authors":"C. Veal , K.R. Krause , E.I. Fried , A. Cipriani , P. Cuijpers , J. Downs , T.A. Furukawa , G. Gartlehner , S.D. Hollon , H. Levy-Soussan , G. Sahlem , A. Tomlinson , S. Touboul , P. Ravaud , V.-T. Tran , A. Chevance","doi":"10.1016/j.jclinepi.2025.112119","DOIUrl":"10.1016/j.jclinepi.2025.112119","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Heterogeneous outcome measurement limits the comparison and combination of results from randomized controlled trials and observational studies aimed at evaluating therapeutic interventions for depression. We report here the protocol for the development of a Core Outcome Set (COS) for adults with depression.</div></div><div><h3>Methods</h3><div>Development will follow a multistep approach with: (1) generating outcome domains that matter to people with lived experiences of depression, health care professionals, and carers through a large online international survey using open-ended questions; (2). selecting domains based on the preferences of key interest holders through an international online preference elicitation survey; and (3) identifying relevant outcome measures with measurement properties considered sufficient through several systematic reviews conducted according to COnsensus-based Standards for the selection of health Measurement INstruments standards.</div></div><div><h3>Discussion</h3><div>The protocol describes a proof-of-concept approach to include large numbers of individuals from all key interest holder groups in COS development, which could be replicated in other conditions and contexts.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"191 ","pages":"Article 112119"},"PeriodicalIF":5.2,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145879283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-27 | DOI: 10.1016/j.jclinepi.2025.112120
An imputation study shows that missing outcome data can substantially bias pooled estimates in systematic reviews of patient-reported outcomes
Yanjiao Shen , Zhengchi Li , Xianlin Gu , Yifan Yao , Sameer Parpia , Diane Heels-Ansdell , Yaping Chang , Ying Wang , Qingyang Shi , Qiukui Hao , Sepideh Mardani Jadid , Tachit Jiravichitchai , Akira Kuriyama , Zuojia Shang , Yuting Wang , Yunli Zhao , Ya Gao , Liang Du , Jin Huang , Gordon Guyatt
Background and Objectives
Missing outcome data (hereafter referred to as “missing data,” typically due to loss to follow-up) is a major problem in randomized controlled trials (RCTs) and systematic reviews of RCTs. While prior work has examined the impact of missing binary outcomes, the influence of missing continuous patient-reported outcome measures (PROMs) on pooled effect estimates remains poorly understood. We therefore assessed the risk of bias introduced by missing data in systematic reviews of PROMs.
Study Design and Setting
We selected a representative sample of 100 systematic reviews that included meta-analyses reporting a statistically significant effect on a continuous patient-reported efficacy outcome. We applied four increasingly stringent imputation strategies based on the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach, along with three alternative approaches for handling studies in which investigators had already imputed results for missing data. We also conducted Firth logistic regression analyses to identify factors associated with crossing the null after imputation.
Results
Results from the 100 systematic reviews, which included 1298 RCTs, proved similar across all three approaches to handling already-imputed data. Under the least stringent imputation strategy, the percentage of meta-analyses in which the 95% CI crossed the null was under 4%. Under the next most stringent strategy, it increased to 47.9%, and it rose only marginally thereafter, to 53.1% and 54.2% under the two most stringent strategies. Firth logistic regression identified two significant predictors of crossing the null after imputation: a higher average proportion of missing data (odds ratio [OR] 1.23, 95% CI: 1.11–1.43 per 1% increase in missing data) and a larger treatment effect, which was associated with lower odds of crossing the null (OR 0.70, 95% CI: 0.39–0.91 per 1 standardized mean difference increase). Neither database type (Cochrane vs non-Cochrane) nor duration of follow-up proved associated with the CI crossing the null.
Conclusion
A plausible imputation approach to test the potential risk of bias from missing data in studies addressing treatment effects on PROMs resulted in the 95% CI crossing the null in a high proportion of studies initially suggesting benefit. The greater the proportion of missing data and the smaller the treatment effect, the more likely the CI was to cross the null. Systematic review authors may consider formally testing the robustness of their results with respect to missing data.
Plain Language Summary
When studies included in a systematic review have missing outcome data, the study results may be biased and therefore misleading.
{"title":"An imputation study shows that missing outcome data can substantially bias pooled estimates in systematic reviews of patient-reported outcomes","authors":"Yanjiao Shen , Zhengchi Li , Xianlin Gu , Yifan Yao , Sameer Parpia , Diane Heels-Ansdell , Yaping Chang , Ying Wang , Qingyang Shi , Qiukui Hao , Sepideh Mardani Jadid , Tachit Jiravichitchai , Akira Kuriyama , Zuojia Shang , Yuting Wang , Yunli Zhao , Ya Gao , Liang Du , Jin Huang , Gordon Guyatt","doi":"10.1016/j.jclinepi.2025.112120","DOIUrl":"10.1016/j.jclinepi.2025.112120","url":null,"abstract":"<div><h3>Background and Objectives</h3><div>Missing outcome data (hereafter referred to as “missing data,” typically due to loss to follow-up) is a major problem in randomized controlled trials (RCTs) and systematic reviews of RCTs. While prior work has examined the impact of missing binary outcomes, the influence of missing continuous patient-reported outcome measures (PROMs) on pooled effect estimates remains poorly understood. We therefore assessed the risk of bias introduced by missing data in systematic reviews of PROMs.</div></div><div><h3>Study Design and Setting</h3><div>We selected a representative sample of 100 systematic reviews that included meta-analyses reporting a statistically significant effect on a continuous patient-reported efficacy outcome. We applied four increasingly stringent imputation strategies based on the grading of recommendations assessment, development, and evaluation (GRADE) approach, along with three alternative approaches for handling studies in which investigators had already imputed results for missing data. We also conducted Firth logistic regression analyses to identify factors associated with crossing the null after imputation.</div></div><div><h3>Results</h3><div>Results from 100 systematic reviews that included 1298 RCTs proved similar across all three approaches to addressing imputed data. Using the least stringent strategy for imputing missing data, the percentage of meta-analyses in which the 95% CI crossed the null proved under 4%. Applying the next most stringent strategy, the percentage of CIs that crossed the null increased to 47.9%. Percentages crossing the null increased only marginally for the two most stringent approaches, crossing up to 53.1% in the next most stringent and 54.2% in the most stringent. Firth logistic regression identified two significant predictors of crossing the null after imputation: a higher average missing data (odds ratio [OR] 1.23, 95% CI: 1.11–1.43 per 1% increase in missing data) and a larger magnitude of the treatment effect, which was associated with lower odds of crossing the null (OR 0.70, 95% CI: 0.39–0.91 per 1 standardized mean difference increase). Neither database type (Cochrane vs. non-Cochrane) nor duration of follow-up proved associated with CI crossing the null.</div></div><div><h3>Conclusion</h3><div>A plausible imputation approach to test the potential risk of bias as a result of missing data in studies addressing treatment effects on PROMs resulted in 95% CIs in a high proportion of studies initially suggesting benefit crossing the null. The greater the proportion of missing data and the smaller the treatment effect, the more likely the CI crossed the null. 
Systematic review authors may consider formally testing the robustness of their results with respect to missing data.</div></div><div><h3>Plain Language Summary</h3><div>When studies included in a systematic review have missing outcome data, the study results may be biased and therefore misleading. I","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"191 ","pages":"Article 112120"},"PeriodicalIF":5.2,"publicationDate":"2025-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145858961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
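To make the sensitivity-analysis logic concrete, the sketch below imputes outcomes for participants lost to follow-up under increasingly unfavorable assumptions about the treatment arm and checks whether the pooled 95% CI crosses the null. It is a minimal sketch, not the study's code: the trials, the fixed-effect inverse-variance pooling, and the shift values are illustrative stand-ins, not the GRADE-based strategies the authors applied.

```python
# Minimal sketch (NOT the study's code): hypothetical trials, fixed-effect
# inverse-variance pooling, and illustrative imputation shifts standing in
# for the increasingly stringent GRADE-based strategies.
import math

def pooled_md(trials):
    """Inverse-variance pooled mean difference with a 95% CI."""
    w_sum = wd_sum = 0.0
    for t in trials:
        se2 = t["sd_t"] ** 2 / t["n_t"] + t["sd_c"] ** 2 / t["n_c"]
        w = 1.0 / se2
        w_sum += w
        wd_sum += w * (t["mean_t"] - t["mean_c"])
    md, se = wd_sum / w_sum, math.sqrt(1.0 / w_sum)
    return md, md - 1.96 * se, md + 1.96 * se

def impute(trial, shift_t, shift_c):
    """Give missing participants their arm mean plus a shift (lower scores
    are better here, so a positive treatment-arm shift is more stringent)."""
    t = dict(trial)
    for arm, shift in (("t", shift_t), ("c", shift_c)):
        n_obs, n_mis, mean = t[f"n_{arm}"], t[f"miss_{arm}"], t[f"mean_{arm}"]
        t[f"mean_{arm}"] = (n_obs * mean + n_mis * (mean + shift)) / (n_obs + n_mis)
        t[f"n_{arm}"] = n_obs + n_mis
    return t

# Hypothetical trials: completer means/SDs/counts plus numbers lost to follow-up.
trials = [
    {"mean_t": -4.0, "sd_t": 6.0, "n_t": 90, "miss_t": 30,
     "mean_c": -2.0, "sd_c": 6.0, "n_c": 95, "miss_c": 25},
    {"mean_t": -3.5, "sd_t": 5.5, "n_t": 60, "miss_t": 20,
     "mean_c": -2.2, "sd_c": 5.5, "n_c": 58, "miss_c": 22},
]

# Increasingly unfavorable assumptions about missing treatment-arm outcomes.
scenarios = {"none": (0, 0), "mild": (1, 0), "moderate": (2, 0), "severe": (3, -1)}
for label, (s_t, s_c) in scenarios.items():
    md, lo, hi = pooled_md([impute(t, s_t, s_c) for t in trials])
    flag = "  <- crosses the null" if lo <= 0.0 <= hi else ""
    print(f"{label:>8}: MD {md:5.2f} (95% CI {lo:5.2f} to {hi:5.2f}){flag}")
```

With these made-up numbers, the pooled effect is significant with no imputation but the CI widens toward and eventually crosses the null as the assumed outcomes for missing treatment-arm participants worsen, mirroring the pattern the study reports.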
Pub Date: 2025-12-27 | DOI: 10.1016/j.jclinepi.2025.112122
Defining survival epidemiology: postdiagnosis population science for people living with disease
Raphael E. Cuomo
Objectives
Epidemiology is largely organized to explain who becomes ill, yet many clinical and public health decisions occur after diagnosis. I introduce and formally define survival epidemiology as a new branch of science focused on assessing how people live longer and better with established disease, and I provide justification that prevention estimates should not be assumed to apply postdiagnosis.
Study Design and Setting
Conceptual and methodological commentary synthesizing evidence across cardiovascular, renal, oncologic, pulmonary, and hepatic conditions and integrating causal-inference and time-to-event principles for postdiagnosis questions.
Results
Across diseases, associations measured for incidence often fail to reproduce, and sometimes reverse, among patients with established disease. Diagnosis acts as a causal threshold that changes time scales and bias structures, including conditioning on disease (collider stratification), time-dependent confounding, immortal time bias, and reverse causation (see the toy simulation after the Conclusion below). Credible postdiagnosis inference requires designs that emulate randomized trials; explicit alignment of time zero with clinical decision points; treatment strategies defined as they are used in practice; and handling of competing risks, multistate transitions, and longitudinal biomarkers (including joint models when appropriate). Essential postdiagnosis data include stage, molecular subtype, prior therapy lines, dose intensity and modifications, adverse events, performance status, and patient-reported outcomes. Recommended practice is parallel estimation of prevention and postdiagnosis survival effects for the same exposure–disease pairs, with routine reporting of heterogeneity by stage, subtype, treatment pathway, and time since diagnosis.
Conclusion
Prevention and postdiagnosis survival are distinct inferential targets. Journals should require clarity on whether claims pertain to prevention or survival and report target-trial elements; guideline bodies should distinguish prevention from survival recommendations when evidence allows; and funders, training programs, and public communication should support survival-focused methods, data standards, and context-specific messaging for people living with disease.
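The immortal time bias named in the Results can be made concrete with a toy simulation: when "treated" status requires surviving long enough to receive treatment, starting the clock at diagnosis manufactures a survival advantage even when the treatment does nothing, while realigning time zero at a landmark removes the artifact. This sketch is illustrative only and not from the article; it assumes exponential survival with no true treatment effect, and all numbers are made up.

```python
# Toy simulation of immortal time bias (illustrative; not from the article).
# Treatment has NO effect, yet the naive comparison from diagnosis favors the
# "treated," who must survive long enough to be treated; a landmark analysis
# that realigns time zero removes the artifact.
import random

random.seed(7)
N, LANDMARK = 200_000, 6.0  # patients; landmark in months after diagnosis

naive = {"treated": [], "untreated": []}
landmark = {"treated": [], "untreated": []}
for _ in range(N):
    death = random.expovariate(1 / 24.0)  # months from diagnosis (mean 24)
    tx = random.uniform(0.0, 12.0)        # scheduled treatment time
    naive["treated" if death > tx else "untreated"].append(death)
    if death > LANDMARK:  # alive at the landmark: classify by status *then*
        group = "treated" if tx <= LANDMARK else "untreated"
        landmark[group].append(death - LANDMARK)

mean = lambda xs: sum(xs) / len(xs)
print(f"naive    : treated {mean(naive['treated']):.1f} vs "
      f"untreated {mean(naive['untreated']):.1f} months")    # large spurious gap
print(f"landmark : treated {mean(landmark['treated']):.1f} vs "
      f"untreated {mean(landmark['untreated']):.1f} months")  # ~equal (~24)
```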
{"title":"Defining survival epidemiology: postdiagnosis population science for people living with disease","authors":"Raphael E. Cuomo","doi":"10.1016/j.jclinepi.2025.112122","DOIUrl":"10.1016/j.jclinepi.2025.112122","url":null,"abstract":"<div><h3>Objectives</h3><div>Epidemiology is largely organized to explain who becomes ill, yet many clinical and public health decisions occur after diagnosis. I introduce and formally define survival epidemiology as a new branch of science focused on assessing how people live longer and better with established disease, and I provide justification that prevention estimates should not be assumed to apply postdiagnosis.</div></div><div><h3>Study Design and Setting</h3><div>Conceptual and methodological commentary synthesizing evidence across cardiovascular, renal, oncologic, pulmonary, and hepatic conditions and integrating causal-inference and time-to-event principles for postdiagnosis questions.</div></div><div><h3>Results</h3><div>Across diseases, associations measured for incidence often fail to reproduce, and sometimes reverse, among patients with established disease. Diagnosis acts as a causal threshold that changes time scales and bias structures, including conditioning on disease (collider stratification), time-dependent confounding, immortal time bias, and reverse causation. Credible postdiagnosis inference requires designs that emulate randomized trials; explicit alignment of time zero with clinical decision points; strategies defined as used in practice; and handling of competing risks, multistate transitions, and longitudinal biomarkers (including joint models when appropriate). Essential postdiagnosis data include stage, molecular subtype, prior therapy lines, dose intensity and modifications, adverse events, performance status, and patient-reported outcomes. Recommended practice is parallel estimation of prevention and postdiagnosis survival effects for the same exposure–disease pairs and routine reporting of heterogeneity by stage, subtype, treatment pathway, and time since diagnosis.</div></div><div><h3>Conclusion</h3><div>Prevention and postdiagnosis survival are distinct inferential targets. Journals should require clarity on whether claims pertain to prevention or survival and report target-trial elements; guideline bodies should distinguish prevention from survival recommendations when evidence allows; and funders, training programs, and public communication should support survival-focused methods, data standards, and context-specific messaging for people living with disease.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"191 ","pages":"Article 112122"},"PeriodicalIF":5.2,"publicationDate":"2025-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145859002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-27 | DOI: 10.1016/j.jclinepi.2025.112121
Systematic reviews of quasi-experimental studies: challenges and considerations
Sarah B. Windle , Sam Harper , Jasleen Arneja , Peter Socha , Arijit Nandi
Background
In contrast to other observational study designs, quasi-experimental approaches (eg, difference-in-differences, interrupted time series, regression discontinuity, instrumental variable, synthetic control) account for some sources of unmeasured confounding and can estimate causal effects under weaker assumptions (see the toy calculation after this entry's recommendations). Studies applying quasi-experimental approaches have grown in popularity in recent decades; investigators conducting systematic reviews of observational studies, particularly in biomedical, public health, or epidemiologic content areas, must therefore be prepared to encounter and appropriately assess these approaches.
Objective
Our objective is to describe key methodological challenges and considerations for systematic reviews that include quasi-experimental studies, with attention to current recommendations and approaches applied in previous reviews.
Conclusion
Recommendations for authors of systematic reviews: We recommend that individuals conducting systematic reviews that include quasi-experimental studies: (1) search a broad range of bibliographic databases and gray literature, including preprint repositories; (2) avoid search strategies that require specific study-design terms for identification, given inconsistent nomenclature and poor database indexing for quasi-experimental studies; (3) ensure that their review team includes several individuals with expertise in quasi-experimental designs for screening and risk of bias assessment in duplicate; (4) use an approach to risk of bias assessment that is sufficiently granular to identify the studies most likely to report unbiased estimates of causal effects (eg, a modified Risk Of Bias In Non-randomized Studies - of Interventions [ROBINS-I] tool); and (5) consider the implications of the varied estimands when interpreting estimates from different quasi-experimental designs. Researchers may also consider restricting systematic review inclusion to quasi-experimental studies for feasibility when addressing research questions with large bodies of literature. However, a more inclusive approach is preferred, as well-designed studies using a variety of methodological approaches may be more credible than a quasi-experiment that violates its causal assumptions.
Recommendations for the research community: Many of the challenges faced in conducting systematic reviews of quasi-experimental studies would be ameliorated by improved consistency in nomenclature, as well as greater transparency from authors in describing their research designs. The broader community (eg, research networks, journals) should consider the creation and implementation of reporting standards and protocol registration for quasi-experimental studies to improve study identification in systematic reviews.
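As a concrete instance of the first design the Background names, the toy calculation below shows the two-period difference-in-differences estimator: subtracting the control group's pre-post change from the treated group's removes any time-invariant group difference and any shared time trend, leaving the treatment effect. The numbers are invented and purely illustrative.

```python
# Toy two-period difference-in-differences (illustrative; not from the
# article). Group means before and after the intervention; numbers are
# made up. The estimator cancels baseline group differences and the
# common time trend.
means = {
    ("treated", "pre"): 10.0, ("treated", "post"): 16.0,
    ("control", "pre"): 8.0,  ("control", "post"): 11.0,
}
did = ((means[("treated", "post")] - means[("treated", "pre")])
       - (means[("control", "post")] - means[("control", "pre")]))
print(f"difference-in-differences estimate: {did}")  # 6 - 3 = 3
```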
{"title":"Systematic reviews of quasi-experimental studies: challenges and considerations","authors":"Sarah B. Windle , Sam Harper , Jasleen Arneja , Peter Socha , Arijit Nandi","doi":"10.1016/j.jclinepi.2025.112121","DOIUrl":"10.1016/j.jclinepi.2025.112121","url":null,"abstract":"<div><h3>Background</h3><div>In contrast to other observational study designs, quasi-experimental approaches (eg, difference-in-differences, interrupted time series, regression discontinuity, instrumental variable, synthetic control) account for some sources of unmeasured confounding and can estimate causal effects under weaker assumptions. Studies which apply quasi-experimental approaches have increased in popularity in recent decades, therefore investigators conducting systematic reviews of observational studies, particularly in biomedical, public health, or epidemiologic content areas, must be prepared to encounter and appropriately assess these approaches.</div></div><div><h3>Objective</h3><div>Our objective is to describe key methodological challenges and considerations for systematic reviews including quasi-experimental studies, with attention to current recommendations and approaches which have been applied in previous reviews.</div></div><div><h3>Conclusion</h3><div><em>Recommendations for authors of systematic reviews:</em> We recommend that individuals conducting systematic reviews including quasi-experimental studies: (1) search a broad range of bibliographic databases and gray literature, including preprint repositories; (2) do not use search strategies which require specific terms for study design for identification, given inconsistent nomenclature and poor database indexing for quasi-experimental studies; (3) ensure that their review team includes several individuals with expertise in quasi-experimental designs for screening and risk of bias assessment in duplicate; (4) use an approach to risk of bias assessment which is sufficiently granular to identify studies most likely to report unbiased estimates of causal effects (eg, modified Risk Of Bias In Nonrandomized Studies - of Interventions); and (5) consider the implications of varied estimands when interpreting estimates from different quasi-experimental designs. Researchers may also consider restricting systematic review inclusion to quasi-experimental studies for feasibility when addressing research questions with large bodies of literature. However, a more inclusive approach is preferred, as well-designed studies using a variety of methodological approaches may be more credible than a quasi-experiment which violates causal assumptions.</div><div><em>Recommendations for the research community:</em> Many of the challenges faced in conducting systematic reviews of quasi-experimental studies would be ameliorated by improved consistency in nomenclature, as well as greater transparency from authors in describing their research designs. 
The broader community (eg, research networks, journals) should consider the creation and implementation of reporting standards and protocol registration for quasi-experimental studies to improve study identification in systematic reviews.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"191 ","pages":"Article 112121"},"PeriodicalIF":5.2,"publicationDate":"2025-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145858997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-26 | DOI: 10.1016/j.jclinepi.2025.112091
Corrigendum to "Impact of active placebo controls on estimated drug effects in randomized trials: a meta-epidemiological study" [Journal of Clinical Epidemiology 188 (2025) 111998]
David Ruben Teindl Laursen, Mihaela Ivosevic Broager, Mathias Weis Damkjær, Andreas Halgreen Eiset, Mia Elkjær, Erlend Faltinsen, Ingrid Rose MacLean-Nyegaard, Camilla Hansen Nejstgaard, Asger Sand Paludan-Müller, Lasse Adrup Benné Petersen, Søren Viborg Vestergaard, Asbjørn Hróbjartsson
{"title":"Corrigendum to \"Impact of active placebo controls on estimated drug effects in randomized trials: a meta-epidemiological study\" [Journal of Clinical Epidemiology 188 (2025) 111998].","authors":"David Ruben Teindl Laursen, Mihaela Ivosevic Broager, Mathias Weis Damkjær, Andreas Halgreen Eiset, Mia Elkjær, Erlend Faltinsen, Ingrid Rose MacLean-Nyegaard, Camilla Hansen Nejstgaard, Asger Sand Paludan-Müller, Lasse Adrup Benné Petersen, Søren Viborg Vestergaard, Asbjørn Hróbjartsson","doi":"10.1016/j.jclinepi.2025.112091","DOIUrl":"https://doi.org/10.1016/j.jclinepi.2025.112091","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"112091"},"PeriodicalIF":5.2,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145846980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-23 | DOI: 10.1016/j.jclinepi.2025.112118
Adherence to TRIPOD+AI guideline: an updated reporting assessment tool
Emilie de Kanter , Tabea Kaul , Pauline Heus , Tom M. de Groot , René Harmen Kuijten , Johannes B. Reitsma , Gary S. Collins , Lotty Hooft , Karel G.M. Moons , Johanna A.A. Damen
Objectives
Incomplete reporting of research limits its usefulness and contributes to research waste. Numerous reporting guidelines have been developed to support complete and accurate reporting of health-care research studies. Completeness of reporting can be measured by evaluating adherence to reporting guidelines; however, assessment of adherence to a reporting guideline often lacks uniformity. In 2019, we developed a reporting adherence tool for the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. With recent advances in regression and artificial intelligence (AI)/machine learning (ML)-based methods, TRIPOD + AI (www.tripod-statment.org) was developed to replace the TRIPOD statement. The aim of this study was to develop an updated adherence tool for TRIPOD + AI.
Study Design and Setting
Based on the full TRIPOD + AI reporting guideline, including the accompanying explanation and elaboration light, and TRIPOD + AI for abstracts, we updated and expanded the original TRIPOD adherence tool and refined the adherence elements and their scoring rules through discussions within the author team and a pilot test.
Results
The updated tool comprises 37 main items and 136 adherence elements and includes several automated scoring rules. We developed separate TRIPOD + AI adherence tools for model development, for model evaluation, and for studies describing both in a single paper.
Conclusion
A uniform approach to assessing reporting adherence to TRIPOD + AI allows comparisons across fields, supports monitoring of reporting over time, and incentivizes primary study authors to comply.
Plain Language Summary
Accurate and complete reporting is crucial in biomedical research to ensure findings can be effectively used. To support researchers in reporting their findings well, reporting guidelines have been developed for different study types. One such guideline is TRIPOD, which focuses on research studies about medical prediction tools. In 2024, TRIPOD was updated to TRIPOD + AI to address the increasing use of AI and ML in prediction model studies. In 2019, we developed a scoring system to evaluate how well research papers on prediction tools adhered to the TRIPOD guideline, resulting in a reporting completeness score. This score allows for easier comparison of reporting completeness across medical fields and for monitoring improvement in reporting over time. With the introduction of TRIPOD + AI, an update of the scoring system was required to align with the new reporting recommendations. We achieved this by reviewing our previous scoring system and incorporating the new items from TRIPOD + AI to better suit studies involving AI. We believe that this system will facilitate comparisons of prediction model reporting completeness.
{"title":"Adherence to TRIPOD+AI guideline: an updated reporting assessment tool","authors":"Emilie de Kanter , Tabea Kaul , Pauline Heus , Tom M. de Groot , René Harmen Kuijten , Johannes B. Reitsma , Gary S. Collins , Lotty Hooft , Karel G.M. Moons , Johanna A.A. Damen","doi":"10.1016/j.jclinepi.2025.112118","DOIUrl":"10.1016/j.jclinepi.2025.112118","url":null,"abstract":"<div><h3>Objectives</h3><div>Incomplete reporting of research limits its usefulness and contributes to research waste. Numerous reporting guidelines have been developed to support complete and accurate reporting of health-care research studies. Completeness of reporting can be measured by evaluating the adherence to reporting guidelines. However, assessing adherence to a reporting guideline often lacks uniformity. In 2019, we developed a reporting adherence tool for the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. With recent advances in regression and artificial intelligence (AI)/machine learning (ML)–based methods, TRIPOD + AI (<span><span>www.tripod-statment.org</span><svg><path></path></svg></span>) was developed to replace the TRIPOD statement. The aim of this study was to develop an updated adherence tool for TRIPOD + AI.</div></div><div><h3>Study Design and Setting</h3><div>Based on the TRIPOD + AI full reporting guideline, including the accompanying explanation and elaboration light, and TRIPOD + AI for abstracts, we updated and expanded the original TRIPOD adherence tool and refined the adherence elements and their scoring rules through discussions within the author team and a pilot test.</div></div><div><h3>Results</h3><div>The updated tool comprises of 37 main items and 136 adherence elements and includes several automated scoring rules. We developed separate TRIPOD + AI adherence tools for model development, model evaluation, and for studies describing both in a single paper.</div></div><div><h3>Conclusion</h3><div>A uniform approach to assessing reporting adherence of TRIPOD + AI allows for comparisons across various fields, monitor reporting over time, and incentivizes primary study authors to comply.</div></div><div><h3>Plain Language Summary</h3><div>Accurate and complete reporting is crucial in biomedical research to ensure findings can be effectively used. To support researchers in reporting their findings well, reporting guidelines have been developed for different study types. One such guideline is TRIPOD, which focuses on research studies about medical prediction tools. In 2024, TRIPOD was updated to TRIPOD + AI to address the increasing use of AI and ML in prediction model studies. In 2019, we developed a scoring system to evaluate how well research papers on prediction tools adhered to the TRIPOD guideline, resulting in a reporting completeness score. This score allows for easier comparison of reporting completeness across various medical fields, and to monitor improvement in reporting over time. With the introduction of TRIPOD + AI, an update of the scoring system was required to align with the new reporting recommendations. We achieved this by reviewing our previous scoring system and incorporating the new items from TRIPOD + AI to better suit studies involving AI. 
We believe that this system will facilitate comparisons of prediction model reporting co","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"191 ","pages":"Article 112118"},"PeriodicalIF":5.2,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145835262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
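The item/element structure the authors describe lends itself to mechanical scoring. Below is a generic sketch, not the actual TRIPOD + AI tool: elements are scored yes/no/not-applicable, an item counts as adhered to only if every applicable element is met, and a paper's score is the proportion of applicable items adhered to. The item and element names are invented for illustration.

```python
# Generic adherence-scoring sketch (NOT the authors' tool). Each item holds
# element-level judgments; items with no applicable elements drop out of the
# denominator. Requires Python 3.10+ for the union type hints.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    elements: dict[str, str]  # element description -> "yes" | "no" | "n/a"

def item_adhered(item: Item) -> bool | None:
    """True if every applicable element is met; None if nothing applies."""
    applicable = [v for v in item.elements.values() if v != "n/a"]
    if not applicable:
        return None
    return all(v == "yes" for v in applicable)

def adherence_score(items: list[Item]) -> float:
    """Proportion of applicable items fully adhered to."""
    judged = [j for j in (item_adhered(i) for i in items) if j is not None]
    return sum(judged) / len(judged)

# Invented example paper with three items.
paper = [
    Item("Title identifies a prediction model study", {"model purpose stated": "yes"}),
    Item("Data sources described", {"sources named": "yes", "study dates given": "no"}),
    Item("Trial registration reported", {"registry and ID given": "n/a"}),
]
print(f"adherence: {adherence_score(paper):.0%}")  # 1 of 2 applicable items -> 50%
```

Scoring items as all-or-nothing over their applicable elements is one defensible aggregation rule; a real tool could equally report element-level proportions, and the authors' automated rules are not reproduced here.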