An Appraisal of the Quality of Development and Reporting of Predictive Models in Neurosurgery: A Systematic Review
Syed I Khalid, Elie Massaad, Joanna Mary Roy, Kyle Thomson, Pranav Mirpuri, Ali Kiapour, John H Shin
Neurosurgery, 2025-02-01 (Epub 2024-06-28), 269-275. DOI: 10.1227/neu.0000000000003074
Citations: 0
Abstract
Background and objectives: Significant evidence has indicated that the reporting quality of novel predictive models is poor because of confounding by small data sets, inappropriate statistical analyses, and a lack of validation and reproducibility. The Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) statement was developed to increase the generalizability of predictive models. This study evaluated the quality of predictive models reported in neurosurgical literature through their compliance with the TRIPOD guidelines.
Methods: Articles reporting prediction models published in the top 5 neurosurgery journals by SCImago Journal Rank-2 (Neurosurgery, Journal of Neurosurgery, Journal of Neurosurgery: Spine, Journal of NeuroInterventional Surgery, and Journal of Neurology, Neurosurgery, and Psychiatry) between January 1st, 2018, and January 1st, 2023, were identified through a PubMed search strategy that combined terms related to machine learning and prediction modeling. These original research articles were analyzed against the TRIPOD criteria.
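For illustration only, the sketch below shows how a PubMed search of the kind described could be run programmatically with Biopython's Entrez E-utilities wrapper. The query terms, journal filter, and date handling are assumptions made for this example, not the authors' actual search strategy.

```python
# Hypothetical sketch of a PubMed search combining machine learning and
# prediction-modeling terms, restricted to the five journals and the date
# window described above. The query string is illustrative only.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

journals = (
    '"Neurosurgery"[Journal] OR "Journal of neurosurgery"[Journal] OR '
    '"Journal of neurosurgery. Spine"[Journal] OR '
    '"Journal of neurointerventional surgery"[Journal] OR '
    '"Journal of neurology, neurosurgery, and psychiatry"[Journal]'
)
topic = (
    '("machine learning"[Title/Abstract] OR "deep learning"[Title/Abstract] OR '
    '"prediction model"[Title/Abstract] OR "predictive model"[Title/Abstract] OR '
    '"prognostic model"[Title/Abstract])'
)
query = f"({journals}) AND {topic}"

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",          # filter on publication date
    mindate="2018/01/01",
    maxdate="2023/01/01",
    retmax=1000,
)
record = Entrez.read(handle)
print(record["Count"], "candidate articles")
print(record["IdList"][:10])  # first few PMIDs for title/abstract screening
```

In a real review, the returned PMIDs would then be screened against the inclusion criteria (original research reporting a prediction model) before TRIPOD scoring.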
Results: A total of 110 articles were assessed with the TRIPOD checklist. The median compliance was 57.4% (IQR: 50.0%-66.7%). Machine learning-based models exhibited lower compliance on average than conventional models (57.1% [50.0%-66.7%] vs 68.1% [50.2%-68.1%], P = .472). Among the TRIPOD criteria, the lowest compliance was observed for blinded assessment of predictors and outcomes (n = 7, 12.7% and n = 10, 16.9%, respectively), inclusion of an informative title (n = 17, 15.6%), and reporting of model performance measures such as confidence intervals (n = 27, 24.8%). Few studies provided sufficient information to allow external validation of results (n = 26, 25.7%).
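As a minimal sketch of the arithmetic behind these figures, each article receives a compliance percentage (applicable TRIPOD items reported divided by applicable items), and the cohort is summarized by the median and interquartile range of those percentages. The scores below are fabricated placeholders, not data from the review.

```python
# Illustrative TRIPOD compliance calculation: per-article percentage of
# applicable checklist items reported, then median and IQR across articles.
from statistics import median, quantiles

# 1 = item reported, 0 = item not reported, None = item not applicable
articles = {
    "article_1": [1, 1, 0, 1, None, 0, 1, 1],
    "article_2": [1, 0, 0, 1, 1, 0, 1, None],
    "article_3": [1, 1, 1, 1, 1, 0, 1, 1],
}

def compliance(scores):
    applicable = [s for s in scores if s is not None]
    return 100.0 * sum(applicable) / len(applicable)

rates = [compliance(s) for s in articles.values()]
q1, _, q3 = quantiles(rates, n=4)  # lower and upper quartiles

print(f"median compliance {median(rates):.1f}% (IQR {q1:.1f}%-{q3:.1f}%)")
```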
Conclusion: Published predictive models in neurosurgery commonly fall short of the guidelines laid out by TRIPOD for optimal development, validation, and reporting. This lack of compliance may reflect the limited extent to which these models have been externally validated or adopted into routine clinical practice in neurosurgery.
Journal introduction:
Neurosurgery, the official journal of the Congress of Neurological Surgeons, publishes research on clinical and experimental neurosurgery covering the very latest developments in science, technology, and medicine. For professionals aware of the rapid pace of developments in the field, this journal is nothing short of indispensable as the most complete window on the contemporary field of neurosurgery.
Neurosurgery is the fastest-growing journal in the field, with a worldwide reputation for reliable coverage delivered with a fresh and dynamic outlook.