Introduction
YouTube is increasingly used for neurosurgical learning; however, the educational quality, transparency, and reliability of neurosurgery-related content—and whether these features differ by video source—remain unclear.
Objective
To synthesize published evaluations of neurosurgical YouTube videos and meta-analyze standardized quality/reliability scores, exploring source- and time-related differences and reporting gaps in validation and procedural completeness.
Methods
Following PRISMA guidelines, we searched PubMed, Scopus, Embase, and Web of Science (2017–2024) for studies assessing neurosurgical YouTube videos with standardized tools (DISCERN/mDISCERN, JAMA Benchmark, and Global Quality Score [GQS]). Data were pooled using random-effects meta-analysis with the Hartung–Knapp adjustment; scores were also transformed to the Proportion of Maximum Possible (POMP, 0–100) scale.
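As an illustration of the pooling approach, the sketch below implements a generic random-effects model with the Hartung–Knapp confidence-interval adjustment. The between-study variance estimator (DerSimonian–Laird here), the function name, and the inputs are illustrative assumptions; the abstract does not specify the exact estimator or software used.

```python
import numpy as np
from scipy import stats

def random_effects_hk(y, v, alpha=0.05):
    """Pool study means y (with within-study variances v) under a
    random-effects model, using the Hartung-Knapp adjustment for the
    confidence interval. Minimal sketch under assumed estimator choices,
    not the review's documented analysis."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    # DerSimonian-Laird estimate of between-study variance tau^2
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights and pooled mean
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    # Hartung-Knapp variance with a t-based CI on k-1 degrees of freedom
    s2 = np.sum(w_re * (y - mu) ** 2) / ((k - 1) * np.sum(w_re))
    half_width = stats.t.ppf(1 - alpha / 2, df=k - 1) * np.sqrt(s2)
    # I^2: percentage of total variability attributable to heterogeneity
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return {"mean": mu, "ci": (mu - half_width, mu + half_width),
            "tau2": tau2, "I2": i2}
```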
Results
Sixteen studies (12–1,233 videos each) were included. On native scales, pooled means were: DISCERN per-item 3.06/5, DISCERN total 30.1/80, JAMA 2.41/4, and GQS 3.04/5. Harmonized POMP point estimates (0–100) were: DISCERN 39.8, JAMA 60.3, and GQS 51.0. Heterogeneity was substantial (I² > 95%) except for DISCERN total (I² = 0%). Subgroup analyses suggested higher scores for institutional than for non-institutional sources, although meta-regression did not confirm statistical significance. Validation and procedural completeness were infrequently reported.
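For concreteness, POMP harmonization is a simple linear rescaling of each score to its published bounds. Using the standard scale ranges (GQS scored 1–5; JAMA Benchmark 0–4), the transform reproduces the reported harmonized GQS and JAMA point estimates; the DISCERN figure need not match a direct rescaling of the pooled native mean, presumably because POMP was computed per study before pooling.

```latex
\mathrm{POMP} = 100 \times \frac{x - x_{\min}}{x_{\max} - x_{\min}}, \qquad
\mathrm{GQS:}\ 100 \times \frac{3.04 - 1}{5 - 1} = 51.0, \qquad
\mathrm{JAMA:}\ 100 \times \frac{2.41 - 0}{4 - 0} \approx 60.3
```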
Conclusion
Neurosurgical YouTube content shows moderate-to-low educational quality with substantial inconsistency. Institutional sources may perform better, but gaps in transparency, validation, structure, and procedural completeness are common. Standardized production criteria and curated peer-reviewed repositories may improve safe integration into neurosurgical education.
Clinical trial number
Not applicable