Automated pediatric brain tumor imaging assessment tool from CBTN: Enhancing suprasellar region inclusion and managing limited data with deep learning.
Deep B Gandhi, Nastaran Khalili, Ariana M Familiar, Anurag Gottipati, Neda Khalili, Wenxin Tu, Shuvanjan Haldar, Hannah Anderson, Karthik Viswanathan, Phillip B Storm, Jeffrey B Ware, Adam Resnick, Arastoo Vossough, Ali Nabavizadeh, Anahita Fathi Kazerooni
Journal: Neuro-Oncology Advances, Volume 6, Issue 1, vdae190 (Q1, Clinical Neurology)
DOI: 10.1093/noajnl/vdae190
Published: 2024-12-12 (eCollection 2024)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11664259/pdf/
Abstract
Background: Fully automatic skull-stripping and tumor segmentation are crucial for monitoring pediatric brain tumors (PBT). Current methods, however, often lack generalizability, particularly for rare tumors in the sellar/suprasellar regions and when applied to real-world clinical data in limited data scenarios. To address these challenges, we propose AI-driven techniques for skull-stripping and tumor segmentation.
Methods: Multi-institutional, multi-parametric MRI scans from 527 pediatric patients (n = 336 for skull-stripping, n = 489 for tumor segmentation) with various PBT histologies were processed to train separate nnU-Net-based deep learning models for skull-stripping, whole tumor (WT), and enhancing tumor (ET) segmentation. These models utilized single (T2/FLAIR) or multiple (T1-Gd and T2/FLAIR) input imaging sequences. Performance was evaluated using Dice scores, sensitivity, and 95% Hausdorff distances. Statistical comparisons included paired or unpaired 2-sample t-tests and Pearson's correlation coefficient based on Dice scores from different models and PBT histologies.
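The two segmentation-quality metrics named above, the Dice score and the 95% Hausdorff distance (HD95), can be computed directly from binary masks. The following is an illustrative sketch using NumPy and SciPy, not the authors' evaluation code; function names and the brute-force surface-distance approach (suitable only for small masks) are this sketch's own choices.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_score(pred, gt):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:          # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def hausdorff_95(pred, gt):
    """95th percentile of symmetric voxel-to-voxel surface distances.

    Brute force via a full pairwise distance matrix; fine for toy
    examples, too slow for full 3D MRI volumes (use a distance
    transform there instead).
    """
    p, g = np.argwhere(pred), np.argwhere(gt)
    d = cdist(p, g)                      # all pairwise Euclidean distances
    d_pg = d.min(axis=1)                 # each pred voxel -> nearest gt voxel
    d_gp = d.min(axis=0)                 # each gt voxel -> nearest pred voxel
    return np.percentile(np.hstack([d_pg, d_gp]), 95)
```

For example, two 4×4 squares offset by one column overlap in 12 of 16 pixels each, giving Dice = 24/32 = 0.75, with boundary disagreements of at most one pixel.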
Results: Dice scores for the skull-stripping models for whole brain and sellar/suprasellar region segmentation were 0.98 ± 0.01 (median 0.98) for both multi- and single-parametric models, with significant Pearson's correlation coefficient between single- and multi-parametric Dice scores (r > 0.80; P < .05 for all). Whole tumor Dice scores for single-input tumor segmentation models were 0.84 ± 0.17 (median = 0.90) for T2 and 0.82 ± 0.19 (median = 0.89) for FLAIR inputs. Enhancing tumor Dice scores were 0.65 ± 0.35 (median = 0.79) for T1-Gd+FLAIR and 0.64 ± 0.36 (median = 0.79) for T1-Gd+T2 inputs.
Conclusion: Our skull-stripping models demonstrate excellent performance while including the sellar/suprasellar regions, using either single- or multi-parametric inputs. Additionally, our automated tumor segmentation models reliably delineate whole lesions and ET regions, adapting to MRI sessions with missing sequences in limited-data contexts.