Comprehensive segmentation of gray matter structures on T1-weighted brain MRI: A Comparative Study of CNN, CNN hybrid-transformer or -Mamba architectures.
Yujia Wei, Jaidip Manikrao Jagtap, Yashbir Singh, Bardia Khosravi, Jason Cai, Jeffrey L Gunter, Bradley J Erickson
{"title":"Comprehensive segmentation of gray matter structures on T1-weighted brain MRI: A Comparative Study of CNN, CNN hybrid-transformer or -Mamba architectures.","authors":"Yujia Wei, Jaidip Manikrao Jagtap, Yashbir Singh, Bardia Khosravi, Jason Cai, Jeffrey L Gunter, Bradley J Erickson","doi":"10.3174/ajnr.A8544","DOIUrl":null,"url":null,"abstract":"<p><strong>Background and purpose: </strong>Recent advances in deep learning have shown promising results in medical image analysis and segmentation. However, most brain MRI segmentation models are limited by the size of their datasets and/or the number of structures they can identify. This study evaluates the performance of six advanced deep learning models in segmenting 122 brain structures from T1-weighted MRI scans, aiming to identify the most effective model for clinical and research applications.</p><p><strong>Materials and methods: </strong>1,510 T1-weighted MRIs were used to compare six deep-learning models for the segmentation of 122 distinct gray matter structures: nnU-Net, SegResNet, SwinUNETR, UNETR, U-Mamba_BOT and U-Mamba_ Enc. Each model was rigorously tested for accuracy using the Dice Similarity Coefficient (DSC) and the 95th percentile Hausdorff Distance (HD95). Additionally, the volume of each structure was calculated and compared between normal control (NC) and Alzheimer's Disease (AD) patients.</p><p><strong>Results: </strong>U-Mamba_Bot achieved the highest performance with a median DSC of 0.9112 [IQR:0.8957, 0.9250]. nnU-Net achieved a median DSC of 0.9027 [IQR: 0.8847, 0.9205] and had the highest HD95 of 1.392[IQR: 1.174, 2.029]. The value of each HD95 (<3mm) indicates its superior capability in capturing detailed brain structures accurately. Following segmentation, volume calculations were performed, and the resultant volumes of normal controls and AD patients were compared. The volume changes observed in thirteen brain substructures were all consistent with those reported in existing literature, reinforcing the reliability of the segmentation outputs.</p><p><strong>Conclusions: </strong>This study underscores the efficacy of U-Mamba_Bot as a robust tool for detailed brain structure segmentation in T1-weighted MRI scans. The congruence of our volumetric analysis with the literature further validates the potential of advanced deep-learning models to enhance the understanding of neurodegenerative diseases such as AD. Future research should consider larger datasets to validate these findings further and explore the applicability of these models in other neurological conditions.</p><p><strong>Abbreviations: </strong>AD = Alzheimer's Disease; ADNI = Alzheimer's Disease Neuroimaging Initiative; DSC = Dice Similarity Coefficient; HD95 = the 95th Percentile Hausdorff Distance; IQR = Interquartile Range; NC = Normal Control; SSMs = State-space Sequence Models.</p>","PeriodicalId":93863,"journal":{"name":"AJNR. American journal of neuroradiology","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AJNR. American journal of neuroradiology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3174/ajnr.A8544","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Background and purpose: Recent advances in deep learning have shown promising results in medical image analysis and segmentation. However, most brain MRI segmentation models are limited by the size of their datasets and/or the number of structures they can identify. This study evaluates the performance of six advanced deep learning models in segmenting 122 brain structures from T1-weighted MRI scans, aiming to identify the most effective model for clinical and research applications.
Materials and methods: A total of 1,510 T1-weighted MRIs were used to compare six deep learning models for the segmentation of 122 distinct gray matter structures: nnU-Net, SegResNet, SwinUNETR, UNETR, U-Mamba_Bot, and U-Mamba_Enc. Each model's accuracy was rigorously tested using the Dice Similarity Coefficient (DSC) and the 95th percentile Hausdorff Distance (HD95). Additionally, the volume of each structure was calculated and compared between normal controls (NC) and Alzheimer's Disease (AD) patients.
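For reference, the sketch below shows one common way to compute these two metrics per structure from binary masks. This is not the authors' evaluation code; the surface-based HD95 computation and millimeter voxel spacing are assumptions.

```python
# Minimal sketch of the two evaluation metrics (DSC and HD95), assuming binary
# per-structure masks and voxel spacing given in millimeters. Not the study's
# actual implementation.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of symmetric surface distances in mm (assumes non-empty masks)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels: the mask minus its binary erosion.
    pred_surf = pred ^ binary_erosion(pred)
    gt_surf = gt ^ binary_erosion(gt)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)
    distances = np.concatenate([dist_to_gt[pred_surf], dist_to_pred[gt_surf]])
    return float(np.percentile(distances, 95))
```

In practice, both metrics would be computed once per structure (122 labels) per scan, and the per-structure values summarized as medians with interquartile ranges, as reported below.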
Results: U-Mamba_Bot achieved the highest performance, with a median DSC of 0.9112 [IQR: 0.8957, 0.9250]. nnU-Net achieved a median DSC of 0.9027 [IQR: 0.8847, 0.9205] and had the highest HD95 of 1.392 [IQR: 1.174, 2.029]. All models produced HD95 values below 3 mm, indicating strong capability in capturing detailed brain structures accurately. Following segmentation, structure volumes were calculated and compared between normal controls and AD patients. The volume changes observed in thirteen brain substructures were all consistent with those reported in the existing literature, reinforcing the reliability of the segmentation outputs.
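The volumetric comparison can likewise be illustrated with a short sketch. The NIfTI file layout, label IDs, and file names below are hypothetical; the study's actual label scheme and statistical test are not specified in the abstract.

```python
# Minimal sketch of per-structure volume measurement from a multi-label
# segmentation saved as NIfTI. Label IDs and paths are illustrative only.
import numpy as np
import nibabel as nib

def structure_volumes_ml(seg_path: str) -> dict:
    """Return {label: volume in mL} for every labeled structure in a segmentation."""
    img = nib.load(seg_path)
    labels = np.asanyarray(img.dataobj).astype(np.int32)
    voxel_mm3 = float(np.prod(img.header.get_zooms()[:3]))  # mm^3 per voxel
    ids, counts = np.unique(labels[labels > 0], return_counts=True)
    return {int(i): float(c) * voxel_mm3 / 1000.0 for i, c in zip(ids, counts)}

# Hypothetical usage: compare one structure (label 17 here is illustrative)
# between normal-control and AD groups, e.g. with a nonparametric test:
#   from scipy.stats import mannwhitneyu
#   nc = [structure_volumes_ml(p).get(17, 0.0) for p in nc_segmentation_paths]
#   ad = [structure_volumes_ml(p).get(17, 0.0) for p in ad_segmentation_paths]
#   stat, p_value = mannwhitneyu(nc, ad)
```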
Conclusions: This study underscores the efficacy of U-Mamba_Bot as a robust tool for detailed brain structure segmentation in T1-weighted MRI scans. The congruence of our volumetric analysis with the literature further validates the potential of advanced deep learning models to enhance the understanding of neurodegenerative diseases such as AD. Future research should consider larger datasets to validate these findings further and to explore the applicability of these models in other neurological conditions.

Abbreviations: AD = Alzheimer's Disease; ADNI = Alzheimer's Disease Neuroimaging Initiative; DSC = Dice Similarity Coefficient; HD95 = the 95th Percentile Hausdorff Distance; IQR = Interquartile Range; NC = Normal Control; SSMs = State-space Sequence Models.