{"title":"利用深度学习实现颞骨计算机断层扫描中耳蜗的自动分割。","authors":"Zhenhua Li, Langtao Zhou, Songhua Tan, Bin Liu, Yu Xiao, Anzhou Tang","doi":"10.1177/02841851241307333","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Segmentation of the cochlea in temporal bone computed tomography (CT) is the basis for image-guided otologic surgery. Manual segmentation is time-consuming and laborious.</p><p><strong>Purpose: </strong>To assess the utility of deep learning analysis in automatic segmentation of the cochleae in temporal bone CT to differentiate abnormal images from normal images.</p><p><strong>Material and methods: </strong>Three models (3D U-Net, UNETR, and SegResNet) were trained to segment the cochlea on two CT datasets (two CT types: GE 64 and GE 256). One dataset included 77 normal samples, and the other included 154 samples (77 normal and 77 abnormal). A total of 20 samples that contained normal and abnormal cochleae in three CT types (GE 64, GE 256, and SE-DS) were tested on the three models. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used to assess the models.</p><p><strong>Results: </strong>The segmentation performances of the three models improved after adding abnormal cochlear images for training. SegResNet achieved the best performance. The average DSC on the test set was 0.94, and the HD was 0.16 mm; the performance was higher than those obtained by the 3D U-Net and UNETR models. The DSCs obtained using the GE 256 CT, SE-DS CT, and GE 64 CT models were 0.95, 0.94, and 0.93, respectively, and the HDs were 0.15, 0.18, and 0.12 mm, respectively.</p><p><strong>Conclusion: </strong>The SegResNet model is feasible and accurate for automated cochlear segmentation of temporal bone CT images.</p>","PeriodicalId":7143,"journal":{"name":"Acta radiologica","volume":" ","pages":"2841851241307333"},"PeriodicalIF":1.1000,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Utilizing deep learning for automatic segmentation of the cochleae in temporal bone computed tomography.\",\"authors\":\"Zhenhua Li, Langtao Zhou, Songhua Tan, Bin Liu, Yu Xiao, Anzhou Tang\",\"doi\":\"10.1177/02841851241307333\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Segmentation of the cochlea in temporal bone computed tomography (CT) is the basis for image-guided otologic surgery. Manual segmentation is time-consuming and laborious.</p><p><strong>Purpose: </strong>To assess the utility of deep learning analysis in automatic segmentation of the cochleae in temporal bone CT to differentiate abnormal images from normal images.</p><p><strong>Material and methods: </strong>Three models (3D U-Net, UNETR, and SegResNet) were trained to segment the cochlea on two CT datasets (two CT types: GE 64 and GE 256). One dataset included 77 normal samples, and the other included 154 samples (77 normal and 77 abnormal). A total of 20 samples that contained normal and abnormal cochleae in three CT types (GE 64, GE 256, and SE-DS) were tested on the three models. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used to assess the models.</p><p><strong>Results: </strong>The segmentation performances of the three models improved after adding abnormal cochlear images for training. SegResNet achieved the best performance. 
The average DSC on the test set was 0.94, and the HD was 0.16 mm; the performance was higher than those obtained by the 3D U-Net and UNETR models. The DSCs obtained using the GE 256 CT, SE-DS CT, and GE 64 CT models were 0.95, 0.94, and 0.93, respectively, and the HDs were 0.15, 0.18, and 0.12 mm, respectively.</p><p><strong>Conclusion: </strong>The SegResNet model is feasible and accurate for automated cochlear segmentation of temporal bone CT images.</p>\",\"PeriodicalId\":7143,\"journal\":{\"name\":\"Acta radiologica\",\"volume\":\" \",\"pages\":\"2841851241307333\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2025-01-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Acta radiologica\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1177/02841851241307333\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Acta radiologica","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/02841851241307333","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Utilizing deep learning for automatic segmentation of the cochleae in temporal bone computed tomography.
Background: Segmentation of the cochlea in temporal bone computed tomography (CT) is the basis for image-guided otologic surgery. Manual segmentation is time-consuming and laborious.
Purpose: To assess the utility of deep learning analysis in automatic segmentation of the cochleae in temporal bone CT to differentiate abnormal images from normal images.
Material and methods: Three models (3D U-Net, UNETR, and SegResNet) were trained to segment the cochlea on two CT datasets (two scanner types: GE 64 and GE 256). One dataset included 77 normal samples, and the other included 154 samples (77 normal and 77 abnormal). A total of 20 samples containing normal and abnormal cochleae from three scanner types (GE 64, GE 256, and SE-DS) were used to test the three models. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used to assess the models.
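For reference, the DSC measures volumetric overlap between the predicted and manual masks (twice the intersection divided by the sum of the two volumes), and the HD measures the largest boundary disagreement in millimeters. The abstract does not state how these metrics were implemented; the following is a minimal NumPy/SciPy sketch for binary 3D masks, in which the function names and the `spacing` parameter (voxel size in mm) are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def hausdorff_distance_mm(pred, gt, spacing):
    """Symmetric Hausdorff distance (HD) in mm between foreground voxel sets."""
    # Convert voxel indices to physical coordinates using the voxel spacing (mm).
    p = np.argwhere(pred) * np.asarray(spacing)
    g = np.argwhere(gt) * np.asarray(spacing)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```

Equivalent metrics (DiceMetric, HausdorffDistanceMetric) and reference implementations of 3D U-Net, UNETR, and SegResNet are available in the MONAI library, although the paper does not state which framework was used.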
Results: The segmentation performance of all three models improved after abnormal cochlear images were added for training. SegResNet achieved the best performance: the average DSC on the test set was 0.94 and the HD was 0.16 mm, higher than those obtained with the 3D U-Net and UNETR models. The DSCs obtained on the GE 256, SE-DS, and GE 64 CT scans were 0.95, 0.94, and 0.93, respectively, and the corresponding HDs were 0.15, 0.18, and 0.12 mm.
Conclusion: The SegResNet model is feasible and accurate for automated cochlear segmentation of temporal bone CT images.
Journal description:
Acta Radiologica publishes articles on all aspects of radiology, from clinical radiology to experimental work. It is known for articles based on experimental work and contrast-media research, and gives priority to original scientific papers. The distinguished international editorial board also invites review articles, short communications, and technical and instrumental notes.