Keira L. Barr, J. Laframboise, T. Ungi, L. Hookey, G. Fichtinger
{"title":"Automated segmentation of computed tomography colonography images using a 3D U-Net","authors":"Keira L. Barr, J. Laframboise, T. Ungi, L. Hookey, G. Fichtinger","doi":"10.1117/12.2549749","DOIUrl":null,"url":null,"abstract":"PURPOSE: The segmentation of Computed Tomography (CT) colonography images is important to both colorectal research and diagnosis. This process often relies on manual interaction, and therefore depends on the user. Consequently, there is unavoidable interrater variability. An accurate method which eliminates this variability would be preferable. Current barriers to automated segmentation include discontinuities of the colon, liquid pooling, and that all air will appear the same intensity on the scan. This study proposes an automated approach to segmentation which employs a 3D implementation of U-Net. METHODS: This research is conducted on 76 CT scans. The U-Net comprises an analysis and synthesis path, both with 7 convolutional layers. By nature of the U-Net, output segmentation resolution matches the input resolution of the CT volumes. K-fold cross-validation is applied to ensure no evaluative bias, and accuracy is assessed by the Sorensen-Dice coefficient. Binary cross-entropy is employed as a loss metric. RESULTS: Average network accuracy is 98.81%, with maximum and minimum accuracies of 99.48% and 97.03% respectively. Standard deviation of K accuracies is 0.5%. CONCLUSION: The network performs with considerable accuracy, and can reliably distinguish between colon, small intestine, lungs, and ambient air. A low standard deviation is indicative of high consistency. This method for automatic segmentation could prove a supplemental or alternative tool for threshold-based segmentation. 
Future studies will include an expanded dataset and a further optimized network.","PeriodicalId":302939,"journal":{"name":"Medical Imaging: Image-Guided Procedures","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Imaging: Image-Guided Procedures","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2549749","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
PURPOSE: The segmentation of Computed Tomography (CT) colonography images is important to both colorectal research and diagnosis. This process often relies on manual interaction and therefore depends on the user, introducing unavoidable interrater variability. An accurate method that eliminates this variability would be preferable. Current barriers to automated segmentation include discontinuities of the colon, liquid pooling, and the fact that all air appears at the same intensity on the scan. This study proposes an automated approach to segmentation that employs a 3D implementation of U-Net.

METHODS: This research is conducted on 76 CT scans. The U-Net comprises an analysis path and a synthesis path, each with 7 convolutional layers. By the nature of the U-Net, the output segmentation resolution matches the input resolution of the CT volumes. K-fold cross-validation is applied to avoid evaluative bias, and accuracy is assessed by the Sørensen-Dice coefficient. Binary cross-entropy is employed as the loss function.

RESULTS: Average network accuracy is 98.81%, with maximum and minimum accuracies of 99.48% and 97.03%, respectively. The standard deviation of the K fold accuracies is 0.5%.

CONCLUSION: The network performs with considerable accuracy and can reliably distinguish between colon, small intestine, lungs, and ambient air. The low standard deviation indicates high consistency. This method for automatic segmentation could prove a supplemental or alternative tool to threshold-based segmentation. Future studies will include an expanded dataset and a further optimized network.
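The Sørensen-Dice coefficient used to assess accuracy above measures the overlap between a predicted and a reference binary mask: twice the intersection divided by the sum of the two mask volumes. The following is a minimal sketch of that metric on toy 3D volumes; the function name, array shapes, and epsilon smoothing term are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Sørensen-Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|).

    `eps` (an illustrative smoothing term) guards against division by zero
    when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# Toy 4x4x4 volumes; the paper's masks span full CT resolution.
a = np.zeros((4, 4, 4), dtype=np.uint8)
b = np.zeros((4, 4, 4), dtype=np.uint8)
a[1:3, 1:3, 1:3] = 1   # 8 voxels
b[1:3, 1:3, 0:2] = 1   # 8 voxels, 4 of which overlap `a`

print(dice_coefficient(a, b))  # 2*4 / (8+8) = 0.5
```

A Dice score of 1.0 means perfect overlap and 0.0 means none, so the reported average of 98.81% corresponds to near-complete agreement with the reference segmentations.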