Using a new artificial intelligence-aided method to assess body composition CT segmentation in colorectal cancer patients
Ke Cao, Josephine Yeung, Yasser Arafat, Jing Qiao, Richard Gartrell, Mobin Master, Justin M C Yeung, Paul N Baird
Introduction: This study aimed to evaluate the accuracy of our own artificial intelligence (AI) model in the automated segmentation and quantification of body composition from computed tomography (CT) slices at the lumbar (L3) level in colorectal cancer (CRC) patients.
Methods: A total of 541 axial CT slices at the L3 vertebra were retrospectively collected from 319 patients with CRC diagnosed during 2012-2019 at a single Australian tertiary institution, Western Health in Melbourne. A two-dimensional U-Net convolutional network was trained on 338 slices to segment muscle, visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT). Manual segmentations of muscle, VAT and SAT on these same slices served as the ground truth. The Dice similarity coefficient was used to assess U-Net segmentation performance on both a validation dataset (68 slices) and a test dataset (203 slices). Measurements of cross-sectional area and Hounsfield unit (HU) density of muscle, VAT and SAT were compared between the two methods.
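For readers who wish to reproduce the evaluation metrics described above, the following Python sketch illustrates how a Dice similarity coefficient and the cross-sectional area / HU density measurements can be computed from binary segmentation masks. It is a minimal illustration, not the authors' code: the pixel spacing, array shapes, function names and the synthetic masks are assumed values for demonstration only.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1 = tissue, 0 = background)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def tissue_metrics(mask: np.ndarray, hu_image: np.ndarray, pixel_spacing_mm=(0.78, 0.78)):
    """Cross-sectional area (cm^2) and mean HU density of one tissue class on an axial slice.

    pixel_spacing_mm is an assumed in-plane spacing; in practice it comes from the DICOM header.
    """
    mask = mask.astype(bool)
    pixel_area_cm2 = (pixel_spacing_mm[0] * pixel_spacing_mm[1]) / 100.0  # mm^2 -> cm^2
    area_cm2 = mask.sum() * pixel_area_cm2
    mean_hu = float(hu_image[mask].mean()) if mask.any() else float("nan")
    return area_cm2, mean_hu

# Synthetic 512 x 512 slice and masks (illustrative values only).
rng = np.random.default_rng(0)
hu_slice = rng.normal(loc=-50, scale=60, size=(512, 512))            # stand-in for a CT slice in HU
gt = np.zeros((512, 512), dtype=np.uint8)
gt[200:300, 150:350] = 1                                              # "manual" muscle mask
pred = np.zeros_like(gt)
pred[202:298, 152:348] = 1                                            # "AI" muscle mask

print(f"Dice: {dice_coefficient(pred, gt):.3f}")
print("Area (cm^2), mean HU:", tissue_metrics(pred, hu_slice))
```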
Results: Segmentation of muscle, VAT and SAT demonstrated excellent performance on both the validation (Dice similarity coefficients >0.98 for each tissue) and test (Dice similarity coefficients >0.97 for each tissue) datasets. There was a strong positive correlation between manual and AI segmentation measurements of body composition in both datasets (Spearman's correlation coefficients: 0.944-0.999, P < 0.001).
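The agreement statistic reported above can be reproduced with standard tooling; the sketch below applies scipy.stats.spearmanr to hypothetical paired area measurements (manual versus AI) and is not the study's analysis code.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired measurements (cm^2) for the same slices: manual vs. AI segmentation.
manual_area = np.array([152.3, 141.8, 163.0, 128.5, 170.2, 149.9])
ai_area = np.array([151.7, 142.5, 162.1, 129.0, 169.8, 150.4])

rho, p_value = spearmanr(manual_area, ai_area)
print(f"Spearman's rho = {rho:.3f}, P = {p_value:.4f}")
```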
Conclusions: Compared with the manual gold standard, this fully automated segmentation system achieved high accuracy in the segmentation and quantification of abdominal muscle and adipose tissue on CT slices at the L3 level in CRC patients.
{"title":"Using a new artificial intelligence-aided method to assess body composition CT segmentation in colorectal cancer patients.","authors":"Ke Cao, Josephine Yeung, Yasser Arafat, Jing Qiao, Richard Gartrell, Mobin Master, Justin M C Yeung, Paul N Baird","doi":"10.1002/jmrs.798","DOIUrl":"https://doi.org/10.1002/jmrs.798","url":null,"abstract":"<p><strong>Introduction: </strong>This study aimed to evaluate the accuracy of our own artificial intelligence (AI)-generated model to assess automated segmentation and quantification of body composition-derived computed tomography (CT) slices from the lumber (L3) region in colorectal cancer (CRC) patients.</p><p><strong>Methods: </strong>A total of 541 axial CT slices at the L3 vertebra were retrospectively collected from 319 patients with CRC diagnosed during 2012-2019 at a single Australian tertiary institution, Western Health in Melbourne. A two-dimensional U-Net convolutional network was trained on 338 slices to segment muscle, visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT). Manual reading of these same slices of muscle, VAT and SAT was created to serve as ground truth data. The Dice similarity coefficient was used to assess the U-Net-based segmentation performance on both a validation dataset (68 slices) and a test dataset (203 slices). The measurement of cross-sectional area and Hounsfield unit (HU) density of muscle, VAT and SAT were compared between two methods.</p><p><strong>Results: </strong>The segmentation for muscle, VAT and SAT demonstrated excellent performance for both the validation (Dice similarity coefficients >0.98, respectively) and test (Dice similarity coefficients >0.97, respectively) datasets. There was a strong positive correlation between manual and AI segmentation measurements of body composition for both datasets (Spearman's correlation coefficients: 0.944-0.999, P < 0.001).</p><p><strong>Conclusions: </strong>Compared to the gold standard, this fully automated segmentation system exhibited a high accuracy for assessing segmentation and quantification of abdominal muscle and adipose tissues of CT slices at the L3 in CRC patients.</p>","PeriodicalId":16382,"journal":{"name":"Journal of Medical Radiation Sciences","volume":" ","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141081409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Continuing Professional Development - Medical Imaging
Maximise your CPD by reading the following selected article and answering the five questions. Please remember to self-claim your CPD and retain your supporting evidence. Answers will be available via the QR code and published in JMRS – Volume 71, Issue 4, December 2024.
{"title":"Continuing Professional Development - Medical Imaging","authors":"","doi":"10.1002/jmrs.795","DOIUrl":"10.1002/jmrs.795","url":null,"abstract":"<p>Maximise your CPD by reading the following selected article and answer the five questions. Please remember to self-claim your CPD and retain your supporting evidence. Answers will be available via the QR code and published in JMRS – Volume 71, Issue 4 December 2024.</p><p>Scan this QR code to find the answers.</p>","PeriodicalId":16382,"journal":{"name":"Journal of Medical Radiation Sciences","volume":"71 2","pages":"318"},"PeriodicalIF":2.1,"publicationDate":"2024-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jmrs.795","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140912173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}