Deep Learning-powered CT-less Multi-tracer Organ Segmentation from PET Images: A Solution for Unreliable CT Segmentation in PET/CT Imaging

Yazdan Salimi, Zahra Mansouri, Isaac Shiri, Ismini Mainta, Habib Zaidi

medRxiv - Radiology and Imaging, published 2024-08-28. DOI: 10.1101/2024.08.27.24312482
Citations: 0
Abstract
Introduction: The common approach for organ segmentation in hybrid imaging relies on co-registered CT (CTAC) images. This method, however, presents several limitations in real clinical workflows, where mismatches between PET and CT images are very common. Moreover, low-dose CTAC images have poor quality, which further challenges the segmentation task. Recent advances in CT-less PET imaging highlight the need for an effective PET organ segmentation pipeline that does not rely on CT images. The goal of this study was therefore to develop a CT-less, multi-tracer PET segmentation framework.
Methods: We collected 2062 PET/CT images from multiple scanners. The patients were injected with either 18F-FDG (1487) or 68Ga-PSMA (575). PET/CT studies with any kind of mismatch between the PET and CT images were detected through visual assessment and excluded from the study. Multiple organs were delineated on the CT components using previously trained, in-house developed nnU-Net models. The segmentation masks were resampled to the co-registered PET images and used to train four deep-learning models with different inputs: non-corrected PET (PET-NC) and attenuation- and scatter-corrected PET (PET-ASC) for 18F-FDG (tasks #1 and #2, respectively, using 22 organs), and PET-NC and PET-ASC for 68Ga tracers (tasks #3 and #4, respectively, using 15 organs). The models' performance was evaluated in terms of the Dice coefficient, Jaccard index, and segment volume difference.
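The three evaluation metrics named above have standard definitions on binary masks: Dice = 2|A∩B| / (|A| + |B|), Jaccard = |A∩B| / |A∪B|, and the segment volume difference is the difference in segmented voxel counts scaled by the voxel volume. A minimal sketch of how these could be computed with NumPy (the function names and the millilitre unit are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2*|A intersect B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

def jaccard_index(pred, gt):
    """Jaccard = |A intersect B| / |A union B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union else 1.0

def volume_difference_ml(pred, gt, voxel_volume_ml):
    """Absolute difference in segmented volume, in millilitres."""
    return abs(int(pred.sum()) - int(gt.sum())) * voxel_volume_ml
```

For a multi-organ evaluation such as the four tasks described here, these per-organ scores would simply be averaged over all organs and all test patients.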
Results: The average Dice coefficient over all organs was 0.81±0.15, 0.82±0.14, 0.77±0.17, and 0.79±0.16 for tasks #1, #2, #3, and #4, respectively. The PET-ASC models outperformed the PET-NC models (P < 0.05). The highest Dice values were achieved for the brain (0.93 to 0.96 across all four tasks), whereas the lowest values were obtained for small organs such as the adrenal glands. The trained models also showed robust performance on noisy dynamic images.
Conclusion: Deep learning models enable high-performance multi-organ segmentation for two popular PET tracers without the use of CT information. These models may overcome the limitations of CT-based segmentation in PET/CT image quantification, kinetic modeling, radiomics analysis, dosimetry, and any other task that requires organ segmentation masks.