AI-enhanced PET/CT image synthesis using CycleGAN for improved ovarian cancer imaging

Amir Hossein Farshchitabrizi, Mohammad Hossein Sadeghi, Sedigheh Sina, Mehrosadat Alavi, Zahra Nasiri Feshani, Hamid Omidi

Polish Journal of Radiology 2025; 90: e26-e35. DOI: 10.5114/pjr/196804
Abstract
Purpose: Ovarian cancer is the fifth leading cause of cancer-related death among women. Positron emission tomography (PET), which provides detailed metabolic information, can be used effectively for early cancer screening. However, proper attenuation correction is essential for interpreting the data obtained with this modality. Computed tomography (CT) is therefore commonly acquired alongside PET for attenuation correction, but this approach can introduce spatial misalignment and registration errors between the images from the two modalities. This study aims to perform PET attenuation correction using generative adversarial networks (GANs), without additional CT imaging.
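The abstract does not include the authors' implementation; the snippet below is only a minimal sketch of how a CycleGAN-style generator pair could be trained to map non-attenuation-corrected PET slices to pseudo-CT, assuming PyTorch. The toy networks, names (G_pet2ct, G_ct2pet, D_ct), and loss weight are illustrative assumptions, not the study's actual architecture.

```python
# Minimal sketch, NOT the authors' implementation: networks and weights are
# illustrative stand-ins (PyTorch assumed).
import torch
import torch.nn as nn

def tiny_net():
    # Stand-in for a real generator/discriminator backbone.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )

G_pet2ct = tiny_net()   # non-attenuation-corrected PET -> pseudo-CT
G_ct2pet = tiny_net()   # pseudo-CT -> PET (used for the cycle)
D_ct = tiny_net()       # discriminator on the CT domain

def generator_objective(real_pet, lambda_cyc=10.0):
    fake_ct = G_pet2ct(real_pet)   # synthesise a pseudo-CT
    rec_pet = G_ct2pet(fake_ct)    # cycle back to the PET domain
    d_out = D_ct(fake_ct)
    adv = nn.functional.mse_loss(d_out, torch.ones_like(d_out))  # adversarial (LSGAN-style) term
    cyc = nn.functional.l1_loss(rec_pet, real_pet)               # cycle-consistency term
    return adv + lambda_cyc * cyc

loss = generator_objective(torch.randn(1, 1, 128, 128))
loss.backward()  # gradients would feed the usual alternating GAN update
```

The cycle-consistency term is what lets an unpaired mapping preserve anatomy, while the adversarial term pushes the output toward realistic CT intensities.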
Material and methods: PET/CT data from 55 ovarian cancer patients were used in this study. Three GAN architectures (conditional GAN, Wasserstein GAN, and CycleGAN) were evaluated for attenuation correction. The statistical performance of each model was assessed by calculating the mean squared error (MSE) and mean absolute error (MAE). Radiological performance was assessed by comparing the standardised uptake values (SUV) and Hounsfield unit (HU) values of the whole body and selected organs between the synthetic and real PET and CT images.
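As a rough illustration of the evaluation described above, the sketch below computes MAE and MSE between a synthetic and a reference volume and compares mean intensities inside an organ mask (SUV for PET, HU for CT). The arrays, mask, and names are dummy placeholders, not data from the study.

```python
# Minimal evaluation sketch with dummy data; assumes co-registered NumPy volumes.
import numpy as np

def mae_mse(pred, ref):
    """Voxel-wise mean absolute error and mean squared error."""
    diff = pred.astype(np.float64) - ref.astype(np.float64)
    return float(np.mean(np.abs(diff))), float(np.mean(diff ** 2))

def mean_in_mask(image, mask):
    """Mean intensity inside an organ mask (SUV for PET, HU for CT)."""
    return float(image[mask > 0].mean())

# Dummy volumes stand in for a pseudo-CT and the real CT.
synthetic_ct = np.random.normal(size=(64, 64, 64))
real_ct = np.random.normal(size=(64, 64, 64))
organ_mask = np.zeros((64, 64, 64))
organ_mask[20:40, 20:40, 20:40] = 1

mae, mse = mae_mse(synthetic_ct, real_ct)
hu_difference = mean_in_mask(synthetic_ct, organ_mask) - mean_in_mask(real_ct, organ_mask)
```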
Results: CycleGAN demonstrated effective attenuation correction and pseudo-CT generation with high accuracy. The MAE and MSE over all images were 2.15 ± 0.34 and 3.14 ± 0.56, respectively; for CT reconstruction, the corresponding values were 4.17 ± 0.96 and 5.66 ± 1.01.
Conclusions: The results showed the potential of deep learning in reducing radiation exposure and improving the quality of PET imaging. Further refinement and clinical validation are needed for full clinical applicability.