Automated segmentation of lesions and organs at risk on [68Ga]Ga-PSMA-11 PET/CT images using self-supervised learning with Swin UNETR.

Cancer Imaging | Impact Factor 3.5 | JCR Q2 (Oncology) | Published: 2024-02-29 | DOI: 10.1186/s40644-024-00675-x
Elmira Yazdani, Najme Karamzadeh-Ziarati, Seyyed Saeid Cheshmi, Mahdi Sadeghi, Parham Geramifar, Habibeh Vosoughi, Mahmood Kazemi Jahromi, Saeed Reza Kheradpisheh

Abstract

Background: Prostate-specific membrane antigen (PSMA) PET/CT imaging is widely used for quantitative image analysis, especially in radioligand therapy (RLT) for metastatic castration-resistant prostate cancer (mCRPC). Unknown features influencing PSMA biodistribution can be explored by analyzing segmented organs at risk (OAR) and lesions. Manual segmentation is time-consuming and labor-intensive, so automated segmentation methods are desirable. Training deep-learning segmentation models is challenging due to the scarcity of high-quality annotated images. To address this, we developed shifted windows UNEt TRansformers (Swin UNETR) for fully automated segmentation. Within a self-supervised framework, the model's encoder was pre-trained on unlabeled data; the entire model, including its decoder, was then fine-tuned on labeled data.

Methods: In this work, 752 whole-body [68Ga]Ga-PSMA-11 PET/CT images were collected from two centers. For self-supervised model pre-training, 652 unlabeled images were employed. The remaining 100 images were manually labeled for supervised training. In the supervised training phase, 5-fold cross-validation was used, with 64 images for model training and 16 for validation, all from one center. For testing, 20 hold-out images, evenly distributed between the two centers, were used. Image segmentation and quantification metrics were evaluated on the test set against ground-truth segmentations produced by a nuclear medicine physician.
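The fold arithmetic described above (80 single-center labeled images rotating through 64-train / 16-validation splits) can be sketched as follows; the image IDs and fold logic are illustrative, not taken from the paper's code:

```python
# Hypothetical sketch of the 5-fold cross-validation split: 80 labeled
# images are partitioned so each fold holds out 16 for validation and
# trains on the remaining 64.
image_ids = list(range(80))  # placeholder IDs for the 80 labeled images

def five_fold_splits(ids, k=5):
    fold_size = len(ids) // k
    for i in range(k):
        val = ids[i * fold_size:(i + 1) * fold_size]
        train = [x for x in ids if x not in val]
        yield train, val

for fold, (train, val) in enumerate(five_fold_splits(image_ids)):
    print(f"fold {fold}: {len(train)} train / {len(val)} val")  # 64 / 16 each
```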

Results: The model generates high-quality OAR and lesion segmentations in lesion-positive cases, including mCRPC. The results show that self-supervised pre-training significantly improved the average dice similarity coefficient (DSC) for all classes, by about 3%. Compared to nnU-Net, a well-established model in medical image segmentation, our approach achieved a 5% higher DSC. This improvement was attributed to the model's combined use of self-supervised pre-training and supervised fine-tuning, specifically when applied to PET/CT input. Our best model had the lowest DSC for lesions, at 0.68, and the highest for liver, at 0.95.
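The dice similarity coefficient (DSC) reported above measures overlap between predicted and ground-truth masks. A generic NumPy sketch (not the authors' implementation) of the metric on binary masks:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / total

# toy 2D masks standing in for a segmented organ
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt = np.array([[1, 0, 0],
               [0, 1, 1]])
print(round(dice_coefficient(pred, gt), 3))  # → 0.667
```

A DSC of 1.0 means perfect overlap; the paper's per-class values (0.68 for lesions up to 0.95 for liver) are averages of this quantity over the test set.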

Conclusions: We developed a state-of-the-art neural network using self-supervised pre-training on whole-body [68Ga]Ga-PSMA-11 PET/CT images, followed by fine-tuning on a limited set of annotated images. The model generates high-quality OAR and lesion segmentations for PSMA image analysis. The generalizable model holds potential for various clinical applications, including enhanced RLT and patient-specific internal dosimetry.

Source journal: Cancer Imaging (Oncology; Radiology, Nuclear Medicine & Medical Imaging)
CiteScore: 7.00
Self-citation rate: 0.00%
Articles per year: 66
Review time: >12 weeks
About the journal: Cancer Imaging is an open access, peer-reviewed journal publishing original articles, reviews and editorials written by expert international radiologists working in oncology. The journal encompasses CT, MR, PET, ultrasound, radionuclide and multimodal imaging in all kinds of malignant tumours, plus new developments, techniques and innovations. Topics of interest include: breast imaging; chest; complications of treatment; ear, nose & throat; gastrointestinal; hepatobiliary & pancreatic; imaging biomarkers; interventional; lymphoma; measurement of tumour response; molecular functional imaging; musculoskeletal; neuro-oncology; nuclear medicine; paediatric.