Automated quantification of brain PET in PET/CT using deep learning-based CT-to-MR translation: a feasibility study

European Journal of Nuclear Medicine and Molecular Imaging · IF 7.6 · Q1 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2025-02-18 · DOI: 10.1007/s00259-025-07132-2
Daesung Kim, Kyobin Choo, Sangwon Lee, Seongjin Kang, Mijin Yun, Jaewon Yang

Abstract

Purpose

Quantitative analysis of PET images in brain PET/CT relies on MRI-derived regions of interest (ROIs). However, paired PET/CT and MR images are not always available, and their alignment is challenging when the acquisition times differ considerably. To address these problems, this study proposes a deep learning framework for translating the CT of PET/CT to synthetic MR images (MRSYN) and performing automated quantitative regional analysis using MRSYN-derived segmentation.

Methods

In this retrospective study, 139 subjects who underwent brain [18F]FBB PET/CT and T1-weighted MRI were included. A U-Net-like model was trained to translate CT images to MRSYN; subsequently, a separate model was trained to segment MRSYN into 95 regions. Regional and composite standardised uptake value ratios (SUVrs) were calculated in [18F]FBB PET images using the acquired ROIs. For evaluation of MRSYN, quantitative measurements including the structural similarity index measure (SSIM) were employed, while MRSYN-based segmentation was evaluated with the Dice similarity coefficient (DSC). Wilcoxon signed-rank tests were performed on SUVrs computed using MRSYN and ground-truth MR (MRGT).
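The two evaluation quantities above can be sketched in a few lines of Python. The label values, array shapes, and the choice of reference region below are illustrative assumptions, not the authors' implementation; in practice the segmentations and PET volumes would be 3-D arrays in a common space.

```python
import numpy as np

def dice_coefficient(seg_a, seg_b, label):
    """Dice similarity coefficient for one region label in two segmentations."""
    a = (seg_a == label)
    b = (seg_b == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def suvr(pet, seg, target_label, reference_label):
    """Regional SUVr: mean uptake in the target ROI over the reference ROI."""
    return pet[seg == target_label].mean() / pet[seg == reference_label].mean()

# Toy 1-D "volumes": label 1 = a target region, label 2 = a reference region.
seg_gt  = np.array([1, 1, 1, 0, 2, 2])   # ground-truth (MRGT-derived) labels
seg_syn = np.array([1, 1, 0, 0, 2, 2])   # synthetic (MRSYN-derived) labels
pet     = np.array([4.0, 4.0, 4.0, 1.0, 2.0, 2.0])

print(round(dice_coefficient(seg_gt, seg_syn, 1), 3))  # 0.8
print(round(suvr(pet, seg_gt, 1, 2), 1))               # 2.0
```

The mean DSC reported in the paper would correspond to averaging `dice_coefficient` over all 95 region labels and all subjects.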

Results

Compared to MRGT, the mean SSIM of MRSYN was 0.974 ± 0.005. The MRSYN-based segmentation achieved a mean DSC of 0.733 across 95 regions. No statistically significant difference (P > 0.05) in SUVr was found between ROIs derived from MRSYN and those from MRGT, except for the precuneus.
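The paired comparison above can be reproduced with SciPy's Wilcoxon signed-rank test. The SUVr values below are invented for illustration and are not data from the study.

```python
from scipy.stats import wilcoxon

# Hypothetical paired regional SUVrs from MRSYN- and MRGT-derived ROIs
# for one region across eight subjects (illustrative values only).
suvr_syn = [1.21, 1.35, 1.10, 1.48, 1.02, 1.27, 1.19, 1.33]
suvr_gt  = [1.19, 1.36, 1.12, 1.47, 1.01, 1.29, 1.18, 1.34]

# Paired, non-parametric test on the per-subject differences.
stat, p = wilcoxon(suvr_syn, suvr_gt)
print(f"W = {stat}, p = {p:.3f}")
if p > 0.05:
    print("No significant difference between MRSYN- and MRGT-based SUVrs")
```

A non-parametric paired test is a natural choice here because the per-region SUVr differences are few in number and need not be normally distributed.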

Conclusion

We demonstrated a deep learning framework for automated regional brain analysis in PET/CT with MRSYN. Our proposed framework can benefit patients who have difficulty undergoing an MRI scan.

Source journal

CiteScore: 15.60 · Self-citation rate: 9.90% · Articles per year: 392 · Review time: 3 months
Journal overview: The European Journal of Nuclear Medicine and Molecular Imaging serves as a platform for the exchange of clinical and scientific information within nuclear medicine and related professions. It welcomes international submissions from professionals involved in the functional, metabolic, and molecular investigation of diseases. The journal's coverage spans physics, dosimetry, radiation biology, radiochemistry, and pharmacy, providing high-quality peer review by experts in the field. Known for highly cited and downloaded articles, it ensures global visibility for research work and is part of the EJNMMI journal family.