Pub Date: 2023-12-05 | DOI: 10.1109/TRPMS.2023.3339173
Yuxin Xue;Lei Bi;Yige Peng;Michael Fulham;David Dagan Feng;Jinman Kim
Positron emission tomography (PET) is a widely used, highly sensitive molecular imaging modality in clinical diagnosis. There is interest in reducing the radiation exposure from PET while maintaining adequate image quality. Recent methods that use convolutional neural networks (CNNs) to synthesize high-quality PET images from “low-dose” counterparts have been reported as state of the art for low-to-high-dose image recovery. However, these methods are prone to discrepancies in texture and structure between synthesized and real images. Furthermore, the distribution shift between low-dose PET and standard PET has not been fully investigated. To address these issues, we developed a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN). We introduce 1) an adaptive residual estimation mapping mechanism, AE-Net, designed to dynamically rectify the preliminary synthesized PET images by taking the residual map between the low-dose PET and the synthesized output as input and 2) a self-supervised pretraining strategy to enhance the feature representation of the coarse generator. Our experiments with a public benchmark dataset of total-body PET images show that SS-AEGAN consistently outperformed state-of-the-art synthesis methods across various dose reduction factors.
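A minimal sketch of the residual-estimation step described above, assuming a single-channel 3-D PET volume; the `ResidualEstimator` class, its layer stack, and widths are illustrative placeholders, not the authors' AE-Net:

```python
import torch
import torch.nn as nn

class ResidualEstimator(nn.Module):
    """Toy stand-in for AE-Net: take the residual map between the low-dose
    input and the coarse synthesized PET, predict a correction, and add it
    back to the coarse output. Layer sizes are illustrative only."""
    def __init__(self, channels: int = 1, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, channels, 3, padding=1),
        )

    def forward(self, low_dose: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        residual = low_dose - coarse         # residual map is the network input
        return coarse + self.body(residual)  # dynamically rectified output
```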
{"title":"PET Synthesis via Self-Supervised Adaptive Residual Estimation Generative Adversarial Network","authors":"Yuxin Xue;Lei Bi;Yige Peng;Michael Fulham;David Dagan Feng;Jinman Kim","doi":"10.1109/TRPMS.2023.3339173","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3339173","url":null,"abstract":"Positron emission tomography (PET) is a widely used, highly sensitive molecular imaging in clinical diagnosis. There is interest in reducing the radiation exposure from PET but also maintaining adequate image quality. Recent methods using convolutional neural networks (CNNs) to generate synthesized high-quality PET images from “low-dose” counterparts have been reported to be “state-of-the-art” for low-to-high-image recovery methods. However, these methods are prone to exhibiting discrepancies in texture and structure between synthesized and real images. Furthermore, the distribution shift between low-dose PET and standard PET has not been fully investigated. To address these issues, we developed a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN). We introduce 1) an adaptive residual estimation mapping mechanism, AE-Net, designed to dynamically rectify the preliminary synthesized PET images by taking the residual map between the low-dose PET and synthesized output as the input and 2) a self-supervised pretraining strategy to enhance the feature representation of the coarse generator. Our experiments with a public benchmark dataset of total-body PET images show that SS-AEGAN consistently outperformed the state-of-the-art synthesis methods with various dose reduction factors.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 4","pages":"426-438"},"PeriodicalIF":4.4,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140342795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-29 | DOI: 10.1109/TRPMS.2023.3334276
Si Young Yie;Keon Min Kim;Sangjin Bae;Jae Sung Lee
The introduction of the total-body positron emission tomography (TB PET) system is a remarkable advancement in noninvasive imaging, improving annihilation photon detection sensitivity and bringing the quality of positron emission tomography (PET) images one step closer to that of anatomical images. This enables reduced scan times or radiation doses and can ultimately improve other PET images through denoising. This study investigated the effect of three loss functions: mean squared error (MSE), the Poisson negative log-likelihood derived from the Poisson statistics of radiation activity, and L1, motivated by the histogram of count differences between the full and partial scans. The effect of the supervision method is also explored, comparing supervised denoising, self-supervised denoising, and self-supervised denoising with input interpolation based on the dependency relations between the partial and full scans. The supervised denoising method using the L1 loss shows high denoising performance even under harsh denoising conditions, while the interpolated self-supervised denoising using the MSE loss preserves local features.
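All three losses compared in the study are stock PyTorch components, so the setup can be illustrated directly; the random tensors below are stand-ins for a denoised estimate and its full-count reference:

```python
import torch
import torch.nn as nn

pred = torch.rand(2, 1, 64, 64) * 10    # denoised estimate (stand-in)
target = torch.rand(2, 1, 64, 64) * 10  # full-count reference (stand-in)

mse = nn.MSELoss()(pred, target)
# log_input=False treats `pred` as a Poisson rate (activity),
# matching the count statistics of the measured data.
poisson = nn.PoissonNLLLoss(log_input=False)(pred, target)
l1 = nn.L1Loss()(pred, target)
print(float(mse), float(poisson), float(l1))
```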
{"title":"Effects of Loss Functions and Supervision Methods on Total-Body PET Denoising","authors":"Si Young Yie;Keon Min Kim;Sangjin Bae;Jae Sung Lee","doi":"10.1109/TRPMS.2023.3334276","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3334276","url":null,"abstract":"Introduction of the total-body positron emission tomography (TB PET) system is a remarkable advancement in noninvasive imaging, improving annihilation photon detection sensitivity and bringing the quality of positron emission tomography (PET) images one step closer to that of anatomical images. This enables reduced scan times or radiation doses and can ultimately improve other PET images through denoising. This study investigated the effect of loss functions: mean squared error (MSE), Poisson negative log-likelihood derived from the Poisson statistics of radiation activity, and L1 derived from the histogram of count differences between the full and partial scans. Furthermore, the effect of supervision methods, comparing supervised denoising, self-supervised denoising, and interpolation of input and self-supervised denoising based on dependency relations of the partial and full scans are explored. The supervised denoising method using the L1 norm loss function shows high-denoising performance regardless of harsh denoising conditions, and the interpolated self-supervised denoising using MSE loss preserves local features.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 4","pages":"379-390"},"PeriodicalIF":4.4,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140342796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As positron emission tomography (PET) imaging is accompanied by substantial radiation exposure and cancer risk, reducing the radiation dose in PET scans is an important topic. However, low-count PET scans often suffer from high image noise, which can negatively impact image quality and diagnostic performance. Recent advances in deep learning have shown great potential for recovering the underlying signal from noisy counterparts. However, neural networks trained on a specific noise level cannot be easily generalized to other noise levels due to differing noise amplitudes and variances. To obtain optimal denoised results, we may need to train multiple networks using data with different noise levels, but this approach may be infeasible in practice due to limited data availability. Denoising dynamic PET images presents additional challenges due to tracer decay and continuously changing noise levels across dynamic frames. To address these issues, we propose a unified noise-aware network (UNN) that combines multiple subnetworks with varying denoising power to generate optimal denoised results regardless of the input noise level. Evaluated on large-scale data from two medical centers with different vendors, UNN consistently produced promising denoised results regardless of input noise level and demonstrated superior performance over networks trained on single-noise-level data, especially for extremely low-count data.
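One plausible reading of the subnetwork combination, sketched in PyTorch: branches of increasing depth stand in for denoisers of increasing power, and a small gate weights them from a crude per-image noise proxy. The branch design and gating feature are our assumptions for illustration, not the published UNN architecture:

```python
import torch
import torch.nn as nn

class UnifiedNoiseAwareNet(nn.Module):
    def __init__(self, n_branches: int = 3, channels: int = 1, width: int = 16):
        super().__init__()
        def branch(depth: int) -> nn.Sequential:
            layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth):
                layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
            layers.append(nn.Conv2d(width, channels, 3, padding=1))
            return nn.Sequential(*layers)
        # Deeper branch -> stronger denoising power.
        self.branches = nn.ModuleList(branch(d) for d in range(1, n_branches + 1))
        self.gate = nn.Linear(1, n_branches)  # maps noise proxy to branch weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        noise_proxy = x.flatten(1).std(dim=1, keepdim=True)  # crude noise estimate
        w = torch.softmax(self.gate(noise_proxy), dim=1)     # (B, K)
        outs = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, H, W)
        return (w[:, :, None, None, None] * outs).sum(dim=1)
```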
{"title":"Unified Noise-Aware Network for Low-Count PET Denoising With Varying Count Levels","authors":"Huidong Xie;Qiong Liu;Bo Zhou;Xiongchao Chen;Xueqi Guo;Hanzhong Wang;Biao Li;Axel Rominger;Kuangyu Shi;Chi Liu","doi":"10.1109/TRPMS.2023.3334105","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3334105","url":null,"abstract":"As positron emission tomography (PET) imaging is accompanied by substantial radiation exposure and cancer risk, reducing radiation dose in PET scans is an important topic. However, low-count PET scans often suffer from high-image noise, which can negatively impact image quality and diagnostic performance. Recent advances in deep learning have shown great potential for recovering underlying signal from noisy counterparts. However, neural networks trained on a specific noise level cannot be easily generalized to other noise levels due to different noise amplitude and variances. To obtain optimal denoised results, we may need to train multiple networks using data with different noise levels. But this approach may be infeasible in reality due to limited data availability. Denoising dynamic PET images presents additional challenge due to tracer decay and continuously changing noise levels across dynamic frames. To address these issues, we propose a unified noise-aware network (UNN) that combines multiple subnetworks with varying denoising power to generate optimal denoised results regardless of the input noise levels. Evaluated using large-scale data from two medical centers with different vendors, presented results showed that the UNN can consistently produce promising denoised results regardless of input noise levels, and demonstrate superior performance over networks trained on single noise level data, especially for extremely low-count data.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 4","pages":"366-378"},"PeriodicalIF":4.4,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10323300","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140342666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-dose positron emission tomography (PET) reconstruction algorithms aim to reduce the injected dose and/or scanning time of a PET examination while maintaining image quality, and have thus been extensively studied. In this article, we propose a novel ultralow-dose reconstruction method for total-body PET. Specifically, we developed a deep learning model named ISS-Unet based on U-Net and introduced a 3-D PixelUnshuffle/PixelShuffle pair in image space to reduce training time and GPU memory. We then introduced two body sampling methods in the training patch preparation step to improve training efficiency and local metrics. We also report the misalignment artifacts that are often neglected in 2-D training. The proposed method was evaluated on the MICCAI 2022 Ultralow-Dose PET Imaging Challenge dataset and won first prize in the first-round competition according to the comprehensive score combining global and local metrics. In this article, we disclose the implementation details of the proposed method, followed by comparison results against three typical methods.
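PyTorch ships `nn.PixelShuffle`/`nn.PixelUnshuffle` only for 2-D images, so a 3-D pair of the kind used here has to be written by hand; a sketch via reshape and permute (the function names are ours):

```python
import torch

def pixel_unshuffle_3d(x: torch.Tensor, r: int) -> torch.Tensor:
    """3-D analogue of nn.PixelUnshuffle: folds each r*r*r spatial block
    into channels, shrinking the volume and cutting per-layer memory."""
    b, c, d, h, w = x.shape
    x = x.view(b, c, d // r, r, h // r, r, w // r, r)
    x = x.permute(0, 1, 3, 5, 7, 2, 4, 6)
    return x.reshape(b, c * r**3, d // r, h // r, w // r)

def pixel_shuffle_3d(x: torch.Tensor, r: int) -> torch.Tensor:
    """Inverse op: unfolds channels back into an r-times-larger volume."""
    b, c, d, h, w = x.shape
    x = x.view(b, c // r**3, r, r, r, d, h, w)
    x = x.permute(0, 1, 5, 2, 6, 3, 7, 4)
    return x.reshape(b, c // r**3, d * r, h * r, w * r)

# Round trip sanity check on a toy volume.
v = torch.arange(2 * 1 * 4 * 4 * 4, dtype=torch.float32).view(2, 1, 4, 4, 4)
assert torch.equal(pixel_shuffle_3d(pixel_unshuffle_3d(v, 2), 2), v)
```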
{"title":"A Total-Body Ultralow-Dose PET Reconstruction Method via Image Space Shuffle U-Net and Body Sampling","authors":"Gaoyu Chen;Sheng Liu;Wenxiang Ding;Li Lv;Chen Zhao;Fenghua Weng;Yong Long;Yunlong Zan;Qiu Huang","doi":"10.1109/TRPMS.2023.3333839","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3333839","url":null,"abstract":"Low-dose positron emission tomography (PET) reconstruction algorithms manage to reduce the injected dose and/or scanning time in PET examination while maintaining the image quality, and thus has been extensively studied. In this article, we proposed a novel ultralow-dose reconstruction method for total-body PET. Specifically, we developed a deep learning model named ISS-Unet based on U-Net and introduced 3-D PixelUnshuffle/PixelShuffle pair in image space to reduce the training time and GPU memory. We then introduced two body sampling methods in the training patch preparation step to improve the training efficiency and local metrics. We also reported the misalignment artifacts that were often neglected in 2-D training. The proposed method was evaluated on the MICCAI 2022 Ultralow-Dose PET Imaging Challenge dataset and won the first prize in the first-round competition according to the comprehensive score combining global and local metrics. In this article, we disclosed the implementation details of the proposed method followed by the comparison results with three typical methods.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 4","pages":"357-365"},"PeriodicalIF":4.4,"publicationDate":"2023-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10320380","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140342754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-16 | DOI: 10.1109/TRPMS.2023.3333202
Erik Reimers;Ju-Chieh Cheng;Vesna Sossi
Data-driven intraframe motion correction of a dynamic brain PET scan (with each frame duration on the order of minutes) is often achieved through the co-registration of high-temporal-resolution (e.g., 1-s duration) subframes to estimate subject head motion. However, this conventional method of subframe co-registration may perform poorly during periods of low counts and/or drastic changes in the spatial tracer distribution over time. Here, we propose a deep learning (DL), U-Net-based convolutional neural network model that aids PET motion estimation to overcome these limitations. Unlike DL models for PET denoising, a nonstandard 2.5-D DL model is used that transforms the high-temporal-resolution subframes into nonquantitative DL subframes, which allow improved differentiation between noise and structural/functional landmarks and estimate a constant tracer distribution across time. When estimating motion during periods of drastic change in spatial distribution (within the first minute of the scan, ~1-s temporal resolution), the proposed DL method reduced the expected magnitude of error (+/−) in the estimation of an artificially injected motion trace from 16 mm and 7° (conventional method) to 0.7 mm and 0.6° (DL method). During periods of low counts but a relatively constant spatial tracer distribution (60th min of the scan, ~1-s temporal resolution), the expected error was reduced from 0.5 mm and 0.7° (conventional method) to 0.3 mm and 0.4° (DL method). The DL method significantly improved the accuracy of an image-derived input function calculation when motion was present during the first minute of the scan.
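As a toy illustration of subframe-based motion estimation, reduced to pure translation: align each 1-s subframe to the first by matching centers of mass with SciPy. The paper's pipeline instead co-registers DL-transformed subframes, which also handles rotation and the changing tracer distribution:

```python
import numpy as np
from scipy.ndimage import center_of_mass, shift

def align_subframes(subframes: np.ndarray) -> np.ndarray:
    """Translate each subframe (axis 0 = time) onto the first one by
    matching centers of mass; a translation-only toy, not the paper's
    co-registration of DL-transformed subframes."""
    ref = np.asarray(center_of_mass(subframes[0]))
    aligned = [subframes[0]]
    for frame in subframes[1:]:
        offset = ref - np.asarray(center_of_mass(frame))
        aligned.append(shift(frame, offset, order=1))  # linear interpolation
    return np.stack(aligned)
```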
{"title":"Deep-Learning-Aided Intraframe Motion Correction for Low-Count Dynamic Brain PET","authors":"Erik Reimers;Ju-Chieh Cheng;Vesna Sossi","doi":"10.1109/TRPMS.2023.3333202","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3333202","url":null,"abstract":"Data-driven intraframe motion correction of a dynamic brain PET scan (with each frame duration on the order of minutes) is often achieved through the co-registration of high-temporal-resolution (e.g., 1-s duration) subframes to estimate subject head motion. However, this conventional method of subframe co-registration may perform poorly during periods of low counts and/or drastic changes in the spatial tracer distribution over time. Here, we propose a deep learning (DL), U-Net-based convolutional neural network model which aids in the PET motion estimation to overcome these limitations. Unlike DL models for PET denoising, a nonstandard 2.5-D DL model was used which transforms the high-temporal-resolution subframes into nonquantitative DL subframes which allow for improved differentiation between noise and structural/functional landmarks and estimate a constant tracer distribution across time. When estimating motion during periods of drastic change in spatial distribution (within the first minute of the scan, ~1-s temporal resolution), the proposed DL method was found to reduce the expected magnitude of error (+/−) in the estimation for an artificially injected motion trace from 16 mm and 7° (conventional method) to 0.7 mm and 0.6° (DL method). During periods of low counts but a relatively constant spatial tracer distribution (60th min of the scan, ~1-s temporal resolution), an expected error was reduced from 0.5 mm and 0.7° (conventional method) to 0.3 mm and 0.4° (DL method). The use of the DL method was found to significantly improve the accuracy of an image-derived input function calculation when motion was present during the first minute of the scan.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"53-63"},"PeriodicalIF":4.4,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139081230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unsupervised domain adaptation (UDA) methods have achieved promising performance in alleviating the domain shift between different imaging modalities. In this article, we propose a robust two-stage 3-D anatomy-guided self-training cross-modality segmentation (ASTCMSeg) framework based on UDA for unpaired cross-modality image segmentation, including the anatomy-guided image translation and self-training segmentation stages. In the translation stage, we first leverage the similarity distributions between patches to capture the latent anatomical relationships and propose an anatomical relation consistency (ARC) for preserving the correct anatomical relationships. Then, we design a frequency domain constraint to enforce the consistency of important frequency components during image translation. Finally, we integrate the ARC and frequency domain constraint with contrastive learning for anatomy-guided image translation. In the segmentation stage, we propose a context-aware anisotropic mesh network for segmenting anisotropic volumes in the target domain. Meanwhile, we design a volumetric adaptive self-training method that dynamically selects appropriate pseudo-label thresholds to learn the abundant label information from unlabeled target volumes. Our proposed method is validated on the cross-modality brain structure, cardiac substructure, and abdominal multiorgan segmentation tasks. Experimental results show that our proposed method achieves state-of-the-art performance in all tasks and significantly outperforms other 2-D-based or 3-D-based UDA methods.
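The adaptive pseudo-label selection can be illustrated with a short sketch: derive a per-class cutoff from each class's own confidence distribution in the unlabeled target volume, so the threshold adapts to the data rather than staying fixed. The percentile rule below is our stand-in for the paper's scheme:

```python
import torch

def select_pseudo_labels(probs: torch.Tensor, percentile: float = 80.0):
    """Illustrative adaptive pseudo-label selection. `probs` is the softmax
    output for one volume, shape (C, D, H, W). For each class, keep only the
    voxels whose confidence clears that class's own percentile threshold."""
    conf, labels = probs.max(dim=0)            # per-voxel confidence and argmax
    mask = torch.zeros_like(conf, dtype=torch.bool)
    for c in range(probs.shape[0]):
        cls = labels == c
        if cls.any():
            thr = torch.quantile(conf[cls], percentile / 100.0)
            mask |= cls & (conf >= thr)
    return labels, mask                        # supervise only where mask is True
```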
{"title":"A 3-D Anatomy-Guided Self-Training Segmentation Framework for Unpaired Cross-Modality Medical Image Segmentation","authors":"Yuzhou Zhuang;Hong Liu;Enmin Song;Xiangyang Xu;Yongde Liao;Guanchao Ye;Chih-Cheng Hung","doi":"10.1109/TRPMS.2023.3332619","DOIUrl":"10.1109/TRPMS.2023.3332619","url":null,"abstract":"Unsupervised domain adaptation (UDA) methods have achieved promising performance in alleviating the domain shift between different imaging modalities. In this article, we propose a robust two-stage 3-D anatomy-guided self-training cross-modality segmentation (ASTCMSeg) framework based on UDA for unpaired cross-modality image segmentation, including the anatomy-guided image translation and self-training segmentation stages. In the translation stage, we first leverage the similarity distributions between patches to capture the latent anatomical relationships and propose an anatomical relation consistency (ARC) for preserving the correct anatomical relationships. Then, we design a frequency domain constraint to enforce the consistency of important frequency components during image translation. Finally, we integrate the ARC and frequency domain constraint with contrastive learning for anatomy-guided image translation. In the segmentation stage, we propose a context-aware anisotropic mesh network for segmenting anisotropic volumes in the target domain. Meanwhile, we design a volumetric adaptive self-training method that dynamically selects appropriate pseudo-label thresholds to learn the abundant label information from unlabeled target volumes. Our proposed method is validated on the cross-modality brain structure, cardiac substructure, and abdominal multiorgan segmentation tasks. Experimental results show that our proposed method achieves state-of-the-art performance in all tasks and significantly outperforms other 2-D based or 3-D based UDA methods.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"33-52"},"PeriodicalIF":4.4,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135661085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-13 | DOI: 10.1109/TRPMS.2023.3332288
Hadley DeBrosse;Ling Jian Meng;Patrick La Rivière
Imaging the spatial distribution of low concentrations of metal is a growing problem of interest with applications in the medical and materials sciences. X-ray fluorescence emission tomography (XFET) is an emerging metal-mapping imaging modality with potential sensitivity improvements and practical advantages over other methods. However, XFET detector placement must first be optimized to ensure accurate metal density quantification and adequate spatial resolution. In this work, we first use singular value decomposition of the imaging model and eigendecomposition of the object-specific Fisher information matrix to study how detector arrangement affects spatial resolution and feature preservation. We then perform joint image reconstructions of a numerical gold phantom. For this phantom, we show that two parallel detectors provide metal quantification with similar accuracy to four detectors, despite the resulting anisotropic spatial resolution in the attenuation map estimate. Two orthogonal detectors provide improved spatial resolution along one axis, but underestimate the metal concentration in distant regions. Therefore, this work demonstrates the minor effect of using fewer, but strategically placed, detectors in the case where detector placement is restricted. This work is a critical investigation into the limitations and capabilities of XFET prior to its translation to preclinical and benchtop uses.
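Both analysis tools named above are standard linear algebra; with a random stand-in system matrix H for a candidate detector geometry (the study uses the actual XFET imaging model and object-specific Fisher information), the singular spectrum and the Fisher eigendecomposition follow directly in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.random((200, 100))        # stand-in system matrix: measurements x voxels

# Singular value spectrum: faster decay means more poorly measured image
# components (worse resolution and stronger noise amplification).
s = np.linalg.svd(H, compute_uv=False)

# Gaussian-noise approximation of the Fisher information, F = H^T H / sigma^2;
# eigenvectors with small eigenvalues are the hard-to-estimate features.
sigma2 = 1.0
F = H.T @ H / sigma2
eigvals, eigvecs = np.linalg.eigh(F)
print(s[:5], eigvals[-5:])
```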
{"title":"Effect of Detector Placement on Joint Estimation in X-Ray Fluorescence Emission Tomography","authors":"Hadley DeBrosse;Ling Jian Meng;Patrick La Rivière","doi":"10.1109/TRPMS.2023.3332288","DOIUrl":"10.1109/TRPMS.2023.3332288","url":null,"abstract":"Imaging the spatial distribution of low concentrations of metal is a growing problem of interest with applications in medical and material sciences. X-ray fluorescence emission tomography (XFET) is an emerging metal mapping imaging modality with potential sensitivity improvements and practical advantages over other methods. However, XFET detector placement must first be optimized to ensure accurate metal density quantification and adequate spatial resolution. In this work, we first use singular value decomposition of the imaging model and eigendecomposition of the object-specific Fisher information matrix to study how detector arrangement affects spatial resolution and feature preservation. We then perform joint image reconstructions of a numerical gold phantom. For this phantom, we show that two parallel detectors provide metal quantification with similar accuracy to four detectors, despite the resulting anisotropic spatial resolution in the attenuation map estimate. Two orthogonal detectors provide improved spatial resolution along one axis, but underestimate the metal concentration in distant regions. Therefore, this work demonstrates the minor effect of using fewer, but strategically placed, detectors in the case where detector placement is restricted. This work is a critical investigation into the limitations and capabilities of XFET prior to its translation to preclinical and benchtop uses.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"21-32"},"PeriodicalIF":4.4,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135611150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-08 | DOI: 10.1109/TRPMS.2023.3330365
{"title":"2023 Index IEEE Transactions on Radiation and Plasma Medical Sciences Vol. 7","authors":"","doi":"10.1109/TRPMS.2023.3330365","DOIUrl":"10.1109/TRPMS.2023.3330365","url":null,"abstract":"","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"7 8","pages":"1-20"},"PeriodicalIF":4.4,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10312794","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135515041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-07 | DOI: 10.1109/TRPMS.2023.3330772
Lu Wen;Jianghong Xiao;Chen Zu;Xi Wu;Jiliu Zhou;Xingchen Peng;Yan Wang
Cervical cancer stands as a prominent female malignancy, posing a serious threat to women’s health. The clinical solution typically involves time-consuming and laborious radiotherapy planning. Although convolutional neural network (CNN)-based models have been investigated to automate radiotherapy planning by predicting its outcomes, i.e., dose distribution maps, the insufficiency of data in the cervical cancer dataset limits the prediction performance and generalization of models. Additionally, the intrinsic locality of convolution operations hinders models from capturing dose information at a global range, limiting prediction accuracy. In this article, we propose a transfer learning framework embedded with transformers, namely, DoseTransfer, to automatically predict the dose distribution for cervical cancer. To address the limited data in the cervical cancer dataset, we leverage highly correlated clinical information from rectum cancer and transfer this knowledge in a two-phase framework. Specifically, the first phase pretrains the model on the rectum cancer dataset to extract prior knowledge from rectum cancer, while the second phase is the transferring phase, where the previously learned knowledge is effectively transferred to cervical cancer and guides the model to achieve better accuracy. Moreover, both phases are embedded with transformers to capture the global dependencies ignored by CNNs, learning wider feature representations. Experimental results on the in-house datasets (i.e., rectum cancer dataset and cervical cancer dataset) demonstrate the effectiveness of the proposed method.
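The two-phase recipe reduces to a familiar pretrain-then-fine-tune skeleton in PyTorch; the tiny convolutional model and the commented-out loaders (`rectum_loader`, `cervical_loader`) are placeholders, not the paper's transformer-embedded dose predictor:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv3d(16, 1, 3, padding=1))   # placeholder network
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def run_epoch(loader):
    for ct, dose in loader:            # (CT volume, planned dose map) pairs
        opt.zero_grad()
        loss = loss_fn(model(ct), dose)
        loss.backward()
        opt.step()

# Phase 1: pretrain on the larger rectum-cancer dataset.
# for _ in range(pretrain_epochs): run_epoch(rectum_loader)
torch.save(model.state_dict(), "pretrained_rectum.pt")

# Phase 2: transfer - initialize from phase 1, then fine-tune on cervical data.
model.load_state_dict(torch.load("pretrained_rectum.pt"))
# for _ in range(finetune_epochs): run_epoch(cervical_loader)
```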
{"title":"DoseTransfer: A Transformer Embedded Model With Transfer Learning for Radiotherapy Dose Prediction of Cervical Cancer","authors":"Lu Wen;Jianghong Xiao;Chen Zu;Xi Wu;Jiliu Zhou;Xingchen Peng;Yan Wang","doi":"10.1109/TRPMS.2023.3330772","DOIUrl":"10.1109/TRPMS.2023.3330772","url":null,"abstract":"Cervical cancer stands as a prominent female malignancy, posing a serious threat to women’s health. The clinical solution typically involves time-consuming and laborious radiotherapy planning. Although convolutional neural network (CNN)-based models have been investigated to automate the radiotherapy planning by predicting its outcomes, i.e., dose distribution maps, the insufficiency of data in the cervical cancer dataset limits the prediction performance and generalization of models. Additionally, the intrinsic locality of convolution operations also hinders models from capturing dose information at a global range, limiting the prediction accuracy. In this article, we propose a transfer learning framework embedded with transformer, namely, DoseTransfer, to automatically predict the dose distribution for cervical cancer. To address the limited data in the cervical cancer dataset, we leverage highly correlated clinical information from rectum cancer and transfer this knowledge in a two-phase framework. Specifically, the first phase is the pretraining phase which aims to pretrain the model with the rectum cancer dataset and extract prior knowledge from rectum cancer, while the second phase is the transferring phase where the priorly learned knowledge is effectively transferred to cervical cancer and guides the model to achieve better accuracy. Moreover, both phases are embedded with transformers to capture the global dependencies ignored by CNN, learning wider feature representations. Experimental results on the in-house datasets (i.e., rectum cancer dataset and cervical cancer dataset) have demonstrated the effectiveness of the proposed method.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"95-104"},"PeriodicalIF":4.4,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135507701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectral computed tomography (CT) offers the possibility to reconstruct attenuation images at different energy levels, which can then be used for material decomposition. However, traditional methods reconstruct each energy bin individually and are vulnerable to noise. In this article, we propose a novel synergistic method for spectral CT reconstruction, namely, Uconnect. It utilizes trained convolutional neural networks (CNNs) to connect the energy bins to a latent image so that the full binned data is used synergistically. We experiment on two types of low-dose data: 1) simulated and 2) real patient data. Qualitative and quantitative analysis show that our proposed Uconnect outperforms state-of-the-art model-based iterative reconstruction (MBIR) techniques as well as CNN-based denoising.
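A rough sketch of the connecting idea under our own reading: a shared latent image carries the common structure, and one small network per energy bin maps the latent image back to that bin, so all bins are used jointly. The real method trains U-Nets; plain convolution stacks stand in here:

```python
import torch
import torch.nn as nn

class UconnectSketch(nn.Module):
    """Toy version of the bin-to-latent connection: encode all bins into one
    latent image, then decode each bin from the shared latent with its own
    head. Architectures are illustrative, not the paper's U-Nets."""
    def __init__(self, n_bins: int = 4, width: int = 16):
        super().__init__()
        self.to_latent = nn.Sequential(
            nn.Conv2d(n_bins, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 3, padding=1))
        self.bin_heads = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True),
                          nn.Conv2d(width, 1, 3, padding=1))
            for _ in range(n_bins))

    def forward(self, bins: torch.Tensor) -> torch.Tensor:  # (B, n_bins, H, W)
        latent = self.to_latent(bins)    # shared latent image
        return torch.cat([head(latent) for head in self.bin_heads], dim=1)
```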
{"title":"Uconnect: Synergistic Spectral CT Reconstruction With U-Nets Connecting the Energy Bins","authors":"Zhihan Wang;Alexandre Bousse;Franck Vermet;Jacques Froment;Béatrice Vedel;Alessandro Perelli;Jean-Pierre Tasu;Dimitris Visvikis","doi":"10.1109/TRPMS.2023.3330045","DOIUrl":"10.1109/TRPMS.2023.3330045","url":null,"abstract":"Spectral computed tomography (CT) offers the possibility to reconstruct attenuation images at different energy levels, which can be then used for material decomposition. However, traditional methods reconstruct each energy bin individually and are vulnerable to noise. In this article, we propose a novel synergistic method for spectral CT reconstruction, namely, Uconnect. It utilizes trained convolutional neural networks (CNNs) to connect the energy bins to a latent image so that the full binned data is used synergistically. We experiment on two types of low-dose data: 1) simulated and 2) real patient data. Qualitative and quantitative analysis show that our proposed Uconnect outperforms state-of-the-art model-based iterative reconstruction (MBIR) techniques as well as CNN-based denoising.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 2","pages":"222-233"},"PeriodicalIF":4.4,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134982611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}