
Latest publications in IEEE Transactions on Radiation and Plasma Medical Sciences

PET Synthesis via Self-Supervised Adaptive Residual Estimation Generative Adversarial Network
IF 4.4 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2023-12-05 DOI: 10.1109/TRPMS.2023.3339173
Yuxin Xue;Lei Bi;Yige Peng;Michael Fulham;David Dagan Feng;Jinman Kim
Positron emission tomography (PET) is a widely used, highly sensitive molecular imaging modality in clinical diagnosis. There is interest in reducing the radiation exposure from PET while maintaining adequate image quality. Recent methods that use convolutional neural networks (CNNs) to synthesize high-quality PET images from "low-dose" counterparts have been reported as state-of-the-art for low-to-high-dose image recovery. However, these methods are prone to discrepancies in texture and structure between synthesized and real images. Furthermore, the distribution shift between low-dose PET and standard PET has not been fully investigated. To address these issues, we developed a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN). We introduce 1) an adaptive residual estimation mapping mechanism, AE-Net, designed to dynamically rectify the preliminary synthesized PET images by taking the residual map between the low-dose PET and the synthesized output as input, and 2) a self-supervised pretraining strategy to enhance the feature representation of the coarse generator. Our experiments with a public benchmark dataset of total-body PET images show that SS-AEGAN consistently outperformed state-of-the-art synthesis methods across various dose reduction factors.
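The residual-rectification idea described in the abstract (AE-Net takes the residual map between the low-dose input and the synthesized output and uses it to correct the coarse estimate) can be illustrated with a toy NumPy sketch; `refine_with_residual` and the scalar `alpha` are hypothetical stand-ins, not the paper's learned mapping:

```python
import numpy as np

def refine_with_residual(low_dose, coarse, alpha=0.5):
    # Toy residual-rectification step (hypothetical, not the paper's AE-Net):
    # the residual between the low-dose input and the coarse synthesis is
    # scaled and added back to rectify the coarse estimate.
    residual = low_dose - coarse
    return coarse + alpha * residual

# Toy 1-D "images": ground truth, a noisy low-dose copy, and a coarse estimate.
truth = np.linspace(0.0, 1.0, 8)
rng = np.random.default_rng(0)
low_dose = truth + 0.05 * rng.standard_normal(8)
coarse = 0.8 * truth  # systematically biased coarse synthesis
refined = refine_with_residual(low_dose, coarse)

# The refinement pulls the estimate back toward the low-dose evidence.
assert np.mean((refined - truth) ** 2) < np.mean((coarse - truth) ** 2)
```

In the paper the scaling is replaced by a learned network, but the input/output contract is the same: coarse estimate plus residual map in, rectified image out.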
{"title":"PET Synthesis via Self-Supervised Adaptive Residual Estimation Generative Adversarial Network","authors":"Yuxin Xue;Lei Bi;Yige Peng;Michael Fulham;David Dagan Feng;Jinman Kim","doi":"10.1109/TRPMS.2023.3339173","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3339173","url":null,"abstract":"Positron emission tomography (PET) is a widely used, highly sensitive molecular imaging in clinical diagnosis. There is interest in reducing the radiation exposure from PET but also maintaining adequate image quality. Recent methods using convolutional neural networks (CNNs) to generate synthesized high-quality PET images from “low-dose” counterparts have been reported to be “state-of-the-art” for low-to-high-image recovery methods. However, these methods are prone to exhibiting discrepancies in texture and structure between synthesized and real images. Furthermore, the distribution shift between low-dose PET and standard PET has not been fully investigated. To address these issues, we developed a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN). We introduce 1) an adaptive residual estimation mapping mechanism, AE-Net, designed to dynamically rectify the preliminary synthesized PET images by taking the residual map between the low-dose PET and synthesized output as the input and 2) a self-supervised pretraining strategy to enhance the feature representation of the coarse generator. 
Our experiments with a public benchmark dataset of total-body PET images show that SS-AEGAN consistently outperformed the state-of-the-art synthesis methods with various dose reduction factors.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 4","pages":"426-438"},"PeriodicalIF":4.4,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140342795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Effects of Loss Functions and Supervision Methods on Total-Body PET Denoising
IF 4.4 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2023-11-29 DOI: 10.1109/TRPMS.2023.3334276
Si Young Yie;Keon Min Kim;Sangjin Bae;Jae Sung Lee
Introduction of the total-body positron emission tomography (TB PET) system is a remarkable advancement in noninvasive imaging, improving annihilation photon detection sensitivity and bringing the quality of PET images one step closer to that of anatomical images. This enables reduced scan times or radiation doses and can ultimately improve other PET images through denoising. This study investigated the effect of three loss functions: mean squared error (MSE); the Poisson negative log-likelihood, derived from the Poisson statistics of radiation activity; and L1, derived from the histogram of count differences between the full and partial scans. Furthermore, the effect of supervision methods is explored by comparing supervised denoising, self-supervised denoising, and interpolation of the input with self-supervised denoising based on the dependency relations between the partial and full scans. The supervised denoising method using the L1 norm loss function shows high denoising performance even under harsh denoising conditions, and the interpolated self-supervised denoising using the MSE loss preserves local features.
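The three loss functions named above can be written down directly; this sketch drops the constant log(k!) term of the Poisson negative log-likelihood, as is common when it is used only for optimization:

```python
import numpy as np

def mse_loss(pred, target):
    return np.mean((pred - target) ** 2)

def poisson_nll_loss(pred, target, eps=1e-8):
    # Negative log-likelihood of `target` counts under a Poisson whose mean
    # is `pred`; the constant log(target!) term is dropped for optimization.
    pred = np.clip(pred, eps, None)
    return np.mean(pred - target * np.log(pred))

def l1_loss(pred, target):
    return np.mean(np.abs(pred - target))

rng = np.random.default_rng(1)
target = rng.poisson(5.0, size=1000).astype(float)  # stand-in "full-count" voxels
pred = target + 0.1 * rng.standard_normal(1000)     # slightly perturbed prediction

assert mse_loss(target, target) == 0.0
assert l1_loss(target, target) == 0.0
# Each Poisson NLL term is minimized when pred equals target,
# so a perturbed prediction can never score better than a perfect one.
assert poisson_nll_loss(pred, target) >= poisson_nll_loss(target, target) - 1e-9
```

The differing curvature of these losses (quadratic for MSE, count-dependent for Poisson NLL, constant for L1) is what drives the behavioral differences the study reports.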
{"title":"Effects of Loss Functions and Supervision Methods on Total-Body PET Denoising","authors":"Si Young Yie;Keon Min Kim;Sangjin Bae;Jae Sung Lee","doi":"10.1109/TRPMS.2023.3334276","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3334276","url":null,"abstract":"Introduction of the total-body positron emission tomography (TB PET) system is a remarkable advancement in noninvasive imaging, improving annihilation photon detection sensitivity and bringing the quality of positron emission tomography (PET) images one step closer to that of anatomical images. This enables reduced scan times or radiation doses and can ultimately improve other PET images through denoising. This study investigated the effect of loss functions: mean squared error (MSE), Poisson negative log-likelihood derived from the Poisson statistics of radiation activity, and L1 derived from the histogram of count differences between the full and partial scans. Furthermore, the effect of supervision methods, comparing supervised denoising, self-supervised denoising, and interpolation of input and self-supervised denoising based on dependency relations of the partial and full scans are explored. 
The supervised denoising method using the L1 norm loss function shows high-denoising performance regardless of harsh denoising conditions, and the interpolated self-supervised denoising using MSE loss preserves local features.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 4","pages":"379-390"},"PeriodicalIF":4.4,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140342796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unified Noise-Aware Network for Low-Count PET Denoising With Varying Count Levels
IF 4.4 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2023-11-20 DOI: 10.1109/TRPMS.2023.3334105
Huidong Xie;Qiong Liu;Bo Zhou;Xiongchao Chen;Xueqi Guo;Hanzhong Wang;Biao Li;Axel Rominger;Kuangyu Shi;Chi Liu
As positron emission tomography (PET) imaging is accompanied by substantial radiation exposure and cancer risk, reducing the radiation dose in PET scans is an important topic. However, low-count PET scans often suffer from high image noise, which can negatively impact image quality and diagnostic performance. Recent advances in deep learning have shown great potential for recovering the underlying signal from noisy counterparts. However, neural networks trained on a specific noise level cannot easily be generalized to other noise levels because of differences in noise amplitude and variance. To obtain optimal denoised results, we may need to train multiple networks using data with different noise levels, but this approach may be infeasible in practice due to limited data availability. Denoising dynamic PET images presents an additional challenge due to tracer decay and continuously changing noise levels across dynamic frames. To address these issues, we propose a unified noise-aware network (UNN) that combines multiple subnetworks with varying denoising power to generate optimal denoised results regardless of the input noise level. Evaluated using large-scale data from two medical centers with scanners from different vendors, the UNN consistently produced promising denoised results regardless of input noise levels and demonstrated superior performance over networks trained on single-noise-level data, especially for extremely low-count data.
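A minimal sketch of the noise-aware routing idea (dispatch the input to a subnetwork matched to its estimated noise level) follows; the noise proxy, the thresholds, and the moving-average "subnetworks" are all hypothetical simplifications of the paper's UNN:

```python
import numpy as np

def estimate_noise_level(img):
    # Crude noise proxy (hypothetical): high-frequency energy of the signal.
    return float(np.std(np.diff(img)))

def unified_denoise(img, subnets, thresholds):
    # Route the input to the subnetwork matched to its estimated noise level,
    # a simplified stand-in for the paper's unified noise-aware combination.
    level = estimate_noise_level(img)
    for thr, net in zip(thresholds, subnets):
        if level <= thr:
            return net(img)
    return subnets[-1](img)  # noisiest inputs get the strongest denoiser

def smooth(k):
    # Toy "subnetwork": a moving average of width k.
    return lambda x: np.convolve(x, np.ones(k) / k, mode="same")

subnets = [smooth(1), smooth(3), smooth(7)]  # weak -> strong denoising
thresholds = [0.05, 0.5]                     # hypothetical routing cut-offs

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 2 * np.pi, 64))
noisy = clean + rng.standard_normal(64)      # high noise -> strongest subnet
out = unified_denoise(noisy, subnets, thresholds)

assert np.mean((out - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

The actual UNN learns the combination end to end rather than hard-routing on a threshold, but the motivation is the same: one model serving many count levels.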
{"title":"Unified Noise-Aware Network for Low-Count PET Denoising With Varying Count Levels","authors":"Huidong Xie;Qiong Liu;Bo Zhou;Xiongchao Chen;Xueqi Guo;Hanzhong Wang;Biao Li;Axel Rominger;Kuangyu Shi;Chi Liu","doi":"10.1109/TRPMS.2023.3334105","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3334105","url":null,"abstract":"As positron emission tomography (PET) imaging is accompanied by substantial radiation exposure and cancer risk, reducing radiation dose in PET scans is an important topic. However, low-count PET scans often suffer from high-image noise, which can negatively impact image quality and diagnostic performance. Recent advances in deep learning have shown great potential for recovering underlying signal from noisy counterparts. However, neural networks trained on a specific noise level cannot be easily generalized to other noise levels due to different noise amplitude and variances. To obtain optimal denoised results, we may need to train multiple networks using data with different noise levels. But this approach may be infeasible in reality due to limited data availability. Denoising dynamic PET images presents additional challenge due to tracer decay and continuously changing noise levels across dynamic frames. To address these issues, we propose a unified noise-aware network (UNN) that combines multiple subnetworks with varying denoising power to generate optimal denoised results regardless of the input noise levels. 
Evaluated using large-scale data from two medical centers with different vendors, presented results showed that the UNN can consistently produce promising denoised results regardless of input noise levels, and demonstrate superior performance over networks trained on single noise level data, especially for extremely low-count data.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 4","pages":"366-378"},"PeriodicalIF":4.4,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10323300","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140342666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Total-Body Ultralow-Dose PET Reconstruction Method via Image Space Shuffle U-Net and Body Sampling
IF 4.4 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2023-11-17 DOI: 10.1109/TRPMS.2023.3333839
Gaoyu Chen;Sheng Liu;Wenxiang Ding;Li Lv;Chen Zhao;Fenghua Weng;Yong Long;Yunlong Zan;Qiu Huang
Low-dose positron emission tomography (PET) reconstruction algorithms reduce the injected dose and/or scanning time of a PET examination while maintaining image quality, and have therefore been studied extensively. In this article, we propose a novel ultralow-dose reconstruction method for total-body PET. Specifically, we developed a deep learning model named ISS-Unet based on U-Net and introduced a 3-D PixelUnshuffle/PixelShuffle pair in image space to reduce training time and GPU memory usage. We then introduced two body sampling methods in the training-patch preparation step to improve training efficiency and local metrics. We also report the misalignment artifacts that are often neglected in 2-D training. The proposed method was evaluated on the MICCAI 2022 Ultralow-Dose PET Imaging Challenge dataset and won first prize in the first-round competition according to the comprehensive score combining global and local metrics. We disclose the implementation details of the proposed method, followed by comparison results against three typical methods.
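The 3-D PixelUnshuffle/PixelShuffle pair mentioned above is a pure tensor rearrangement and can be sketched in NumPy; this is a generic 3-D analogue of the well-known 2-D operations, not code from ISS-Unet:

```python
import numpy as np

def pixel_unshuffle_3d(vol, r):
    # Rearrange a (C, D, H, W) volume into (C*r**3, D/r, H/r, W/r),
    # trading spatial resolution for channels (3-D PixelUnshuffle analogue).
    c, d, h, w = vol.shape
    assert d % r == 0 and h % r == 0 and w % r == 0
    vol = vol.reshape(c, d // r, r, h // r, r, w // r, r)
    return vol.transpose(0, 2, 4, 6, 1, 3, 5).reshape(
        c * r**3, d // r, h // r, w // r)

def pixel_shuffle_3d(vol, r):
    # Inverse operation: (C*r**3, D, H, W) -> (C, D*r, H*r, W*r).
    c3, d, h, w = vol.shape
    c = c3 // r**3
    vol = vol.reshape(c, r, r, r, d, h, w)
    return vol.transpose(0, 4, 1, 5, 2, 6, 3).reshape(c, d * r, h * r, w * r)

x = np.arange(1 * 4 * 4 * 4, dtype=float).reshape(1, 4, 4, 4)
y = pixel_unshuffle_3d(x, 2)
assert y.shape == (8, 2, 2, 2)
assert np.array_equal(pixel_shuffle_3d(y, 2), x)  # round trip is lossless
```

Because the rearrangement is lossless, placing the unshuffle at the network input and the shuffle at the output lets the convolutions run on volumes with 1/r³ the spatial size, which is what saves training time and GPU memory.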
{"title":"A Total-Body Ultralow-Dose PET Reconstruction Method via Image Space Shuffle U-Net and Body Sampling","authors":"Gaoyu Chen;Sheng Liu;Wenxiang Ding;Li Lv;Chen Zhao;Fenghua Weng;Yong Long;Yunlong Zan;Qiu Huang","doi":"10.1109/TRPMS.2023.3333839","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3333839","url":null,"abstract":"Low-dose positron emission tomography (PET) reconstruction algorithms manage to reduce the injected dose and/or scanning time in PET examination while maintaining the image quality, and thus has been extensively studied. In this article, we proposed a novel ultralow-dose reconstruction method for total-body PET. Specifically, we developed a deep learning model named ISS-Unet based on U-Net and introduced 3-D PixelUnshuffle/PixelShuffle pair in image space to reduce the training time and GPU memory. We then introduced two body sampling methods in the training patch preparation step to improve the training efficiency and local metrics. We also reported the misalignment artifacts that were often neglected in 2-D training. The proposed method was evaluated on the MICCAI 2022 Ultralow-Dose PET Imaging Challenge dataset and won the first prize in the first-round competition according to the comprehensive score combining global and local metrics. 
In this article, we disclosed the implementation details of the proposed method followed by the comparison results with three typical methods.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 4","pages":"357-365"},"PeriodicalIF":4.4,"publicationDate":"2023-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10320380","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140342754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep-Learning-Aided Intraframe Motion Correction for Low-Count Dynamic Brain PET
IF 4.4 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2023-11-16 DOI: 10.1109/TRPMS.2023.3333202
Erik Reimers;Ju-Chieh Cheng;Vesna Sossi
Data-driven intraframe motion correction of a dynamic brain PET scan (with each frame duration on the order of minutes) is often achieved through the co-registration of high-temporal-resolution (e.g., 1-s duration) subframes to estimate subject head motion. However, this conventional method of subframe co-registration may perform poorly during periods of low counts and/or drastic changes in the spatial tracer distribution over time. Here, we propose a deep learning (DL), U-Net-based convolutional neural network model that aids PET motion estimation to overcome these limitations. Unlike DL models for PET denoising, a nonstandard 2.5-D DL model was used that transforms the high-temporal-resolution subframes into nonquantitative DL subframes, which allow improved differentiation between noise and structural/functional landmarks and estimate a constant tracer distribution across time. When estimating motion during periods of drastic change in spatial distribution (within the first minute of the scan, ~1-s temporal resolution), the proposed DL method reduced the expected magnitude of error (+/−) in the estimation of an artificially injected motion trace from 16 mm and 7° (conventional method) to 0.7 mm and 0.6° (DL method). During periods of low counts but a relatively constant spatial tracer distribution (60th min of the scan, ~1-s temporal resolution), the expected error was reduced from 0.5 mm and 0.7° (conventional method) to 0.3 mm and 0.4° (DL method). The DL method was found to significantly improve the accuracy of an image-derived input function calculation when motion was present during the first minute of the scan.
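Subframe co-registration in its simplest form is shift estimation by cross-correlation; the 1-D toy below (with a hypothetical Gaussian "brain profile") illustrates the conventional baseline the paper improves on, not the DL method itself:

```python
import numpy as np

def estimate_shift(ref, frame):
    # Estimate the integer translation between a reference and a subframe via
    # cross-correlation: the simplest form of subframe co-registration.
    corr = np.correlate(frame, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

rng = np.random.default_rng(3)
ref = np.exp(-0.5 * ((np.arange(64) - 32) / 4.0) ** 2)  # toy 1-D "brain profile"
true_shift = 5
frame = np.roll(ref, true_shift) + 0.005 * rng.standard_normal(64)

assert estimate_shift(ref, frame) == true_shift
assert estimate_shift(ref, ref) == 0
```

At realistic low-count noise levels the correlation peak becomes unreliable, which is exactly the failure mode that motivates transforming the subframes with the DL model before registration.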
{"title":"Deep-Learning-Aided Intraframe Motion Correction for Low-Count Dynamic Brain PET","authors":"Erik Reimers;Ju-Chieh Cheng;Vesna Sossi","doi":"10.1109/TRPMS.2023.3333202","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3333202","url":null,"abstract":"Data-driven intraframe motion correction of a dynamic brain PET scan (with each frame duration on the order of minutes) is often achieved through the co-registration of high-temporal-resolution (e.g., 1-s duration) subframes to estimate subject head motion. However, this conventional method of subframe co-registration may perform poorly during periods of low counts and/or drastic changes in the spatial tracer distribution over time. Here, we propose a deep learning (DL), U-Net-based convolutional neural network model which aids in the PET motion estimation to overcome these limitations. Unlike DL models for PET denoising, a nonstandard 2.5-D DL model was used which transforms the high-temporal-resolution subframes into nonquantitative DL subframes which allow for improved differentiation between noise and structural/functional landmarks and estimate a constant tracer distribution across time. When estimating motion during periods of drastic change in spatial distribution (within the first minute of the scan, ~1-s temporal resolution), the proposed DL method was found to reduce the expected magnitude of error (+/−) in the estimation for an artificially injected motion trace from 16 mm and 7° (conventional method) to 0.7 mm and 0.6° (DL method). During periods of low counts but a relatively constant spatial tracer distribution (60th min of the scan, ~1-s temporal resolution), an expected error was reduced from 0.5 mm and 0.7° (conventional method) to 0.3 mm and 0.4° (DL method). 
The use of the DL method was found to significantly improve the accuracy of an image-derived input function calculation when motion was present during the first minute of the scan.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"53-63"},"PeriodicalIF":4.4,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139081230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A 3-D Anatomy-Guided Self-Training Segmentation Framework for Unpaired Cross-Modality Medical Image Segmentation
IF 4.4 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2023-11-14 DOI: 10.1109/TRPMS.2023.3332619
Yuzhou Zhuang;Hong Liu;Enmin Song;Xiangyang Xu;Yongde Liao;Guanchao Ye;Chih-Cheng Hung
Unsupervised domain adaptation (UDA) methods have achieved promising performance in alleviating the domain shift between different imaging modalities. In this article, we propose a robust two-stage 3-D anatomy-guided self-training cross-modality segmentation (ASTCMSeg) framework based on UDA for unpaired cross-modality image segmentation, including the anatomy-guided image translation and self-training segmentation stages. In the translation stage, we first leverage the similarity distributions between patches to capture the latent anatomical relationships and propose an anatomical relation consistency (ARC) for preserving the correct anatomical relationships. Then, we design a frequency domain constraint to enforce the consistency of important frequency components during image translation. Finally, we integrate the ARC and frequency domain constraint with contrastive learning for anatomy-guided image translation. In the segmentation stage, we propose a context-aware anisotropic mesh network for segmenting anisotropic volumes in the target domain. Meanwhile, we design a volumetric adaptive self-training method that dynamically selects appropriate pseudo-label thresholds to learn the abundant label information from unlabeled target volumes. Our proposed method is validated on the cross-modality brain structure, cardiac substructure, and abdominal multiorgan segmentation tasks. Experimental results show that our proposed method achieves state-of-the-art performance in all tasks and significantly outperforms other 2-D based or 3-D based UDA methods.
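The adaptive pseudo-label selection step (keep only high-confidence predictions, with the threshold chosen from the data rather than fixed) can be sketched as follows; the per-batch quantile rule is a hypothetical simplification of the paper's volumetric adaptive self-training:

```python
import numpy as np

def adaptive_pseudo_labels(probs, quantile=0.8):
    # Keep only the most confident predictions as pseudo-labels, with the
    # threshold set per batch from a confidence quantile: a simplified
    # stand-in for the paper's volumetric adaptive self-training.
    confidence = probs.max(axis=-1)
    thr = np.quantile(confidence, quantile)
    mask = confidence >= thr
    labels = probs.argmax(axis=-1)
    return labels, mask, thr

rng = np.random.default_rng(4)
logits = rng.standard_normal((100, 3))  # 100 voxels, 3 hypothetical classes
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

labels, mask, thr = adaptive_pseudo_labels(probs, quantile=0.8)
assert labels.shape == (100,)
assert np.all(probs.max(axis=-1)[mask] >= thr)
```

Choosing the threshold adaptively keeps a roughly constant fraction of pseudo-labels per volume even as prediction confidence drifts during training, instead of starving or flooding the self-training loop with a fixed cut-off.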
{"title":"A 3-D Anatomy-Guided Self-Training Segmentation Framework for Unpaired Cross-Modality Medical Image Segmentation","authors":"Yuzhou Zhuang;Hong Liu;Enmin Song;Xiangyang Xu;Yongde Liao;Guanchao Ye;Chih-Cheng Hung","doi":"10.1109/TRPMS.2023.3332619","DOIUrl":"10.1109/TRPMS.2023.3332619","url":null,"abstract":"Unsupervised domain adaptation (UDA) methods have achieved promising performance in alleviating the domain shift between different imaging modalities. In this article, we propose a robust two-stage 3-D anatomy-guided self-training cross-modality segmentation (ASTCMSeg) framework based on UDA for unpaired cross-modality image segmentation, including the anatomy-guided image translation and self-training segmentation stages. In the translation stage, we first leverage the similarity distributions between patches to capture the latent anatomical relationships and propose an anatomical relation consistency (ARC) for preserving the correct anatomical relationships. Then, we design a frequency domain constraint to enforce the consistency of important frequency components during image translation. Finally, we integrate the ARC and frequency domain constraint with contrastive learning for anatomy-guided image translation. In the segmentation stage, we propose a context-aware anisotropic mesh network for segmenting anisotropic volumes in the target domain. Meanwhile, we design a volumetric adaptive self-training method that dynamically selects appropriate pseudo-label thresholds to learn the abundant label information from unlabeled target volumes. Our proposed method is validated on the cross-modality brain structure, cardiac substructure, and abdominal multiorgan segmentation tasks. 
Experimental results show that our proposed method achieves state-of-the-art performance in all tasks and significantly outperforms other 2-D based or 3-D based UDA methods.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"33-52"},"PeriodicalIF":4.4,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135661085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Effect of Detector Placement on Joint Estimation in X-Ray Fluorescence Emission Tomography
IF 4.4 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2023-11-13 DOI: 10.1109/TRPMS.2023.3332288
Hadley DeBrosse;Ling Jian Meng;Patrick La Rivière
Imaging the spatial distribution of low concentrations of metal is a problem of growing interest, with applications in the medical and material sciences. X-ray fluorescence emission tomography (XFET) is an emerging metal-mapping imaging modality with potential sensitivity improvements and practical advantages over other methods. However, XFET detector placement must first be optimized to ensure accurate metal density quantification and adequate spatial resolution. In this work, we first use singular value decomposition of the imaging model and eigendecomposition of the object-specific Fisher information matrix to study how detector arrangement affects spatial resolution and feature preservation. We then perform joint image reconstructions of a numerical gold phantom. For this phantom, we show that two parallel detectors provide metal quantification with accuracy similar to four detectors, despite the resulting anisotropic spatial resolution in the attenuation map estimate. Two orthogonal detectors provide improved spatial resolution along one axis, but underestimate the metal concentration in distant regions. This work therefore demonstrates the minor effect of using fewer, but strategically placed, detectors when detector placement is restricted. It is a critical investigation into the limitations and capabilities of XFET prior to its translation to preclinical and benchtop use.
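The SVD-based comparison of detector geometries can be mimicked on a toy 1-D forward model: build two hypothetical system matrices with different conditioning and compare how many singular values survive a fixed tolerance (an effective-rank proxy). Everything here is illustrative, not the paper's XFET model:

```python
import numpy as np

def system_matrix(n_meas, n_vox, blur):
    # Hypothetical 1-D forward model: each row is a Gaussian response
    # centered where that measurement "looks" at the object.
    x = np.arange(n_vox)
    rows = [np.exp(-0.5 * ((x - i * n_vox / n_meas) / blur) ** 2)
            for i in range(n_meas)]
    return np.array(rows)

def effective_rank(A, rel_tol=1e-6):
    # Count singular values above a relative tolerance.
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))

A_sharp = system_matrix(16, 32, blur=1.0)   # well-separated responses
A_blurry = system_matrix(16, 32, blur=8.0)  # heavily overlapping responses

# The sharper geometry retains at least as many measurable object components.
assert effective_rank(A_sharp) >= effective_rank(A_blurry)
```

Singular vectors with near-zero singular values correspond to object features a given detector arrangement cannot measure, which is the quantity the paper's geometry comparison is probing.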
{"title":"Effect of Detector Placement on Joint Estimation in X-Ray Fluorescence Emission Tomography","authors":"Hadley DeBrosse;Ling Jian Meng;Patrick La Rivière","doi":"10.1109/TRPMS.2023.3332288","DOIUrl":"10.1109/TRPMS.2023.3332288","url":null,"abstract":"Imaging the spatial distribution of low concentrations of metal is a growing problem of interest with applications in medical and material sciences. X-ray fluorescence emission tomography (XFET) is an emerging metal mapping imaging modality with potential sensitivity improvements and practical advantages over other methods. However, XFET detector placement must first be optimized to ensure accurate metal density quantification and adequate spatial resolution. In this work, we first use singular value decomposition of the imaging model and eigendecomposition of the object-specific Fisher information matrix to study how detector arrangement affects spatial resolution and feature preservation. We then perform joint image reconstructions of a numerical gold phantom. For this phantom, we show that two parallel detectors provide metal quantification with similar accuracy to four detectors, despite the resulting anisotropic spatial resolution in the attenuation map estimate. Two orthogonal detectors provide improved spatial resolution along one axis, but underestimate the metal concentration in distant regions. Therefore, this work demonstrates the minor effect of using fewer, but strategically placed, detectors in the case where detector placement is restricted. 
This work is a critical investigation into the limitations and capabilities of XFET prior to its translation to preclinical and benchtop uses.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"21-32"},"PeriodicalIF":4.4,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135611150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
2023 Index IEEE Transactions on Radiation and Plasma Medical Sciences Vol. 7
IF 4.4 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2023-11-08 DOI: 10.1109/TRPMS.2023.3330365
{"title":"2023 Index IEEE Transactions on Radiation and Plasma Medical Sciences Vol. 7","authors":"","doi":"10.1109/TRPMS.2023.3330365","DOIUrl":"10.1109/TRPMS.2023.3330365","url":null,"abstract":"","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"7 8","pages":"1-20"},"PeriodicalIF":4.4,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10312794","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135515041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DoseTransfer: A Transformer Embedded Model With Transfer Learning for Radiotherapy Dose Prediction of Cervical Cancer
IF 4.4 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2023-11-07 DOI: 10.1109/TRPMS.2023.3330772
Lu Wen;Jianghong Xiao;Chen Zu;Xi Wu;Jiliu Zhou;Xingchen Peng;Yan Wang
Cervical cancer stands as a prominent female malignancy, posing a serious threat to women’s health. The clinical solution typically involves time-consuming and laborious radiotherapy planning. Although convolutional neural network (CNN)-based models have been investigated to automate radiotherapy planning by predicting its outcomes, i.e., dose distribution maps, the insufficiency of data in the cervical cancer dataset limits the prediction performance and generalization of models. Additionally, the intrinsic locality of convolution operations also hinders models from capturing dose information at a global range, limiting the prediction accuracy. In this article, we propose a transfer learning framework embedded with transformers, namely, DoseTransfer, to automatically predict the dose distribution for cervical cancer. To address the limited data in the cervical cancer dataset, we leverage highly correlated clinical information from rectum cancer and transfer this knowledge in a two-phase framework. Specifically, the first phase is the pretraining phase, which pretrains the model with the rectum cancer dataset and extracts prior knowledge from rectum cancer, while the second phase is the transferring phase, in which the previously learned knowledge is effectively transferred to cervical cancer and guides the model to achieve better accuracy. Moreover, both phases are embedded with transformers to capture the global dependencies ignored by CNNs, learning wider feature representations. Experimental results on the in-house datasets (i.e., rectum cancer dataset and cervical cancer dataset) have demonstrated the effectiveness of the proposed method.
{"title":"DoseTransfer: A Transformer Embedded Model With Transfer Learning for Radiotherapy Dose Prediction of Cervical Cancer","authors":"Lu Wen;Jianghong Xiao;Chen Zu;Xi Wu;Jiliu Zhou;Xingchen Peng;Yan Wang","doi":"10.1109/TRPMS.2023.3330772","DOIUrl":"10.1109/TRPMS.2023.3330772","url":null,"abstract":"Cervical cancer stands as a prominent female malignancy, posing a serious threat to women’s health. The clinical solution typically involves time-consuming and laborious radiotherapy planning. Although convolutional neural network (CNN)-based models have been investigated to automate the radiotherapy planning by predicting its outcomes, i.e., dose distribution maps, the insufficiency of data in the cervical cancer dataset limits the prediction performance and generalization of models. Additionally, the intrinsic locality of convolution operations also hinders models from capturing dose information at a global range, limiting the prediction accuracy. In this article, we propose a transfer learning framework embedded with transformer, namely, DoseTransfer, to automatically predict the dose distribution for cervical cancer. To address the limited data in the cervical cancer dataset, we leverage highly correlated clinical information from rectum cancer and transfer this knowledge in a two-phase framework. Specifically, the first phase is the pretraining phase which aims to pretrain the model with the rectum cancer dataset and extract prior knowledge from rectum cancer, while the second phase is the transferring phase where the priorly learned knowledge is effectively transferred to cervical cancer and guides the model to achieve better accuracy. Moreover, both phases are embedded with transformers to capture the global dependencies ignored by CNN, learning wider feature representations. 
Experimental results on the in-house datasets (i.e., rectum cancer dataset and cervical cancer dataset) have demonstrated the effectiveness of the proposed method.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"95-104"},"PeriodicalIF":4.4,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135507701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
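The two-phase idea in the abstract above (pretrain on a data-rich related task, then fine-tune on the data-poor target task) can be sketched in miniature with a linear model. Everything here is a hedged toy under invented assumptions: `w_src`/`w_tgt` are synthetic surrogates for the rectum and cervical cancer tasks, and plain gradient descent on least squares stands in for the paper's transformer-embedded networks.

```python
import numpy as np

# Minimal sketch of two-phase transfer learning with a linear model.
rng = np.random.default_rng(7)

def mse_and_grad(X, y, w):
    r = X @ w - y
    return r @ r / len(y), 2 * X.T @ r / len(y)

def fit(X, y, w, lr=0.05, steps=200):
    for _ in range(steps):
        _, g = mse_and_grad(X, y, w)
        w = w - lr * g
    return w

d = 5
w_src = rng.normal(size=d)                 # surrogate "source" task weights
w_tgt = w_src + 0.1 * rng.normal(size=d)   # closely related "target" task

X_src = rng.normal(size=(500, d)); y_src = X_src @ w_src  # abundant source data
X_tgt = rng.normal(size=(10, d));  y_tgt = X_tgt @ w_tgt  # scarce target data

w_pre = fit(X_src, y_src, np.zeros(d))      # phase 1: pretraining
w_fin = fit(X_tgt, y_tgt, w_pre, steps=50)  # phase 2: transfer + fine-tune

print("target loss after fine-tuning:", mse_and_grad(X_tgt, y_tgt, w_fin)[0])
```

Because the pretrained weights already sit near the target optimum, the few fine-tuning steps on only 10 target samples suffice; starting phase 2 from zeros would need far more target data or iterations for the same loss.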
Uconnect: Synergistic Spectral CT Reconstruction With U-Nets Connecting the Energy Bins
IF 4.4 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2023-11-03 DOI: 10.1109/TRPMS.2023.3330045
Zhihan Wang;Alexandre Bousse;Franck Vermet;Jacques Froment;Béatrice Vedel;Alessandro Perelli;Jean-Pierre Tasu;Dimitris Visvikis
Spectral computed tomography (CT) offers the possibility to reconstruct attenuation images at different energy levels, which can then be used for material decomposition. However, traditional methods reconstruct each energy bin individually and are vulnerable to noise. In this article, we propose a novel synergistic method for spectral CT reconstruction, namely, Uconnect. It utilizes trained convolutional neural networks (CNNs) to connect the energy bins to a latent image so that the full binned data is used synergistically. We experiment on two types of low-dose data: 1) simulated and 2) real patient data. Qualitative and quantitative analysis show that our proposed Uconnect outperforms state-of-the-art model-based iterative reconstruction (MBIR) techniques as well as CNN-based denoising.
{"title":"Uconnect: Synergistic Spectral CT Reconstruction With U-Nets Connecting the Energy Bins","authors":"Zhihan Wang;Alexandre Bousse;Franck Vermet;Jacques Froment;Béatrice Vedel;Alessandro Perelli;Jean-Pierre Tasu;Dimitris Visvikis","doi":"10.1109/TRPMS.2023.3330045","DOIUrl":"10.1109/TRPMS.2023.3330045","url":null,"abstract":"Spectral computed tomography (CT) offers the possibility to reconstruct attenuation images at different energy levels, which can be then used for material decomposition. However, traditional methods reconstruct each energy bin individually and are vulnerable to noise. In this article, we propose a novel synergistic method for spectral CT reconstruction, namely, Uconnect. It utilizes trained convolutional neural networks (CNNs) to connect the energy bins to a latent image so that the full binned data is used synergistically. We experiment on two types of low-dose data: 1) simulated and 2) real patient data. Qualitative and quantitative analysis show that our proposed Uconnect outperforms state-of-the-art model-based iterative reconstruction (MBIR) techniques as well as CNN-based denoising.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 2","pages":"222-233"},"PeriodicalIF":4.4,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134982611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
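The synergy the Uconnect abstract describes — all energy bins sharing one latent image — can be illustrated with a deliberately simplified toy: assume each bin is a scaled copy of a shared latent profile plus noise, and estimate that latent jointly by least squares. The scalar `scales`, the noise level, and the 1-D "image" are all invented assumptions; Uconnect itself learns the bin-to-latent mapping with trained U-Nets rather than a fixed linear model.

```python
import numpy as np

# Toy analogue of synergistic spectral reconstruction: pooling all
# energy bins through a shared latent beats using any single bin.
rng = np.random.default_rng(1)

latent = rng.random(256)                   # shared "latent image" (1-D toy)
scales = np.array([0.5, 1.0, 1.5, 2.0])    # per-bin attenuation scaling
bins = scales[:, None] * latent + 0.3 * rng.normal(size=(4, 256))  # noisy bins

# Least-squares estimate of the latent from all bins jointly:
latent_hat = (scales @ bins) / (scales @ scales)

err_joint = np.mean((latent_hat - latent) ** 2)
err_single = np.mean((bins[1] / scales[1] - latent) ** 2)
print("joint MSE:", err_joint, " single-bin MSE:", err_single)
```

In this linear-Gaussian toy the joint estimator's noise variance shrinks by the factor sum(scales**2), which is the same intuition behind using the full binned data synergistically instead of reconstructing each bin on its own.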
Journal
IEEE Transactions on Radiation and Plasma Medical Sciences