Parallax error (PE) significantly degrades the spatial resolution and imaging quality of positron emission tomography (PET) scanners. Existing PE correction methods either rely on hardware depth-decoding detectors, which increases development cost, or optimize the system response matrix (SRM) in software, which provides only limited compensation for PE. This work proposed a novel deep-learning (DL)-based PE correction method in projection space, consisting of two steps. First, the sinogram affected by PE was processed by a neural network (PEC-Net); the corrected sinogram output by PEC-Net was then reconstructed into an improved image. To generate ideal PE-corrected labels, we synthesized training data using Monte Carlo (MC) simulation-based SRMs as forward projectors. The proposed method was validated on simulation data and real data. Experimental results show that it effectively eliminated artifacts caused by PE, and the reconstructed images of simulation data outperformed those obtained at 4 mm depth-of-interaction (DOI) resolution in terms of structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR). PEC-Net may provide a low-cost, high-performance, software-based PE correction method for PET scanners without DOI measurement.
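The synthetic-training-pair idea can be illustrated with a toy numpy sketch (not the paper's code): two system response matrices act as forward projectors, one modelling parallax blur and one ideal; projecting the same image through both yields a (degraded input, ideal label) sinogram pair for a network such as PEC-Net. All dimensions and the neighbour-leak blur model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 16, 24                     # toy image and sinogram sizes (assumed)
x = rng.random(n_pix)                      # flattened activity image

srm_ideal = rng.random((n_bins, n_pix))    # stand-in for an MC-simulated ideal SRM
blur = np.eye(n_bins)                      # crude parallax model: counts leak
for i in range(n_bins - 1):                # into neighbouring sinogram bins
    blur[i, i + 1] = blur[i + 1, i] = 0.3
srm_pe = blur @ srm_ideal                  # PE-degraded forward projector

sino_pe = srm_pe @ x                       # network input (affected by PE)
sino_ideal = srm_ideal @ x                 # training label (ideal PE-corrected)
```

In this framing, the network learns the projection-space map from `sino_pe` to `sino_ideal`, and any standard algorithm can reconstruct the corrected sinogram.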
"Deep-Learning-Based PET Parallax Error Correction: A 2-D Simulation and Phantom Study," Yu Liu; Jiayou Lan; Ran Cheng; Qingguo Xie; Xiaoping Wang; Bensheng Qiu; Xun Chen; Peng Xiao. IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 10, no. 2, pp. 218–228. DOI: 10.1109/TRPMS.2025.3577903. Published: 2025-06-09.
This article presents a novel image reconstruction pipeline for three-gamma (3-$\gamma$) positron emission tomography (PET) aimed at improving spatial resolution and reducing noise in nuclear medicine. The proposed Direct3$\gamma$ pipeline addresses the inherent challenges of 3-$\gamma$ PET systems, such as detector imperfections and uncertainty in photon interaction points. A key feature of the pipeline is its ability to determine the order of interactions through a model trained on Monte Carlo (MC) simulations using the Geant4 Application for Tomography Emission (GATE) toolkit, providing the information needed to construct Compton cones, which intersect the line of response (LOR) to yield an estimate of the emission point. The pipeline processes 3-$\gamma$ PET raw data, reconstructs histoimages by propagating energy and spatial uncertainties along the LOR, and applies a 3-D convolutional neural network (CNN) to refine these intermediate images into high-quality reconstructions. To further enhance image quality, the pipeline leverages both supervised learning and adversarial losses, the latter preserving fine structural details. Experimental results show that Direct3$\gamma$ consistently outperforms conventional 200-ps time-of-flight (TOF) PET in terms of structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR).
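Estimating the emission point from the Compton cone and the LOR reduces to intersecting a cone with a line. A minimal geometric sketch follows; the function name and interface are assumptions for illustration, not the paper's API. The cone apex and axis would come from the first interaction, and the half-angle from Compton kinematics.

```python
import numpy as np

def cone_lor_intersection(apex, axis, half_angle, a, d):
    """Return points where the line a + t*d meets the forward nappe of a cone.

    Hypothetical helper: solves ((p - apex) . u)^2 = cos^2(theta) |p - apex|^2
    as a quadratic in the line parameter t.
    """
    apex = np.asarray(apex, float)
    u = np.asarray(axis, float) / np.linalg.norm(axis)
    w = np.asarray(a, float) - apex
    d = np.asarray(d, float)
    c2 = np.cos(half_angle) ** 2
    du, wu = np.dot(d, u), np.dot(w, u)
    A = du**2 - c2 * np.dot(d, d)
    B = 2 * (wu * du - c2 * np.dot(w, d))
    C = wu**2 - c2 * np.dot(w, w)
    ts = [t.real for t in np.atleast_1d(np.roots([A, B, C])) if abs(t.imag) < 1e-9]
    pts = [np.asarray(a, float) + t * d for t in ts]
    # keep only the forward nappe (same side as the cone axis)
    return [p for p in pts if np.dot(p - apex, u) >= 0]
```

For a cone with apex at the origin, axis along +z, and a 45° half-angle, the line x = 1, y = 0, z = t meets the forward nappe only at (1, 0, 1).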
"Direct 3γ: A Pipeline for Direct Three-Gamma PET Image Reconstruction," Youness Mellak; Alexandre Bousse; Thibaut Merlin; Debora Giovagnoli; Dimitris Visvikis. IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 10, no. 2, pp. 181–191. DOI: 10.1109/TRPMS.2025.3577810. Published: 2025-06-09.
Pub Date: 2025-06-09. DOI: 10.1109/TRPMS.2025.3577309
Juan E. Arco;Carmen Jiménez-Mesa;Andrés Ortiz;Javier Ramírez;Johannes Levin;Juan M. Górriz
Medical imaging fusion combines complementary information from multiple modalities to enhance diagnostic accuracy. However, evaluating the quality of fused images remains challenging, with many studies relying solely on classification performance, which may lead to incorrect conclusions. We introduce a novel framework for improving image fusion, focusing on preserving fine-grained details. Our model uses a siamese autoencoder to process T1-MRI and FDG-PET images in the context of Alzheimer’s disease (AD). The framework optimizes fusion by minimizing reconstruction error between generated and input images, while maximizing differences between modalities through cosine distance. Additionally, we propose a supervised variant, incorporating binary cross-entropy loss between diagnostic labels and probabilities. Fusion quality is rigorously assessed through three tests: 1) classification of AD patients and controls using fused images; 2) an atlas-based occlusion test for identifying regions relevant to cognitive decline; and 3) analysis of structural–functional relationships via Euclidean distance. Results show an AUC of 0.92 for AD detection, reveal the involvement of brain regions linked to preclinical AD stages, and demonstrate preserved structural–functional brain networks, indicating that subtle differences are successfully captured through our fusion approach.
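The composite objective can be sketched as follows. The weighting factors and function signature are assumptions, but the three terms match the description above: per-modality reconstruction MSE, cosine similarity between the latent codes (minimised, i.e. inter-modality differences maximised), and an optional binary cross-entropy term for the supervised variant.

```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fusion_loss(x_mri, x_pet, recon_mri, recon_pet, z_mri, z_pet,
                y=None, p=None, alpha=1.0, beta=1.0):
    """Sketch of the combined objective (alpha/beta weights are assumptions).

    x_*: input images, recon_*: autoencoder reconstructions,
    z_*: latent codes, y/p: diagnostic label and predicted probability.
    """
    rec = np.mean((x_mri - recon_mri) ** 2) + np.mean((x_pet - recon_pet) ** 2)
    sep = cosine_sim(z_mri, z_pet)          # smaller -> more distinct codes
    loss = rec + alpha * sep
    if y is not None:                        # supervised variant: add BCE
        eps = 1e-7
        loss += beta * -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return float(loss)
```

With perfect reconstructions and orthogonal latent codes the unsupervised loss is zero, which is the regime the training pushes toward.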
"Explainable Intermodality Medical Information Transfer Using Siamese Autoencoders," Juan E. Arco; Carmen Jiménez-Mesa; Andrés Ortiz; Javier Ramírez; Johannes Levin; Juan M. Górriz. IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 10, no. 2, pp. 192–209. DOI: 10.1109/TRPMS.2025.3577309. Published: 2025-06-09.
225Ac-based radiopharmaceuticals for targeted alpha therapy (TAT) have shown positive outcomes in recent clinical trials and preclinical studies, and they have emerged as a promising option for future cancer treatments. Small-animal in-vivo imaging is critical to better understand the biokinetics of 225Ac radiopharmaceuticals and to accelerate the evaluation and discovery of new 225Ac radiopharmaceuticals. However, gamma-ray imaging of 225Ac and its daughters is challenging due to the extremely low injected activities, the low branching ratios of the emitted $\gamma$ rays, and their broad range of energies. State-of-the-art scanners for single-photon emission computed tomography (SPECT) have sensitivity limitations when imaging such low activities, and imaging sessions of several hours are necessary, precluding in-vivo studies. We propose Compton imaging as an alternative to traditional SPECT imagers in order to enable a higher sensitivity and to decrease the minimum imageable activities of current systems. In this study, we explore a 3D-positioning cadmium zinc telluride (CZT) camera (M400, H3D) to achieve highly sensitive Compton imaging of 225Ac daughters at both high-energy (440 keV from 213Bi) and low-energy gamma rays (218 keV from 221Fr). The Compton sensitivity of the imaging system with a source as close as possible to the detector (7 mm) was 1014(33) cps/MBq and 467(23) cps/MBq for 213Bi and 221Fr, respectively. We studied the response of the camera using 225Ac point sources, including the demonstration of simultaneous imaging of 213Bi and 221Fr from multiple 225Ac sources at sub-$\mu$Ci activity levels, ranging from 7.4 to 25.9 kBq, in an 18-min imaging session.
Furthermore, we performed a mouse phantom experiment demonstrating highly sensitive Compton images of 213Bi and 221Fr: a mouse phantom with an activity of ~0.55 MBq can be imaged in just 9 and 36 s for 213Bi and 221Fr, respectively, with a single detector head and in a single bed position. This is equivalent to imaging an activity of 3.7 kBq, a typical tumor uptake in mouse experiments with 225Ac, in 23 min for 213Bi and 90 min for 221Fr with a small 5.7 cm $\times$ 5.7 cm area prototype. Increasing the angular coverage would further increase the sensitivity. Finally, we also compared Compton imaging with collimated imaging.
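The activity-to-time equivalence quoted above follows from simple count-rate arithmetic: holding the number of detected counts fixed, the scan time scales inversely with activity, t2 = t1 · (A1/A2). Plugging in the stated numbers recovers the quoted figures to within a minute:

```python
# 0.55 MBq imaged in 9 s (213Bi) and 36 s (221Fr); scale down to 3.7 kBq.
scale = 0.55e6 / 3.7e3             # activity ratio, ~148.6
t_213bi = 9 * scale                # seconds, ~22-23 min
t_221fr = 36 * scale               # seconds, ~89-90 min
print(t_213bi / 60, t_221fr / 60)
```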
"Compton Imaging of Ac-225 in Preclinical Phantoms With a 3D-positioning CZT Camera," Biswajit Das; Baharak Mehrdel; David Goodman; Michael Streicher; Youngho Seo; Javier Caravaca. IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 10, no. 1, pp. 112–125. DOI: 10.1109/TRPMS.2025.3577212. Published: 2025-06-06.
Despite advances in lung cancer therapy, the prognosis for advanced or metastatic patients remains poor, and many patients eventually develop resistance to standard treatments, leading to disease progression and poor survival. Here, we describe a combination of cold atmospheric plasma (CAP) and nanoparticles [ZrO2 NPs (zirconium oxide nanoparticles) and 3Y-TZP NPs (3 mol% yttria tetragonal zirconia polycrystal nanoparticles)] for lung cancer therapy. We found that $\mathrm{ZrO_{2}}$ NPs caused obvious damage inside lung cancer cells. CAP and $\mathrm{ZrO_{2}}$ NPs mainly affected mitochondrial function, decreasing the mitochondrial membrane potential and ATP levels, and also caused endoplasmic reticulum stress and nuclear DNA damage. CAP combined with $\mathrm{ZrO_{2}}$ NPs (CAP@ZrO2) induced lung cancer cell apoptosis by activating the TGF-$\beta$ pathway. In contrast, 3Y-TZP NPs benefited the cancer cells, promoting their proliferation; this contrasting finding highlights that not all zirconia nanoparticles are appropriate for lung cancer treatment. CAP@ZrO2 offers a new option for the clinical treatment of lung cancer.
"Cold Atmospheric Plasma Combines With Zirconia Nanoparticles for Lung Cancer Therapy via TGF-β Signaling Pathway," Yueye Huang; Rui Zhang; Xiao Chen; Fei Cao; Qiujie Fang; Qingnan Xu; Shicong Huang; Yufan Wang; Guojun Chen; Zhitong Chen. IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 10, no. 1, pp. 144–158. DOI: 10.1109/TRPMS.2025.3576730. Published: 2025-06-05.
Pub Date: 2025-04-30. DOI: 10.1109/TRPMS.2025.3565797
Ran Hong;Yuxia Huang;Lei Liu;Mengxiao Geng;Zhonghui Wu;Bingxuan Li;Xuemei Wang;Qiegen Liu
PET imaging is widely employed for observing biological metabolic activities within the human body. However, numerous benign conditions can cause increased uptake of radiopharmaceuticals, confounding differentiation from malignant tumors. Several studies have indicated that dual-time PET imaging holds promise in distinguishing between malignant and benign tumor processes. Nevertheless, the hour-long distribution period of radiopharmaceuticals post-injection complicates the determination of the optimal timing for the second scan, presenting challenges in both practical applications and research. Notably, we identified that delayed-scan PET imaging can be framed as an image-to-image conversion problem. Motivated by this insight, we propose a novel spatial-temporal guided diffusion transformer probabilistic model (st-DTPM) to solve the dual-time PET imaging prediction problem. The architecture leverages a U-net framework that integrates the patch-wise features of a CNN with the pixel-wise relevance of a transformer to capture local and global information, and then employs a conditional DDPM for image synthesis. As the spatial condition, we concatenate the early-scan PET image with the noisy PET image at every denoising step to guide the spatial distribution of the denoising sampling. As the temporal condition, we convert the diffusion time step and the delay time into a universal time vector and embed it in each layer of the architecture to further improve prediction accuracy. Experimental results demonstrated the superiority of our method over alternative approaches in preserving image quality and structural information, affirming its efficacy in the prediction task.
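One plausible reading of the "universal time vector" is a standard DDPM-style sinusoidal embedding applied to both scalars (diffusion step and scan delay) and summed into a single vector; the embedding dimension and the summation are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def time_embedding(t, dim):
    """Standard sinusoidal embedding of a scalar (as in DDPM/transformers)."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    ang = t * freqs
    return np.concatenate([np.sin(ang), np.cos(ang)])

def universal_time_vector(diffusion_step, delay_minutes, dim=32):
    """Hypothetical sketch: embed the diffusion step and the scan delay
    separately, then sum into one vector injected at each network layer."""
    return time_embedding(diffusion_step, dim) + time_embedding(delay_minutes, dim)
```

Because the delay time enters the conditioning vector directly, the same trained model can, in principle, be queried for different second-scan times.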
"st-DTPM: Spatial-Temporal Guided Diffusion Transformer Probabilistic Model for Delayed Scan PET Image Prediction," Ran Hong; Yuxia Huang; Lei Liu; Mengxiao Geng; Zhonghui Wu; Bingxuan Li; Xuemei Wang; Qiegen Liu. IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 10, no. 1, pp. 26–40. DOI: 10.1109/TRPMS.2025.3565797. Published: 2025-04-30.
Unsupervised learning methods effectively reduce the noise level of positron emission tomography (PET) images with limited training data. Recent research indicates that the performance of these methods is greatly influenced by the network architecture. However, previous studies have not investigated the optimal network architecture for unsupervised PET imaging. To address this gap, we developed a neural architecture search method for unsupervised PET image denoising. Our approach searches the architecture in two separate spaces: 1) the network-level search space and 2) the cell-level search space. Continuous relaxation techniques are utilized to reduce the time consumed by the search process. In our framework, high-count PET images were used to search the network architecture, while low-count PET images were used to optimize the operation parameters. After identifying the optimal architecture, we evaluated its performance on phantom data and patient data with a variety of tracers. Our experimental results demonstrated that the searched network outperformed other methods.
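Continuous relaxation, in the DARTS sense, replaces the discrete choice of operation on each cell edge with a softmax-weighted mixture, so the architecture parameters become differentiable and can be optimized by gradient descent. A minimal sketch with stand-in operations (the actual candidate ops are not specified here):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Candidate operations on one cell edge (toy stand-ins for conv/skip/none).
ops = [
    lambda x: x,                    # identity / skip connection
    lambda x: np.roll(x, 1),        # stand-in for a learned conv
    lambda x: np.zeros_like(x),     # "none" (edge pruned)
]
alpha = np.array([0.2, 1.5, -0.3])  # learnable architecture weights

x = np.arange(4.0)
# Relaxed edge output: softmax(alpha)-weighted sum of all candidate outputs.
mixed = sum(w * op(x) for w, op in zip(softmax(alpha), ops))
# After the search, the edge keeps only argmax(alpha): here the conv stand-in.
```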
"Neural Architecture Search for Unsupervised PET Image Denoising," Jinming Li; Jing Wang; Yang Lv; Puming Zhang; Jun Zhao. IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 10, no. 1, pp. 51–62. DOI: 10.1109/TRPMS.2025.3565655. Published: 2025-04-29.
Pub Date: 2025-04-21. DOI: 10.1109/TRPMS.2025.3560667
Sanaz Nazari-Farsani;Mojtaba Jafaritadi;Jonathan Fisher;Myungheon Chin;Garry Chinn;Mehdi Khalighi;Greg Zaharchuk;Craig S. Levin
The signal-to-noise ratio (SNR) of positron emission tomography (PET) images is determined by several factors, including the scanner geometry. The low system sensitivity caused by a short axial field of view (FOV) yields a low reconstructed-image SNR that can complicate clinical decision-making. A longer FOV (e.g., a total-body geometry) is therefore highly desirable, but it raises the scanner's cost by increasing the volume of crystals, the number of detectors, and the readout electronics. We developed a deep-learning framework to enhance the image quality of data acquired with a prototype brain-dedicated PET insert for PET/MRI with an axial FOV of just 2.8 cm. We retrospectively analyzed 18F-fluorodeoxyglucose PET scans of 28 patients with either glioblastoma (n = 9) or Alzheimer's disease (n = 19), acquired on a commercial PET/MRI scanner with a 60 cm diameter and a 25 cm axial FOV. From these data, a fault-tolerant reconstruction algorithm let us constrain the count statistics to a set of detectors in a single ring of the commercial system, matching the prototype's geometry, and thereby reconstruct low-statistics PET images mimicking those acquired with the 2.8 cm axial FOV brain PET prototype. A conditional generative adversarial network (cGAN) was trained and tested using the simulated short axial FOV images as input, with the paired 25 cm axial FOV images as the target. We performed five-fold cross-validation and compared the deep-learning (DL)-enhanced images to the target images using four metrics: 1) peak signal-to-noise ratio (PSNR); 2) root mean squared error (RMSE); 3) mean absolute error (MAE); and 4) structural similarity index (SSIM).
The DL-enhanced PET images from the 2.8 cm axial FOV system had a median PSNR of 39.09 (interquartile range (IQR): 32.80–45.32), a median SSIM of 0.98 (IQR: 0.97–0.99), a median RMSE of 0.07 (IQR: 0.04–0.09), and a median MAE of 0.004 (IQR: 0.000–0.009). We also assessed the pretrained cGAN model’s performance in a zero-shot denoising task using patient data collected with our first generation PETcoil system. The ability of the cGAN model to enhance the quality of PET images acquired with a short axial FOV suggests a potential method to provide high-quality, high-accuracy images comparable to those of large axial FOV systems.
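Three of the four reported metrics are one-liners over an image pair; a minimal reference implementation is shown below (SSIM requires windowed local statistics and is omitted). The `data_range` default is an assumption for unit-normalized images.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return float(10 * np.log10(data_range**2 / mse))

def rmse(ref, img):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def mae(ref, img):
    """Mean absolute error."""
    return float(np.mean(np.abs(ref - img)))
```

For example, a unit-range image offset from its reference by a constant 0.1 has RMSE = MAE = 0.1 and PSNR = 20 dB.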
{"title":"Image SNR Enhancement for a Short Axial FOV Brain PET System Using Generative Deep Learning","authors":"Sanaz Nazari-Farsani;Mojtaba Jafaritadi;Jonathan Fisher;Myungheon Chin;Garry Chinn;Mehdi Khalighi;Greg Zaharchuk;Craig S. Levin","doi":"10.1109/TRPMS.2025.3560667","DOIUrl":"https://doi.org/10.1109/TRPMS.2025.3560667","url":null,"abstract":"The signal-to-noise ratio (SNR) of positron emission tomography (PET) images is determined by several factors including the geometry of the scanner. Low system sensitivity caused by a short axial field of view (FOV) results in a low reconstructed image SNR that can complicate clinical decision-making. Therefore, a longer FOV is highly desirable (e.g., a total body geometry). However, this raises the scanner’s cost by increasing the volume of crystals, number of detectors, and readout electronics. We have developed a deep-learning framework to enhance the image quality of data acquired from a prototype brain-dedicated PET insert system for PET/MRI with an axial FOV of just 2.8 cm. We employed a retrospective analysis on 18F-fluorodeoxyglucose PET scans of 28 patients with either Glioblastoma (n = 9) or Alzheimer’s disease (n = 19) acquired on a commercial PET/MRI scanner with 60 cm diameter and 25 cm axial FOV. From this data we reconstructed low statistics PET images mimicking that acquired from the 2.8 cm axial FOV brain PET prototype using the 25-cm axial FOV commercial system dataset using a fault-tolerant reconstruction algorithm, which allowed us to constrain the count statistics from a set of detectors in a single ring of the latter system to match the geometry of the former system. A conditional generative adversarial network (cGAN) was trained and tested using the simulated short axial FOV images as input, with the paired 25 cm axial FOV image data as the target. 
We performed five-fold cross-validation and compared the deep learning (DL)-enhanced images to the target images using four metrics: 1) peak-signal-to-noise-ratio (PSNR); 2) root mean squared error (RMSE); 3) mean absolute error (MAE); and 4) structural similarity index (SSIM). The DL-enhanced PET images from the 2.8 cm axial FOV system had a median PSNR of 39.09 (interquartile range (IQR): 32.80–45.32), a median SSIM of 0.98 (IQR: 0.97–0.99), a median RMSE of 0.07 (IQR: 0.04–0.09), and a median MAE of 0.004 (IQR: 0.000–0.009). We also assessed the pretrained cGAN model’s performance in a zero-shot denoising task using patient data collected with our first generation PETcoil system. The ability of the cGAN model to enhance the quality of PET images acquired with a short axial FOV suggests a potential method to provide high-quality, high-accuracy images comparable to those of large axial FOV systems.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"10 1","pages":"41-50"},"PeriodicalIF":3.5,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145861200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
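The four evaluation metrics reported above (PSNR, RMSE, MAE, SSIM) can be sketched in plain NumPy. Note this `global_ssim` is a simplified single-window SSIM computed over the whole image, not the sliding Gaussian-window variant typically reported in papers; the authors' exact implementation is not specified here.

```python
import numpy as np

def rmse(ref, img):
    # root mean squared error
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def mae(ref, img):
    # mean absolute error
    return float(np.mean(np.abs(ref - img)))

def psnr(ref, img, data_range=1.0):
    # peak signal-to-noise ratio in dB, relative to the given dynamic range
    return float(20.0 * np.log10(data_range / rmse(ref, img)))

def global_ssim(ref, img, data_range=1.0):
    # single-window SSIM over the whole image (simplified, no sliding window)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = np.mean((ref - mu_x) * (img - mu_y))
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

For reference-based evaluation, as in the study, `ref` would be the 25 cm axial FOV target image and `img` the DL-enhanced short-FOV image.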
Boron Neutron Capture Therapy (BNCT) is an advanced cancer treatment that combines radiation therapy with targeted drug delivery. Patients are administered a boron compound that accumulates in tumour cells and are then irradiated with thermal neutrons that induce 10B(n,α)7Li reactions, whose high-LET products locally deposit a high dose to tumour cells. The additional 478 keV gamma ray generated by the de-excitation of 7Li can be detected outside the patient’s body and can be used for dose localization and monitoring using the SPECT technique. In this study, we show the first experimental tomographic results obtained with a prototype BNCT-SPECT system at the LENA neutron facility in Pavia, Italy. Measurements are acquired with the BeNEdiCTE detection module, based on a 5 cm × 5 cm × 2 cm LaBr3(Ce+Sr) monolithic scintillator crystal coupled to an 8 × 8 matrix of Near Ultraviolet High-Density silicon photomultipliers (SiPMs). The system shows good performance in detecting the incoming radiation of interest and in reconstructing 2-D planar images of boron samples irradiated with thermal neutrons. Using the Software for Tomographic Image Reconstruction (STIR), we show a successful 3-D reconstruction of two vials containing 7371 ppm of 10B placed 1.4 cm apart, starting from four partial projections and using 10 iterations of the Maximum Likelihood Expectation Maximization (MLEM) algorithm.
{"title":"Design and Validation of a SPECT Prototype for Treatment Monitoring in BNCT and First Experimental Tomographic Results","authors":"T. Ferri;A. Caracciolo;F. Ghisio;M. Piroddi;M. Pandocchi;C. Fiorini;M. Carminati;V. Pascali;N. Protti;D. Mazzucconi;L. Grisoni;D. Ramos;N. Ferrara;K. Thielemans;G. Borghi","doi":"10.1109/TRPMS.2025.3562079","DOIUrl":"https://doi.org/10.1109/TRPMS.2025.3562079","url":null,"abstract":"Boron Neutron Capture Therapy (BNCT) is an advanced cancer treatment that combines radiation therapy with targeted drug delivery. Patients are administered a boron compound that accumulates in tumour cells and are then irradiated with thermal neutrons that induce 10B(n,<inline-formula> <tex-math>$alpha $ </tex-math></inline-formula>)7Li reactions, whose high-LET products locally deposit a high dose to tumour cells. The additional 478 keV gamma ray generated by the de-excitation of 7Li can be detected outside the patient’s body and can be used for dose localization and monitoring using the SPECT technique. In this study, we show the first experimental tomographic results obtained with a prototype BNCT-SPECT system at the LENA neutron facility in Pavia, Italy. Measurements are acquired with the BeNEdiCTE detection module, based on a 5 cm <inline-formula> <tex-math>$times $ </tex-math></inline-formula> 5 cm <inline-formula> <tex-math>$times $ </tex-math></inline-formula> 2 cm LaBr3(Ce+Sr) monolithic scintillator crystal coupled to an <inline-formula> <tex-math>$8times 8$ </tex-math></inline-formula> matrix of Near Ultraviolet High-Density silicon photomultipliers (SiPMs). The system shows good performance in detecting the incoming radiation of interest and in reconstructing 2-D planar images of boron samples irradiated with thermal neutrons. 
Thanks to the aid of Software for Tomographic Image Reconstruction (STIR), we show a successful 3-D reconstruction of 2 vials containing 7371 ppm of 10B placed at 1.4 cm distance, starting from four partial projections and using 10 iterations of the Maximum Likelihood Expectation Maximization (MLEM) algorithm.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"10 1","pages":"126-136"},"PeriodicalIF":3.5,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10969104","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145861234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
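The reconstruction described above uses MLEM, whose multiplicative update can be sketched in a few lines. This is a minimal dense-matrix illustration, assuming a system matrix `A` as the forward projector; the actual STIR implementation operates on 3-D sinograms with sparse, geometry-aware projectors.

```python
import numpy as np

def mlem(sino, A, n_iter=10, eps=1e-12):
    """Maximum Likelihood Expectation Maximization (dense toy version).

    sino : measured projection data, shape (m,)
    A    : system matrix mapping image to projections, shape (m, n)
    Returns a non-negative activity estimate of shape (n,).
    """
    x = np.ones(A.shape[1])      # uniform, strictly positive initial estimate
    sens = A.sum(axis=0)         # sensitivity image: backprojection of ones
    for _ in range(n_iter):
        proj = A @ x                             # forward projection
        ratio = sino / np.maximum(proj, eps)     # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, eps)  # multiplicative update
    return x
```

The multiplicative form preserves non-negativity automatically, which is why MLEM is a natural fit for emission tomography count data.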
Pub Date: 2025-04-14 DOI: 10.1109/TRPMS.2025.3560267
Chunyuan Liu;Tongyuan Huang;Yunze He;Huayu Chen;Zipeng Wu;Yihan Yang
Medical lesion segmentation plays a crucial role in computer-aided diagnosis, yet acquiring fully annotated images remains a significant challenge. Semi-supervised learning has shown great potential in scenarios with limited labeled data. However, pseudo-labels, commonly used for unlabeled data, may adversely affect model performance due to their inherent inaccuracies. To address this issue, we propose a semi-supervised lesion segmentation framework based on a contrast-guided diffusion model (CGDM). To mitigate the impact of inaccurate pseudo-labels, we exploit the contrastive relationship between lesion and healthy images, restoring lesion regions to a healthy-like appearance. By directly incorporating this contrastive semantic information during training, we alleviate the model’s over-reliance on pseudo-labels and mitigate its detrimental effects on model performance. Furthermore, we introduce a structural similarity contrast (SSC) loss function to balance supervised and unsupervised learning. This function constructs sample pairs for contrastive learning, maximizing the disparity between paired lesion and healthy images while minimizing the resemblance of lesion regions in unpaired lesion images. Experimental results on the BUSI, BraTS2018, and KiTS19 datasets demonstrate that CGDM achieves superior performance compared to state-of-the-art semi-supervised segmentation methods.
{"title":"Semi-Supervised Medical Lesion Image Segmentation Based on a Contrast-Guided Diffusion Model","authors":"Chunyuan Liu;Tongyuan Huang;Yunze He;Huayu Chen;Zipeng Wu;Yihan Yang","doi":"10.1109/TRPMS.2025.3560267","DOIUrl":"https://doi.org/10.1109/TRPMS.2025.3560267","url":null,"abstract":"Medical lesion segmentation plays a crucial role in computer-aided diagnosis, yet acquiring fully annotated images remains a significant challenge. Semi-supervised learning has shown great potential in scenarios with limited labeled data. However, pseudo-labels, commonly used for unlabeled data, may adversely affect model performance due to their inherent inaccuracies. To address this issue, we propose a semi-supervised lesion segmentation framework based on a contrast-guided diffusion model (CGDM). To mitigate the impact of inaccurate pseudo-labels, we exploit the contrastive relationship between lesion and healthy images, restoring lesion regions to a healthy-like appearance. By directly incorporating this contrastive semantic information during training, we alleviate the model’s over-reliance on pseudo-labels and mitigate its detrimental effects on model performance. Furthermore, we introduce a structural similarity contrast (SSC) loss function to balance supervised and unsupervised learning. This function constructs sample pairs for contrastive learning, maximizing the disparity between paired lesion and healthy images while minimizing the resemblance of lesion regions in unpaired lesion images. 
Experimental results on the BUSI, BraTS2018, and KiTS19 datasets demonstrate that CGDM achieves superior performance compared to state-of-the-art semi-supervised segmentation methods.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 8","pages":"1036-1050"},"PeriodicalIF":3.5,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145435710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
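The abstract above describes the SSC loss only qualitatively (push paired lesion/healthy images apart, keep lesion regions of unpaired lesion images dissimilar). The paper's exact formulation is not given here, so the following is a toy margin-based contrastive loss illustrating the idea; the names `ssc_loss` and `margin`, and the cosine-similarity choice, are illustrative assumptions, not the authors' definition.

```python
import numpy as np

def cosine_sim(a, b, eps=1e-8):
    # cosine similarity between two flattened images
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def ssc_loss(lesion, restored_healthy, unpaired_lesion, margin=0.5):
    """Toy structural-similarity-contrast loss (hypothetical form).

    Penalizes similarity between a lesion image and its restored
    healthy-like counterpart (maximize their disparity), and penalizes
    resemblance to an unpaired lesion image beyond a margin.
    """
    pos = cosine_sim(lesion, restored_healthy)   # want this small
    neg = cosine_sim(lesion, unpaired_lesion)    # want this below margin
    return pos + max(neg - margin, 0.0)
```

In a real framework this would operate on feature maps from the diffusion model rather than raw pixels, and would be minimized jointly with the supervised segmentation loss.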