Diffusion probabilistic models (DPMs) have been demonstrated to be effective for denoising positron emission tomography (PET) images due to their ability to model complex data distributions. However, limitations in efficiency, accuracy, and generalizability remain open challenges in this area. In PET denoising, where high fidelity to the ground truth is critical, DPMs often require a large number of iterations and tend to offer limited quantitative accuracy. Moreover, traditional DPMs struggle to model variabilities in the data distribution arising from the use of multiple scanners and tracers. To address these issues, we propose a dual-branch gating diffusion transformer (DG-DiT) network for multi-tracer and multi-scanner PET denoising. The proposed DG-DiT exploits the strong distribution modeling capabilities of a diffusion transformer (DiT) to learn prior knowledge from a compact and regularized latent space. The design of the latent space enables efficient few-step diffusion. In addition, an image restoration transformer (IRT) model is employed for generating the final denoised image. The DiT backbone and the IRT both utilize a dual-branch gating mechanism to efficiently fuse information from multiple inputs. We conducted extensive experiments on multi-tracer and multi-scanner datasets. The results demonstrate that the proposed DG-DiT model achieves the highest quantitative accuracy across every scanner and tracer, with a PSNR improvement of up to 0.2 dB compared to several state-of-the-art deep learning models. Contrast-to-noise ratio evaluation shows that the proposed model is able to recover contrast in small and critical brain regions while effectively reducing noise. This suggests that the proposed DG-DiT model can consistently deliver superior denoising performance.
{"title":"DG-DiT: Dual-Branch Gating Diffusion Transformer for Multi-Tracer and Multi-Scanner Brain PET Image Denoising.","authors":"Ziyuan Zhou, Fan Yang, Tzu-An Song, Bowen Lei, Yubo Zhang, Joyita Dutta","doi":"10.1109/trpms.2025.3630161","DOIUrl":"10.1109/trpms.2025.3630161","url":null,"abstract":"<p><p>Diffusion probabilistic models (DPMs) have been demonstrated to be effective for denoising positron emission tomography (PET) images due to their ability to model complex data distributions. However, limitations in efficiency, accuracy, and generalizability remain open challenges in this area. In PET denoising, where high fidelity to the ground truth is critical, DPMs often require a large number of iterations and tend to offer limited quantitative accuracy. Moreover, traditional DPMs struggle to model variabilities in the data distribution arising from the use of multiple scanners and tracers. To address these issues, we propose a dual-branch gating diffusion transformer (DG-DiT) network for multi-tracer and multi-scanner PET denoising. The proposed DG-DiT exploits the strong distribution modeling capabilities of a diffusion transformer (DiT) to learn prior knowledge from a compact and regularized latent space. The design of the latent space enables efficient few-step diffusion. In addition, an image restoration transformer (IRT) model is employed for generating the final denoised image. The DiT backbone and the IRT both utilize a dual-branch gating mechanism to efficiently fuse information from multiple inputs. We conducted extensive experiments on multi-tracer and multi-scanner datasets. The results demonstrate that the proposed DG-DiT model achieves the highest quantitative accuracy across every scanner and tracer, with a PSNR improvement of up to 0.2 dB compared to several state-of-the-art deep learning models. 
Contrast-to-noise ratio evaluation shows that the proposed model is able to recover contrast in small and critical brain regions while effectively reducing noise. This suggests that the proposed DG-DiT model can consistently deliver superior denoising performance.</p>","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":" ","pages":""},"PeriodicalIF":3.5,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12931958/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147311111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
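The abstract above describes, but does not show, the dual-branch gating mechanism used to fuse information from multiple inputs. As a minimal NumPy sketch of the general idea — a learned sigmoid gate blending two feature branches — the following is an illustration under my own assumptions (names, shapes, and the exact gating form are hypothetical, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_branch_gate(feat_a, feat_b, w_a, w_b, bias):
    """Fuse two feature branches with an elementwise learned gate.

    g   = sigmoid(feat_a @ w_a + feat_b @ w_b + bias)
    out = g * feat_a + (1 - g) * feat_b

    Because g lies in (0, 1), the output is an elementwise convex
    combination of the two branches.
    """
    gate = sigmoid(feat_a @ w_a + feat_b @ w_b + bias)
    return gate * feat_a + (1.0 - gate) * feat_b

rng = np.random.default_rng(0)
d = 8
a = rng.standard_normal((4, d))        # e.g. noisy-image features (assumed)
b = rng.standard_normal((4, d))        # e.g. scanner/tracer conditioning features (assumed)
w_a = rng.standard_normal((d, d)) * 0.1
w_b = rng.standard_normal((d, d)) * 0.1
fused = dual_branch_gate(a, b, w_a, w_b, bias=0.0)
```

In a real network the gate weights would be trained end to end; the sketch only shows why a gate is a cheap way to let the model decide, per feature, how much each input stream should contribute.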
Pub Date : 2025-11-04. DOI: 10.1109/TRPMS.2025.3623747
IEEE Transactions on Radiation and Plasma Medical Sciences Publication Information. Vol. 9, no. 8, pp. C2-C2.
Pub Date : 2025-11-04. DOI: 10.1109/TRPMS.2025.3624770
IEEE DataPort. Vol. 9, no. 8, pp. 1147-1147.
Pub Date : 2025-11-04. DOI: 10.1109/TRPMS.2025.3624772
Member Get-a-Member (MGM) Program. Vol. 9, no. 8, pp. 1148-1148.
Pub Date : 2025-11-04. DOI: 10.1109/TRPMS.2025.3623749
IEEE Transactions on Radiation and Plasma Medical Sciences Information for Authors. Vol. 9, no. 8, pp. C3-C3.
Pub Date : 2025-10-13. DOI: 10.1109/TRPMS.2025.3619872
Yassir Najmaoui, Yanis Chemli, Maxime Toussaint, Yoann Petibon, Baptiste Marty, Kathryn Fontaine, Jean-Dominique Gallezot, Gašper Razdevšek, Matic Orehar, Maeva Dhaynaut, Nicolas Guehl, Rok Dolenec, Rok Pestotnik, Keith Johnson, Jinsong Ouyang, Marc Normandin, Marc-André Tétrault, Roger Lecomte, Georges El Fakhri, Thibault Marin
Image reconstruction for positron emission tomography (PET) requires an accurate model of the PET scanner geometry and degrading factors to produce high-quality and clinically meaningful images. It is typically implemented by scanner manufacturers, with proprietary software designed specifically for each scanner. This limits the ability to perform direct comparisons between scanners or to develop advanced image reconstruction algorithms. Open-source image reconstruction software can offer an alternative to manufacturer implementations, allowing more control and portability. Several existing software packages offer a wide range of features and interfaces, but there is still a need for an engine that simultaneously offers reusable code, a fast implementation, and convenient interfaces for interoperability and extensibility. In this work, we introduce YRT-PET (Yale Reconstruction Toolkit for Positron Emission Tomography), an open-source toolkit for PET image reconstruction that aims for flexibility, reproducibility, speed, and interoperability with existing research software. The toolkit is implemented in C++ with CUDA-enabled GPU acceleration, relies on a plugin system to facilitate use with multiple scanners, and offers Python bindings to enable the development of advanced algorithms. It includes support for list-mode/histogram data formats, multiple PET projectors, incorporation of time-of-flight information, event-by-event rigid motion correction, and point-spread function modeling. It can incorporate correction factors such as normalization, randoms, and scatter, obtained from scanner-specific plugins or provided by the user. The toolkit also includes an experimental module for scatter estimation without time-of-flight. To evaluate the capabilities of the software, two different scanners were tested in four different contexts: dynamic imaging, motion correction, deep image prior, and reconstruction for a limited-angle scanner geometry with time-of-flight.
Comparisons with existing tools demonstrated good agreement in image quality and the effectiveness of the correction methods. The proposed software toolkit offers high versatility and potential for research, including the development of novel reconstruction algorithms and new PET scanner systems.
{"title":"YRT-PET: An Open-Source GPU-accelerated Image Reconstruction Engine for Positron Emission Tomography.","authors":"Yassir Najmaoui, Yanis Chemli, Maxime Toussaint, Yoann Petibon, Baptiste Marty, Kathryn Fontaine, Jean-Dominique Gallezot, Gašper Razdevšek, Matic Orehar, Maeva Dhaynaut, Nicolas Guehl, Rok Dolenec, Rok Pestotnik, Keith Johnson, Jinsong Ouyang, Marc Normandin, Marc-André Tétrault, Roger Lecomte, Georges El Fakhri, Thibault Marin","doi":"10.1109/TRPMS.2025.3619872","DOIUrl":"10.1109/TRPMS.2025.3619872","url":null,"abstract":"<p><p>Image reconstruction for positron emission tomography (PET) requires an accurate model of the PET scanner geometry and degrading factors to produce high-quality and clinically meaningful images. It is typically implemented by scanner manufacturers, with proprietary software designed specifically for each scanner. This limits the ability to perform direct comparisons between scanners or to develop advanced image reconstruction algorithms. Open-source image reconstruction software can offer an alternative to manufacturer implementations, allowing more control and portability. Several existing software packages offer a wide range of features and interfaces, but there is still a need for an engine that simultaneously offers reusable code, fast implementation and convenient interfaces for interoperability and extensibility. In this work, we introduce YRT-PET (Yale Reconstruction Toolkit for Positron Emission Tomography), an open-source toolkit for PET image reconstruction that aims for flexibility, reproducibility, speed, and interoperability with existing research software. The toolkit is implemented in C++ with CUDA-enabled GPU acceleration, relies on a plugin system to facilitate the use with multiple scanners, and offers Python bindings to enable the development of advanced algorithms. 
It includes support for list-mode/histogram data formats, multiple PET projectors, incorporation of time-of-flight information, event-by-event rigid motion correction, point-spread function modeling. It can incorporate correction factors such as normalization, randoms and scatter, obtained from scanner-specific plugins or provided by the user. The toolkit also includes an experimental module for scatter estimation without time-of-flight. To evaluate the capabilities of the software, two different scanners in four different contexts were tested: dynamic imaging, motion correction, deep image prior, and reconstruction for a limited-angle scanner geometry with time-of-flight. Comparisons with existing tools demonstrated good agreement in image quality and the effectiveness of the correction methods. The proposed software toolkit offers high versatility and potential for research, including the development of novel reconstruction algorithms and new PET scanner systems.</p>","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":" ","pages":""},"PeriodicalIF":3.5,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12714321/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145805973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
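YRT-PET's own Python API is not reproduced here. To illustrate the kind of iterative reconstruction such an engine performs, below is a toy MLEM (maximum-likelihood expectation-maximization) update on a dense system matrix — the textbook algorithm for Poisson PET data, not YRT-PET code; the matrix sizes and variable names are illustrative:

```python
import numpy as np

def mlem(A, y, n_iters=50, eps=1e-12):
    """Standard MLEM reconstruction for Poisson-distributed data y ~ A @ x.

    Multiplicative update: x <- x / (A^T 1) * A^T (y / (A x)).
    Starting from a positive image, every iterate stays nonnegative.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image, A^T 1
    for _ in range(n_iters):
        proj = A @ x                           # forward projection
        ratio = y / np.maximum(proj, eps)      # measured / estimated counts
        x = x / np.maximum(sens, eps) * (A.T @ ratio)
    return x

rng = np.random.default_rng(1)
A = rng.random((40, 10))          # toy system matrix (detector pairs x voxels)
x_true = rng.random(10) * 5.0
y = A @ x_true                    # noise-free projections for this demo
x_hat = mlem(A, y, n_iters=500)
```

A production engine such as the one described above replaces the dense matrix with on-the-fly GPU projectors, adds time-of-flight weighting, and folds normalization, randoms, and scatter into the forward model, but the multiplicative structure of the update is the same.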
Pub Date : 2025-10-03. DOI: 10.1109/TRPMS.2025.3617225
Farhan Sadik, Christopher L Newman, Stuart J Warden, Rachel K Surowiec
Rigid-motion artifacts, such as cortical bone streaking and trabecular smearing, hinder in vivo assessment of bone microstructure in high-resolution peripheral quantitative computed tomography (HR-pQCT). Despite various motion grading techniques, no motion correction methods exist due to the lack of standardized degradation models. We optimize a conventional sinogram-based method to simulate motion artifacts in HR-pQCT images, creating paired datasets of motion-corrupted images and their corresponding ground truth, which enables seamless integration into supervised learning frameworks for motion correction. As such, we propose an Edge-enhanced Self-attention Wasserstein Generative Adversarial Network with Gradient Penalty (ESWGAN-GP) to address motion artifacts in both simulated (source) and real-world (target) datasets. The model incorporates edge-enhancing skip connections to preserve trabecular edges and self-attention mechanisms to capture long-range dependencies, facilitating motion correction. A visual geometry group (VGG)-based perceptual loss is used to reconstruct fine microstructural features. The ESWGAN-GP achieves a mean signal-to-noise ratio (SNR) of 26.78, structural similarity index measure (SSIM) of 0.81, and visual information fidelity (VIF) of 0.76 on the source dataset, with improved performance on the target dataset: an SNR of 29.31, SSIM of 0.87, and VIF of 0.81. The proposed simulation is a simplified representation of real-world motion and may not fully capture the complexity of in vivo motion artifacts.
{"title":"Simulating Sinogram-Domain Motion and Correcting Image-Domain Artifacts Using Deep Learning in HR-pQCT Bone Imaging.","authors":"Farhan Sadik, Christopher L Newman, Stuart J Warden, Rachel K Surowiec","doi":"10.1109/trpms.2025.3617225","DOIUrl":"10.1109/trpms.2025.3617225","url":null,"abstract":"<p><p>Rigid-motion artifacts, such as cortical bone streaking and trabecular smearing, hinder in vivo assessment of bone microstructures in high-resolution peripheral quantitative computed tomography (HR-pQCT). Despite various motion grading techniques, no motion correction methods exist due to the lack of standardized degradation models. We optimize a conventional sinogram-based method to simulate motion artifacts in HR-pQCT images, creating paired datasets of motion-corrupted images and their corresponding ground truth, which enables seamless integration into supervised learning frameworks for motion correction. As such, we propose an Edge-enhanced Self-attention Wasserstein Generative Adversarial Network with Gradient Penalty (ESWGAN-GP) to address motion artifacts in both simulated (source) and real-world (target) datasets. The model incorporates edge-enhancing skip connections to preserve trabecular edges and self-attention mechanisms to capture long-range dependencies, facilitating motion correction. A visual geometry group (VGG)-based perceptual loss is used to reconstruct fine micro-structural features. The ESWGAN-GP achieves a mean signal-to-noise ratio (SNR) of 26.78, structural similarity index measure (SSIM) of 0.81, and visual information fidelity (VIF) of 0.76 for the source dataset, while showing improved performance on the target dataset with an SNR of 29.31, SSIM of 0.87, and VIF of 0.81. The proposed methods address a simplified representation of real-world motion that may not fully capture the complexity of in vivo motion artifacts. 
Nevertheless, because motion artifacts present one of the foremost challenges to more widespread adoption of this modality, these methods represent an important initial step toward implementing deep learning-based motion correction in HR-pQCT.</p>","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":" ","pages":""},"PeriodicalIF":3.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12574536/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145432618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
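The optimized degradation model itself is not shown in the abstract. As a minimal sketch of the general sinogram-domain idea — perturbing the projections acquired after a motion event so the reconstruction exhibits streak and smear artifacts — here is a deliberately crude toy (a constant detector-axis shift; real rigid motion produces a view-angle-dependent shift, and the authors' pipeline is not reproduced here):

```python
import numpy as np

def corrupt_sinogram(sino, onset_view, shift_px):
    """Simulate an abrupt in-plane translation partway through the scan.

    All views acquired after `onset_view` are circularly shifted along
    the detector axis. Reconstructing such a sinogram yields the kind of
    streaking/smearing artifacts that motion correction must undo.
    """
    out = sino.copy()
    out[onset_view:] = np.roll(out[onset_view:], shift_px, axis=1)
    return out

rng = np.random.default_rng(2)
sino = rng.random((180, 64))      # (projection views, detector bins), toy data
corrupted = corrupt_sinogram(sino, onset_view=90, shift_px=3)
```

The value of even a crude model like this is that it produces *paired* data — the clean sinogram gives the ground-truth image, the corrupted one gives the network input — which is exactly what supervised artifact-correction training needs.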
Pub Date : 2025-09-05. DOI: 10.1109/TRPMS.2025.3599622
IEEE Transactions on Radiation and Plasma Medical Sciences Publication Information. Vol. 9, no. 7, pp. C2-C2.
Pub Date : 2025-09-05. DOI: 10.1109/TRPMS.2025.3600231
Member Get-a-Member (MGM) Program. Vol. 9, no. 7, pp. 979-979.
Pub Date : 2025-09-05. DOI: 10.1109/TRPMS.2025.3600229
IEEE DataPort. Vol. 9, no. 7, pp. 978-978.