Pub Date : 2025-11-04
DOI: 10.1109/TRPMS.2025.3623749
{"title":"IEEE Transactions on Radiation and Plasma Medical Sciences Information for Authors","authors":"","doi":"10.1109/TRPMS.2025.3623749","DOIUrl":"https://doi.org/10.1109/TRPMS.2025.3623749","url":null,"abstract":"","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 8","pages":"C3-C3"},"PeriodicalIF":3.5,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11225913","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145435695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-13
DOI: 10.1109/TRPMS.2025.3619872
Yassir Najmaoui, Yanis Chemli, Maxime Toussaint, Yoann Petibon, Baptiste Marty, Kathryn Fontaine, Jean-Dominique Gallezot, Gašper Razdevšek, Matic Orehar, Maeva Dhaynaut, Nicolas Guehl, Rok Dolenec, Rok Pestotnik, Keith Johnson, Jinsong Ouyang, Marc Normandin, Marc-André Tétrault, Roger Lecomte, Georges El Fakhri, Thibault Marin
Image reconstruction for positron emission tomography (PET) requires an accurate model of the PET scanner geometry and degrading factors to produce high-quality and clinically meaningful images. It is typically implemented by scanner manufacturers, with proprietary software designed specifically for each scanner. This limits the ability to perform direct comparisons between scanners or to develop advanced image reconstruction algorithms. Open-source image reconstruction software can offer an alternative to manufacturer implementations, allowing more control and portability. Several existing software packages offer a wide range of features and interfaces, but there is still a need for an engine that simultaneously offers reusable code, fast implementation, and convenient interfaces for interoperability and extensibility. In this work, we introduce YRT-PET (Yale Reconstruction Toolkit for Positron Emission Tomography), an open-source toolkit for PET image reconstruction that aims for flexibility, reproducibility, speed, and interoperability with existing research software. The toolkit is implemented in C++ with CUDA-enabled GPU acceleration, relies on a plugin system to facilitate use with multiple scanners, and offers Python bindings to enable the development of advanced algorithms. It includes support for list-mode/histogram data formats, multiple PET projectors, incorporation of time-of-flight information, event-by-event rigid motion correction, and point-spread function modeling. It can incorporate correction factors such as normalization, randoms, and scatter, obtained from scanner-specific plugins or provided by the user. The toolkit also includes an experimental module for scatter estimation without time-of-flight. To evaluate the capabilities of the software, two different scanners were tested in four different contexts: dynamic imaging, motion correction, deep image prior, and reconstruction for a limited-angle scanner geometry with time-of-flight.
Comparisons with existing tools demonstrated good agreement in image quality and the effectiveness of the correction methods. The proposed software toolkit offers high versatility and potential for research, including the development of novel reconstruction algorithms and new PET scanner systems.
{"title":"YRT-PET: An Open-Source GPU-accelerated Image Reconstruction Engine for Positron Emission Tomography.","authors":"Yassir Najmaoui, Yanis Chemli, Maxime Toussaint, Yoann Petibon, Baptiste Marty, Kathryn Fontaine, Jean-Dominique Gallezot, Gašper Razdevšek, Matic Orehar, Maeva Dhaynaut, Nicolas Guehl, Rok Dolenec, Rok Pestotnik, Keith Johnson, Jinsong Ouyang, Marc Normandin, Marc-André Tétrault, Roger Lecomte, Georges El Fakhri, Thibault Marin","doi":"10.1109/TRPMS.2025.3619872","DOIUrl":"10.1109/TRPMS.2025.3619872","url":null,"abstract":"<p><p>Image reconstruction for positron emission tomography (PET) requires an accurate model of the PET scanner geometry and degrading factors to produce high-quality and clinically meaningful images. It is typically implemented by scanner manufacturers, with proprietary software designed specifically for each scanner. This limits the ability to perform direct comparisons between scanners or to develop advanced image reconstruction algorithms. Open-source image reconstruction software can offer an alternative to manufacturer implementations, allowing more control and portability. Several existing software packages offer a wide range of features and interfaces, but there is still a need for an engine that simultaneously offers reusable code, fast implementation and convenient interfaces for interoperability and extensibility. In this work, we introduce YRT-PET (Yale Reconstruction Toolkit for Positron Emission Tomography), an open-source toolkit for PET image reconstruction that aims for flexibility, reproducibility, speed, and interoperability with existing research software. The toolkit is implemented in C++ with CUDA-enabled GPU acceleration, relies on a plugin system to facilitate the use with multiple scanners, and offers Python bindings to enable the development of advanced algorithms. 
It includes support for list-mode/histogram data formats, multiple PET projectors, incorporation of time-of-flight information, event-by-event rigid motion correction, point-spread function modeling. It can incorporate correction factors such as normalization, randoms and scatter, obtained from scanner-specific plugins or provided by the user. The toolkit also includes an experimental module for scatter estimation without time-of-flight. To evaluate the capabilities of the software, two different scanners in four different contexts were tested: dynamic imaging, motion correction, deep image prior, and reconstruction for a limited-angle scanner geometry with time-of-flight. Comparisons with existing tools demonstrated good agreement in image quality and the effectiveness of the correction methods. The proposed software toolkit offers high versatility and potential for research, including the development of novel reconstruction algorithms and new PET scanner systems.</p>","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":" ","pages":""},"PeriodicalIF":3.5,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12714321/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145805973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
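At the core of a reconstruction engine like the one described above is an iterative maximum-likelihood update. The following is a generic textbook MLEM sketch, not YRT-PET's actual implementation; the tiny 3-LOR, 2-pixel system matrix is invented purely for illustration:

```python
import numpy as np

# Toy system matrix: 3 lines of response (rows) x 2 image pixels (columns).
# A[i, j] is the probability that an emission in pixel j is detected on LOR i.
A = np.array([[0.8, 0.1],
              [0.1, 0.8],
              [0.5, 0.5]])

true_image = np.array([4.0, 1.0])
y = A @ true_image             # noise-free "measured" projections

x = np.ones(2)                 # uniform initial estimate
sens = A.sum(axis=0)           # sensitivity image, A^T 1
for _ in range(500):
    ratio = y / (A @ x)        # measured / estimated projections
    x *= (A.T @ ratio) / sens  # multiplicative MLEM update

# With consistent noise-free data, x approaches the true image [4.0, 1.0]
print(np.round(x, 2))
```

Real engines replace the dense matrix with on-the-fly GPU projectors and fold time-of-flight, normalization, randoms, and scatter terms into the forward model, but the multiplicative update has the same shape.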
Pub Date : 2025-10-03
DOI: 10.1109/trpms.2025.3617225
Farhan Sadik, Christopher L Newman, Stuart J Warden, Rachel K Surowiec
Rigid-motion artifacts, such as cortical bone streaking and trabecular smearing, hinder in vivo assessment of bone microstructures in high-resolution peripheral quantitative computed tomography (HR-pQCT). Despite various motion grading techniques, no motion correction methods exist due to the lack of standardized degradation models. We optimize a conventional sinogram-based method to simulate motion artifacts in HR-pQCT images, creating paired datasets of motion-corrupted images and their corresponding ground truth, which enables seamless integration into supervised learning frameworks for motion correction. As such, we propose an Edge-enhanced Self-attention Wasserstein Generative Adversarial Network with Gradient Penalty (ESWGAN-GP) to address motion artifacts in both simulated (source) and real-world (target) datasets. The model incorporates edge-enhancing skip connections to preserve trabecular edges and self-attention mechanisms to capture long-range dependencies, facilitating motion correction. A visual geometry group (VGG)-based perceptual loss is used to reconstruct fine micro-structural features. The ESWGAN-GP achieves a mean signal-to-noise ratio (SNR) of 26.78, structural similarity index measure (SSIM) of 0.81, and visual information fidelity (VIF) of 0.76 for the source dataset, while showing improved performance on the target dataset with an SNR of 29.31, SSIM of 0.87, and VIF of 0.81. The proposed methods address a simplified representation of real-world motion that may not fully capture the complexity of in vivo motion artifacts. Nevertheless, because motion artifacts present one of the foremost challenges to more widespread adoption of this modality, these methods represent an important initial step toward implementing deep learning-based motion correction in HR-pQCT.
{"title":"Simulating Sinogram-Domain Motion and Correcting Image-Domain Artifacts Using Deep Learning in HR-pQCT Bone Imaging.","authors":"Farhan Sadik, Christopher L Newman, Stuart J Warden, Rachel K Surowiec","doi":"10.1109/trpms.2025.3617225","DOIUrl":"10.1109/trpms.2025.3617225","url":null,"abstract":"<p><p>Rigid-motion artifacts, such as cortical bone streaking and trabecular smearing, hinder in vivo assessment of bone microstructures in high-resolution peripheral quantitative computed tomography (HR-pQCT). Despite various motion grading techniques, no motion correction methods exist due to the lack of standardized degradation models. We optimize a conventional sinogram-based method to simulate motion artifacts in HR-pQCT images, creating paired datasets of motion-corrupted images and their corresponding ground truth, which enables seamless integration into supervised learning frameworks for motion correction. As such, we propose an Edge-enhanced Self-attention Wasserstein Generative Adversarial Network with Gradient Penalty (ESWGAN-GP) to address motion artifacts in both simulated (source) and real-world (target) datasets. The model incorporates edge-enhancing skip connections to preserve trabecular edges and self-attention mechanisms to capture long-range dependencies, facilitating motion correction. A visual geometry group (VGG)-based perceptual loss is used to reconstruct fine micro-structural features. The ESWGAN-GP achieves a mean signal-to-noise ratio (SNR) of 26.78, structural similarity index measure (SSIM) of 0.81, and visual information fidelity (VIF) of 0.76 for the source dataset, while showing improved performance on the target dataset with an SNR of 29.31, SSIM of 0.87, and VIF of 0.81. The proposed methods address a simplified representation of real-world motion that may not fully capture the complexity of in vivo motion artifacts. 
Nevertheless, because motion artifacts present one of the foremost challenges to more widespread adoption of this modality, these methods represent an important initial step toward implementing deep learning-based motion correction in HR-pQCT.</p>","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":" ","pages":""},"PeriodicalIF":3.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12574536/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145432618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
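The paired-data idea in the abstract above rests on a sinogram-domain degradation model: corrupt a clean sinogram with a rigid shift applied to the views acquired after the motion event. A minimal numpy sketch of that idea (integer-pixel translation only; the paper's simulation is more elaborate and its parameters are not reproduced here):

```python
import numpy as np

def corrupt_sinogram(sino, motion_start, shift_px):
    """Apply a rigid in-plane translation to all projection views acquired
    after view index `motion_start`, mimicking a subject who moves partway
    through the scan. Integer-pixel shift for simplicity."""
    out = sino.copy()
    out[motion_start:] = np.roll(sino[motion_start:], shift_px, axis=1)
    return out

rng = np.random.default_rng(0)
clean = rng.random((180, 64))        # 180 views x 64 detector bins
corrupt = corrupt_sinogram(clean, motion_start=90, shift_px=3)

# Pre-motion views are untouched; post-motion views are shifted.
assert np.array_equal(corrupt[:90], clean[:90])
assert np.array_equal(corrupt[90:], np.roll(clean[90:], 3, axis=1))
```

Reconstructing `corrupt` and `clean` with the same algorithm yields the motion-corrupted/ground-truth image pairs needed for supervised training.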
Pub Date : 2025-09-05
DOI: 10.1109/TRPMS.2025.3599622
{"title":"IEEE Transactions on Radiation and Plasma Medical Sciences Publication Information","authors":"","doi":"10.1109/TRPMS.2025.3599622","DOIUrl":"https://doi.org/10.1109/TRPMS.2025.3599622","url":null,"abstract":"","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 7","pages":"C2-C2"},"PeriodicalIF":3.5,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11152387","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144997933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-09-05
DOI: 10.1109/TRPMS.2025.3600231
{"title":"Member Get-a-Member (MGM) Program","authors":"","doi":"10.1109/TRPMS.2025.3600231","DOIUrl":"https://doi.org/10.1109/TRPMS.2025.3600231","url":null,"abstract":"","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 7","pages":"979-979"},"PeriodicalIF":3.5,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11152382","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-09-05
DOI: 10.1109/TRPMS.2025.3600229
{"title":"IEEE DataPort","authors":"","doi":"10.1109/TRPMS.2025.3600229","DOIUrl":"https://doi.org/10.1109/TRPMS.2025.3600229","url":null,"abstract":"","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 7","pages":"978-978"},"PeriodicalIF":3.5,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11152383","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-09-05
DOI: 10.1109/TRPMS.2025.3599624
{"title":"IEEE Transactions on Radiation and Plasma Medical Sciences Information for Authors","authors":"","doi":"10.1109/TRPMS.2025.3599624","DOIUrl":"https://doi.org/10.1109/TRPMS.2025.3599624","url":null,"abstract":"","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 7","pages":"C3-C3"},"PeriodicalIF":3.5,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11152386","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-08-25
DOI: 10.1109/TRPMS.2025.3602262
George Webber, Alexander Hammers, Andrew P King, Andrew J Reader
Recent work has shown improved lesion detectability and flexibility to reconstruction hyperparameters (e.g. scanner geometry or dose level) when PET images are reconstructed by leveraging pre-trained diffusion models. Such methods train a diffusion model (without sinogram data) on high-quality, but still noisy, PET images. In this work, we propose a simple method for generating subject-specific PET images from a dataset of multisubject PET-MR scans, synthesizing "pseudo-PET" images by transforming between different patients' anatomy using image registration. The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features compared to the original set of PET images. With simulated and real [18F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific "pseudo-PET" images improves reconstruction accuracy with low-count data. In particular, the method shows promise in combining information from a guidance MR scan without overly imposing anatomical features, demonstrating an improved trade-off between reconstructing PET-unique image features versus features present in both PET and MR. We believe this approach for generating and utilizing synthetic data has further applications to medical imaging tasks, particularly because patient-specific PET images can be generated without resorting to generative deep learning or large training datasets.
{"title":"Personalized MR-Informed Diffusion Models for 3D PET Image Reconstruction.","authors":"George Webber, Alexander Hammers, Andrew P King, Andrew J Reader","doi":"10.1109/TRPMS.2025.3602262","DOIUrl":"10.1109/TRPMS.2025.3602262","url":null,"abstract":"<p><p>Recent work has shown improved lesion detectability and flexibility to reconstruction hyperparameters (e.g. scanner geometry or dose level) when PET images are reconstructed by leveraging pre-trained diffusion models. Such methods train a diffusion model (without sinogram data) on high-quality, but still noisy, PET images. In this work, we propose a simple method for generating subject-specific PET images from a dataset of multisubject PET-MR scans, synthesizing \"pseudo-PET\" images by transforming between different patients' anatomy using image registration. The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features compared to the original set of PET images. With simulated and real [<sup>18</sup>F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific \"pseudo-PET\" images improves reconstruction accuracy with low-count data. In particular, the method shows promise in combining information from a guidance MR scan without overly imposing anatomical features, demonstrating an improved trade-off between reconstructing PET-unique image features versus features present in both PET and MR. 
We believe this approach for generating and utilizing synthetic data has further applications to medical imaging tasks, particularly because patient-specific PET images can be generated without resorting to generative deep learning or large training datasets.</p>","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":" ","pages":"1"},"PeriodicalIF":3.5,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7618295/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145543068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
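The "pseudo-PET" synthesis above amounts to resampling one subject's image through a registration-derived transform into another subject's anatomy. A toy nearest-neighbour resampling sketch of that operation (a stand-in for the paper's registration pipeline, which would use a dedicated registration toolkit and higher-order interpolation; all names and the constant displacement field are illustrative):

```python
import numpy as np

def warp_nearest(image, disp_y, disp_x):
    """Resample `image` through a displacement field (nearest-neighbour),
    as one would push a PET slice through a registration transform.
    disp_y/disp_x give, per output voxel, the offset to sample in the input."""
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(yy + disp_y).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xx + disp_x).astype(int), 0, w - 1)
    return image[src_y, src_x]

pet = np.zeros((8, 8))
pet[2, 2] = 1.0                                   # a point source at (2, 2)
# A uniform (-1, -1) displacement field: every output voxel samples one
# voxel up and to the left, so the source lands at (3, 3).
warped = warp_nearest(pet, np.full((8, 8), -1.0), np.full((8, 8), -1.0))
assert warped[3, 3] == 1.0
```

With a real deformation field from inter-subject registration, the same resampling maps a population of PET scans into a single subject's anatomy to build the personalized training set.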
Pub Date : 2025-07-30
DOI: 10.1109/TRPMS.2025.3594103
Fiammetta Pagano;Francis Loignon-Houle;David Sanchez;Nicolas A. Karakatsanis;Jorge Alamo;Sadek A. Nehmeh;Antonio J. Gonzalez
Semi-monolithic detectors, a hybrid configuration combining the benefits of pixelated arrays and monolithic blocks, present a compelling and cost-effective solution for positron emission tomography (PET) scanners with both time-of-flight (TOF) and depth-of-interaction (DOI) capabilities. In this work, we evaluate four LYSO-based semi-monolithic arrays with various surface treatments, read out with the PETsys TOFPET2 ASIC, to identify the optimal configuration for a novel brain PET scanner. The chosen array, featuring ESR on all surfaces except for the black-painted lateral pixelated ones, achieved 15.9 ± 0.6% energy resolution and 253 ± 15 ps detector time resolution (DTR). Neural networks with multilayer perceptron architectures were used to estimate the annihilation photon impact position, yielding average accuracies of 3.7 ± 1.1 mm and 2.6 ± 0.7 mm (FWHM) along the DOI and monolithic directions, respectively. The comparative analysis of the four arrays also prompted an investigation into light sharing in semi-monolithic detectors, supported by a GATE-based simulation framework designed to complement the experimental results and confirm the observed trends in time resolution. By refining the detector design based on semi-monolithic geometry and optimized crystal surface treatment to enhance positioning accuracy, this study contributes to the development of a next-generation brain PET scanner with competitive performance at a moderate cost.
{"title":"Semi-Monolithic Detectors for TOF-DOI Brain PET: Optimization of Time, Energy, and Positioning Resolutions With Varying Surface Treatments","authors":"Fiammetta Pagano;Francis Loignon-Houle;David Sanchez;Nicolas A. Karakatsanis;Jorge Alamo;Sadek A. Nehmeh;Antonio J. Gonzalez","doi":"10.1109/TRPMS.2025.3594103","DOIUrl":"https://doi.org/10.1109/TRPMS.2025.3594103","url":null,"abstract":"Semi-monolithic detectors, a hybrid configuration combining the benefits of pixelated arrays and monolithic blocks, present a compelling and cost-effective solution for positron emission tomography (PET) scanners with both time-of-flight (TOF) and depth-of-interaction (DOI) capabilities. In this work, we evaluate four LYSO-based semi-monolithic arrays with various surface treatments, read out with the PETsys TOFPET2 ASIC, to identify the optimal configuration for a novel brain PET scanner. The chosen array, featuring ESR on all surfaces except for the black-painted lateral pixelated ones, achieved <inline-formula> <tex-math>$15.9~pm ~0.6$ </tex-math></inline-formula>% energy resolution and <inline-formula> <tex-math>$253~pm ~15$ </tex-math></inline-formula>ps detector time resolution (DTR). neural network with multilayer perceptron architectures were used to estimate the annihilation photon impact position, yielding average accuracies of <inline-formula> <tex-math>$3.7~pm ~1$ </tex-math></inline-formula>.1 mm and <inline-formula> <tex-math>$2.6~pm ~0$ </tex-math></inline-formula>.7 mm (FWHM) along the DOI and monolithic directions, respectively. The comparative analysis of the four arrays also prompted an investigation into light sharing in semi-monolithic detectors, supported by a GATE-based simulation framework which was designed to complement the experimental results and confirm the observed trends in time resolution. 
By refining the detector design based on semi-monolithic geometry and optimized surface crystal treatment to enhance positioning accuracy, this study contributes to the development of a next-generation brain PET scanner, with competitive performance but at a moderate cost.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"10 2","pages":"276-287"},"PeriodicalIF":3.5,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11104820","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146116812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-21
DOI: 10.1109/trpms.2025.3591035
Margaret E Daube-Witherspoon, Stephen C Moore, Joel S Karp
The high sensitivity of long axial field-of-view (AFOV) PET scanners has enabled studies over a wide range of count rates and count densities. However, these systems have a large axial acceptance angle that necessitates a wide coincidence window to capture the oblique true coincidences. In addition, the measured delays sinogram is sparse and noisy. We studied four methods of randoms estimation on a long AFOV system to assess their impact on accuracy and image noise: measured delays using a delayed coincidence window (RD), 2D Casey averaging of measured delays (RD-smooth), 2D averaging of measured delays (RD-ave, the current default method on the PennPET Explorer), and estimation of randoms from singles (RS). We examined cases with varying count densities, randoms fractions, and non-pure positron emitters. A positive bias observed at low randoms counts for the RD and RD-smooth methods was not seen with the RD-ave or RS methods. For all cases, quantitative results with RS agreed to within 2.5% of the RD-ave method, while RD and RD-smooth estimates showed differences of 5-49%, with larger differences in areas of low uptake. The RS method is well suited to list-mode data and list-mode reconstruction because it reduces the size of the stored list events. It also avoids small approximations in the RD-ave method.
{"title":"Randoms Estimation for Long Axial Field-of-View PET.","authors":"Margaret E Daube-Witherspoon, Stephen C Moore, Joel S Karp","doi":"10.1109/trpms.2025.3591035","DOIUrl":"10.1109/trpms.2025.3591035","url":null,"abstract":"<p><p>The high sensitivity of long axial field-of-view (AFOV) PET scanners has enabled studies over a wide range of count rates and count densities. However, these systems have a large axial acceptance angle that necessitates a wide coincidence window to capture the oblique true coincidences. In addition, the measured delays sinogram is sparse and noisy. We studied four methods of randoms estimation on a long AFOV system to assess their impact on accuracy and image noise: measured delays using a delayed coincidence window (RD), 2D Casey averaging of measured delays (RD-smooth), 2D average of measured delays (RD-ave - the current default method on the PennPET Explorer), and estimation of randoms from singles (RS). We looked at cases with varying count densities, randoms fractions, and non-pure positron emitters. A positive bias observed at low randoms counts for the RD and RD-smooth methods was not seen with the RD-ave or RS methods. For all cases, quantitative results with RS agreed to within 2.5% of the RD-ave method, while RD and RD-smooth estimates showed differences of 5-49%, with larger differences in areas of low uptake. The RS method is a practical technique for list-mode data and list-mode reconstruction by reducing the size of stored list events. It also avoids small approximations in the RD-ave method. 
For long AFOV systems, estimating randoms from singles is a practical and accurate method.</p>","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":" ","pages":""},"PeriodicalIF":3.5,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12629634/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145565669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
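The singles-based RS estimate rests on the standard relation R_ij = 2τ·S_i·S_j, where τ is the coincidence window width and S_i, S_j are the singles rates on the two detectors of a pair. A minimal sketch (the detector rates and window width below are invented for illustration, not taken from the paper):

```python
import numpy as np

def randoms_from_singles(singles, tau):
    """Expected randoms rate for every detector pair: R_ij = 2 * tau * S_i * S_j."""
    return 2.0 * tau * np.outer(singles, singles)

singles = np.array([1.0e5, 2.0e5, 1.5e5])   # singles rates (counts/s), illustrative
tau = 4.0e-9                                # 4 ns coincidence window
R = randoms_from_singles(singles, tau)

# Pair (0, 1): 2 * 4e-9 * 1e5 * 2e5 = 160.0 randoms/s
print(R[0, 1])
```

Because the estimate is smooth in the singles rates, it avoids the sparsity and noise of the measured delays sinogram, which is what makes it attractive for long AFOV list-mode data.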