Pub Date: 2026-01-20 · DOI: 10.1088/1361-6560/ae35c8
S R Soleti, P Dietz, R Esteve, J García-Barrena, V Herrero, F Lopez, F Monrabal, L Navarro-Cozcolluela, E Oblak, J Pelegrín, J Renner, J Toledo, S Torelli, J J Gómez-Cadenas
Objective. Total body positron emission tomography (TBPET) scanners have the potential to substantially reduce both acquisition time and administered radiation dose, owing to their high sensitivity. However, their widespread clinical adoption is hindered by the high cost of currently available systems. This work explores the use of pure cesium iodide (CsI) monolithic crystals operated at cryogenic temperatures as a cost-effective alternative to rare-earth scintillators for TBPET. Approach. We investigate the performance of pure CsI crystals operated at cryogenic temperatures (∼100 K), where they achieve a light yield of approximately 10^5 photons/MeV. The implications for energy resolution, spatial resolution (including depth-of-interaction (d.o.i.) capability), and timing performance are assessed, with a view toward their integration into a TBPET system. Main results. Cryogenic CsI crystals demonstrated energy resolution below 7% and coincidence time resolution (CTR) at the nanosecond level, despite their relatively slow scintillation decay time. A Monte Carlo simulation of monolithic CsI crystals shows that millimeter-scale spatial resolution can be obtained in all three dimensions. These characteristics indicate that high-performance PET imaging is achievable with this technology. Significance. A TBPET scanner based on cryogenic CsI monolithic crystals could combine excellent imaging performance with significantly reduced detector costs, enabling broader accessibility and accelerating the adoption of TBPET in both clinical and research settings.
Title: CRYSP: a total-body PET based on cryogenic cesium iodide crystals. (Physics in Medicine and Biology)
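The light yield quoted above implies a photostatistics floor on the achievable energy resolution, consistent with the measured sub-7%. A back-of-envelope sketch, assuming a hypothetical 40% photon detection efficiency and counting statistics only (both assumptions are illustrative, not from the paper):

```python
import math

LIGHT_YIELD = 1e5   # photons/MeV for pure CsI at ~100 K (from the abstract)
PDE = 0.4           # assumed photosensor photon detection efficiency (illustrative)
E_GAMMA = 0.511     # MeV, annihilation photon energy

def poisson_limited_resolution(light_yield, pde, energy_mev):
    """FWHM/E from photon-counting statistics alone (Poisson limit)."""
    n_detected = light_yield * pde * energy_mev
    return 2.355 / math.sqrt(n_detected)  # FWHM = 2.355 sigma for a Gaussian

print(f"Statistical floor at 511 keV: "
      f"{poisson_limited_resolution(LIGHT_YIELD, PDE, E_GAMMA):.1%}")
```

Under these assumed numbers the statistical floor is well below the reported resolution, leaving room for non-statistical contributions such as light-collection non-uniformity and electronics noise.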
Pub Date: 2026-01-20 · DOI: 10.1088/1361-6560/ae3b01
Jooho Lee, Adam S Wang, Jongduk Baek
Objective: Normalized metal artifact reduction (NMAR) is a robust and widely used method for reducing metal artifacts in computed tomography (CT). However, conventional NMAR requires at least two forward projections, one for metal trace detection and the other for prior sinogram generation, resulting in redundant computation and limited efficiency. This study aims to reformulate NMAR into a single forward projection-based framework that maintains artifact reduction performance while improving computational efficiency and structural simplicity.
Approach: We show that the two separate forward projections in NMAR can be unified into a single operation by leveraging deep learning (DL) priors, thereby eliminating the explicit forward projection for the metal trace. The metal trace is inferred directly from localized discrepancies between the original sinogram and the forward projection of the DL prior image, allowing both interpolation and trace identification within a unified forward projection. Simulations and cadaver experiments were performed to compare the proposed method with NMAR, DL reconstruction, and conventional DL-NMAR.
Main results: The proposed method reduced metal artifacts with image quality comparable to conventional DL-NMAR while improving computational efficiency. By reducing the number of forward projections from two to one, the proposed method achieved the lowest number of projection operations among all compared methods, highlighting its computational advantage.
Significance: This study demonstrates that deep learning priors can be seamlessly integrated into physics-based NMAR frameworks to simplify the image reconstruction process and enhance computational performance. The proposed unified forward projection provides an efficient solution to accelerate metal artifact reduction in CT imaging.
Title: Improving the efficiency of normalized metal artifact reduction via a unified forward projection. (Physics in Medicine and Biology)
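The trace-from-discrepancy idea described above can be sketched in a few lines of numpy; the function name, the 20% relative threshold, and the row-wise linear interpolation are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def metal_trace_and_inpaint(sinogram, prior_fp, rel_threshold=0.2):
    """Hypothetical sketch of the unified step: flag the metal trace where the
    measured sinogram deviates strongly from the forward projection of the
    deep-learning prior, then interpolate the normalized sinogram across it.
    `rel_threshold` is an illustrative tuning parameter."""
    safe_prior = np.maximum(prior_fp, 1e-6)
    norm = sinogram / safe_prior                    # NMAR-style normalization
    # Metal trace: bins where the data disagree strongly with the prior.
    trace = np.abs(sinogram - prior_fp) > rel_threshold * safe_prior
    inpainted = norm.copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):              # interpolate over the trace, row-wise
        bad = trace[i]
        if bad.any() and not bad.all():
            inpainted[i, bad] = np.interp(bins[bad], bins[~bad], norm[i, ~bad])
    # Denormalize to obtain the corrected sinogram.
    return inpainted * prior_fp, trace
```

A single forward projection of the prior thus serves both trace detection and prior-based interpolation, which is the computational saving the abstract claims.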
Pub Date: 2026-01-20 · DOI: 10.1088/1361-6560/ae3b02
Han Gyu Kang, Hideaki Tashima, Makoto Higuchi, Taiga Yamaya
Objective: For rodent brain PET imaging, spatial resolution is the most important factor for identifying small brain structures. Previously, we developed a submillimeter resolution PET scanner with 1 mm crystal pitch using 3-layer depth-of-interaction (DOI) detectors. However, the spatial resolution was over 0.5 mm due to a relatively large crystal pitch and an unoptimized crystal layer design. Here we use GATE Monte Carlo simulations to design and optimize a sub-0.5 mm resolution PET scanner with 3-layer DOI detectors.
Methods: The proposed PET scanner has 2 rings, each of which has 16 DOI detectors, resulting in a 23.4 mm axial coverage. Each DOI detector has 3-layer LYSO crystal arrays with a 0.8 mm crystal pitch. We employed GATE Monte Carlo simulations to optimize three crystal layer designs, A (4+4+7 mm), B (3+4+4 mm), and C (3+3+5 mm). Spatial resolution and imaging performance were evaluated with a point source and resolution phantom using analytical and iterative algorithms.
Main Results: Among the three designs, design C provided the most uniform spatial resolution up to the radial offset of 15 mm. The 0.45 mm diameter rod structures were resolved clearly with design C using the iterative algorithm. The GATE simulation results agreed with the experimental data in terms of radial resolution except at the radial offset of 15 mm.
Significance: We optimized the crystal layer design of the mouse brain PET scanner with GATE simulations, thereby achieving sub-0.5 mm resolution in the resolution phantom study.
Title: Design optimization using GATE Monte Carlo simulations for a sub-0.5 mm resolution PET scanner with 3-layer DOI detectors. (Physics in Medicine and Biology)
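Point-source spatial resolution of the kind reported above is conventionally quantified as the FWHM of a 1-D profile, with linear interpolation at the half-maximum crossings; a generic sketch of that estimate (not the authors' analysis code):

```python
import numpy as np

def fwhm(profile, pixel_mm):
    """FWHM of a 1-D point-source profile via linear interpolation at half
    maximum (the standard NEMA-style estimate; a generic sketch)."""
    y = np.asarray(profile, float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    l, r = above[0], above[-1]
    # Interpolate the half-maximum crossings on both sides of the peak.
    left = l - (y[l] - half) / (y[l] - y[l - 1])
    right = r + (y[r] - half) / (y[r] - y[r + 1])
    return (right - left) * pixel_mm
```

For a Gaussian profile this recovers FWHM ≈ 2.355 σ, so resolving 0.45 mm rods requires the reconstructed point-spread FWHM to be of comparable scale.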
Objective: This study aims to develop a motion-robust magnetic resonance fingerprinting (MR-MRF) technique for liver cancer imaging to eliminate the need for breath-hold scanning.
Approach: To mitigate respiratory motion artifacts in free-breathing abdominal MRF, the MR-MRF technique comprises two core components. First, respiratory motion is modeled by applying an isotropic total variation (TV)-regularized registration algorithm between a target end-of-exhalation (EOE) phase and three motion phases. Second, motion-resolved tissue property maps are reconstructed using a low-rank total variation (LRTV) optimization framework, which incorporates the estimated inter-phase motion to align all acquired MRF dynamics to the EOE phase. MR-MRF is evaluated in 22 patients (mean age, 62 years ± 10 [SD]; 15 males and 7 females) with hepatocellular carcinoma. A blinded radiologist assessment and organ boundary sharpness measurements are performed to evaluate the image quality of MR-MRF-derived tissue maps. Test-retest tissue quantification repeatability is assessed with two consecutive MRF scans under distinct breathing patterns. A paired Student's t-test is used for statistical significance analysis with a p-value threshold of 0.05.
Main results: MR-MRF achieved successful reconstruction of motion-resolved tissue maps at the EOE phase, with the blinded radiologist assessment yielding an average score of 3 (moderate quality, sufficient for diagnosis) for overall image impression. The FWHM of organ boundaries in MR-MRF-derived tissue maps was 3.1 mm ± 1.7 mm, significantly lower than in motion-blurred tissue maps (9.9 mm ± 3.4 mm, p < 0.0001). Test-retest analysis demonstrated good repeatability: the liver coefficient of variation was 5.5% ± 7.1% (T1), 8.2% ± 4.4% (T2), and 5.0% ± 2.0% (PD), with excellent linear agreement (R² = 0.96, 0.80, and 0.85 for T1, T2, and PD, respectively).
Significance: This study establishes the technical foundation of MR-MRF to achieve repeatable and quantitative liver T1/T2/PD mapping under free-breathing conditions at 3T. The results validate the feasibility of addressing respiratory motion in abdominal multi-parametric quantitative MRI.
Title: Motion-robust magnetic resonance fingerprinting (MR-MRF) for quantitative liver cancer imaging. (Physics in Medicine and Biology)
Authors: Chenyang Liu, Tian Li, Lu Wang, Yat-Lam Wong, Mandi Wang, Huiqin Zhang, Zuojun Wang, Haonan Xiao, Shaohua Zhi, Wen Li, Jiang Zhang, Xinzhi Teng, Victor Ho-Fun Lee, Peng Cao, Jing Cai
Pub Date: 2026-01-20 · DOI: 10.1088/1361-6560/ae3b03
Pub Date: 2026-01-19 · DOI: 10.1088/1361-6560/ae3a31
Simon Noë, Seyed Amir Zaman Pour, Ahmadreza Rezaei, Charles Stearns, Johan Nuyts, Georg Schramm
Objective: Scattered coincidences are a major source of quantitative bias in positron emission tomography (PET) and must be compensated during reconstruction using an estimate of scattered coincidences per line-of-response and time-of-flight bin. Such estimates are typically obtained from simulators with simple cylindrical scanner models that omit detector physics. Incorporating detector sensitivities for scatter is challenging, as scattered coincidences have less constrained properties (e.g., incidence angles) than true coincidences.
Approach: We integrated a 5D single-photon detection probability lookup table (photon energy, incidence angle, detector location) into the simulator logic. The resulting scatter sinogram is multiplied by a precomputed, LUT-specific scatter sensitivity sinogram to yield the scatter estimate. Scatter was simulated with MCGPU-PET, a fast Monte Carlo simulator with a simplified scanner model, and applied to phantom data from a simulated GE Signa PET/MR in GATE. We evaluated three scenarios:
1. Long, high-count MCGPU-PET simulations from a known activity distribution (reference).
2. Same distribution with limited simulation time and counts.
3. Same low-count data with joint estimation of activity and scatter during reconstruction.
Main result: In scenario 1, scatter-compensated reconstructions achieved <1% global bias in all active regions relative to true-only reconstructions. In scenario 2, noisy scatter estimates caused strong positive bias, but Gaussian smoothing restored accuracy to scenario 1 levels. In scenario 3, joint estimation under low-count conditions maintained <1% global bias in nearly all regions.
Significance: Although demonstrated with a fast Monte Carlo simulator, the proposed scatter sensitivity modeling could enhance existing single scatter simulators used clinically, which typically neglect detector physics. This proof-of-concept also supports the feasibility of scatter estimation for real scans using fast Monte Carlo simulation, offering potentially greater accuracy and robustness to acquisition noise.
Title: Object independent scatter sensitivities for PET, applied to scatter estimation through fast Monte Carlo simulation. (Physics in Medicine and Biology)
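The abstract describes a 5D lookup table over photon energy, incidence angle, and detector location; the sketch below collapses this to one energy bin, one angle bin, and one detector index purely for illustration. The table values, shapes, and bin edges are made up, not the paper's actual LUT:

```python
import numpy as np

# Illustrative detection-probability table: P(detect | detector, energy, angle).
rng = np.random.default_rng(0)
n_det, n_e, n_ang = 8, 16, 10
lut = rng.uniform(0.2, 0.9, size=(n_det, n_e, n_ang))

energy_edges = np.linspace(100.0, 600.0, n_e + 1)   # keV bin edges
angle_edges = np.linspace(0.0, 60.0, n_ang + 1)     # incidence-angle bin edges (deg)

def detection_weight(det_id, energy_kev, angle_deg):
    """Weight a simulated scattered photon by its binned detection probability."""
    e_bin = np.clip(np.searchsorted(energy_edges, energy_kev) - 1, 0, n_e - 1)
    a_bin = np.clip(np.searchsorted(angle_edges, angle_deg) - 1, 0, n_ang - 1)
    return lut[det_id, e_bin, a_bin]
```

In the paper's scheme, weights of this kind enter the simulator logic, and the resulting scatter sinogram is further multiplied by a precomputed LUT-specific scatter sensitivity sinogram.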
Pub Date: 2026-01-16 · DOI: 10.1088/1361-6560/ae399e
Roel C Kwakernaak, Massimiliano Zanoli, Zoltán Perkó, Maarten M Paulides, Sergio Curto
Objective: Hyperthermia, the elevation of tumor temperature to 39-44 °C, is an effective adjuvant treatment for head and neck (H&N) cancer, enhancing the effects of radiotherapy and chemotherapy. This study investigates the robustness of hyperthermia treatment planning (HTP) for H&N cancer using the HyperCollar3D applicator, focusing on uncertainties in patient positioning, tissue properties, and water bolus cooling efficacy.
Approach: A retrospective analysis was conducted of 16 patients treated at the Erasmus Medical Center, utilizing Polynomial Chaos Expansion to model the impact of uncertainties on temperature distributions and treatment quality metrics.
Main results: Our findings indicate significant variability in target temperatures due to uncertainties in tissue properties (a 2.1 °C 95% confidence interval on T90), further exacerbated by patient positioning errors (a 2.3 °C 95% confidence interval on T90 for 5 mm positioning errors). Uncertainty in dielectric tissue properties accounts for the largest share of the T90 variance (47%), followed by positioning errors (22%).
Significance: This study highlights the critical importance of accurate measurement of tissue properties and precise patient positioning to achieve effective hyperthermia treatment outcomes. Our findings strongly advocate the development of more robust and quantitative treatment planning and delivery approaches, aiming to enhance the precision and clinical efficacy of HTP protocols for H&N cancer treatments.
Title: Uncertainty analysis in hyperthermia treatment planning for head & neck cancer using polynomial chaos expansion. (Physics in Medicine and Biology)
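In a polynomial chaos expansion, the output variance decomposes over the orthogonal basis, which is how variance shares like the 47%/22% split above are obtained; terms involving only one input contribute that input's share. A minimal 1-D sketch with probabilists' Hermite polynomials for a standard-normal input (the degree and least-squares fit are illustrative choices, not the authors' multivariate implementation):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

def pce_variance(samples_xi, samples_y, degree=3):
    """Fit a 1-D PCE in probabilists' Hermite polynomials He_k (standard-normal
    input) by least squares and return the output variance implied by the
    coefficients: E[He_k^2] = k! and the He_k are orthogonal, so
    Var[y] = sum_{k>0} c_k^2 * k!."""
    V = hermevander(samples_xi, degree)            # He_0..He_degree evaluated at xi
    coeffs, *_ = np.linalg.lstsq(V, samples_y, rcond=None)
    return sum(c ** 2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
```

For a linear response y = a + b·ξ the expansion recovers Var[y] = b² exactly, which is the sanity check any PCE implementation should pass.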
Pub Date: 2026-01-16 · DOI: 10.1088/1361-6560/ae39a0
Lisa Stefanie Fankhauser, Maria Giulia Toro, Andreas Johan Smolders, Renato Bellotti, Antony John Lomax, Francesca Albertini
Objective: Manual review of daily auto-generated contours remains a challenge for clinical implementation of online adaptive radiotherapy. This study introduces a Contour Assessment Tool for Quality Assurance (CAT-QA), an automatic workflow designed to flag organs-at-risk (OAR) contours requiring manual revision.
Approach: CAT-QA applies sequential geometric and dosimetric tests to each auto-generated OAR contour to flag structures requiring review. The tool was retrospectively applied to ten head and neck (H&N) patients (44 CTs with manual contours) treated with proton therapy, split into training and test sets. For each image, three treatment plans were created: one with manual contours (Gold), one with automatic OAR contours (Auto), and one combining auto-contours with manual ones for flagged OARs (CAT-QA plan). Generalizability was assessed on six abdominal patients (8 CTs) without retuning.
Main results: CAT-QA flagged 21% of OARs in H&N and 27% in abdominal cases. No dose failures (>5% of prescribed dose vs. Gold) were observed in H&N. One abdominal OAR (1.4%) exceeded this threshold. In contrast, Auto plans resulted in dose failures in 7.5% (H&N) and 8.5% (abdomen). The higher flag rate observed in the abdomen was driven by a single failed auto-contouring case; excluding this outlier, the average flag rate was 20%, comparable to H&N. CAT-QA runtime averaged <2 min, supporting feasibility for integration into online workflows.
Significance: CAT-QA shows promise for improving the safety and efficiency of auto-contouring in online adaptive radiotherapy by flagging OARs that need manual review, with initial results suggesting generalizability across treatment sites.
Title: Contour assessment tool for quality assurance (CAT-QA) to speed up online adaptive radiotherapy. (Physics in Medicine and Biology)
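The sequential geometric-then-dosimetric gating described above can be sketched as a short decision function; the specific metrics (Dice to a prior contour, relative dose-metric difference) and thresholds here are assumed placeholders, not the paper's actual tests or values:

```python
def flag_oar(dice_to_prior, dose_diff_pct, dice_min=0.8, dose_max_pct=5.0):
    """Illustrative sequential flagging in the spirit of CAT-QA: an OAR is
    flagged for manual review if it fails a geometric test, and otherwise
    if it fails a dosimetric test. All thresholds are hypothetical."""
    if dice_to_prior < dice_min:              # geometric test first
        return True
    return abs(dose_diff_pct) > dose_max_pct  # then dosimetric test
```

The point of the sequential design is that cheap geometric checks run first, and the dosimetric check only decides the cases that pass them, keeping the per-image runtime low.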
Objective: Establishing control and specification limits for Volumetric Modulated Arc Therapy (VMAT) pre-treatment quality assurance (PTQA) is essential for streamlining PTQA workflows and optimizing plan complexity. This study aimed to develop and implement new methods and tools across treatment sites of varying complexity using multiple global and local gamma index criteria.
Approach: 350 VMAT plans comprising brain, prostate, pelvis and head and neck treatments were retrospectively compiled. For each site, control limits were obtained using Statistical Process Control (SPC) along with heuristic methods (scaled weighted variance (SWV), weighted standard deviation (WSD), skewness correction (SC)). Specification limits were derived employing a new formalism aligned with the heuristic approaches. Calculations were performed under various global and local gamma index criteria using custom-built software (freely available at https://github.com/AEvgeneia/SPC_GUI_Scientific_Tool.git).
Main results: WSD and SC control and specification limits were comparable, while SWV deviated with increasing complexity and stricter gamma index criteria. Conventional criteria (e.g. global 3%/2 mm) lacked sensitivity to detect subtle errors. Global 2%/1 mm and 1%/2 mm, and local criteria stricter than 3%/1 mm, met sensitivity requirements for low-complexity plans while maintaining clear separation between control and specification limits to identify plans with suboptimal delivery accuracy. High-complexity plans showed that global criteria stricter than 3%/1 mm and all evaluated local criteria are optimal, provided specification limits for the most stringent criteria remain clinically acceptable.
Significance: A nuanced framework is provided for determining control and specification limits for gamma index passing rates, as well as corresponding thresholds for the mean gamma index, allowing for site-specific detection of suboptimal treatment plans. The open-source software tool developed can operationalise the proposed methodology, facilitating the clinical adoption of advanced statistical methods. Site-specific thresholds could serve as inputs for machine learning and deep learning algorithms aimed at automating error detection and PTQA classification for plan complexity management.
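As a concrete illustration of the SPC step described above, the sketch below derives control limits for a series of gamma passing rates with a standard Shewhart individuals (I-MR) chart. This is a generic textbook formulation, not the paper's SWV/WSD/SC heuristics, which further adjust the limits for skewed passing-rate distributions.

```python
import statistics

def individuals_control_limits(passing_rates):
    """Shewhart individuals-chart control limits for gamma passing rates (%):
    centre line +/- 2.66 * mean moving range. The upper limit is capped at
    100% since a passing rate cannot exceed it."""
    centre = statistics.fmean(passing_rates)
    moving_ranges = [abs(b - a) for a, b in zip(passing_rates, passing_rates[1:])]
    mr_bar = statistics.fmean(moving_ranges)
    lcl = centre - 2.66 * mr_bar
    ucl = min(100.0, centre + 2.66 * mr_bar)
    return lcl, ucl

# Example: six historical passing rates for one treatment site;
# future plans falling below the LCL would be flagged for review.
lcl, ucl = individuals_control_limits([98.5, 99.0, 97.8, 98.9, 99.2, 98.1])
print(f"LCL = {lcl:.2f}%, UCL = {ucl:.2f}%")
```

Site-specific limits follow from simply feeding the function each site's historical passing rates separately, which is the per-site stratification the study applies.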
{"title":"Institution-specific pre-treatment quality assurance control and specification limits: a tool to implement a new formalism and criteria optimization using statistical process control and heuristic methods.","authors":"Aspasia Evangelia Evgeneia, Panagiotis Alafogiannis, Nikolaos Dikaios, Evaggelos Pantelis, Panagiotis Papagiannis, Vasiliki Peppa","doi":"10.1088/1361-6560/ae399f","DOIUrl":"https://doi.org/10.1088/1361-6560/ae399f","url":null,"abstract":"<p><strong>Objective: </strong>Establishing control and specification limits for Volumetric Modulated Arc Therapy (VMAT) pre-treatment quality assurance (PTQA) is essential for streamlining PTQA workflows and optimizing plan complexity. This study aimed to develop and implement new methods and tools across treatment sites of varying complexity using multiple global and local gamma index criteria.</p><p><strong>Approach: </strong>350 VMAT plans comprising brain, prostate, pelvis and head and neck treatments were retrospectively compiled. For each site, control limits were obtained using Statistical Process Control (SPC) along with heuristic methods (scaled weighted variance (SWV), weighted standard deviation (WSD), skewness correction (SC)). Specification limits were derived employing a new formalism aligned with the heuristic approaches. Calculations were performed under various global and local gamma index criteria using custom-built software (freely available at https://github.com/AEvgeneia/SPC_GUI_Scientific_Tool.git).</p><p><strong>Main results: </strong>WSD and SC control and specification limits were comparable, while SWV deviated with increasing complexity and stricter gamma index criteria. Conventional criteria (e.g. global 3%/2 mm) lacked sensitivity to detect subtle errors. 
Global 2%/1 mm and 1%/2 mm, and local criteria stricter than 3%/1 mm, met sensitivity requirements for low-complexity plans while maintaining clear separation between control and specification limits to identify plans with suboptimal delivery accuracy. High-complexity plans showed that global criteria stricter than 3%/1 mm and all evaluated local criteria are optimal, provided specification limits for the most stringent criteria remain clinically acceptable.</p><p><strong>Significance: </strong>A nuanced framework is provided for determining control and specification limits for gamma index passing rates, as well as corresponding thresholds for the mean gamma index, allowing for site-specific detection of suboptimal treatment plans. The open-source software tool developed can operationalise the proposed methodology, facilitating the clinical adoption of advanced statistical methods. Site-specific thresholds could serve as inputs for machine learning and deep learning algorithms aimed at automating error detection and PTQA classification for plan complexity management.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145990328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Objective: Subcortical structures form the foundation of critical neural circuits that support sensorimotor processing, emotion regulation, and memory. However, their complex internal organization poses a significant challenge to reliable, fine-scale parcellation.
Approach: To overcome the trade-off between anatomical specificity and cross-subject consistency, we propose a novel multiscale subcortical parcellation framework grounded in consensus graph representation learning of diffusion MRI (dMRI) tractography data. We introduce a fiber-cluster-based connectivity representation to address the limitations of conventional voxel-level tractography features, thereby enhancing anatomical fidelity and reducing tracking noise. Furthermore, our method preserves local structural coherence while significantly mitigating the curse of dimensionality by leveraging 3D-SLIC supervoxel preparcellation. Finally, we integrate consensus graph representation learning with low-rank tensor modeling, enabling population-level regularization that refines individual embeddings and ensures consistent subcortical parcellations across subjects. By utilizing this framework, we create a new, fine-grained subcortical atlas.
Main results: Evaluations using ultra-high-field dMRI from the Human Connectome Project demonstrate that our method yields subcortical parcels with enhanced reproducibility and microstructural homogeneity.
Significance: Our pipeline provides a powerful tool for detailed mapping of subcortical organization, offering promising applications in precision neuroimaging and the discovery of clinical biomarkers for neurological and psychiatric disorders that affect these structures (e.g., Parkinson's disease, schizophrenia, and major depressive disorder). Our code is available at https://anonymous.4open.science/r/SubcorticalParcellation-D254/.
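The coefficient-of-variation metric reported above can be computed as below (a generic sketch under the usual definition, not the authors' pipeline): for each parcel, CV is the standard deviation of a diffusion-derived index (e.g. fractional anisotropy) across the parcel's voxels divided by its mean, so a lower CV indicates a more microstructurally homogeneous parcel.

```python
import statistics

def coefficient_of_variation(values):
    """CV = population standard deviation / mean; lower values indicate a
    more homogeneous distribution of the microstructure index in a parcel."""
    return statistics.pstdev(values) / statistics.fmean(values)

# Hypothetical fractional-anisotropy samples from two candidate parcels:
print(coefficient_of_variation([0.30, 0.32, 0.28, 0.31, 0.29]))  # tighter parcel
print(coefficient_of_variation([0.20, 0.35, 0.28, 0.41, 0.26]))  # looser parcel
```

Comparing atlases then reduces to averaging this per-parcel CV over all parcels and all microstructure indices, which is where the reported 15-25% reductions come from.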
{"title":"Constructing fine-grained subcortical atlases with connectional consensus graph representation learning.","authors":"Zhonghua Wan, Peng Wang, Yazhe Zhai, Yu Xie, Yifei He, Ye Wu","doi":"10.1088/1361-6560/ae399d","DOIUrl":"https://doi.org/10.1088/1361-6560/ae399d","url":null,"abstract":"<p><strong>Objective: </strong>Subcortical structures form the foundation of critical neural circuits that support sensorimotor processing, emotion regulation, and memory. However, their complex internal organization poses a significant challenge to reliable, fine-scale parcellation.</p><p><strong>Approach: </strong>To overcome the trade-off between anatomical specificity and cross-subject consistency, we propose a novel multiscale subcortical parcellation framework grounded in consensus graph representation learning of diffusion MRI (dMRI) tractography data. We introduce a fiber-cluster-based connectivity representation to address the limitations of conventional voxel-level tractography features, thereby enhancing anatomical fidelity and reducing tracking noise. Furthermore, our method preserves local structural coherence while significantly mitigating the curse of dimensionality by leveraging 3D-SLIC supervoxel preparcellation. Finally, we integrate consensus graph representation learning with low-rank tensor modeling, enabling population-level regularization that refines individual embeddings and ensures consistent subcortical parcellations across subjects. By utilizing this framework, we create a new, fine-grained subcortical atlas.</p><p><strong>Main results: </strong>Evaluations using ultra-high-field dMRI from the Human Connectome Project demonstrate that our method yields subcortical parcels with enhanced reproducibility and microstructural homogeneity. 
Across diffusion-derived microstructure indices, our atlas consistently achieves the lowest or second-lowest coefficient of variation, with average reductions of 15-25% compared to existing atlases, thereby supporting robust downstream analyses of structural homology and regional variability.</p><p><strong>Significance: </strong>Our pipeline provides a powerful tool for detailed mapping of subcortical organization, offering promising applications in precision neuroimaging and the discovery of clinical biomarkers for neurological and psychiatric disorders that affect these structures (e.g., Parkinson's disease, schizophrenia, and major depressive disorder). Our code is available at https://anonymous.4open.science/r/SubcorticalParcellation-D254/.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145990363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-16 DOI: 10.1088/1361-6560/ae22b7
Stewart Mein, Takamitsu Masuda, Koki Kasamatsu, Taku Nakaji, Yusuke Nomura, Jiayao Sun, Ken Katagiri, Yoshiyuki Iwata, Nobuyuki Kanematsu, Kota Mizushima, Taku Inaniwa, Sodai Tanaka
Neon ion (²⁰Ne) beam radiotherapy was one of the primary particle therapy candidates investigated during the clinical trials beginning in the 1970s at the Lawrence Berkeley National Laboratory (LBNL), which shut down in the early 1990s. Currently, therapeutic neon ion beams are available at only one clinical facility worldwide, the National Institutes for Quantum Science and Technology (QST) in Chiba, Japan. Recently, neon ion beams were commissioned at QST Hospital as part of the first clinical multi-ion therapy (MIT) program, which aims to improve clinical outcomes by escalating higher linear energy transfer (LET) radiation in the tumor for treating therapy-resistant disease. With the advancement of high-precision scanning delivery techniques, neon ion treatments in the present day could be delivered more safely and with greater precision compared to the first and only clinical application decades prior at LBNL using passive scattering technology. Despite promising early results, preclinical investigations of neon ions are scarce outside of Japan, and further independent studies are needed. Clinically, neon ion therapy may offer benefits in treating certain malignancies by escalating LET in the tumor, but its limited availability and high costs restrict its current use and adoption. Studies have shown that ²⁰Ne or multi-ion mixtures (⁴He, ¹²C, ¹⁶O and/or ²⁰Ne) can provide larger degrees of freedom in optimization of dose, LET and relative biological effectiveness, otherwise unattainable with other single ion techniques. Neon ion beams are under investigation in the ongoing MIT clinical trials which will establish their broader applicability. In this review, the technology, physics, radiobiology, and potential clinical applications of neon ion beams are outlined. 
The current status of therapeutic neon ion beams is summarized, and future research and clinical directions are discussed, including technological development of novel particle therapy delivery techniques such as multi-ion, mini-beam, arc, and ultra-high dose rate delivery.
{"title":"Neon ion radiotherapy: physics and biology.","authors":"Stewart Mein, Takamitsu Masuda, Koki Kasamatsu, Taku Nakaji, Yusuke Nomura, Jiayao Sun, Ken Katagiri, Yoshiyuki Iwata, Nobuyuki Kanematsu, Kota Mizushima, Taku Inaniwa, Sodai Tanaka","doi":"10.1088/1361-6560/ae22b7","DOIUrl":"10.1088/1361-6560/ae22b7","url":null,"abstract":"<p><p>Neon ion (<sup>20</sup>Ne) beam radiotherapy was one of the primary particle therapy candidates investigated during the clinical trials beginning in the 1970s at the Lawrence Berkeley National Laboratory (LBNL), which shut down in the early 1990s. Currently, therapeutic neon ion beams are available at only one clinical facility worldwide, the National Institutes for Quantum Science and Technology (QST) in Chiba, Japan. Recently, neon ion beams were commissioned at QST Hospital as part of the first clinical multi-ion therapy (MIT) program, which aims to improve clinical outcomes by escalating higher linear energy transfer (LET) radiation in the tumor for treating therapy-resistant disease. With the advancement of high-precision scanning delivery techniques, neon ion treatments in the present day could be delivered more safely and with greater precision compared to the first and only clinical application decades prior at LBNL using passive scattering technology. Despite promising early results, preclinical investigations of neon ions are scarce outside of Japan, and further independent studies are needed. Clinically, neon ion therapy may offer benefits in treating certain malignancies by escalating LET in the tumor, but its limited availability and high costs restrict its current use and adoption. Studies have shown that <sup>20</sup>Ne or multi-ion mixtures (<sup>4</sup>He, <sup>12</sup>C, <sup>16</sup>O and/or <sup>20</sup>Ne) can provide larger degrees of freedom in optimization of dose, LET and relative biological effectiveness, otherwise unattainable with other single ion techniques. 
Neon ion beams are under investigation in the ongoing MIT clinical trials which will establish their broader applicability. In this review, the technology, physics, radiobiology, and potential clinical applications of neon ion beams are outlined. The current status of therapeutic neon ion beams is summarized, and future research and clinical directions are discussed, including technological development of novel particle therapy delivery techniques such as multi-ion, mini-beam, arc, and ultra-high dose rate delivery.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145574153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}