Pub Date: 2026-02-03, DOI: 10.1088/1361-6560/ae4167
A rapid and accurate guanidine CEST imaging in ischemic stroke using a machine learning approach.
Malvika Viswanathan, Leqi Yin, Yashwant Kurmi, You Chen, Xiaoyu Jiang, Junzhong Xu, Aqeela Afzal, Zhongliang Zu
Objective: Rapid and accurate mapping of brain tissue pH is crucial for early diagnosis and management of ischemic stroke. Amide proton transfer (APT) imaging has been used for this purpose but suffers from hypointense contrast and low signal intensity in lesions. Guanidine chemical exchange saturation transfer (CEST) imaging provides hyperintense contrast and higher signal intensity in lesions at appropriate saturation power, making it a promising complementary approach. However, quantifying the guanidine CEST effect remains challenging due to its proximity to water resonance and the influence of multiple confounding effects. This study presents a machine learning (ML) framework to improve the accuracy and robustness of guanidine CEST quantification with reduced scan time.
Approach: The model was trained on partially synthetic data, in which measured line-shape information from experiments was incorporated into a simulation framework along with other CEST pools whose solute fraction (fs), exchange rate (ksw), and relaxation parameters were systematically varied. Gradient-based feature selection was used to identify the most informative frequency offsets and thereby reduce the number of acquisition points.
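As a rough sketch of how gradient-based feature selection over saturation offsets might look (not the authors' implementation), the snippet below ranks Z-spectrum offsets by the mean absolute gradient of a trained regressor's output with respect to each input point and keeps the top 19; net, zspec, and all settings are hypothetical placeholders.

```python
# Hypothetical sketch: rank saturation offsets by input-gradient saliency.
import torch

def rank_offsets(net, zspec, top_k=19):
    """net: trained regressor (N, n_offsets) -> (N, n_params); zspec: (N, n_offsets)."""
    zspec = zspec.clone().requires_grad_(True)
    pred = net(zspec)                          # predicted fs, ksw, ... for each spectrum
    pred.sum().backward()                      # gradients w.r.t. every input offset
    saliency = zspec.grad.abs().mean(dim=0)    # average importance per frequency offset
    keep = torch.topk(saliency, top_k).indices
    return torch.sort(keep).values, saliency   # selected offsets, importance scores
```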
Main results: The proposed model achieved significantly higher accuracy than polynomial fitting, multi-pool Lorentzian fitting, and ML models trained solely on synthetic or in vivo data. Gradient-based feature selection identified the most informative frequency offsets, reducing acquisition points from 69 to 19, a 72% reduction in CEST scan time without loss of accuracy. In vivo, conventional fitting methods produced unclear lesion contrast, whereas our model predicted clear hyperintense lesion maps. The strong negative correlation between the guanidine and APT effects supports the physiological relevance of the guanidine CEST effect to tissue acidosis.
Significance: The use of partially synthetic training data combines realistic spectral features with known ground-truth values, overcoming limitations of purely synthetic or limited in vivo datasets. Leveraging these data with ML enables robust quantification of the guanidine CEST effect, showing potential for rapid pH-sensitive imaging.
{"title":"A rapid and accurate guanidine CEST imaging in ischemic stroke using a machine learning approach.","authors":"Malvika Viswanathan, Leqi Yin, Yashwant Kurmi, You Chen, Xiaoyu Jiang, Junzhong Xu, Aqeela Afzal, Zhongliang Zu","doi":"10.1088/1361-6560/ae4167","DOIUrl":"https://doi.org/10.1088/1361-6560/ae4167","url":null,"abstract":"<p><strong>Objective: </strong>Rapid and accurate mapping of brain tissue pH is crucial for early diagnosis and management of ischemic stroke. Amide proton transfer (APT) imaging has been used for this purpose but suffers from hypointense contrast and low signal intensity in lesions. Guanidine chemical exchange saturation transfer (CEST) imaging provides hyperintense contrast and higher signal intensity in lesions at appropriate saturation power, making it a promising complementary approach. However, quantifying the guanidine CEST effect remains challenging due to its proximity to water resonance and the influence of multiple confounding effects. This study presents a machine learning (ML) framework to improve the accuracy and robustness of guanidine CEST quantification with reduced scan time.</p><p><strong>Approach: </strong>The model was trained on partially synthetic data, where measured line-shape information from experiments were incorporated into a simulation framework along with other CEST pools whose solute fraction (fs), exchange rate (ksw), and relaxation parameters were systematically varied. Gradient-based feature selection was used to identify the most informative frequency offsets to reduce the number of acquisition points.</p><p><strong>Main results: </strong>The proposed model achieved significantly higher accuracy than polynomial fitting, multi-pool Lorentzian fitting, and ML models trained solely on synthetic or in vivo data. Gradient-based feature selection identified the most informative frequency offsets, reducing acquisition points from 69 to 19, a 72% reduction in CEST scan time without loss of accuracy. In vivo, conventional fitting methods produced unclear lesion contrast, whereas our model predicted clear hyperintense lesion maps. The strong negative correlation between guanidine and APT effects supports its physiological relevance to tissue acidosis.</p><p><strong>Significance: </strong>The use of partially synthetic training data combines realistic spectral features with known ground-truth values, overcoming limitations of purely synthetic or limited in vivo datasets. Leveraging this data with ML, enables robust quantification of guanidine CEST effects, showing potential for rapid pH-sensitive imaging.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146113957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-02, DOI: 10.1088/1361-6560/ae37c2
A novel reconstruction method based on basis function decomposition for snapshot CAXRDT system.
Shengzi Zhao, Le Shen, Donghang Miao, Yuxiang Xing
<p><p><i>Objective.</i>X-ray diffraction (XRD) is a non-destructive technique capable of obtaining molecular structural information of materials and achieving higher sensitivity than transmission tomography (CT) for substances with similar densities. It has great potential in medical and security applications, such as rapid breast cancer screening, calculi composition analysis, and detection of drugs and explosives. Among various XRD tomography (XRDT) systems, snapshot coded aperture XRDT (SCA-XRDT) achieves the fastest scanning speed, making it well-suited for practical medical imaging and security inspection. However, SCA-XRDT suffers from poor data condition and an ill-posed reconstruction problem, leading to significant challenges in accurate image reconstruction. In this work, we explore the inherent characteristics of XRD patterns and incorporate a novel and effective prior accordingly into an iterative reconstruction algorithm, thereby improving the reconstruction performance.<i>Approach.</i>By analyzing the key physical factors that shape XRD patterns, we represent XRD patterns as a linear combination of basis functions, and validate the feasibility and generality of this representation using experimental data. Building upon this, we propose a novel basis-function-decomposition reconstruction (BFD-Recon) method that incorporates the basis function representation as a prior into a model-based SCA-XRDT reconstruction framework. This method transforms the optimization target from entire XRD patterns to parameters of basis functions. We further impose smoothness and sparsity constraints on the parameters to restrict the solution space. We employ the Split Bregman algorithm to iteratively solve the optimization problem. Both simulation and experimental results demonstrate the effectiveness of the proposed BFD-Recon method.<i>Main-results.</i>Compared with a conventional MBIR method for XRDT reconstruction, the proposed BFD-Recon method results in more accurate reconstruction of XRD patterns, especially the sharp peaks that closely match the ground truth. It substantially suppresses the noise and the impact of background signals on the reconstructed XRD patterns. Since the proposed basis function decomposition and the prior align well with the characteristics of XRD patterns, its value is well manifested along the spectral dimension of the reconstructed images. It also reduces blur along the x-ray path in the spatial dimension. Quantitatively, BFD-Recon increases the correlation coefficients between the reconstructed and ground-truth XRD patterns by up to 10% and the average PSNR by 20%.<i>Significance.</i>Through theoretical analysis and experiments, we propose a basis function decomposition method for XRD patterns and demonstrate its effectiveness and general applicability. Incorporating the basis-function-decomposition into the model-based iterative reconstruction can significantly enhance the XRDT reconstruction performance. The method prov
{"title":"A novel reconstruction method based on basis function decomposition for snapshot CAXRDT system.","authors":"Shengzi Zhao, Le Shen, Donghang Miao, Yuxiang Xing","doi":"10.1088/1361-6560/ae37c2","DOIUrl":"10.1088/1361-6560/ae37c2","url":null,"abstract":"<p><p><i>Objective.</i>X-ray diffraction (XRD) is a non-destructive technique capable of obtaining molecular structural information of materials and achieving higher sensitivity than transmission tomography (CT) for substances with similar densities. It has great potential in medical and security applications, such as rapid breast cancer screening, calculi composition analysis, and detection of drugs and explosives. Among various XRD tomography (XRDT) systems, snapshot coded aperture XRDT (SCA-XRDT) achieves the fastest scanning speed, making it well-suited for practical medical imaging and security inspection. However, SCA-XRDT suffers from poor data condition and an ill-posed reconstruction problem, leading to significant challenges in accurate image reconstruction. In this work, we explore the inherent characteristics of XRD patterns and incorporate a novel and effective prior accordingly into an iterative reconstruction algorithm, thereby improving the reconstruction performance.<i>Approach.</i>By analyzing the key physical factors that shape XRD patterns, we represent XRD patterns as a linear combination of basis functions, and validate the feasibility and generality of this representation using experimental data. Building upon this, we propose a novel basis-function-decomposition reconstruction (BFD-Recon) method that incorporates the basis function representation as a prior into a model-based SCA-XRDT reconstruction framework. This method transforms the optimization target from entire XRD patterns to parameters of basis functions. We further impose smoothness and sparsity constraints on the parameters to restrict the solution space. We employ the Split Bregman algorithm to iteratively solve the optimization problem. Both simulation and experimental results demonstrate the effectiveness of the proposed BFD-Recon method.<i>Main-results.</i>Compared with a conventional MBIR method for XRDT reconstruction, the proposed BFD-Recon method results in more accurate reconstruction of XRD patterns, especially the sharp peaks that closely match the ground truth. It substantially suppresses the noise and the impact of background signals on the reconstructed XRD patterns. Since the proposed basis function decomposition and the prior align well with the characteristics of XRD patterns, its value is well manifested along the spectral dimension of the reconstructed images. It also reduces blur along the x-ray path in the spatial dimension. Quantitatively, BFD-Recon increases the correlation coefficients between the reconstructed and ground-truth XRD patterns by up to 10% and the average PSNR by 20%.<i>Significance.</i>Through theoretical analysis and experiments, we propose a basis function decomposition method for XRD patterns and demonstrate its effectiveness and general applicability. Incorporating the basis-function-decomposition into the model-based iterative reconstruction can significantly enhance the XRDT reconstruction performance. 
The method prov","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145966877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-02, DOI: 10.1088/1361-6560/ae3afe
Rapid optimization of focused ultrasound for complex targeting with phased array transducers and precomputed propagation operators.
Maximilian Hasslberger, Mathew G Abraham, Kasra Naftchi-Ardebili, Alexander H Paulus, Kim Butts Pauly
Objective. Low-intensity focused ultrasound has emerged as a versatile tool for various applications, including noninvasive neuromodulation and blood-brain barrier (BBB) opening. To achieve precise individual targeting, phase aberration correction (PAC) is essential to compensate for the heterogeneities introduced by the skull. Traditional methods for PAC are restricted to single point-based targets, resulting in elongated, cigar-shaped focal beams that often fail to align with the geometry of the intended target. Additionally, these approaches demand lengthy simulation times, making the simultaneous sonication of multiple targets within a reasonable timeframe infeasible. Approach. This work introduces rapid optimization-based sonication of volumetric brain targets. By leveraging a pair of linear phased array transducers aligned orthogonally above the skull, the approach optimizes phase and amplitude parameters within seconds to focus acoustic pressure at multiple targets inside target volumes while limiting potential off-target activation. Main results. Three brain areas were targeted under different orthogonal transducer alignments, enforcing the desired intracranial peak pressure at a minimum of three target points in each region. Further results demonstrate the sensitivity to transducer displacements, particularly translational and rotational misalignments. A ray tracing correction scheme was employed, restoring the peak pressure at the intended target region while keeping the increase in off-target pressure below 20%. Significance. Overall, these advancements hold promise for enhancing targeting in focused ultrasound-guided BBB opening and neuromodulatory applications, expanding the utility of ultrasound in clinical and experimental settings.
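As a hedged illustration of the optimization idea (not the authors' code), the sketch below adjusts complex element weights so that the pressure magnitude predicted by precomputed propagation operators reaches a desired value at target points while penalizing off-target pressure; A_t, A_o, and all numerical settings are assumptions made for the example.

```python
# Hypothetical sketch: phase/amplitude optimization with precomputed propagation operators.
# A_t: (n_targets, n_elements) complex operator to target points; A_o: same, to off-target points.
import numpy as np

def optimize_weights(A_t, A_o, p_des=1.0, beta=0.5, lr=1e-2, n_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(A_t.shape[1]) + 1j * rng.standard_normal(A_t.shape[1])
    for _ in range(n_iter):
        rt, ro = A_t @ w, A_o @ w
        # loss = sum(|rt| - p_des)^2 + beta * sum|ro|^2; g is its Wirtinger gradient.
        g = A_t.conj().T @ ((np.abs(rt) - p_des) * rt / (np.abs(rt) + 1e-12)) \
            + beta * (A_o.conj().T @ ro)
        w -= lr * g
    return w / np.max(np.abs(w))   # normalized complex weights (phase + relative amplitude)
```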
{"title":"Rapid optimization of focused ultrasound for complex targeting with phased array transducers and precomputed propagation operators.","authors":"Maximilian Hasslberger, Mathew G Abraham, Kasra Naftchi-Ardebili, Alexander H Paulus, Kim Butts Pauly","doi":"10.1088/1361-6560/ae3afe","DOIUrl":"10.1088/1361-6560/ae3afe","url":null,"abstract":"<p><p><i>Objective</i>. Low-intensity focused ultrasound has emerged as a versatile tool for various applications including noninvasive neuromodulation and blood-brain barrier (BBB) opening. To achieve precise individual targeting, phase aberration correction (PAC) is essential to compensate for the heterogeneities introduced by the skull. Traditional methods for PAC are restricted to single point-based targets, resulting in elongated, cigar-shaped focal beams that often fail to align with the geometry of the intended target. Additionally, these approaches demand lengthy simulation times, making the simultaneous sonication of multiple targets within a reasonable timeframe infeasible.<i>Approach</i>. This work introduces rapid optimization-based sonication of volumetric brain targets. By leveraging a pair of linear phased array transducers aligned orthogonally above the skull, the approach is capable of optimizing phase and amplitude parameters within seconds to focus acoustic pressure at multiple targets inside target volumes while limiting potential off-target activation.<i>Main results</i>. Three brain areas were targeted under different orthogonal transducer alignments, enforcing the desired intracranial peak pressure at a minimum of three target points in each region. Further results demonstrate the sensitivity of transducer displacements, particularly with translational and rotational misalignments. A ray tracing correction scheme was employed, restoring the peak pressure at the intended target region while keeping the increase in off-target pressure below 20%.<i>Significance</i>. Overall, these advancements hold promise for enhancing targeting in focused ultrasound-guided BBB opening and neuromodulatory applications, expanding the utility of ultrasound in clinical and experimental settings.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146011994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-02, DOI: 10.1088/1361-6560/ae3b02
Design optimization using GATE Monte Carlo simulations for a sub-0.5 mm resolution PET scanner with 3-layer DOI detectors.
Han Gyu Kang, Hideaki Tashima, Makoto Higuchi, Taiga Yamaya
Objective. For rodent brain PET imaging, spatial resolution is the most important factor for identifying small brain structures. Previously, we developed a submillimeter-resolution PET scanner with 1 mm crystal pitch using 3-layer depth-of-interaction (DOI) detectors. However, the spatial resolution was over 0.5 mm due to a relatively large crystal pitch and an unoptimized crystal layer design. Here we use Geant4 Application for Tomographic Emission (GATE) Monte Carlo simulations to design and optimize a sub-0.5 mm resolution PET scanner with 3-layer DOI detectors. Methods. The proposed PET scanner has 2 rings, each of which has 16 DOI detectors, resulting in a 23.4 mm axial coverage. Each DOI detector has 3-layer lutetium yttrium orthosilicate crystal arrays with a 0.8 mm crystal pitch. We employed GATE Monte Carlo simulations to optimize three crystal layer designs: A (4 + 4 + 7 mm), B (3 + 4 + 4 mm), and C (3 + 3 + 5 mm). Spatial resolution and imaging performance were evaluated with a point source and a resolution phantom using analytical and iterative algorithms. Main results. Among the three designs, design C provided the most uniform spatial resolution up to a radial offset of 15 mm. The 0.45 mm diameter rod structures were resolved clearly with design C using the iterative algorithm. The GATE simulation results agreed with the experimental data in terms of radial resolution, except at the radial offset of 15 mm. Significance. We optimized the crystal layer design of the mouse brain PET scanner with GATE simulations, thereby achieving sub-0.5 mm resolution in the resolution phantom study.
{"title":"Design optimization using GATE Monte Carlo simulations for a sub-0.5 mm resolution PET scanner with 3-layer DOI detectors.","authors":"Han Gyu Kang, Hideaki Tashima, Makoto Higuchi, Taiga Yamaya","doi":"10.1088/1361-6560/ae3b02","DOIUrl":"10.1088/1361-6560/ae3b02","url":null,"abstract":"<p><p><i>Objective.</i>For rodent brain PET imaging, spatial resolution is the most important factor for identifying small brain structures. Previously, we developed a submillimeter resolution PET scanner with 1 mm crystal pitch using 3-layer depth-of-interaction (DOI) detectors. However, the spatial resolution was over 0.5 mm due to a relatively large crystal pitch and an unoptimized crystal layer design. Here we use Geant4 Application Tomographic Emission (GATE) Monte Carlo simulations to design and optimize a sub-0.5 mm resolution PET scanner with 3-layer DOI detectors.<i>Methods.</i>The proposed PET scanner has 2 rings, each of which has 16 DOI detectors, resulting in a 23.4 mm axial coverage. Each DOI detector has 3-layer lutetium yttrium orthosilicate crystal arrays with a 0.8 mm crystal pitch. We employed GATE Monte Carlo simulations to optimize three crystal layer designs, A (4 + 4 + 7 mm), B (3 + 4 + 4 mm), and C (3 + 3 + 5 mm). Spatial resolution and imaging performance were evaluated with a point source and resolution phantom using analytical and iterative algorithms.<i>Main results.</i>Among the three designs, design C provided the most uniform spatial resolution up to the radial offset of 15 mm. The 0.45 mm diameter rod structures were resolved clearly with design C using the iterative algorithm. The GATE simulation results agreed with the experimental data in terms of radial resolution except at the radial offset of 15 mm.<i>Significance.</i>We optimized the crystal layer design of the mouse brain PET scanner with GATE simulations, thereby achieving sub-0.5 mm resolution in the resolution phantom study.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146011924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-30, DOI: 10.1088/1361-6560/ae3fff
Overlap guided adaptive fractionation.
Yoel Samuel Pérez Haas, Lena Kretzschmar, Bertrand Pouymayou, Stephanie Tanadini-Lang, Jan Unkelbach
Objective: Online-adaptive, magnetic resonance (MR)-guided radiotherapy on hybrid MR-linear accelerators enables stereotactic body radiotherapy (SBRT) of abdominal/pelvic tumors with large interfractional motion. However, overlaps between the planning target volume (PTV) and dose-limiting organs at risk (OARs) often force compromises in PTV coverage. Overlap-guided adaptive fractionation (AF) leverages daily variations in PTV/OAR overlap to improve PTV coverage by administering variable fraction doses based on the measured overlap volume. This study aims to assess the potential benefits of overlap-guided AF.
Approach: We analyzed 58 patients with abdominal/pelvic tumors who received 5-fraction MR-guided SBRT (>6 Gy/fraction) and in whom PTV overlap with at least one dose-limiting OAR (bowel, duodenum, stomach) occurred in ≥1 fraction. Dose-limiting OARs were constrained to 1 cc ≤ 6 Gy per fraction, rendering overlapping PTV volumes underdosed. AF aims to reduce this underdosage by delivering higher doses to the PTV on days with less overlap volume and lower doses on days with more. The PTV coverage gain compared to uniform fractionation was quantified by the area above the PTV dose-volume histogram curve and expressed in ccGy (1 ccGy = 1 cc receiving 1 Gy more). The optimal dose for each fraction was determined through dynamic programming by formulating AF as a Markov decision process.
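The snippet below is a deliberately simplified sketch of this dynamic-programming idea, assuming a toy reward in which a fraction dose is worth more on days with small overlap (expressed as a fraction in [0, 1]) and future overlaps are drawn from an empirical sample; the paper's actual objective (PTV dose-volume-histogram area) and clinical constraints are not modeled here, and all numbers are illustrative.

```python
# Hypothetical sketch: pick today's fraction dose by backward induction over a toy MDP.
import numpy as np
from functools import lru_cache

def plan_fraction(remaining_dose, fractions_left, overlap_today, overlap_samples,
                  dose_step=1.0, d_min=3.0, d_max=12.0):
    doses = np.arange(d_min, d_max + 1e-9, dose_step)

    def units(d):
        return int(round(d / dose_step))

    @lru_cache(maxsize=None)
    def value(budget_units, n_left):
        if n_left == 0:
            return 0.0 if budget_units == 0 else -1e9   # full prescription must be delivered
        out = 0.0
        for ov in overlap_samples:                       # expectation over future overlaps
            out += max((d * (1 - ov) + value(budget_units - units(d), n_left - 1)
                        for d in doses if units(d) <= budget_units), default=-1e9)
        return out / len(overlap_samples)

    budget = units(remaining_dose)
    feasible = [d for d in doses if units(d) <= budget]
    return max(feasible, key=lambda d: d * (1 - overlap_today)
               + value(budget - units(d), fractions_left - 1))
```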
Main results: PTV/OAR overlap volume variation (standard deviation) varied substantially between patients (0.02-5.76 cc). Algorithm-based calculations showed that 55 of 58 patients benefited in PTV coverage from AF. The mean cohort benefit was 2.93 ccGy (range: -4.44 ccGy (disadvantage) to 22.42 ccGy). Higher PTV/OAR overlap variation correlated with larger AF benefit.
Significance: Overlap-guided AF for abdominal/pelvic SBRT is a promising strategy to improve PTV coverage without compromising OAR sparing. Since the benefit of AF depends on PTV/OAR overlap variation, which is low in many patients, the mean cohort advantage is modest. However, well-selected patients with marked PTV/OAR overlap variation derive a relevant dosimetric benefit. Prospective studies are needed to evaluate AF feasibility and quantify clinical benefits.
{"title":"Overlap guided adaptive fractionation.","authors":"Yoel Samuel Pérez Haas, Lena Kretzschmar, Bertrand Pouymayou, Stephanie Tanadini-Lang, Jan Unkelbach","doi":"10.1088/1361-6560/ae3fff","DOIUrl":"https://doi.org/10.1088/1361-6560/ae3fff","url":null,"abstract":"<p><strong>Objective: </strong>Online-adaptive, Magnetic-Resonance-(MR)-guided radiotherapy on a hybrid MR-linear accelerators enables stereotactic body radiotherapy (SBRT) of abdominal/pelvic tumors with large interfractional motion. However, overlaps between planning target volume (PTV) and dose-limiting organs at risk (OARs) often force compromises in PTV-coverage. Overlap-guided adaptive fractionation (AF) leverages daily variations in PTV/OAR overlap to improve PTV-coverage by administering variable fraction doses based on measured overlap volume. This study aims to assess the potential benefits of overlap-guided AF.

Approach: We analyzed 58 patients with abdominal/pelvic tumors having received 5-fraction MR-guided SBRT (>6Gy/fraction), in whom PTV-overlap with at least one dose-limiting OAR (bowel, duodenum, stomach) occurred in ≥ 1 fraction. Dose-limiting OARs were constrained to 1cc ≤ 6Gy per fraction, rendering overlapping PTV volumes underdosed. AF aims to reduce this underdosage by delivering higher doses to the PTV on days with less overlap volume, lower doses on days with more. PTV-coverage-gain compared to uniform fractionation was quantified by the area above the PTV dose-volume-histogram-curve and expressed in ccGy (1ccGy = 1cc receiving 1Gy more). The optimal dose for each fraction was determined through dynamic programming by formulating AF as a Markov decision process. 

Main results: PTV/OAR overlap volume variation (standard deviation) varied substantially between patients (0.02 - 5.76cc). Algorithm-based calculations showed that 55 of 58 patients benefited in PTV-coverage from AF. Mean cohort benefit was 2.93ccGy (range -4.44 (disadvantage) to 22.42ccGy). Higher PTV/OAR overlap variation correlated with larger AF benefit.

Significance: Overlap-guided AF for abdominal/pelvic SBRT is a promising strategy to improve PTV-coverage without compromising OAR sparing. Since the benefit of AF depends on PTV/OAR overlap variation-which is low in many patients-the mean cohort advantage is modest. However, well-selected patients with marked PTV/OAR overlap variation derive a relevant dosimetric benefit. Prospective studies are needed to evaluate AF feasibility and quantify clinical benefits.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146093829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-30, DOI: 10.1088/1361-6560/ae39a0
Contour assessment tool for quality assurance (CAT-QA) to speed up online adaptive radiotherapy.
Lisa Stefanie Fankhauser, Maria Giulia Toro, Andreas Johan Smolders, Renato Bellotti, Antony John Lomax, Francesca Albertini
Objective. Manual review of daily auto-generated contours remains a challenge for clinical implementation of online adaptive radiotherapy (OART). This study introduces a contour assessment tool for quality assurance (CAT-QA), an automatic workflow designed to flag organ-at-risk (OAR) contours requiring manual revision. Approach. CAT-QA applies sequential geometric and dosimetric tests to each auto-generated OAR contour to flag structures requiring review. The tool was retrospectively applied to ten head and neck (H&N) patients (44 CTs with manual contours) treated with proton therapy, split into training and test sets. For each image, three treatment plans were created: one with manual contours (Gold), one with automatic OAR contours (Auto), and one combining auto-contours with manual ones for flagged OARs (CAT-QA plan). Generalizability was assessed on six abdominal patients (8 CTs) without retuning. Main results. CAT-QA flagged 21% of OARs in H&N and 27% in abdominal cases. No dose failures (>5% of the prescribed dose vs. Gold) were observed in H&N; one abdominal OAR (1.4%) exceeded this threshold. In contrast, Auto plans resulted in dose failures in 7.5% (H&N) and 8.5% (abdomen). The higher flag rate observed in the abdomen was driven by a single failed auto-contouring case; excluding this outlier, the average flag rate was 20%, comparable to H&N. CAT-QA runtime averaged <2 min, supporting feasibility for integration into online workflows. Significance. CAT-QA shows promise for improving the safety and efficiency of auto-contouring in OART by flagging OARs that need manual review, with initial results suggesting generalizability across treatment sites.
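The abstract does not disclose the individual tests, so the sketch below only illustrates the general shape of a sequential geometric-then-dosimetric flagging rule; the Dice threshold, dose metric, and margins are invented for illustration and are not those of CAT-QA.

```python
# Hypothetical sketch of a sequential contour-flagging rule (illustrative thresholds).
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

def flag_contour(auto_mask, ref_mask, daily_dose, dose_limit, dice_min=0.85, dose_margin=0.05):
    # Geometric test first: large shape disagreement with a reference contour is flagged.
    if dice(auto_mask, ref_mask) < dice_min:
        return True
    # Dosimetric test: flag if the near-maximum dose in the auto contour approaches the OAR limit.
    near_max = np.percentile(daily_dose[auto_mask.astype(bool)], 99)
    return near_max > (1.0 - dose_margin) * dose_limit
```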
{"title":"Contour assessment tool for quality assurance (CAT-QA) to speed up online adaptive radiotherapy.","authors":"Lisa Stefanie Fankhauser, Maria Giulia Toro, Andreas Johan Smolders, Renato Bellotti, Antony John Lomax, Francesca Albertini","doi":"10.1088/1361-6560/ae39a0","DOIUrl":"10.1088/1361-6560/ae39a0","url":null,"abstract":"<p><p><i>Objective.</i>Manual review of daily auto-generated contours remains a challenge for clinical implementation of online adaptive radiotherapy (OART). This study introduces a contour assessment tool for quality assurance (CAT-QA), an automatic workflow designed to flag organs-at-risk (OAR) contours requiring manual revision.<i>Approach.</i>CAT-QA applies sequential geometric and dosimetric tests to each auto-generated OAR contour to flag structures requiring review. The tool was retrospectively applied to ten head and neck (H&N) patients (44 CTs with manual contours) treated with proton therapy, split into training and test sets. For each image, three treatment plans were created: one with manual contours (Gold), one with automatic OAR contours (Auto), and one combining auto-contours with manual ones for flagged OARs (CAT-QA plan). Generalizability was assessed on six abdominal patients (8 CTs) without retuning.<i>Main Results.</i>CAT-QA flagged 21% of OARs in H&N and 27% in abdominal cases. No dose failures (>5% of prescribed dose vs. Gold) were observed in H&N. One abdominal OAR (1.4%) exceeded this threshold. In contrast, auto plans resulted in dose failures in 7.5% H&N and 8.5% (abdomen). The higher flag rate observed in the abdomen was driven by a single failed auto-contouring case; excluding this outlier, the average flag rate was 20%, comparable to H&N. CAT-QA runtime averaged <2 min, supporting feasibility for integration into online workflows.<i>Significance.</i>CAT-QA shows promise for improving the safety and efficiency of auto-contouring in OART by flagging OARs that need manual review, with initial results suggesting generalizability across treatment sites.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145990325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-30, DOI: 10.1088/1361-6560/ae399f
Institution-specific pre-treatment quality assurance control and specification limits: a tool to implement a new formalism and criteria optimization using statistical process control and heuristic methods.
Aspasia E Evgeneia, Panagiotis Alafogiannis, Nikolaos Dikaios, Evaggelos Pantelis, Panagiotis Papagiannis, Vasiliki Peppa
Objective. Establishing control and specification limits for volumetric modulated arc therapy (VMAT) pre-treatment quality assurance (PTQA) is essential for streamlining PTQA workflows and optimizing plan complexity. This study aimed to develop and implement new methods and tools across treatment sites of varying complexity using multiple global and local gamma index criteria. Approach. 350 VMAT plans comprising brain, prostate, pelvis, and head and neck treatments were retrospectively compiled. For each site, control limits were obtained using statistical process control (SPC) along with heuristic methods (scaled weighted variance (SWV), weighted standard deviation (WSD), and skewness correction (SC)). Specification limits were derived employing a new formalism aligned with the heuristic approaches. Calculations were performed under various global and local gamma index criteria using custom-built software (freely available at https://github.com/AEvgeneia/SPC_GUI_Scientific_Tool.git). Main results. WSD and SC control and specification limits were comparable, while SWV deviated with increasing complexity and stricter gamma index criteria. Conventional criteria (e.g. global 3%/2 mm) lacked sensitivity to detect subtle errors. Global 2%/1 mm and 1%/2 mm, and local criteria stricter than 3%/1 mm, met sensitivity requirements for low-complexity plans while maintaining clear separation between control and specification limits to identify plans with suboptimal delivery accuracy. High-complexity plans showed that global criteria stricter than 3%/1 mm and all evaluated local criteria are optimal, provided specification limits for the most stringent criteria remain clinically acceptable. Significance. A nuanced framework is provided for determining control and specification limits for gamma index passing rates, as well as corresponding thresholds for the mean gamma index, allowing for site-specific detection of suboptimal treatment plans. The open-source software tool developed can operationalize the proposed methodology, facilitating the clinical adoption of advanced statistical methods. Site-specific thresholds could serve as inputs for machine learning and deep learning algorithms aimed at automating error detection and PTQA classification for plan complexity management.
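For context, a minimal sketch of the standard individuals-chart (X-mR) lower control limit for gamma passing rates is given below; the paper's heuristic variants (SWV, WSD, skewness correction) and its specification-limit formalism are not reproduced here, and the example passing rates are made up.

```python
# Sketch: lower control limit from an individuals (X-mR) chart, LCL = mean - 2.66 * mean moving range.
import numpy as np

def individuals_lcl(passing_rates):
    x = np.asarray(passing_rates, dtype=float)
    moving_range = np.abs(np.diff(x))
    lcl = x.mean() - 2.66 * moving_range.mean()
    return max(lcl, 0.0)                                 # passing rates cannot fall below 0%

rates = [98.7, 99.2, 97.5, 99.8, 98.1, 96.9, 99.4]       # example %GP values (illustrative)
print(f"LCL = {individuals_lcl(rates):.1f}%")
```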
{"title":"Institution-specific pre-treatment quality assurance control and specification limits: a tool to implement a new formalism and criteria optimization using statistical process control and heuristic methods.","authors":"Aspasia E Evgeneia, Panagiotis Alafogiannis, Nikolaos Dikaios, Evaggelos Pantelis, Panagiotis Papagiannis, Vasiliki Peppa","doi":"10.1088/1361-6560/ae399f","DOIUrl":"10.1088/1361-6560/ae399f","url":null,"abstract":"<p><p><i>Objective.</i>Establishing control and specification limits for volumetric modulated arc therapy (VMAT) pre-treatment quality assurance (PTQA) is essential for streamlining PTQA workflows and optimizing plan complexity. This study aimed to develop and implement new methods and tools across treatment sites of varying complexity using multiple global and local gamma index criteria.<i>Approach.</i>350 VMAT plans comprising brain, prostate, pelvis and head and neck treatments were retrospectively compiled. For each site, control limits were obtained using statistical process control (SPC) along with heuristic methods (scaled weighted variance (SWV), weighted standard deviation (WSD), skewness correction (SC)). Specification limits were derived employing a new formalism aligned with the heuristic approaches. Calculations were performed under various global and local gamma index criteria using custom-built software (freely available athttps://github.com/AEvgeneia/SPC_GUI_Scientific_Tool.git).<i>Main results.</i>WSD and SC control and specification limits were comparable, while SWV deviated with increasing complexity and stricter gamma index criteria. Conventional criteria (e.g. global 3%/2 mm) lacked sensitivity to detect subtle errors. Global 2%/1 mm and 1%/2 mm, and local criteria stricter than 3%/1 mm, met sensitivity requirements for low-complexity plans while maintaining clear separation between control and specification limits to identify plans with suboptimal delivery accuracy. High-complexity plans showed that global criteria stricter than 3%/1 mm and all evaluated local criteria are optimal, provided specification limits for the most stringent criteria remain clinically acceptable.<i>Significance.</i>A nuanced framework is provided for determining control and specification limits for gamma index passing rates, as well as corresponding thresholds for the mean gamma index, allowing for site-specific detection of suboptimal treatment plans. The open-source software tool developed can operationalize the proposed methodology facilitating the clinical adoption of advanced statistical methods. Site-specific thresholds could serve as inputs for machine learning and deep learning algorithms aimed at automating error detection and PTQA classification for plan complexity management.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145990328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-30, DOI: 10.1088/1361-6560/ae399d
Constructing fine-grained subcortical atlases with connectional consensus graph representation learning.
Zhonghua Wan, Peng Wang, Yazhe Zhai, Yu Xie, Yifei He, Ye Wu
Objective. The complex internal organization of subcortical structures forms the foundation of critical neural circuits that support sensorimotor processing, emotion regulation, and memory. However, this complexity poses a significant challenge to reliable, fine-scale parcellation. Approach. To overcome the trade-off between anatomical specificity and cross-subject consistency, we propose a novel multiscale subcortical parcellation framework grounded in consensus graph representation learning of diffusion magnetic resonance imaging (dMRI) tractography data. We propose a novel fiber-cluster-based connectivity representation to address the limitations of conventional voxel-level tractography features, thereby enhancing anatomical fidelity and reducing tracking noise. Furthermore, our method preserves local structural coherence while significantly mitigating the curse of dimensionality by leveraging 3D-SLIC supervoxel preparcellation. Finally, we integrate consensus graph representation learning with low-rank tensor modeling, enabling population-level regularization that refines individual embeddings and ensures consistent subcortical parcellations across subjects. Using this framework, we create a new, fine-grained subcortical atlas. Main results. Evaluations using ultra-high-field dMRI from the Human Connectome Project demonstrate that our method yields subcortical parcels with enhanced reproducibility and microstructural homogeneity. Across diffusion-derived microstructure indices, our atlas consistently achieves the lowest or second-lowest coefficient of variation, with average reductions of 15%-25% compared to existing atlases, thereby supporting robust downstream analyses of structural homology and regional variability. Significance. Our pipeline provides a powerful tool for detailed mapping of subcortical organization, offering promising applications in precision neuroimaging and the discovery of clinical biomarkers for neurological and psychiatric disorders that affect these structures (e.g. Parkinson's disease, schizophrenia, and major depressive disorder). Our code is available at https://github.com/WanZhonghua/SubcorticalParcellation.
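As a rough single-subject baseline of the underlying idea, the sketch below parcellates supervoxels by spectral partitioning of a fiber-cluster connectivity graph; it omits the paper's consensus graph learning and low-rank tensor modeling entirely, and all matrix names and parameters are hypothetical.

```python
# Hypothetical sketch: spectral parcellation of supervoxels from fiber-cluster connectivity.
import numpy as np

def spectral_parcellation(C, n_parcels=20, n_dims=10, n_iter=50, seed=0):
    """C: (n_supervoxels, n_fiber_clusters) connectivity matrix."""
    Cn = C / (np.linalg.norm(C, axis=1, keepdims=True) + 1e-12)
    W = Cn @ Cn.T                                   # cosine affinity between supervoxels
    L = np.diag(W.sum(axis=1)) - W                  # unnormalized graph Laplacian
    evals, evecs = np.linalg.eigh(L)
    X = evecs[:, 1:n_dims + 1]                      # low-dimensional embedding (skip trivial mode)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_parcels, replace=False)]
    for _ in range(n_iter):                         # plain k-means on the embedding
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_parcels):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels
```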
{"title":"Constructing fine-grained subcortical atlases with connectional consensus graph representation learning.","authors":"Zhonghua Wan, Peng Wang, Yazhe Zhai, Yu Xie, Yifei He, Ye Wu","doi":"10.1088/1361-6560/ae399d","DOIUrl":"10.1088/1361-6560/ae399d","url":null,"abstract":"<p><p><i>Objective.</i>The complex internal organization of subcortical structures forms the foundation of critical neural circuits that support sensorimotor processing, emotion regulation, and memory. However, their complex internal organization poses a significant challenge to reliable, fine-scale parcellation.<i>Approach.</i>To overcome the trade-off between anatomical specificity and cross-subject consistency, we propose a novel multiscale subcortical parcellation framework grounded in consensus graph representation learning of diffusion magnetic resonance imaging (dMRI) tractography data. We propose a novel fiber-cluster-based connectivity representation to address the limitations of conventional voxel-level tractography features, thereby enhancing anatomical fidelity and reducing tracking noise. Furthermore, our method preserves local structural coherence while significantly mitigating the curse of dimensionality by leveraging 3D-SLIC supervoxel preparcellation. Finally, we integrate consensus graph representation learning with low-rank tensor modeling, enabling population-level regularization that refines individual embeddings and ensures consistent subcortical parcellations across subjects. By utilizing this framework, we create a new, fine-grained subcortical atlas.<i>Main results.</i>Evaluations using ultra-high-field dMRI from Human Connectome Project demonstrate that our method yields subcortical parcels with enhanced reproducibility and microstructural homogeneity. Across diffusion-derived microstructure indices, our atlas consistently achieves the lowest or second-lowest coefficient of variation, with average reductions of 15%-25% compared to existing atlases, thereby supporting robust downstream analyses of structural homology and regional variability.<i>Significance.</i>Our pipeline provides a powerful tool for detailed mapping of subcortical organization, offering promising applications in precision neuroimaging and the discovery of clinical biomarkers for neurological and psychiatric disorders that affect these structures (e.g. Parkinson's disease, schizophrenia, and major depressive disorder). Our code is available athttps://github.com/WanZhonghua/SubcorticalParcellation.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145990363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-30, DOI: 10.1088/1361-6560/ae3b01
Improving the efficiency of normalized metal artifact reduction via a unified forward projection.
Jooho Lee, Adam S Wang, Jongduk Baek
Objective. Normalized metal artifact reduction (NMAR) is a robust and widely used method for reducing metal artifacts in computed tomography (CT). However, conventional NMAR requires at least two forward projections, one for metal trace detection and the other for prior sinogram generation, resulting in redundant computation and limited efficiency. This study aims to reformulate NMAR into a single forward projection-based framework that maintains artifact reduction performance while improving computational efficiency and structural simplicity. Approach. We show that the two separate forward projections in NMAR can be unified into a single operation by leveraging deep learning (DL) priors, thereby eliminating the explicit forward projection for the metal trace. The metal trace is inferred directly from localized discrepancies between the original sinogram and the forward projection of the DL prior image, allowing both interpolation and trace identification within a unified forward projection. Simulations and cadaver experiments were performed to compare the proposed method with NMAR, DL reconstruction, and conventional DL-NMAR. Main results. The proposed method reduced metal artifacts with image quality comparable to conventional DL-NMAR while improving computational efficiency. By reducing the number of forward projections from two to one, the proposed method achieved the lowest number of projection operations among all compared methods, highlighting its computational advantage. Significance. This study demonstrates that DL priors can be seamlessly integrated into physics-based NMAR frameworks to simplify image reconstruction pipelines and enhance computational performance. The proposed unified forward projection provides an efficient solution to accelerate MAR in CT imaging.
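A minimal sketch of the unified step, assuming a user-supplied forward_project operator and DL prior image: the single projection of the prior both locates the metal trace (as a large local discrepancy with the measured sinogram) and normalizes the interpolation, in the spirit of NMAR; the threshold and row-wise linear interpolation are illustrative choices, not the authors'.

```python
# Hypothetical sketch: NMAR-style correction from one forward projection of a DL prior.
import numpy as np

def unified_nmar_sinogram(measured, prior_image, forward_project, trace_thresh=0.2):
    prior_proj = forward_project(prior_image)            # the single forward projection
    ratio = measured / np.clip(prior_proj, 1e-6, None)   # normalized sinogram
    trace = np.abs(ratio - 1.0) > trace_thresh           # metal trace from local discrepancy
    corrected = ratio.copy()
    for i in range(ratio.shape[0]):                      # interpolate across the trace, row by row
        bad = trace[i]
        if bad.any() and not bad.all():
            x = np.arange(ratio.shape[1])
            corrected[i, bad] = np.interp(x[bad], x[~bad], ratio[i, ~bad])
    return corrected * prior_proj, trace                 # de-normalized corrected sinogram
```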
{"title":"Improving the efficiency of normalized metal artifact reduction via a unified forward projection.","authors":"Jooho Lee, Adam S Wang, Jongduk Baek","doi":"10.1088/1361-6560/ae3b01","DOIUrl":"10.1088/1361-6560/ae3b01","url":null,"abstract":"<p><p><i>Objective.</i>Normalized metal artifact reduction (NMAR) is a robust and widely used method for reducing metal artifacts in computed tomography (CT). However, conventional NMAR requires at least two forward projections, one for metal trace detection and the other for prior sinogram generation, resulting in redundant computation and limited efficiency. This study aims to reformulate NMAR into a single forward projection-based framework that maintains artifact reduction performance while improving computational efficiency and structural simplicity.<i>Approach.</i>We show that the two separate forward projections in NMAR can be unified into a single operation by leveraging deep learning (DL) priors, thereby eliminating the explicit forward projection for metal trace. The metal trace is inferred directly from localized discrepancies between the original sinogram and the forward projection of the DL prior image, allowing both interpolation and trace identification within a unified forward projection. Simulations and cadaver experiments were performed to compare the proposed method with NMAR, DL reconstruction, and conventional DL-NMAR.<i>Main results.</i>The proposed method reduced metal artifacts with image quality comparable to conventional DL-NMAR while improving computational efficiency. By reducing the number of forward projections from two to one, the proposed method achieved the lowest number of projection operations among all compared methods, highlighting its computational advantage.<i>Significance.</i>This study demonstrates that DL priors can be seamlessly integrated into physics-based NMAR frameworks to simplify image reconstruction pipelines and enhance computational performance. The proposed unified forward projection provides an efficient solution to accelerate MAR in CT imaging.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146011961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-30, DOI: 10.1088/1361-6560/ae3a31
Object independent scatter sensitivities for PET, applied to scatter estimation through fast Monte Carlo simulation.
Simon Noë, Seyed Amir Zaman Pour, Ahmadreza Rezaei, Charles Stearns, Johan Nuyts, Georg Schramm
Objective. Scattered coincidences are a major source of quantitative bias in positron emission tomography (PET) and must be compensated during reconstruction using an estimate of scattered coincidences per line-of-response and time-of-flight bin. Such estimates are typically obtained from simulators with simple cylindrical scanner models that omit detector physics. Incorporating detector sensitivities for scatter is challenging, as scattered coincidences have less constrained properties (e.g. incidence angles) than true coincidences. Approach. We integrated a 5D single-photon detection probability lookup table (photon energy, incidence angle, detector location) into the simulator logic. The resulting scatter sinogram is multiplied by a precomputed, lookup table-specific scatter sensitivity sinogram to yield the scatter estimate. Scatter was simulated with MCGPU-PET, a fast Monte Carlo (MC) simulator with a simplified scanner model, and applied to phantom data from a simulated GE Signa PET/MR in GATE. We evaluated three scenarios: (1) long, high-count MCGPU-PET simulations from a known activity distribution (reference); (2) the same distribution with limited simulation time and counts; and (3) the same low-count data with joint estimation of activity and scatter during reconstruction. We also adapted the approach to test it on two acquisitions from a real Signa PET/MR. Main results. In scenario 1, scatter-compensated reconstructions achieved <1% global bias in all active regions relative to true-only reconstructions. In scenario 2, noisy scatter estimates caused strong positive bias, but Gaussian smoothing restored accuracy to scenario 1 levels. In scenario 3, joint estimation under low-count conditions maintained <1% global bias in nearly all regions. For real scans, the Monte Carlo-based scatter estimate was very similar to the vendor scatter estimate. Significance. Although demonstrated with a fast MC simulator, the proposed scatter sensitivity modeling could enhance existing single scatter simulators used clinically, which typically neglect detector physics. This proof-of-concept also supports the feasibility of scatter estimation for real scans using fast MC simulation, offering potentially greater accuracy and robustness to acquisition noise.
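The sketch below illustrates, under simplifying assumptions, the two ingredients described above: a detection-probability lookup (reduced here to three axes, energy, incidence angle, and detector index, rather than the full 5D table) and weighting of a Monte Carlo scatter sinogram by a precomputed scatter-sensitivity sinogram, with optional Gaussian smoothing for noisy low-count estimates; all names and shapes are placeholders.

```python
# Hypothetical sketch: reduced detection-probability lookup and scatter-sensitivity weighting.
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.ndimage import gaussian_filter

def make_detection_lookup(energies_keV, angles_deg, det_index, table):
    """table: detection probabilities on the (energy, angle, detector) grid."""
    return RegularGridInterpolator((energies_keV, angles_deg, det_index), table,
                                   bounds_error=False, fill_value=0.0)

def apply_scatter_sensitivity(mc_scatter_sino, scatter_sens_sino, smooth_sigma=None):
    """Weight a simulated scatter sinogram by the precomputed sensitivity sinogram."""
    out = mc_scatter_sino * scatter_sens_sino
    if smooth_sigma is not None:                          # smooth noisy low-count estimates
        out = gaussian_filter(out, smooth_sigma)
    return out
```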
{"title":"Object independent scatter sensitivities for PET, applied to scatter estimation through fast Monte Carlo simulation.","authors":"Simon Noë, Seyed Amir Zaman Pour, Ahmadreza Rezaei, Charles Stearns, Johan Nuyts, Georg Schramm","doi":"10.1088/1361-6560/ae3a31","DOIUrl":"10.1088/1361-6560/ae3a31","url":null,"abstract":"<p><p><i>Objective.</i>Scattered coincidences are a major source of quantitative bias in positron emission tomography (PET) and must be compensated during reconstruction using an estimate of scattered coincidences per line-of-response and time-of-flight bin. Such estimates are typically obtained from simulators with simple cylindrical scanner models that omit detector physics. Incorporating detector sensitivities for scatter is challenging, as scattered coincidences have less constrained properties (e.g. incidence angles) than true coincidences.<i>Approach.</i>We integrated a 5D single-photon detection probability lookup table (photon energy, incidence angle, detector location) into the simulator logic. The resulting scatter sinogram is multiplied by a precomputed, lookup table-specific scatter sensitivity sinogram to yield the scatter estimate. Scatter was simulated with MCGPU-PET, a fast Monte Carlo (MC) simulator with a simplified scanner model, and applied to phantom data from a simulated GE Signa PET/MR in GATE. We evaluated three scenarios:Long, high-count MCGPU-PET simulations from a known activity distribution (reference).Same distribution with limited simulation time and counts.Same low-count data with joint estimation of activity and scatter during reconstruction.We also adapted the approach to test it on two acquisitions from a real Signa PET/MR.<i>Main result.</i>In scenario 1, scatter-compensated reconstructions achieved<1%global bias in all active regions relative to true-only reconstructions. In scenario 2, noisy scatter estimates caused strong positive bias, but Gaussian smoothing restored accuracy to scenario 1 levels. In scenario 3, joint estimation under low-count conditions maintained<1%global bias in nearly all regions. For real scans, the Monte Carlo-based scatter estimate was very similar to the vendor scatter estimate.<i>Significance.</i>Although demonstrated with a fast MC simulator, the proposed scatter sensitivity modeling could enhance existing single scatter simulators used clinically, which typically neglect detector physics. This proof-of-concept also supports the feasibility of scatter estimation for real scans using fast MC simulation, offering potentially greater accuracy and robustness to acquisition noise.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146003845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}