Pub Date: 2026-01-23 | DOI: 10.1088/1361-6560/ae36e1
Nicolas Desjardins-Proulx, John Kildea
A comprehensive understanding of the energy-dependent stochastic risks associated with neutron exposure is crucial for developing robust radioprotection systems. However, the scarcity of experimental data presents significant challenges in this domain. Track-structure Monte Carlo (TSMC) simulations with DNA models have demonstrated their potential to further our fundamental understanding of neutron-induced stochastic risks. To date, most TSMC studies on the relative biological effectiveness (RBE) of neutrons have focused on various types of DNA damage clusters defined using base-pair distances. In this study, we extend these methodologies by incorporating the simulation of non-homologous end joining DNA repair in order to evaluate the RBE of neutrons for misrepairs. To achieve this, we adapted our previously published Monte Carlo DNA damage simulation pipeline, which combines condensed-history and TSMC methods, to support the standard DNA damage data format. This adaptation enabled seamless integration of neutron-induced DNA damage results with the DNA mechanistic repair simulator toolkit. Additionally, we developed a clustering algorithm that reproduces the pre-repair endpoints studied in prior works, as well as novel damage clusters based on Euclidean distances. The neutron RBE for misrepairs obtained in this study exhibits a shape qualitatively similar to that of the RBE obtained for previously reported pre-repair endpoints. However, it peaks higher, reaching a maximum RBE value of 23(1) at a neutron energy of 0.5 MeV. Furthermore, we found that misrepair outcomes were better reproduced using the pre-repair endpoint defined with the Euclidean distance between double-strand breaks than with previously published pre-repair endpoints based on base-pair distances. The optimal maximal Euclidean distances were 18 nm for 0.5 MeV neutrons and 60 nm for 250 keV photons. Although this may indicate that Euclidean-distance-based clustering more accurately reflects the DNA damage configurations that lead to misrepairs, the fact that neutrons and photons require different distances raises doubts about whether a single, universal pre-repair endpoint can be used as a stand-in for larger-scale aberrations across all radiation qualities.
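The abstract does not spell out the clustering algorithm itself; as an illustration of Euclidean-distance damage clustering, here is a minimal single-linkage sketch (the function name, the union-find implementation, and the toy coordinates are ours, not the authors'):

```python
import numpy as np

def cluster_dsbs(positions, max_dist):
    """Single-linkage Euclidean clustering: two DSBs share a cluster
    when a chain of DSBs connects them with pairwise gaps <= max_dist.
    positions: (N, 3) array of break coordinates in nm."""
    n = len(positions)
    labels = list(range(n))                 # union-find parents

    def find(i):
        while labels[i] != i:
            labels[i] = labels[labels[i]]   # path compression
            i = labels[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) <= max_dist:
                labels[find(i)] = find(j)

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Three DSBs: the first two lie 10 nm apart, the third 90 nm away.
pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
print(cluster_dsbs(pts, max_dist=18.0))  # → [[0, 1], [2]]
```

With a maximal distance of 18 nm (the optimal value reported for 0.5 MeV neutrons), the first two breaks form one cluster and the distant break stays isolated; raising the threshold to the photon-scale 60 nm would not merge them here, but a 200 nm threshold would.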
Title: "In silico neutron relative biological effectiveness estimations for pre-DNA repair and post-DNA repair endpoints." Physics in Medicine and Biology.
Pub Date: 2026-01-23 | DOI: 10.1088/1361-6560/ae365b
Marco Montefiori, Luca Baldini, Maria Giuseppina Bisogni, Giuseppe Felici, Faustino Gómez, Leonardo Lucchesi, Matteo Morrocchi, Leonardo Orsini, Fabiola Paiar, José Paz-Martín, Carmelo Sgró, Fabio Di Martino
Objective. Ultra-high dose-per-pulse (UHDP) dosimetry remains a key challenge in FLASH radiotherapy. Conventional ionization chambers (ICs) experience large general recombination losses under UHDP due to the high charge densities, which are enhanced by severe electric field perturbation. A novel IC design, the ALLS chamber, has been proposed to overcome these limitations by using a low-pressure noble gas, eliminating ion recombination and enabling an analytical description of charge collection up to 40 Gy/pulse with argon at 1 hPa pressure as the active medium. However, designing such an IC requires meeting both dosimetric and mechanical constraints for low-pressure operation. Since the actual requirements for FLASH dosimetry involve doses per pulse up to 10 Gy, pressures in the range from 1 hPa up to 100 hPa could be applied. Approach. To explore possible configurations in terms of filling gas, pressure, and bias electric field for measuring a given dose per pulse, a Python-based numerical simulation was developed to model charge transport in noble gases. The IC response was evaluated in terms of charge collection efficiency (CCE) by varying the dose per pulse, the bias field, the filling gas, and its pressure. The aim is to identify suitable experimental conditions in which the response of the IC is stable over a given range of dose per pulse. Main results. Simulations identified helium and nitrogen as the best candidates for the filling gas of an ALLS-like IC, capable of measuring up to 15 Gy/pulse at 50 and 10 hPa, respectively, while keeping the relative deviations of the CCE with respect to unity below 1%. Significance. These results support the feasibility of designing ICs for UHDP beams using moderate depressurization, offering a promising path toward the realization of robust, accurate detectors for FLASH reference dosimetry.
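For context on why the CCE of a conventional chamber degrades at ultra-high dose per pulse, the classic Boag expression for general recombination in pulsed beams, f = ln(1 + u)/u with u proportional to the released charge density (and hence to the dose per pulse), can be evaluated directly. This is the textbook pulsed-beam model, not the paper's low-pressure transport simulation, and the constant k below, which lumps chamber geometry, bias voltage, and ion mobilities, is purely illustrative:

```python
import math

def boag_efficiency(dose_per_pulse_gy, k=0.05):
    """Boag's pulsed-beam general-recombination model: f = ln(1+u)/u,
    with u = k * Dpp. k (per Gy) lumps chamber geometry, bias voltage
    and ion mobilities into one illustrative constant."""
    u = k * dose_per_pulse_gy
    return math.log1p(u) / u if u > 0 else 1.0

for dpp in (0.01, 1.0, 10.0, 40.0):
    print(f"{dpp:5.2f} Gy/pulse -> CCE = {boag_efficiency(dpp):.3f}")
```

With these illustrative numbers the efficiency is near unity at conventional doses per pulse and falls steeply in the UHDP regime, which is the loss an ALLS-like low-pressure design avoids by suppressing recombination altogether.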
Title: "Numerical simulations of charge transport in low-pressure noble gases for ultra-high dose per pulse applications." Physics in Medicine and Biology.
Pub Date: 2026-01-23 | DOI: 10.1088/1361-6560/ae3cf6
Matthew Muscat, Juanita Crook, Andrew Jirasek, Jeff Andrews, Nathan Becker
Objective: Develop a spatially resolved probabilistic framework that explicitly models localization uncertainty to map along-core tissue-class sampling probabilities Pi(z) for MR-informed, US-guided transperineal prostate biopsies, yielding millimetre-scale DIL-sampling descriptors for planning, quality assurance, and biology-related research. We also outline an exploratory linkage to core-level pathology; formal clinical validation remains future work.
Approach: Using retrospectively analysed data from 15 HDR-brachytherapy patients enrolled on a prospective trial, we linked 51 TRUS biopsy tracks to mpMRI DICOM structure sets with 26 contoured DILs. Procedural localization uncertainty was modelled as independent rigid translations for each structure type, sampled from zero-mean Gaussians (SDs 1.25-2.2 mm) and propagated via a 10,000-trial Monte Carlo method to obtain Pi(z) and nominal labels Bi(z). Core-level DIL-sampling metrics (⟨PD⟩, max PD) were reported per core and at the cohort level.
Main results: Continuous along-core probability maps that propagate sampling-location and delineation uncertainties go beyond a nominal along-core hit/miss trace, capturing lesion-enriched sub-segments predicted by the mpMRI-derived structure set, transition-band width, and benign prostatic stretches. Across cores, median DIL-sampling descriptors were ⟨PD⟩ ≈ 0.24 and max PD ≈ 0.48; urethral and rectal sampling probabilities were near zero, consistent with safe practice.
Significance: The framework converts measured localization uncertainty into interpretable, millimetre-scale tissue-sampling metrics. These descriptors can inform pre-procedure plan checks and biopsy pre-planning and, where localization is available, intra-procedural estimates of expected DIL sampling. At the clinic level they offer QA summaries by tracking DIL-sampling metrics such as ⟨PD⟩ and max PD across cores, patients, and operators, and they provide spatially contextualized covariates/weights for downstream assays (e.g., Raman spectroscopy, genomics). Model assumptions (rigid, Gaussian, independent sources) are stated explicitly, and a clear path to validation against pathology is presented. These descriptors pertain to sampling of mpMRI-defined DILs and are not, by themselves, malignancy classifiers.
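As a toy illustration of the Monte Carlo propagation described above, consider a single interval-shaped DIL along a one-dimensional core, rigidly shifted once per trial by a zero-mean Gaussian localization error with an SD inside the stated 1.25-2.2 mm range. This is a deliberate simplification of the paper's 3D, multi-structure setup; the geometry, function name, and SD value are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def dil_sampling_probability(z, dil_lo, dil_hi, sd_mm=1.8, n_trials=10_000):
    """Along-core sampling probability for an interval-shaped lesion:
    the lesion [dil_lo, dil_hi] (mm along the core) is rigidly shifted
    by a zero-mean Gaussian error in each trial, and P(z) is the
    fraction of trials in which position z falls inside the shifted
    lesion."""
    shifts = rng.normal(0.0, sd_mm, size=n_trials)   # one rigid shift per trial
    lo = dil_lo + shifts[:, None]                    # (trials, 1)
    hi = dil_hi + shifts[:, None]
    inside = (z[None, :] >= lo) & (z[None, :] <= hi)  # (trials, len(z))
    return inside.mean(axis=0)

z = np.linspace(0.0, 20.0, 201)           # positions along a 20 mm core
p = dil_sampling_probability(z, dil_lo=8.0, dil_hi=12.0)
print(f"<P_D> = {p.mean():.3f}, max P_D = {p.max():.3f}")
```

Averaging the inside-indicator over trials yields the probability curve; per-core descriptors such as ⟨PD⟩ and max PD are then simple summaries of that curve.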
Title: "A probabilistic tissue classification metric for MR-US guided prostate core-needle biopsies with explicit modelling of localization uncertainty." Physics in Medicine and Biology.
Pub Date: 2026-01-23 | DOI: 10.1088/1361-6560/ae36e6
Jun Nakao, Takamitsu Masuda, Tsubasa Yamano, Toshiyuki Toshito, Teiji Nishio
Objective. The range determination uncertainty (σ_est) based on positron emission tomography (PET) imaging, which stems from the Poisson statistics of the detected signal, can be theoretically predicted using Fisher information. This study aims to experimentally validate a Fisher information-based predictive framework that optimizes the irradiation dose and measurement time required for reliable range verification in PET-guided online adaptive proton therapy. Approach. First, we defined a precision criterion of 1.5σ_est < 2 mm for reliable range verification. Then, using polyethylene, water, and a head and neck phantom, we determined the minimum measurement time, calculated in 2 s increments, required to satisfy this criterion at given irradiation doses (0.5 Gy and 0.1 Gy) based on Fisher information. For each condition, 5000 PET images were generated from the measurement datasets, and the maximum likelihood estimation method was independently applied to each to determine the standard deviation of the measured range (σ_meas). Finally, the values of σ_meas were compared with those of σ_est to validate the predictive framework. Main results. The values of σ_meas and σ_est showed consistent agreement (within approximately 0.5 mm), regardless of target properties, dose levels, and measurement times. Furthermore, the measured range uncertainty satisfied the pre-defined precision criterion of 1.5σ_meas < 2 mm under almost all of the tested conditions. Significance. This study provides the first experimental validation of the Fisher information-based predictive framework for PET-based range verification. The findings offer a rationale for integrating this framework into PET-guided online adaptive proton therapy, which will potentially enable reliable range verification with the minimum pre-irradiation dose and measurement time.
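The Fisher-information prediction underlying such a framework can be sketched for a one-dimensional, Poisson-detected activity profile: the information a measurement carries about a rigid range shift r is I(r) = Σ_i (dλ_i/dr)²/λ_i, and the achievable precision follows from the Cramér-Rao bound, σ_est = 1/√I(r). The sigmoid fall-off shape and count levels below are our toy inputs, not the paper's data:

```python
import numpy as np

def range_sigma_cr(depths, profile, total_counts):
    """Cramer-Rao sketch for a rigid range shift of a 1D Poisson-
    detected activity profile: I(r) = sum_i (dlam_i/dr)^2 / lam_i,
    sigma_est = 1 / sqrt(I)."""
    lam = total_counts * profile / profile.sum()   # expected counts per depth bin
    dlam = np.gradient(lam, depths)                # a rigid shift moves the profile
    info = np.sum(dlam**2 / np.maximum(lam, 1e-12))
    return 1.0 / np.sqrt(info)

# Toy activity profile: sigmoid fall-off around 150 mm depth.
depths = np.linspace(0.0, 200.0, 401)
profile = 1.0 / (1.0 + np.exp((depths - 150.0) / 3.0))
for counts in (1e3, 1e4, 1e5):
    print(f"{counts:8.0f} counts -> sigma_est = {range_sigma_cr(depths, profile, counts):.2f} mm")
```

Because I(r) scales linearly with the number of detected counts, σ_est shrinks as 1/√counts, which is what makes a dose/measurement-time trade-off predictable in advance.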
Title: "Experimental validation of a Fisher information-based predictive framework for dose and time optimization in PET-guided online adaptive proton therapy." Physics in Medicine and Biology.
Pub Date: 2026-01-23 | DOI: 10.1088/1361-6560/ae35c9
A Smolders, A J Lomax, F Albertini
Objective. Online adaptive proton therapy could benefit from reoptimization that considers the total dose delivered in previous fractions. However, the accumulated dose is uncertain because of deformable image registration (DIR) uncertainties. This work aims to evaluate the accuracy of a tool predicting the dose accumulation reliability of a treatment plan, allowing consideration of this reliability during treatment planning. Approach. A previously developed deep-learning-based DIR uncertainty model was extended to calculate the expected DIR uncertainty only from the planning computed tomography (CT) and the expected dose accumulation uncertainty by including the planned dose distribution. For 5 lung cancer patients, the expected dose accumulation uncertainty was compared to the uncertainty of the accumulated dose of 9 repeated CTs. The model was then applied to several alternative treatment plans for each patient to evaluate its potential for plan selection. Results. The average accumulated dose uncertainty was close to the expected dose uncertainty for a large range of expected uncertainties. For high expected uncertainties, the model slightly overestimated the uncertainty. For individual voxels, errors up to 5% of the prescribed dose were common, mainly due to the daily dose distribution deviating from the plan and not because of inaccuracies in the expected DIR uncertainty. Despite the voxel-wise inaccuracies, the method proved suitable to select and compare treatment plans with respect to their accumulation reliability. Significance. Using our tool to select reliably accumulatable treatment plans can facilitate the use of accumulated doses during online reoptimization.
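As a rough first-order intuition (not the paper's deep-learning model): a spatial registration uncertainty σ_reg translates into a voxel dose uncertainty of roughly |∇D|·σ_reg, so accumulation is least reliable where steep dose gradients meet large DIR uncertainty. A minimal sketch of this common propagation, with illustrative names and voxel spacing:

```python
import numpy as np

def expected_dose_uncertainty(dose_gy, sigma_reg_mm, voxel_mm=(2.0, 2.0, 2.0)):
    """First-order propagation of a registration SD into a dose SD:
    sigma_D(v) ~ |grad D(v)| * sigma_reg(v). dose_gy: 3D dose array;
    sigma_reg_mm: scalar or array broadcastable to dose_gy."""
    grads = np.gradient(dose_gy, *voxel_mm)          # Gy/mm along each axis
    grad_mag = np.sqrt(sum(g**2 for g in grads))     # |grad D| in Gy/mm
    return grad_mag * sigma_reg_mm

# Linear toy dose ramp (2 Gy/mm along one axis) with a 1.5 mm registration SD:
sigma_map = expected_dose_uncertainty(np.tile(2.0 * np.arange(10.0), (4, 4, 1)),
                                      1.5, voxel_mm=(1.0, 1.0, 1.0))
print(sigma_map.max())  # → 3.0
```

A plan-level reliability score could then summarize this map over the structures of interest, which is the spirit in which the abstract compares alternative plans.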
Title: "Predicting dose accumulation reliability at the planning stage, with an application to adaptive proton therapy." Physics in Medicine and Biology.
Pub Date: 2026-01-22 | DOI: 10.1088/1361-6560/ae3c53
Jingyan Xu, Frédéric Noo
Objective:
We propose a new formulation for ideal observers (IOs) that incorporate stochastic object models (SOMs) for data acquisition optimization.
Approach:
A data acquisition system is considered as a (possibly nonlinear) discrete-to-discrete mapping from a finite-dimensional object space, x ∈ R^(n_d), to a finite-dimensional measurement space, y ∈ R^m. For binary tasks, the two underlying SOMs, H_0 and H_1, are specified by two probability density functions (PDFs), p_0(x) and p_1(x). This leads to the notions of the intrinsic likelihood ratio (LR), Λ_I(x) = p_1(x)/p_0(x), and the intrinsic class separability (ICS); the latter quantifies the population separability that is independent of data acquisition. In contrast to the ICS, the IO employs the "extrinsic" LR Λ(y) = pr(y|H_1)/pr(y|H_0) of the data and quantifies the extrinsic class separability (ECS). The difference between ICS and ECS measures the efficiency of data acquisition. We show that the extrinsic LR Λ(y) is the expectation of the intrinsic LR Λ_I(x), where the expectation is with respect to the posterior PDF pr(x|y, H_0) under H_0.
Main results:
We use two examples, one to clarify the new IO and one to demonstrate its potential for real-world applications. Specifically, we apply the new IO to spectral optimization in dual-energy CT projection-domain material decomposition (pMD), for which SOMs are used to describe the variability of basis-material line integrals. The performance rank orders obtained by the IO agree with physics predictions.
Significance:
The main computation in the new IO involves sampling from the posterior PDF pr(x|y, H_0), which is similar to (fully) Bayesian reconstruction. Our IO computation is thus amenable to standard techniques already familiar to CT researchers. The example of dual-energy pMD serves as a prototype for other spectral optimization problems, e.g., for photon-counting CT or multi-energy CT with multi-layer detectors.
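The central identity, Λ(y) = E[Λ_I(x) | y, H_0], can be checked numerically in a linear-Gaussian toy model where the posterior and both likelihood ratios have closed forms (the toy model and all names below are ours, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D setting: H0: x ~ N(0,1); H1: x ~ N(m,1); measurement y = x + N(0, s^2).
m, s = 1.0, 0.5

def intrinsic_lr(x):
    # Lambda_I(x) = p1(x)/p0(x) for two unit-variance Gaussians
    return np.exp(m * x - 0.5 * m**2)

def sample_posterior_h0(y, n):
    # pr(x | y, H0) is Gaussian in this linear-Gaussian toy model
    post_var = s**2 / (1.0 + s**2)
    post_mean = y / (1.0 + s**2)
    return rng.normal(post_mean, np.sqrt(post_var), size=n)

def extrinsic_lr_mc(y, n=200_000):
    # the identity: Lambda(y) = E[Lambda_I(x) | y, H0], by Monte Carlo
    return intrinsic_lr(sample_posterior_h0(y, n)).mean()

def extrinsic_lr_exact(y):
    # closed form pr(y|H1)/pr(y|H0): y ~ N(0, 1+s^2) or N(m, 1+s^2)
    v = 1.0 + s**2
    return np.exp(m * y / v - 0.5 * m**2 / v)

print(extrinsic_lr_mc(0.8), extrinsic_lr_exact(0.8))
```

The Monte Carlo posterior average reproduces the closed-form extrinsic LR, mirroring how the paper's IO computation reduces to posterior sampling of the kind used in Bayesian reconstruction.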
Title: "Ideal observer estimation for binary tasks with stochastic object models." Physics in Medicine and Biology.
Pub Date: 2026-01-22 | DOI: 10.1088/1361-6560/ae3101
Le Yang, Haiyang Zhang, Lei Zheng, Tianfeng Zhang, Duojin Xia, Xuefei Song, Lei Zhou, Huifang Zhou
Objective. To develop an efficient deep learning framework for precise three-dimensional (3D) segmentation of complex orbital structures in multi-sequence magnetic resonance imaging (MRI) and robust assessment of thyroid eye disease (TED) activity, thereby addressing limitations in computational complexity, segmentation accuracy, and integration of multi-sequence features to support clinical decision-making. Approach. We propose RQNet, a U-shaped 3D segmentation network that incorporates a novel Refined Query Transformer Block with refined-attention-query multi-head self-attention. This design reduces attention complexity from O(N²) to O(N·M) (M ≪ N) through pooled refined queries. High-quality segmentations then feed into a radiomics pipeline that extracts features per region of interest, including shape, first-order, and texture descriptors. The MRI features from the three sequences (T1-weighted imaging (T1WI), contrast-enhanced T1WI (T1CE), and T2-weighted imaging (T2WI)) are subsequently integrated, with support vector machine, random forest, and logistic regression models employed to distinguish between active and inactive TED phases. Main results. RQNet achieved Dice similarity coefficients of 83.34%-87.15% on TED datasets (T1WI, T2WI, T1CE), outperforming state-of-the-art models such as nnFormer, UNETR, SwinUNETR, SegResNet, and nnUNet. The radiomics fusion pipeline yielded area-under-the-curve values of 84.65%-85.89% for TED activity assessment, surpassing single-sequence baselines and confirming the benefit of multi-sequence MRI feature fusion. Significance. The proposed RQNet establishes an efficient segmentation network for 3D orbital MRI, providing accurate depictions of TED structures, robust radiomics-based activity assessment, and enhanced TED assessment through multi-sequence MRI feature integration.
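The pooled-query idea behind the stated O(N²) → O(N·M) reduction can be illustrated with a single-head NumPy sketch using identity projections and mean-pooled keys/values; this illustrates the general technique, not RQNet's actual block:

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)   # stabilized softmax
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def pooled_attention(x, pool=8):
    """Attention with pooled keys/values: N tokens attend to M = N // pool
    mean-pooled tokens, so the score matrix is N x M instead of N x N
    (O(N*M) rather than O(N^2)). Single head, identity projections,
    to keep the cost structure visible."""
    n, d = x.shape
    m = n // pool
    kv = x[: m * pool].reshape(m, pool, d).mean(axis=1)   # (M, d) pooled keys/values
    scores = x @ kv.T / np.sqrt(d)                        # (N, M) score matrix
    return softmax(scores, axis=-1) @ kv                  # (N, d) output

x = np.random.default_rng(0).normal(size=(64, 16))
print(pooled_attention(x).shape)  # → (64, 16)
```

Each of the N tokens scores against only M pooled tokens; with M ≪ N, shrinking the score matrix from N × N to N × M is where the memory and compute saving comes from.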
Refined query network (RQNet) for precise MRI segmentation and robust TED activity assessment.
Pub Date : 2026-01-22 DOI: 10.1088/1361-6560/ae3658
Arman Gorji, Nima Sanati, Amir Hossein Pouria, Somayeh Sadat Mehrnia, Ilker Hacihaliloglu, Arman Rahmim, Mohammad R Salmanpour
Objective. Radiomics-based artificial intelligence (AI) models show potential in breast cancer diagnosis but lack interpretability. This study bridges the gap between radiomic features (RFs) and Breast Imaging Reporting and Data System (BI-RADS) descriptors through a clinically interpretable framework. Methods. We developed a dual-dictionary approach. First, a clinical mapping dictionary (CMD) was constructed by mapping 56 RFs to BI-RADS descriptors (shape, margin, internal enhancement (IE)) based on literature and expert review. Second, we applied this framework to a classification task predicting triple-negative breast cancer (TNBC) versus non-TNBC subtypes using dynamic contrast-enhanced MRI data from a multi-institutional cohort of 1549 patients. We trained 27 machine learning classifiers with 27 feature selection methods. Using SHapley Additive exPlanations (SHAP), we interpreted the model's predictions and developed a statistical mapping dictionary for the 51 RFs not included in the CMD. Results. The best-performing model (variance inflation factor feature selector + extra trees classifier) achieved an average cross-validation accuracy of 0.83 ± 0.02. Our dual-dictionary approach successfully translated predictive RFs into understandable clinical concepts. For example, higher values of 'Sphericity', corresponding to a round/oval shape, were predictive of TNBC. Similarly, lower values of 'Busyness', indicating more homogeneous IE, were also associated with TNBC, aligning with existing clinical observations. This framework confirmed known imaging biomarkers and identified novel, data-driven quantitative features. Conclusion. This study introduces a novel dual-dictionary framework (BM1.0) that bridges RFs and the BI-RADS clinical lexicon. By enhancing the interpretability and transparency of AI models, the framework supports greater clinical trust and paves the way for integrating RFs into breast cancer diagnosis and personalized care.
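At its core, the clinical mapping dictionary is a lookup from radiomic feature names to BI-RADS descriptors plus a clinical reading of the feature's direction. A minimal sketch of that idea, with hypothetical entries and a hypothetical threshold (the actual CMD maps 56 RFs and was built from literature and expert review, not from this toy table):

```python
# Hypothetical CMD entries: radiomic feature -> (BI-RADS descriptor, reading).
# The real dictionary's entries and readings come from the paper, not this sketch.
CMD = {
    "Sphericity": ("shape", "higher values -> round/oval mass"),
    "Busyness": ("internal enhancement", "lower values -> more homogeneous IE"),
    "SurfaceVolumeRatio": ("margin", "higher values -> more irregular margin"),
}

def interpret(feature, value, threshold=0.5):
    """Translate a radiomic feature value into a BI-RADS-style statement.
    Features absent from the CMD fall through to the statistical dictionary."""
    if feature not in CMD:
        return f"{feature}: no clinical mapping (falls to statistical dictionary)"
    descriptor, reading = CMD[feature]
    direction = "high" if value >= threshold else "low"
    return f"{feature} ({descriptor} descriptor, {direction}): {reading}"

print(interpret("Sphericity", 0.9))
```

In the paper's workflow, SHAP ranks the predictive features first; a lookup like this then turns each top-ranked feature into a clinician-readable statement.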
Radiological and biological dictionary of radiomics features: addressing understandable AI issues in personalized breast cancer; dictionary version BM1.0.
Pub Date : 2026-01-21 DOI: 10.1088/1361-6560/ae36e4
Harris Hamilton, Daniel Björkman, Antony Lomax, Jan Hrbacek
Purpose. Ocular torsion, a rotation of the eye about the visual axis, is a challenge occasionally encountered in ocular proton therapy (OPT). It can compromise the safety margin and reduce the conformity of the dose field to the target. This note investigates the effect of ocular torsion on the lateral margin and explores quantitative adaptation strategies to mitigate its adverse effect. Methods. OCULARIS, an in-house OPT research planning tool, was used to simulate 14 patients undergoing OPT. The lateral margin was determined for each patient at ocular torsion angles ranging from -8∘ to 8∘ in discrete steps of 2∘, with 19 collimator rotations simulated at each torsion angle. Results. Margin loss increases with greater ocular torsion, with significant inter-patient variability influenced by the shape of the target. Aligning the collimator rotation with the ocular torsion, termed nominal torsion matching (NTM), retains 61% of the margin, while patient-specific adaptations achieve superior dose conformity to the target. A simple regression method, setting the collimator rotation to the ocular torsion angle minus 1∘ for torsions greater than 2∘, offers some benefit over NTM in this cohort. Conclusions. Margin loss increases with ocular torsion, with the extent of loss influenced by patient-specific geometry. The NTM collimator rotation strategy was found to adequately compensate for torsion-induced margin loss. Alternative collimator rotation strategies were also explored, including a framework for optimising collimator rotation in the event of ocular torsion.
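The regression rule reported in the abstract is simple enough to state as code. A sketch under stated assumptions: the abstract gives the rule only for torsions greater than 2∘ and does not specify sign handling, so the symmetric treatment of negative torsions and the NTM fallback below the threshold are illustrative assumptions, not the authors' exact prescription.

```python
def collimator_rotation(torsion_deg: float) -> float:
    """Collimator rotation (degrees) for a measured ocular torsion (degrees).

    Rule from the abstract: for torsions above 2 degrees, set the collimator
    rotation to the torsion angle minus 1 degree; otherwise fall back to
    nominal torsion matching (NTM), i.e. rotation equal to the torsion.
    Symmetric handling of negative torsions is an assumption of this sketch.
    """
    if abs(torsion_deg) > 2.0:
        # Back off 1 degree in the direction of the torsion (assumption).
        return torsion_deg - (1.0 if torsion_deg > 0 else -1.0)
    return torsion_deg  # NTM for small torsions

print(collimator_rotation(6.0))  # 5.0: 6-degree torsion -> 5-degree rotation
```

Within the study's sampled range (-8∘ to 8∘ in 2∘ steps), such a rule can be evaluated directly against the NTM baseline at each torsion angle.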
Mitigating ocular torsion induced margin loss in ocular proton therapy via collimator rotation.
Pub Date : 2026-01-21 DOI: 10.1088/1361-6560/ae36df
Jing Zhang, Alexandre Bousse, Chi-Hieu Pham, Kuangyu Shi, Julien Bert
Objective. Accurate and personalized radiation dose estimation is crucial for effective targeted radionuclide therapy (TRT). Deep learning (DL) holds promise for this purpose. However, current DL-based dosimetry methods require large-scale supervised data, which is scarce in clinical practice. Approach. To address this challenge, we propose a semi-supervised learning (SSL) framework that leverages readily available pre-therapy positron emission tomography (PET) data, of which only a small subset requires dose labels, to predict radiation doses, thereby reducing the dependency on extensive labeled datasets. In this study, traditional classification-based SSL approaches were adapted and extended to a regression task specifically designed for dose prediction. To facilitate comprehensive testing and validation, we developed a synthetic dataset that simulates PET images and dose calculations using Monte Carlo simulations. Main results. In the experiments, several regression-adapted SSL methods were compared and evaluated under varying proportions of labeled data in the training set. The overall mean absolute percentage error of dose prediction remained between 9% and 11% across the different organs, comparable to the performance of fully supervised models. Significance. These preliminary results demonstrate that the proposed SSL methods yield promising outcomes for organ-level dose prediction, particularly in scenarios where clinical data are not available in sufficient quantities.
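Pseudo-labelling is one common way to adapt classification-style SSL to regression: fit on the labelled subset, predict targets for the unlabelled inputs, and refit on the union. A toy sketch of that loop, with ridge regression standing in for the dose-prediction model (the abstract does not specify which SSL methods or model the authors used, so everything here is illustrative):

```python
import numpy as np

def ridge_fit(X, y, lam=1e-3):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def pseudo_label_ssl(X_lab, y_lab, X_unlab, rounds=3):
    """Toy regression pseudo-labelling: fit on the labelled subset,
    predict pseudo-doses for the unlabelled scans, refit on the union."""
    w = ridge_fit(X_lab, y_lab)
    for _ in range(rounds):
        y_pseudo = X_unlab @ w                      # pseudo-labels
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, y_pseudo])
        w = ridge_fit(X_all, y_all)                 # refit on union
    return w

# Synthetic check: 10 labelled samples, 90 unlabelled, noiseless targets.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
X = rng.standard_normal((100, 2))
y = X @ w_true
w = pseudo_label_ssl(X[:10], y[:10], X[10:])
```

In the paper's setting, the labelled subset would be the scans with Monte Carlo dose labels and the model a deep network, but the fit/pseudo-label/refit structure is the same.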
Semi-supervised learning for dose prediction in targeted radionuclide therapy: a synthetic data study.