
Medical physics: Latest publications

Measuring temperature in polyvinylpyrrolidone (PVP) solutions using MR spectroscopy.
Pub Date : 2025-02-17 DOI: 10.1002/mp.17683
Neville D Gai, Ruifeng Dong, Jan Willem van der Veen, Ronald Ouwerkerk, Carlo Pierpaoli
Background: Polyvinylpyrrolidone (PVP) water solutions could be used for cross-site and cross-vendor validation of diffusion-related measurements. However, since water diffusivity varies as a function of temperature, knowing the temperature of the PVP solution at the time of the measurement is fundamental to accomplishing this task.

Purpose: MR spectroscopy (MRS) could provide absolute temperature measurements, since the water peak moves relative to any stable peak as temperature changes. In this work, the PVP proton spectrum was investigated to see whether any stable peaks would allow for temperature determination. Reproducibility and repeatability for three scanners from three vendors were also assessed.

Methods: A spherical 17 cm container filled with 40% PVP w/w in distilled water was used for the experiments. A Point REsolved Spectroscopy Sequence (PRESS) with water suppression was employed on three 3T scanners from different vendors: GE, Siemens, and Philips. The frequency separation (in ppm) between peaks was measured in a voxel at the location of a fiber optic temperature probe and mapped to the probe-measured temperature. The center peak of the first methylene proton triplet closest to the water peak was selected for analysis in jMRUI because of its ease of identification and echo time shift invariance. The ppm shift of the central methylene proton peak was mapped against measured temperatures. Repeatability and reproducibility across the three scanners were determined at room temperature using 10 repeated PRESS scans. The MRS-established ppm shift versus temperature relationship was used to predict temperature in different PVP phantoms, and these predictions were then compared against fiber optic probe measured temperature values.

Results: Several ¹H peaks were identified on all scans of the PVP phantom. The water peak moved by ∼-0.01 ppm/°C on the three scanners relative to a central methylene peak. The maximum mean absolute temperature difference between the three scanners over a temperature range of 18-35°C was 0.16°C, while the minimum was 0.057°C. Repeatability on each scanner was excellent (std range: 0.00-0.14°C) over 10 repeated PRESS scans. Reproducibility across the three scanners was also excellent, with mean temperature differences between scanners ranging between 0.1 and 0.4°C. Temperature values from MRS were within prediction bounds on the three scanners for another in-house prepared 40% PVP phantom (maximum difference < 0.3°C), while they were consistently overestimated for another 30% PVP phantom (< 1°C) and underestimated for a CaliberMRI 40% PVP phantom (< 2.8°C).

Conclusions: PVP solutions exhibit stable proton peaks, one of which was used for assessing the temperature of the solution using MR proton spectroscopy. These measurements are fast and feasible with standard sequences and postprocessing MRS software and provide fundamental information fo…
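The calibration step described above (fitting the water-to-methylene chemical-shift separation against probe temperature and inverting the fit for new phantoms) can be sketched as a simple linear regression. The numbers below are illustrative only; just the slope's order of magnitude (about -0.01 ppm/°C) comes from the abstract, and the paper's jMRUI processing and actual fit coefficients are not reproduced.

```python
import numpy as np

# Hypothetical calibration data: water-to-methylene peak separation (ppm)
# measured at fiber-optic probe temperatures (deg C). Values are illustrative;
# only the ~-0.01 ppm/degC slope scale is taken from the abstract.
probe_temp_c = np.array([18.0, 22.0, 26.0, 30.0, 35.0])
ppm_separation = np.array([3.530, 3.490, 3.450, 3.410, 3.360])

# Linear fit: separation = slope * T + intercept
slope, intercept = np.polyfit(probe_temp_c, ppm_separation, 1)

def temperature_from_separation(sep_ppm: float) -> float:
    """Invert the linear calibration to predict temperature (deg C)."""
    return (sep_ppm - intercept) / slope

# Example: a new PVP phantom measurement (hypothetical)
measured_sep = 3.472
print(f"slope = {slope:.4f} ppm/degC")
print(f"predicted temperature = {temperature_from_separation(measured_sep):.1f} degC")
```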
{"title":"Measuring temperature in polyvinylpyrrolidone (PVP) solutions using MR spectroscopy.","authors":"Neville D Gai, Ruifeng Dong, Jan Willem van der Veen, Ronald Ouwerkerk, Carlo Pierpaoli","doi":"10.1002/mp.17683","DOIUrl":"https://doi.org/10.1002/mp.17683","url":null,"abstract":"&lt;p&gt;&lt;strong&gt;Background: &lt;/strong&gt;Polyvinylpyrrolidone (PVP) water solutions could be used for cross-site and cross-vendor validation of diffusion-related measurements. However, since water diffusivity varies as a function of temperature, knowing the temperature of the PVP solution at the time of the measurement is fundamental in accomplishing this task.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Purpose: &lt;/strong&gt;MR spectroscopy (MRS) could provide absolute temperature measurements since the water peak moves relative to any stable peak as temperature changes. In this work, the PVP proton spectrum was investigated to see if any stable peaks would allow for temperature determination. Reproducibility and repeatability for three scanners from three vendors were also assessed.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Methods: &lt;/strong&gt;A spherical 17 cm container filled with 40% PVP w/w in distilled water was used for the experiments. A Point REsolved Spectroscopy Sequence (PRESS) with water suppression was employed on three 3T scanners from different vendors-GE, Siemens, and Philips. Frequency separation (in ppm) between peaks was measured in a voxel at the location of a fiber optic temperature probe and mapped to probe measured temperature. The center peak of the first methylene proton triplet closest to water peak was selected for analysis in jMRUI due to its ease of identification and echo time shift invariance. Shift in ppm of the central methylene peak proton was mapped against measured temperatures. Repeatability and reproducibility across the three scanners were determined at room temperature using 10 repeated PRESS scans. MRS established ppm shift versus temperature relationship was used to predict temperature in different PVP phantoms which were then compared against fiber optic probe measured temperature values.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Results: &lt;/strong&gt;Several &lt;sup&gt;1&lt;/sup&gt;H peaks were identified on all scans of the PVP phantom. The water peak moved by ∼-0.01 ppm/°C on the three scanners relative to a central methylene peak. The maximum mean absolute temperature difference over a temperature range of 18-35°C between the three scanners was 0.16°C while the minimum was 0.057°C. Repeatability on each scanner was excellent (std range: 0.00-0.14°C) over 10 repeated PRESS scans. Reproducibility across the three scanners was also excellent with mean temperature difference between scanners ranging between 0.1 and 0.4°C. Temperature values from MRS were within prediction bounds on the three scanners for another in-house prepared 40% PVP phantom (maximum difference&lt;0.3°C), while they were consistently overestimated for another 30% PVP phantom (&lt;1°C) and underestimated for a CaliberMRI 40% PVP phantom (&lt;2.8°C).&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Conclusions: &lt;/strong&gt;PVP solutions exhibit stable proton peaks, one of which was used for assessing the temperature of the solution using MR proton spectroscopy. 
These measurements are fast and feasible with standard sequences and postprocessing MRS software and provide fundamental information fo","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Novel approach to long-term monitoring of accelerator-based boron neutron capture therapy.
Pub Date : 2025-02-17 DOI: 10.1002/mp.17699
Masashi Takada, Natsumi Yagi, Kenzi Shimada, Ryo Fujii, Masaru Nakamura, Satoshi Nakamura, Shoujirou Kato, Tomoya Nunomiya, Kei Aoyama, Masakuni Narita, Takashi Nakamura
Background: Boron neutron capture therapy (BNCT) was conducted in a hospital using an accelerator-based neutron source. The neutron beam intensity at the patient position was evaluated offline using a gold-based neutron activation method. During BNCT neutron beam irradiation of patients, the neutron intensity was controlled in real time by measuring the proton beam current irradiated on a lithium neutron target. The neutron intensity at NCCH decreased owing to the degradation of the lithium neutron target during neutron irradiation. The reduction in the neutron beam intensity could not be monitored via proton beam measurement because neutron production depends on the condition of the neutron target.

Purpose: The duration of BNCT neutron irradiation should be controlled by monitoring the neutron beam intensity with a real-time neutron detector for reliable neutron irradiation of patients. The measurement accuracy of the online neutron beam monitor was obtained experimentally by comparison with the gold radioactivity measured at the patient position. Radiation-induced damage was observed from the variation of the multichannel analyzer pulse height distributions during long-term neutron exposure.

Methods: Neutron beams were measured during neutron beam irradiation at the BNCT facility of Edogawa Hospital in Japan using a neutron beam monitor comprising a 0.07-µm LiF layer and a 40-µm back-illuminated thin Si pin diode. The proton beam was irradiated continuously until a cumulative total beam charge of approximately 3 kC was reached. The online neutron beam monitor counting rates on the neutron target unit and the gold saturation activities at the patient position were measured simultaneously through the entire duration of proton beam irradiation.

Results: The experimental results demonstrated long-term operation of the online neutron beam monitor positioned on the neutron target unit over the entire neutron target lifespan without significant performance deterioration. Good synchronization was observed in the correlation distribution measured using the online neutron beam monitor and the gold neutron activation method. A conversion coefficient of 1.199 Bq⁻¹ g with a standard deviation of 2.5% was evaluated. The neutron beam intensity delivered to patients was evaluated from the online neutron counting rate to be within the acceptable level of ±5%, as per the International Commission on Radiation Units and Measurements, at the 95% confidence level.
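As a rough illustration of how the reported conversion coefficient (1.199 Bq⁻¹ g, standard deviation 2.5%) and the ±5% acceptability level could be used to cross-check the online monitor against gold activation, here is a minimal sketch. The operational definition of the coefficient (monitor counting rate per unit gold saturation specific activity) and all numerical inputs are assumptions, not values taken from the paper.

```python
# Minimal sketch of a monitor-vs-activation cross-check suggested by the abstract.
CONVERSION_COEFF = 1.199   # from the abstract; interpreted here as count rate per (Bq/g), an assumption
TOLERANCE = 0.05           # +/-5% acceptability level cited in the abstract

def expected_count_rate(gold_saturation_activity_bq_per_g: float) -> float:
    """Predict the online monitor counting rate from the gold saturation specific activity."""
    return CONVERSION_COEFF * gold_saturation_activity_bq_per_g

def within_tolerance(measured_rate: float, gold_activity: float) -> bool:
    """Check whether the online monitor agrees with gold activation within +/-5%."""
    predicted = expected_count_rate(gold_activity)
    return abs(measured_rate - predicted) / predicted <= TOLERANCE

# Illustrative numbers only
gold_activity = 2.4e5      # Bq/g (hypothetical)
monitor_rate = 2.95e5      # counts/s (hypothetical)
print(within_tolerance(monitor_rate, gold_activity))
```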
{"title":"Novel approach to long-term monitoring of accelerator-based boron neutron capture therapy.","authors":"Masashi Takada, Natsumi Yagi, Kenzi Shimada, Ryo Fujii, Masaru Nakamura, Satoshi Nakamura, Shoujirou Kato, Tomoya Nunomiya, Kei Aoyama, Masakuni Narita, Takashi Nakamura","doi":"10.1002/mp.17699","DOIUrl":"https://doi.org/10.1002/mp.17699","url":null,"abstract":"&lt;p&gt;&lt;strong&gt;Background: &lt;/strong&gt;Boron neutron capture therapy (BNCT) was conducted in a hospital using an accelerator-based neutron source. The neutron beam intensity at the patient position was evaluated offline using a gold-based neutron activation method. During BNCT neutron beam irradiation on patients, the neutron intensity was controlled in real time by measuring the proton beam current irradiated on a lithium neutron target. The neutron intensity at NCCH decreased owing to the degradation of the lithium neutron target during neutron irradiation. The reduction in the neutron beam intensity could not be monitored via proton beam measurement due to the dependence of neutron production on the neutron target condition.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Purpose: &lt;/strong&gt;The duration of BNCT neutron irradiation should be controlled by monitoring the neutron beam intensity with a real-time neutron detector for reliable neutron irradiation on patients. The measurement accuracy of the online neutron beam monitor was experimentally obtained by comparing the gold radioactivity measured at the patient position. Radiation-induced damage was observed from the variation in the pulse height distributions of multichannel analyzer during long-term neutron exposure.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Methods: &lt;/strong&gt;Neutron beams were measured during neutron beam irradiation at the BNCT facility of Edogawa hospital in Japan using a neutron beam monitor comprising a 0.07- &lt;math&gt; &lt;semantics&gt;&lt;mrow&gt;&lt;mi&gt;μ&lt;/mi&gt; &lt;mi&gt;m&lt;/mi&gt;&lt;/mrow&gt; &lt;annotation&gt;$umu{rm m}$&lt;/annotation&gt;&lt;/semantics&gt; &lt;/math&gt; LiF layer and 40- &lt;math&gt; &lt;semantics&gt;&lt;mrow&gt;&lt;mi&gt;μ&lt;/mi&gt; &lt;mi&gt;m&lt;/mi&gt;&lt;/mrow&gt; &lt;annotation&gt;$umu{rm m}$&lt;/annotation&gt;&lt;/semantics&gt; &lt;/math&gt; back-illuminated thin Si pin diode. The proton beam was continuously irradiated until a cumulative total beam charge of approximately 3 kC was achieved. The online neutron beam monitor counting rates on the neutron target unit and gold saturation activities at the patient position were simultaneously measured through the entire duration of proton beam irradiation.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Results: &lt;/strong&gt;The experimental results demonstrated the long-term operation of the online neutron beam monitor positioned on the neutron target unit during the entire duration of the neutron target lifespan without significant performance deterioration. A good synchronization was observed in a correlation distribution measured using the online neutron beam monitor and the gold neutron activation method. A conversion coefficient of 1.199 &lt;math&gt; &lt;semantics&gt;&lt;msup&gt;&lt;mi&gt;Bq&lt;/mi&gt; &lt;mrow&gt;&lt;mo&gt;-&lt;/mo&gt; &lt;mn&gt;1&lt;/mn&gt;&lt;/mrow&gt; &lt;/msup&gt; &lt;annotation&gt;${rm Bq}^{-1}$&lt;/annotation&gt;&lt;/semantics&gt; &lt;/math&gt; g with a standard deviation of 2.5% was evaluated. 
The neutron beam intensity irradiating on patients within an acceptable level of &lt;math&gt;&lt;semantics&gt;&lt;mo&gt;±&lt;/mo&gt; &lt;annotation&gt;$pm$&lt;/annotation&gt;&lt;/semantics&gt; &lt;/math&gt; 5% as per the International Commission on Radiation Units and Measurements was evaluated from the online neutron counting rate at the 95% conf","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Label-efficient sequential model-based weakly supervised intracranial hemorrhage segmentation in low-data non-contrast CT imaging.
Pub Date : 2025-02-17 DOI: 10.1002/mp.17689
Shreyas H Ramananda, Vaanathi Sundaresan
Background: In clinical settings, intracranial hemorrhages (ICH) are routinely diagnosed using non-contrast CT (NCCT) in emergency stroke imaging for severity assessment. However, compared to magnetic resonance imaging (MRI), ICH shows low contrast and a poor signal-to-noise ratio on NCCT images. Accurate automated segmentation of ICH lesions using deep learning methods typically requires a large amount of voxelwise annotated data with sufficient diversity to capture ICH characteristics.

Purpose: To reduce the requirement for voxelwise labeled data, in this study we propose a weakly supervised (WS) method to segment ICH in NCCT images using image-level labels (presence/absence of ICH). Obtaining such image-level annotations is typically less time-consuming for clinicians. Hence, determining ICH segmentation from image-level labels provides highly time- and resource-efficient site-specific solutions in clinical emergency point-of-care (POC) settings. Moreover, because clinical datasets often consist of a limited amount of data, we show the utility of large image-level annotated datasets for training our proposed WS method to obtain robust ICH segmentation in both large and low-data regimes.

Methods: Our proposed WS method determines the location of ICH using class activation maps (CAMs) from image-level labels and further refines ICH pseudo-masks in an unsupervised manner to train a segmentation model. Unlike existing WS methods for ICH segmentation, we used interslice dependencies across contiguous slices in NCCT volumes to obtain robust activation maps from the classification step. Additionally, we showed the effect of a large dataset on low-data regimes by comparing the WS segmentation trained on a large dataset with the baseline performance in low-data regimes. We used the Radiological Society of North America (RSNA) dataset (21,784 subjects) as the large dataset and the INSTANCE (100 subjects) and PhysioNet (75 subjects) datasets as low-data regimes. In addition, we performed the first investigation of the minimum amount (lower bound) of training data from a large dataset required for robust ICH segmentation performance in low-data regimes. We also evaluated the performance of our model across different ICH subtypes. In RSNA, 541 2D slices were designated for annotation and held out as test data. The remaining samples were divided with a training:testing split of 90%:10%. For INSTANCE and PhysioNet, the data were divided into five folds for cross-validation.

Results: Using only 50% of the ICH slices from the large dataset for training, our proposed method achieved Dice overlap (DSC) values of 0.583 and 0.64 on the PhysioNet and INSTANCE datasets, respectively, representing low-data regimes, which was significantly better (p-value < 0.001) than their baseline fully supervised (F…
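The classification-then-CAM step described in the Methods can be illustrated with a generic class activation map computation (a weighted sum of the last convolutional feature maps by the classifier weights of the target class). This is a minimal PyTorch sketch with a toy network; the paper's sequential model, interslice aggregation, and pseudo-mask refinement are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    """Toy CNN classifier with global average pooling, used only to illustrate
    class activation maps (CAMs); not the architecture from the paper."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)             # (B, 32, H, W)
        pooled = fmap.mean(dim=(2, 3))      # global average pooling
        return self.fc(pooled), fmap

def class_activation_map(model, x, target_class: int):
    """Weighted sum of feature maps by the classifier weights of one class."""
    _, fmap = model(x)
    w = model.fc.weight[target_class]                           # (32,)
    cam = F.relu(torch.einsum("c,bchw->bhw", w, fmap))          # (B, H, W)
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)    # normalize to [0, 1]

x = torch.randn(1, 1, 64, 64)               # one fake NCCT slice
model = TinyClassifier()
cam = class_activation_map(model, x, target_class=1)
pseudo_mask = (cam > 0.5).float()           # threshold the CAM into a pseudo-mask
print(pseudo_mask.shape)                    # torch.Size([1, 64, 64])
```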
{"title":"Label-efficient sequential model-based weakly supervised intracranial hemorrhage segmentation in low-data non-contrast CT imaging.","authors":"Shreyas H Ramananda, Vaanathi Sundaresan","doi":"10.1002/mp.17689","DOIUrl":"https://doi.org/10.1002/mp.17689","url":null,"abstract":"&lt;p&gt;&lt;strong&gt;Background: &lt;/strong&gt;In clinical settings, intracranial hemorrhages (ICH) are routinely diagnosed using non-contrast CT (NCCT) in emergency stroke imaging for severity assessment. However, compared to magnetic resonance imaging (MRI), ICH shows low contrast and poor signal-to-noise ratio on NCCT images. Accurate automated segmentation of ICH lesions using deep learning methods typically requires a large number of voxelwise annotated data with sufficient diversity to capture ICH characteristics.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Purpose: &lt;/strong&gt;To reduce the requirement for voxelwise labeled data, in this study, we propose a weakly supervised (WS) method to segment ICH in NCCT images using image-level labels (presence/absence of ICH). Obtaining such image-level annotations is typically less time-consuming for clinicians. Hence, determining ICH segmentation from image-level labels provides highly time- and manually resource-efficient site-specific solutions in clinical emergency point-of-care (POC) settings. Moreover, because clinical datasets often consist of a limited amount of data, we show the utility of image-level annotated large datasets for training our proposed WS method to obtain a robust ICH segmentation in large as well as low-data regimes.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Methods: &lt;/strong&gt;Our proposed WS method determines the location of ICH using class activation maps (CAMs) from image-level labels and further refines ICH pseudo-masks in an unsupervised manner to train a segmentation model. Unlike existing WS methods for ICH segmentation, we used interslice dependencies across contiguous slices in NCCT volumes to obtain robust activation maps from the classification step. Additionally, we showed the effect of a large dataset on low-data regimes by comparing the WS segmentation trained on a large dataset with the baseline performance in low-data regimes. We used the radiological society of North America (RSNA) dataset (21,784 subjects) as a large dataset and the INSTANCE (100 subjects) and PhysioNet (75 subjects) datasets as low-data regimes. In addition, we performed the first ever investigation of the minimum amount (lower bound) of training data (from a large dataset) required for robust ICH segmentation performance in low-data regimes. We also evaluated the performance of our model across different ICH subtypes. In RSNA, 541 2D slices were designated for annotation and held as test data. The remaining samples were divided, with training:testing of 90%:10%. 
For INSTANCE and PhysioNet, the data were divided into five-fold for cross validation.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Results: &lt;/strong&gt;Using only 50% of the ICH slices from a large data for training, our proposed method achieved a Dice overlap value (DSC) values of 0.583 and 0.64 on PhysioNet and INSTANCE datasets, respectively, representing low-data regimes, which was significantly better (p-value &lt;math&gt;&lt;semantics&gt;&lt;mo&gt;&lt;&lt;/mo&gt; &lt;annotation&gt;$&lt;$&lt;/annotation&gt;&lt;/semantics&gt; &lt;/math&gt; 0.001) than their baseline fully supervised (F","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Applying deep generative model in plan review of intensity modulated radiotherapy.
Pub Date : 2025-02-17 DOI: 10.1002/mp.17704
Peng Huang, Jiawen Shang, Yuhan Fan, Zhixing Chang, Yingjie Xu, Ke Zhang, Zhihui Hu, Jianrong Dai, Hui Yan

Background: Plan review is critical for safely delivering the radiation dose to a patient undergoing radiotherapy and is mainly performed by medical physicists in routine clinical practice. Recently, deep-learning models have been used to assist this manual process. As black-box models, the reasons for their predictions are unknown. Thus, it is important to improve model interpretability to make these models more reliable for clinical deployment.

Purpose: To alleviate this issue, a deep generative model, the adversarial autoencoder (AAE) network, was employed to automatically detect anomalies in intensity-modulated radiotherapy plans.

Methods: The typical plan parameters (collimator position, gantry angle, monitor unit, etc.) were collected to form a feature vector for each training sample. The reconstruction error was the difference between the output and input of the model. Based on the distribution of reconstruction errors of the training samples, a detection threshold was determined. For a test plan, the reconstruction error obtained by the learned model was compared with the threshold to determine its category (anomaly or regular). The model was tested with four network settings. It was also compared with the vanilla AE and six other classic models. The area under the receiver operating characteristic curve (AUC), along with other statistical metrics, was employed for evaluation.
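A minimal sketch of the thresholding logic described above, with a plain autoencoder standing in for the AAE (the adversarial regularization and the actual plan-parameter features are omitted). The percentile-based threshold rule is an assumption; the paper only states that the threshold is derived from the distribution of training reconstruction errors.

```python
import torch
import torch.nn as nn

class PlanAE(nn.Module):
    """Plain autoencoder over plan-parameter feature vectors (stand-in for the AAE)."""
    def __init__(self, n_features: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU())
        self.dec = nn.Linear(8, n_features)

    def forward(self, x):
        return self.dec(self.enc(x))

def reconstruction_errors(model, plans: torch.Tensor) -> torch.Tensor:
    """Per-plan mean squared difference between model output and input."""
    with torch.no_grad():
        return ((model(plans) - plans) ** 2).mean(dim=1)

# Illustrative data: rows are feature vectors built from plan parameters
train_plans = torch.randn(200, 32)
model = PlanAE()
# ... training loop omitted ...

errors = reconstruction_errors(model, train_plans)
threshold = torch.quantile(errors, 0.95)   # assumed rule: high percentile of training errors

test_plan = torch.randn(1, 32)
is_anomaly = reconstruction_errors(model, test_plan) > threshold
print(bool(is_anomaly))
```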

Results: The AAE model achieved the highest accuracy (AUC = 0.997). The AUCs of the other seven classic methods were 0.935 (AE), 0.981 (K-means), 0.896 (principal component analysis), 0.978 (one-class support vector machine), 0.934 (local outlier factor), 0.944 (hierarchical density-based spatial clustering of applications with noise), and 0.882 (isolation forest). This indicates that the AAE model could detect more anomalous plans with a lower false-positive rate.

Conclusions: The AAE model can effectively detect anomalies in radiotherapy plans for lung cancer patients. Compared with the vanilla AE and other classic detection models, the AAE model is more accurate and transparent. The proposed AAE model can improve the interpretability of the results for radiotherapy plan review.

{"title":"Applying deep generative model in plan review of intensity modulated radiotherapy.","authors":"Peng Huang, Jiawen Shang, Yuhan Fan, Zhixing Chang, Yingjie Xu, Ke Zhang, Zhihui Hu, Jianrong Dai, Hui Yan","doi":"10.1002/mp.17704","DOIUrl":"https://doi.org/10.1002/mp.17704","url":null,"abstract":"<p><strong>Background: </strong>Plan review is critical for safely delivering radiation dose to a patient under radiotherapy and mainly performed by medical physicist in routine clinical practice. Recently, the deep-learning models have been used to assist this manual process. As black-box models the reason for their predictions are unknown. Thus, it is important to improve the model interpretability to make them more reliable for clinical deployment.</p><p><strong>Purpose: </strong>To alleviate this issue, a deep generative model, adversarial autoencoder networks (AAE), was employed to automatically detect anomalies in intensity-modulated radiotherapy plans.</p><p><strong>Methods: </strong>The typical plan parameters (collimator position, gantry angle, monitor unit, etc.) were collected to form a feature vector for the training sample. The reconstruction error was the difference between the output and input of the model. Based on the distribution of reconstruction errors of the training samples, a detection threshold was determined. For a test plan, its reconstruction error obtained by the learned model was compared with the threshold to determine its category (anomaly or regular). The model was tested with four network settings. It was also compared with the vanilla AE and the other six classic models. The area under receiver operating characteristic curve (AUC) along with other statistical metrics was employed for evaluation.</p><p><strong>Results: </strong>The AAE model achieved the highest accuracy (AUC = 0.997). The AUCs of the other seven classic methods are 0.935 (AE), 0.981 (K-means), 0.896 (principle component analysis), 0.978 (one-class support vector machine), 0.934 (local outlier factor), and 0.944 (hierarchical density-based spatial clustering of applications with noise), and 0.882 (isolation forest). This indicates that AAE model could detect more anomalous plans with less false positive rate.</p><p><strong>Conclusions: </strong>The AAE model can effectively detect anomaly in radiotherapy plans for lung cancer patients. Comparing with the vanialla AE and other classic detection models, the AAE model is more accurate and transparent. The proposed AAE model can improve the interpretability of the results for radiotherapy plan review.</p>","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A novel fast strategy to calculate equieffective doses under different dose rate conditions.
Pub Date : 2025-02-15 DOI: 10.1002/mp.17688
Mark J Macsuka, Roger W Howell, Katherine A Vallis, Daniel R McGowan
Background: Radiopharmaceutical therapy (RPT) has gained notable attention for its potential in treating difficult cancers, with [¹⁷⁷Lu]Lu-DOTATATE being a notable example. However, the radiobiology of RPT is less well understood than that of external beam radiotherapy (EBRT), and dosimetry protocols are not standardized. Organ dose limits and tumor dose-response correlations are often based on radiobiologically motivated equieffective doses (EQDX). On top of the absorbed dose, these measures are also functions of the absorbed dose rate and of radiobiological parameters that quantify tissue radiosensitivity and damage repair rate. Typically, the absorbed dose and repair rates are assumed to follow a monoexponential pattern, although describing the dose rate function often requires two or more phases.

Purpose: Here we present novel expressions for calculating the equieffective dose in 2 Gy fractions (EQD2) for RPT, considering various absorbed dose rate scenarios and the rate of sublethal DNA damage repair. We aimed to establish an approach that is scalable, robust, and usable alongside various absorbed dose integration methods.

Methods: By assuming simple exponential decay for DNA damage repair and employing a biexponential function for absorbed dose rate decay, we re-established the solutions for EQDX in a concise analytical form. Additionally, we devised a novel hybrid solution applicable to piecewise-defined absorbed dose-rate functions, leveraging both numerical and analytical methodologies. To validate these expressions, simulated measurements were used and comparisons were made with a fully numerical approach. We also investigated the reliability of three methodologies (fully numerical, fully analytical, and a hybrid approach) when simplifying comprehensive dosimetry protocols. Utilizing publicly available clinical data from two patients undergoing [¹⁷⁷Lu]Lu-DOTATATE therapy, we defined the baseline absorbed dose rate model based on the best biexponential fit to four post-injection SPECT measurements at the organ level. We then explored variations in EQD2 values resulting from the omission of the final measurement.

Results: The proposed expressions were found to be accurate and scalable, providing a reliable alternative to fully numerical methods. The results of the fully numerical method converged to our solutions with increasing accuracy as the extrapolation time after injection was increased. However, we found that to achieve an accuracy in EQD2 to within 2%, the numerical method had to extrapolate for up to 890 h in some cases, at which point overflow errors are likely to occur. Our hybrid method also achieved a significant decrease in computation time compared to the fully numerical method. Using data from two patients, we found that the numerical, hybrid, and analytical approaches underestimated the…
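The paper's closed-form EQDX expressions are not reproduced here. As a reference point, the sketch below computes EQD2 numerically for a biexponential absorbed-dose-rate curve with monoexponential repair, using the standard generalized Lea-Catcheside protraction factor G, with BED = D(1 + G·D/(α/β)) and EQD2 = BED/(1 + 2/(α/β)). All parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

# Illustrative radiobiological parameters (assumptions, not the paper's values)
ALPHA_BETA = 2.5            # Gy, late-responding normal tissue
T_REPAIR = 1.5              # h, sublethal-damage repair half-time
MU = np.log(2) / T_REPAIR   # repair rate constant (1/h)

# Biexponential absorbed-dose-rate model Rdot(t) = A1*exp(-L1*t) + A2*exp(-L2*t)
A1, L1 = 0.20, 0.05         # Gy/h and 1/h (fast phase, illustrative)
A2, L2 = 0.05, 0.01         # Gy/h and 1/h (slow washout phase, illustrative)

t = np.linspace(0.0, 1500.0, 100001)   # h, fine grid out to ~21 half-lives of the slow phase
dt = t[1] - t[0]
rdot = A1 * np.exp(-L1 * t) + A2 * np.exp(-L2 * t)

def trapz_uniform(y: np.ndarray, dx: float) -> float:
    """Trapezoidal integral on a uniformly spaced grid."""
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

# Inner convolution y(t) = int_0^t Rdot(s) exp(-MU*(t-s)) ds via a numerically
# stable trapezoidal recursion (avoids overflowing exp(+MU*s) terms).
decay = np.exp(-MU * dt)
y = np.zeros_like(rdot)
for i in range(1, len(t)):
    y[i] = y[i - 1] * decay + 0.5 * dt * (rdot[i] + rdot[i - 1] * decay)

D = trapz_uniform(rdot, dt)                       # total absorbed dose (Gy)
G = 2.0 / D**2 * trapz_uniform(rdot * y, dt)      # Lea-Catcheside dose-protraction factor
BED = D * (1.0 + G * D / ALPHA_BETA)
EQD2 = BED / (1.0 + 2.0 / ALPHA_BETA)

print(f"D = {D:.2f} Gy, G = {G:.3f}, EQD2 = {EQD2:.2f} Gy")
```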
{"title":"A novel fast strategy to calculate equieffective doses under different dose rate conditions.","authors":"Mark J Macsuka, Roger W Howell, Katherine A Vallis, Daniel R McGowan","doi":"10.1002/mp.17688","DOIUrl":"https://doi.org/10.1002/mp.17688","url":null,"abstract":"&lt;p&gt;&lt;strong&gt;Background: &lt;/strong&gt;Radiopharmaceutical therapy (RPT) has gained notable attention for its potential in treating difficult cancers, with [&lt;sup&gt;177&lt;/sup&gt;Lu]Lu-DOTATATE being a notable example. However, the radiobiology of RPT is less understood compared to external beam radiotherapy (EBRT), and dosimetry protocols are not standardized. Organ dose limits and tumor dose-response correlations are often based on radiobiologically motivated equieffective doses (EQDX). On top of absorbed dose, these measures are also functions of the absorbed dose rate and radiobiological parameters that quantify tissue radiosensitivity and damage repair rate. Typically, the absorbed dose and repair rates are assumed to follow a monoexponential pattern, although describing the dose rate function often requires two or more phases to describe the data.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Purpose: &lt;/strong&gt;Here we present novel expressions for calculating the equieffective dose in 2 Gy fractions (EQD2) for RPT, considering various absorbed dose rate scenarios and the rate of sublethal DNA damage repair. We aimed to establish an approach that is scalable, robust, and can be used alongside various absorbed dose integration methods.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Methods: &lt;/strong&gt;By assuming a simple exponential decay for DNA damage repair and employing a biexponential function for absorbed dose rate decay, we have re-established the solutions for EQDX in a concise analytical form. Additionally, we have devised a novel hybrid solution applicable to piecewise-defined absorbed dose-rate functions, leveraging both numerical and analytical methodologies. To validate these expressions, simulated measurements were utilized, and comparisons were made with a fully numerical approach. We also investigated the reliability of three methodologies-fully numerical, fully analytical, and a hybrid approach-when simplifying comprehensive dosimetry protocols. Utilizing publicly available clinical data from two patients undergoing [&lt;sup&gt;177&lt;/sup&gt;Lu]Lu-DOTATATE therapy, we defined the baseline absorbed dose rate model based on the best biexponential fit to four post-injection SPECT measurements at the organ level. We then explored variations in EQD2 values resulting from the omission of the final measurement.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Results: &lt;/strong&gt;The proposed expressions were found to be accurate and scalable, providing a reliable alternative to fully numerical methods. The results of the fully numerical method converged to our solutions with increasing accuracy as the extrapolation time after injection was increased. However, we found that to achieve an accuracy in EQD2 to within 2%, the numerical method had to extrapolate for up to 890 h in some cases, at which point overflow errors are likely to occur. 
Our hybrid method also achieved a significant decrease in computation time compared to the fully numerical method.Using data from two patients, we found that the numerical, hybrid, and analytical approaches underestimated the","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143426889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Lesion segmentation method for multiple types of liver cancer based on balanced dice loss.
Pub Date : 2025-02-13 DOI: 10.1002/mp.17624
Jun Xie, Jiajun Zhou, Meiyi Yang, Lifeng Xu, Tongtong Li, Haoyang Jia, Yu Gong, Xiansong Li, Bin Song, Yi Wei, Ming Liu

Background: Obtaining accurate segmentation regions for liver cancer is of paramount importance for the clinical diagnosis and treatment of the disease. In recent years, many deep-learning-based liver cancer segmentation methods have been proposed to assist radiologists. Due to the differences in characteristics between different types of liver tumors and to data imbalance, it is difficult to train a deep model that achieves accurate segmentation for multiple types of liver cancer.

Purpose: In this paper, we propose a balanced Dice loss (BD Loss) function for balanced learning of segmentation features across multiple categories. We also introduce a comprehensive method based on BD Loss to achieve accurate segmentation of multiple categories of liver cancer.

Materials and methods: We retrospectively collected computed tomography (CT) screening images and tumor segmentations of 591 patients with malignant liver tumors from West China Hospital of Sichuan University. We used the proposed BD Loss to train a deep model that can segment multiple types of liver tumors and, through a greedy parameter averaging (GPA) algorithm, obtained a more generalized segmentation model. Finally, we employed model integration and our proposed post-processing method, which leverages inter-slice information, to achieve more accurate segmentation of liver cancer lesions.
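The abstract does not give the exact form of the BD Loss. The sketch below shows one plausible reading, a multi-class soft Dice loss with per-class balancing weights, in PyTorch; the weighting scheme, tensor shapes, and class count are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def balanced_dice_loss(logits, target, class_weights=None, eps=1e-6):
    """Multi-class soft Dice loss with per-class weights.

    One plausible reading of a "balanced" Dice loss: each tumor class contributes
    a weighted soft-Dice term so that rare classes are not dominated by frequent
    ones. Not the paper's exact formulation.

    logits: (B, C, H, W) raw network outputs
    target: (B, H, W) integer class labels in [0, C)
    """
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()

    dims = (0, 2, 3)                                  # sum over batch and space
    intersection = (probs * onehot).sum(dims)
    cardinality = probs.sum(dims) + onehot.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)

    if class_weights is None:
        class_weights = torch.ones(num_classes, device=logits.device)
    class_weights = class_weights / class_weights.sum()
    return 1.0 - (class_weights * dice_per_class).sum()

# Illustrative call: background plus three tumor classes, weights favoring tumor classes
logits = torch.randn(2, 4, 64, 64)
labels = torch.randint(0, 4, (2, 64, 64))
loss = balanced_dice_loss(logits, labels, class_weights=torch.tensor([0.1, 0.3, 0.3, 0.3]))
print(loss.item())
```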

Results: We evaluated the performance of our proposed automatic liver cancer segmentation method on the dataset we collected. The proposed BD Loss effectively mitigates the adverse effects of data imbalance on the segmentation model. Our proposed method achieved a Dice per case (DPC) of 0.819 (95% CI 0.798-0.841), significantly higher than the baseline, which achieved a DPC of 0.768 (95% CI 0.740-0.796).

Conclusions: The differences in CT images between different types of liver cancer require deep learning models to learn distinct features. Our method addresses this challenge, enabling balanced and accurate segmentation performance across multiple types of liver cancer.

{"title":"Lesion segmentation method for multiple types of liver cancer based on balanced dice loss.","authors":"Jun Xie, Jiajun Zhou, Meiyi Yang, Lifeng Xu, Tongtong Li, Haoyang Jia, Yu Gong, Xiansong Li, Bin Song, Yi Wei, Ming Liu","doi":"10.1002/mp.17624","DOIUrl":"https://doi.org/10.1002/mp.17624","url":null,"abstract":"<p><strong>Background: </strong>Obtaining accurate segmentation regions for liver cancer is of paramount importance for the clinical diagnosis and treatment of the disease. In recent years, a large number of variants of deep learning based liver cancer segmentation methods have been proposed to assist radiologists. Due to the differences in characteristics between different types of liver tumors and data imbalance, it is difficult to train a deep model that can achieve accurate segmentation for multiple types of liver cancer.</p><p><strong>Purpose: </strong>In this paper, We propose a balance Dice Loss(BD Loss) function for balanced learning of multiple categories segmentation features. We also introduce a comprehensive method based on BD Loss to achieve accurate segmentation of multiple categories of liver cancer.</p><p><strong>Materials and methods: </strong>We retrospectively collected computed tomography (CT) screening images and tumor segmentation of 591 patients with malignant liver tumors from West China Hospital of Sichuan University. We use the proposed BD Loss to train a deep model that can segment multiple types of liver tumors and, through a greedy parameter averaging algorithm (GPA algorithm) obtain a more generalized segmentation model. Finally, we employ model integration and our proposed post-processing method, which leverages inter-slice information, to achieve more accurate segmentation of liver cancer lesions.</p><p><strong>Results: </strong>We evaluated the performance of our proposed automatic liver cancer segmentation method on the dataset we collected. The BD loss we proposed can effectively mitigate the adverse effects of data imbalance on the segmentation model. Our proposed method can achieve a dice per case (DPC) of 0.819 (95%CI 0.798-0.841), significantly higher than baseline which achieve a DPC of 0.768(95%CI 0.740-0.796).</p><p><strong>Conclusions: </strong>The differences in CT images between different types of liver cancer necessitate deep learning models to learn distinct features. Our method addresses this challenge, enabling balanced and accurate segmentation performance across multiple types of liver cancer.</p>","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143412131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Physics-informed model-based generative neural network for synthesizing scanner- and algorithm-specific low-dose CT exams.
Pub Date : 2025-02-13 DOI: 10.1002/mp.17680
Hao Gong, Lifeng Yu, Shuai Leng, Scott S Hsieh, Joel G Fletcher, Cynthia H McCollough
Background: Accurate low-dose CT simulation is required to efficiently assess reconstruction and dose reduction techniques. Projection domain noise insertion requires proprietary information from manufacturers. Analytic image domain noise insertion methods are successful for linear reconstruction algorithms; however, extending them to non-linear algorithms remains challenging. Emerging deep-learning-based image domain noise insertion methods have potential, but few approaches have explicitly incorporated physics information and a texture-synthesis model to guide the generation of locally and globally correlated noise texture.

Purpose: We proposed a physics-informed model-based generative neural network for simulating scanner- and algorithm-specific low-dose CT exams (PALETTE). It is expected to provide an alternative to projection domain noise insertion methods in the absence of manufacturers' proprietary information and tools.

Methods: PALETTE integrated a physics-based noise prior generation process, a Noise2Noisier sub-network, and a noise texture synthesis sub-network. The Noise2Noisier sub-network provided a bias prior, which, combined with the noise prior, served as the input to the noise texture synthesis sub-network. Explicit regularizations in the spatial and frequency domains were developed to account for noise spatial correlation and frequency characteristics. For proof of concept, PALETTE was trained and validated for a commercial iterative reconstruction algorithm (SAFIRE, Siemens Healthineers), using paired routine and 25% dose images from CT phantoms (lateral size 30-40 cm; three training and four testing phantoms) and open-access patient cases (10 training and 20 testing cases). In phantom validation, noise power spectra (NPS) were compared in the water background and tissue-mimicking inserts, using peak frequency and mean absolute error (MAE). In patient case evaluation, visual inspection and quantitative assessment were conducted on axial, coronal, and sagittal planes. Local and global noise texture were visually inspected in low-dose CT images and in the difference images between routine and low dose. Noise levels in liver and fat were measured. Local and global 2D Fourier magnitude spectra of the difference images and the corresponding radial mean profiles were used to assess similarity in noise frequency components within tissues and across the entire field of view, using the spectral correlation mapper (SCM) and spectral angle mapper (SAM). Several baseline neural network models (e.g., GAN) were included in the evaluation. Statistical significance was tested using a t-test for related samples.

Results: PALETTE-derived NPS showed accurate noise peak frequency (PALETTE/reference: water 1.40/1.40 lp/cm; inserts 1.7/1.7 lp/cm) and small MAE (≤0.65 HU²cm²). PALETTE created anatomy-dependent noise texture, showing realistic local and global granul…
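As an illustration of the NPS-based evaluation described above, the sketch below estimates a 2D noise power spectrum from a single noise-only ROI (e.g., a routine-minus-low-dose difference region) and radially averages it into a 1D profile over spatial frequency (lp/cm). Real phantom NPS analyses average many detrended ROIs; the ROI, pixel size, and noise level here are hypothetical.

```python
import numpy as np

def nps_2d(noise_roi: np.ndarray, pixel_size_cm: float) -> np.ndarray:
    """Single-ROI 2D noise power spectrum estimate (HU^2 cm^2).

    Simplified: NPS = |FFT(noise)|^2 * dx * dy / (Nx * Ny), after mean removal.
    """
    roi = noise_roi - noise_roi.mean()
    ny, nx = roi.shape
    f = np.fft.fftshift(np.fft.fft2(roi))
    return (np.abs(f) ** 2) * (pixel_size_cm ** 2) / (nx * ny)

def radial_mean(spectrum: np.ndarray, pixel_size_cm: float, n_bins: int = 64):
    """Radially averaged profile of a centered 2D spectrum vs. spatial frequency (lp/cm)."""
    ny, nx = spectrum.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size_cm))
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size_cm))
    rr = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    bins = np.linspace(0.0, rr.max(), n_bins + 1)
    idx = np.digitize(rr.ravel(), bins) - 1
    num = np.bincount(idx, weights=spectrum.ravel(), minlength=n_bins)
    den = np.maximum(np.bincount(idx, minlength=n_bins), 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, (num / den)[:n_bins]

# Hypothetical noise-only ROI: white noise, sigma ~12 HU, 0.07 cm pixels
noise = np.random.normal(0.0, 12.0, size=(128, 128))
freqs, profile = radial_mean(nps_2d(noise, pixel_size_cm=0.07), pixel_size_cm=0.07)
# For white noise the profile is flat, so the peak location is not meaningful here.
print(f"{len(freqs)} radial bins, NPS peak at {freqs[profile.argmax()]:.2f} lp/cm")
```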
{"title":"Physics-informed model-based generative neural network for synthesizing scanner- and algorithm-specific low-dose CT exams.","authors":"Hao Gong, Lifeng Yu, Shuai Leng, Scott S Hsieh, Joel G Fletcher, Cynthia H McCollough","doi":"10.1002/mp.17680","DOIUrl":"https://doi.org/10.1002/mp.17680","url":null,"abstract":"&lt;p&gt;&lt;strong&gt;Background: &lt;/strong&gt;Accurate low-dose CT simulation is required to efficiently assess reconstruction and dose reduction techniques. Projection domain noise insertion requires proprietary information from manufacturers. Analytic image domain noise insertion methods are successful for linear reconstruction algorithms, however extending them to non-linear algorithms remains challenging. Emerging, deep-learning-based image domain noise insertion methods have potential, but few approaches have explicitly incorporated physics information and a texture-synthesis model to guide the generation of locally and globally correlated noise texture.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Purpose: &lt;/strong&gt;We proposed a physics-informed model-based generative neural network for simulating scanner- and algorithm-specific low-dose CT exams (PALETTE). It is expected to provide an alternative to projection domain noise insertion methods in the absence of manufacturers' proprietary information and tools.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Methods: &lt;/strong&gt;PALETTE integrated a physics-based noise prior generation process, a Noise2Noisier sub-network, and a noise texture synthesis sub-network. The Noise2Noisier sub-network provided a bias prior, which, combined with the noise prior, served as the inputs to noise texture synthesis sub-network. Explicit regularizations in spatial and frequency domains were developed to account for noise spatial correlation and frequency characteristics. For proof-of-concept, PALETTE was trained and validated for a commercial iterative reconstruction algorithm (SAFIRE, Siemens Healthineers), using the paired routine and 25% dose images from CT phantoms (lateral size 30-40 cm; three training and four testing phantoms) and open-access patient cases (10 training and 20 testing cases). In phantom validation, noise power spectra (NPS) were compared in water background and tissue-mimicking inserts, using peak frequency and mean-absolute-error (MAE). In patient case evaluation, visual inspection and quantitative assessment were conducted on axial, coronal, and sagittal planes. Local and global noise texture were visually inspected in low-dose CT images and the difference images between routine and low dose. Noise levels in liver and fat were measured. Local and global 2D Fourier magnitude spectra of the difference images and the corresponding radial mean profiles were used to assess similarity in noise frequency components within tissues and entire field-of-view, using spectral correlation mapper (SCM) and spectral angle mapper (SAM). Several baseline neural network models (e.g., GAN) were included in the evaluation. Statistical significance was tested using a t-test for related samples.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Results: &lt;/strong&gt;PALETTE-derived NPS showed accurate noise peak frequency (PALETTE/reference: water 1.40/1.40 lp/cm; inserts 1.7/1.7lp/cm) and small MAE (≤0.65 HU&lt;sup&gt;2&lt;/sup&gt;cm&lt;sup&gt;2&lt;/sup&gt;). 
PALETTE created anatomy-dependent noise texture, showing realistic local and global granul","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143412133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Static proton arc therapy: Comprehensive plan quality evaluation and first clinical treatments in patients with complex head and neck targets.
Pub Date : 2025-02-12 DOI: 10.1002/mp.17669
Francesco Fracchiolla, Erik Engwall, Victor Mikhalev, Marco Cianchetti, Irene Giacomelli, Benedetta Siniscalchi, Johan Sundström, Otte Marthin, Viktor Wase, Mattia Bertolini, Roberto Righetto, Annalisa Trianni, Frank Lohr, Stefano Lorentini
Background: Proton Arc Treatment (PAT) has shown potential over Multi-Field Optimization (MFO) for out-of-target dose reduction, in particular for head and neck (H&N) patients. A feasibility test, including delivery in a clinical environment, is still missing in the literature and is a necessary requirement before clinical application of PAT.

Purpose: To perform a comprehensive comparison between clinically delivered MFO plans and static PAT plans for H&N treatments, followed by end-to-end commissioning of the system to prepare for clinical treatments.

Methods: Anonymized datasets of 10 patients treated for H&N cancer (median prescription dose 70 GyRBE) were selected for this study. Both MFO and PAT plans were created in RayStation and robustly optimized for setup and range uncertainties as in our clinical routine. PAT plans were created with 30 angle directions. Comparisons were performed regarding: (1) nominal dose distributions in terms of target coverage and dose to primary and secondary OARs; (2) robustness evaluation (D95 of the target and D1 of primary OARs); (3) normal tissue complication probability (NTCP) values for xerostomia, swallowing dysfunction, tube feeding, and sticky saliva; (4) D·LETd distributions; (5) the probability of replanning at least once due to anatomical changes; and (6) delivery time: MFO and PAT plans for one patient were delivered in a clinical gantry room. For PAT, two plans, with 30 and with 20 discrete beam directions, were optimized and delivered.

Results: In PAT plans, a significant reduction was observed in the near-maximum dose to the brainstem, while no statistically significant differences were found for other primary OARs or for target coverage metrics (D95 and D98), in both nominal plans and robustness evaluation scenarios. For secondary OARs, PAT plans achieved an impressive reduction in mean dose. Maximum D·LETd distributions in the brainstem, brain, and temporal lobes showed no statistical differences between MFO and PAT plans, while mean D·LETd values were lower with PAT. Median NTCP was significantly reduced for the xerostomia endpoint (ΔNTCP = 8.5%), while reductions for other endpoints were not statistically significant. The number of patients that would need at least one replanning during treatment was similar for PAT and MFO, showing that the established clinical workflow for monitoring anatomy changes will remain the same for both delivery methods. Comparison of delivery time, from the start of the first beam until the end of the last (comprising all the technically motivated delays due to OIS/Therapy Control System operation, gantry rotations, couch rotations, beam line preparation, etc.), resulted in delivery times that were similar for both techniques.

Conclusion: Static PAT plans demonstrate the capability to increase plan quality with…
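The NTCP models used in the study are not specified in the abstract. As a generic illustration of how a ΔNTCP between MFO and PAT plans is obtained from organ dose-volume histograms, the sketch below uses a Lyman-Kutcher-Burman model; the DVHs and the parotid-like model parameters are hypothetical, not the published models applied in the paper.

```python
import math

def gEUD(dose_bins_gy, vol_fractions, n: float) -> float:
    """Generalized EUD from a differential DVH; n is the LKB volume parameter (a = 1/n)."""
    a = 1.0 / n
    return sum(v * d ** a for d, v in zip(dose_bins_gy, vol_fractions)) ** (1.0 / a)

def lkb_ntcp(eud_gy: float, td50_gy: float, m: float) -> float:
    """Lyman-Kutcher-Burman NTCP: probit of (EUD - TD50) / (m * TD50)."""
    t = (eud_gy - td50_gy) / (m * td50_gy)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical parotid differential DVHs (dose bin centers in Gy, volume fractions)
dose_bins = [5, 15, 25, 35, 45]
vol_mfo = [0.20, 0.25, 0.25, 0.20, 0.10]   # illustrative MFO plan
vol_pat = [0.35, 0.30, 0.20, 0.10, 0.05]   # illustrative PAT plan

# Illustrative xerostomia-like parameters (assumptions, not the paper's model)
TD50, M, N = 39.9, 0.40, 1.0
ntcp_mfo = lkb_ntcp(gEUD(dose_bins, vol_mfo, N), TD50, M)
ntcp_pat = lkb_ntcp(gEUD(dose_bins, vol_pat, N), TD50, M)
print(f"delta NTCP = {100 * (ntcp_mfo - ntcp_pat):.1f} percentage points")
```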
{"title":"Static proton arc therapy: Comprehensive plan quality evaluation and first clinical treatments in patients with complex head and neck targets.","authors":"Francesco Fracchiolla, Erik Engwall, Victor Mikhalev, Marco Cianchetti, Irene Giacomelli, Benedetta Siniscalchi, Johan Sundström, Otte Marthin, Viktor Wase, Mattia Bertolini, Roberto Righetto, Annalisa Trianni, Frank Lohr, Stefano Lorentini","doi":"10.1002/mp.17669","DOIUrl":"https://doi.org/10.1002/mp.17669","url":null,"abstract":"&lt;p&gt;&lt;strong&gt;Background: &lt;/strong&gt;Proton Arc Treatment (PAT) has shown potential over Multi-Field Optimization (MFO) for out-of-target dose reduction in particular for head and neck (H&N) patients. A feasibility test, including delivery in a clinical environment is still missing in the literature and a necessary requirement before clinical application of PAT.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Purpose: &lt;/strong&gt;To perform a comprehensive comparison between clinically delivered MFO plans and static PAT plans for H&N treatments, followed by end-to-end commissioning of the system to prepare for clinical treatments.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Methods: &lt;/strong&gt;Anonymized datasets of 10 patients treated for H&N cancer (median prescription dose 70 GyRBE) were selected for this study. Both MFO and PAT plans were created in RayStation and robustly optimized for setup and range uncertainties as in our clinical routine. PAT plans were created with 30 angle directions. 1. Comparisons were performed regarding: 2. nominal dose distributions in terms of target coverage, dose to primary and secondary OARs 3. robustness evaluation (D&lt;sub&gt;95&lt;/sub&gt; of the target and D&lt;sub&gt;1&lt;/sub&gt; of primary OARs) 4. Normal tissue complication probability (NTCP) values for xerostomia, swallowing dysfunction, tube feeding, and sticky saliva 5. D·LET&lt;sub&gt;d&lt;/sub&gt; distributions 6. the probability of replanning at least once due to anatomical changes 7. delivery time: MFO and PAT plans, for one patient, were delivered in a clinical gantry room. For PAT, two plans with 30 and with 20 discrete beam directions were optimized and delivered.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Results: &lt;/strong&gt;In PAT plans, a significant reduction was observed in the near maximum dose to the brainstem, while no statistically significant differences were found for other primary OARs or target coverage metrics (D&lt;sub&gt;95&lt;/sub&gt; and D&lt;sub&gt;98&lt;/sub&gt;) in both nominal plans and robustness evaluation scenarios. For secondary OARs, PAT plans achieved an impressive reduction in mean dose. Max D·LETd distributions in brainstem, brain, and temporal lobes showed no statistical differences between MFO and PAT plans while mean D·LETd values were lower with PAT. Median NTCP was significantly reduced for xerostomia as endpoint (ΔNTCP = 8.5%), while reductions in other endpoints were not statistically significant. The number of patients that would need at least one replanning during the treatment for PAT was similar to MFO, showing that the established clinical workflow for monitoring of anatomy changes will remain the same for both delivery methods. Comparison in terms of delivery time from the start of the first beam until the end of the last (comprising all the technically motivated delays due to operation of OIS/Therapy Control System operation, gantry rotations, couch rotations, beam line preparation etc.) 
resulted in delivery times that were similar for both techniques.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Conclusion: &lt;/strong&gt;Static PAT plans demonstrate the capability to increase plan quality with ","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143412136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Beam's eye view to fluence maps 3D network for ultra fast VMAT radiotherapy planning.
Pub Date : 2025-02-11 DOI: 10.1002/mp.17673
Simon Arberet, Florin C Ghesu, Riqiang Gao, Martin Kraus, Jonathan Sackett, Esa Kuusela, Ali Kamen

Background: Volumetric modulated arc therapy (VMAT) revolutionizes cancer treatment by precisely delivering radiation while sparing healthy tissues. Fluence map generation, crucial in VMAT planning, traditionally involves complex, iterative, and thus time-consuming processes. These fluence maps are subsequently used for leaf sequencing. The deep-learning approach presented in this article aims to expedite this by directly predicting fluence maps from patient data.

Purpose: To accelerate VMAT treatment planning by quickly predicting fluence maps from a 3D dose map. The predicted fluence maps can be quickly leaf-sequenced because the network was trained to take the machine constraints into account.

Methods: We developed a 3D network which we trained in a supervised way using a combination of L1 and L2 losses, with radiation therapy (RT) plans generated by Eclipse and plans from the REQUITE dataset, taking the RT dose map as input and the fluence maps computed from the corresponding RT plans as target. Our network jointly predicts the 180 fluence maps corresponding to the 180 control points (CPs) of single-arc VMAT plans. To help the network, we preprocess the input dose by computing projections of the 3D dose map to the beam's eye view (BEV) of the 180 CPs, in the same coordinate system as the fluence maps. We generated over 2000 VMAT plans using Eclipse to scale up the dataset size. Additionally, we evaluated various network architectures and analyzed the impact of increasing the dataset size.
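The supervised training objective described above (a combination of L1 and L2 losses over the 180 jointly predicted fluence maps) can be sketched as follows in PyTorch. The relative weights and the tensor layout (batch, 180 control points, H, W) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def combined_l1_l2_loss(pred, target, w_l1: float = 1.0, w_l2: float = 1.0):
    """Weighted sum of L1 and L2 (MSE) terms between predicted and reference
    fluence maps. The weights are assumptions, not values from the paper."""
    return w_l1 * F.l1_loss(pred, target) + w_l2 * F.mse_loss(pred, target)

# Illustrative shapes: 180 fluence maps of 64x64 predicted per plan
pred = torch.rand(2, 180, 64, 64, requires_grad=True)
target = torch.rand(2, 180, 64, 64)
loss = combined_l1_l2_loss(pred, target)
loss.backward()
print(loss.item())
```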

Results: We measure performance in the 2D fluence-map domain using image metrics (PSNR and SSIM), as well as in the 3D dose domain using the dose-volume histogram (DVH), on a test set. Network inference, excluding data loading and processing, takes less than 20 ms. Using our proposed 3D network architecture and scaling up the dataset with Eclipse-generated plans improved fluence map reconstruction performance by approximately 8 dB in PSNR compared to a U-Net architecture trained on the original REQUITE dataset. The resulting DVHs are very close to those of the input target dose.
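For reference, PSNR is a log-scale function of the mean squared error relative to the signal's peak value, so the reported gain of roughly 8 dB corresponds to about a six-fold reduction in MSE at a fixed peak. A minimal sketch of the usual definition (the paper's exact peak and normalization choices are not stated, so the defaults below are assumptions):

```python
import numpy as np

def psnr_db(pred, target, data_range=None):
    """Peak signal-to-noise ratio in dB. data_range defaults to the target's
    maximum value, which is an assumption; other peak definitions exist."""
    mse = np.mean((np.asarray(pred) - np.asarray(target)) ** 2)
    if data_range is None:
        data_range = float(np.max(target))
    return 10.0 * np.log10(data_range ** 2 / mse)
```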

Conclusions: We developed a novel deep learning approach for ultrafast VMAT planning that predicts all the fluence maps of a VMAT arc in a single network inference. The small DVH differences validate this approach for ultrafast VMAT planning.

{"title":"Beam's eye view to fluence maps 3D network for ultra fast VMAT radiotherapy planning.","authors":"Simon Arberet, Florin C Ghesu, Riqiang Gao, Martin Kraus, Jonathan Sackett, Esa Kuusela, Ali Kamen","doi":"10.1002/mp.17673","DOIUrl":"https://doi.org/10.1002/mp.17673","url":null,"abstract":"<p><strong>Background: </strong>Volumetric modulated arc therapy (VMAT) revolutionizes cancer treatment by precisely delivering radiation while sparing healthy tissues. Fluence maps generation, crucial in VMAT planning, traditionally involves complex and iterative, and thus time consuming processes. These fluence maps are subsequently leveraged for leaf-sequence. The deep-learning approach presented in this article aims to expedite this by directly predicting fluence maps from patient data.</p><p><strong>Purpose: </strong>To accelerate VMAT treatment planning by quickly predicting fluence maps from a 3D dose map. The predicted fluence maps can be quickly leaf sequenced because the network was trained to take into account the machine constraints.</p><p><strong>Methods: </strong>We developed a 3D network which we trained in a supervised way using a combination of <math> <semantics><msub><mi>L</mi> <mn>1</mn></msub> <annotation>$L_1$</annotation></semantics> </math> and <math> <semantics><msub><mi>L</mi> <mn>2</mn></msub> <annotation>$L_2$</annotation></semantics> </math> losses, and radiation therapy (RT) plans generated by Eclipse and from the REQUITE dataset, taking the RT dose map as input and the fluence maps computed from the corresponding RT plans as target. Our network predicts jointly the 180 fluence maps corresponding to the 180 control points (CP) of single arc VMAT plans. In order to help the network, we preprocess the input dose by computing the projections of the 3D dose map to the beam's eye view (BEV) of the 180 CPs, in the same coordinate system as the fluence maps. We generated over 2000 VMAT plans using Eclipse to scale up the dataset size. Additionally, we evaluated various network architectures and analyzed the impact of increasing the dataset size.</p><p><strong>Results: </strong>We are measuring the performance in the 2D fluence maps domain using image metrics (PSNR and SSIM), as well as in the 3D dose domain using the dose-volume histogram (DVH) on a test set. The network inference, which does not include the data loading and processing, is less than 20 ms. Using our proposed 3D network architecture as well as increasing the dataset size using Eclipse improved the fluence map reconstruction performance by approximately 8 dB in PSNR compared to a U-Net architecture trained on the original REQUITE dataset. The resulting DVHs are very close to the one of the input target dose.</p><p><strong>Conclusions: </strong>We developed a novel deep learning approach for ultrafast VMAT planning by predicting all the fluence maps of a VMAT arc in one single network inference. 
The small difference of the DVH validate this approach for ultrafast VMAT planning.</p>","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143401026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Asymmetry analysis of nuclear Overhauser enhancement effect at -1.6 ppm in ischemic stroke.
Pub Date : 2025-02-11 DOI: 10.1002/mp.17677
Yu Zhao, Aqeela Afzal, Zhongliang Zu

Background: The nuclear Overhauser enhancement (NOE)-mediated saturation transfer effect at -1.6 ppm, termed NOE(-1.6 ppm), has demonstrated potential for detecting ischemic stroke. However, quantification of the NOE(-1.6 ppm) effect usually relies on a multiple-pool Lorentzian fit method, which necessitates a time-consuming acquisition of the entire chemical exchange saturation transfer (CEST) Z-spectrum with high frequency resolution, thus hindering its clinical applications.
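For context, the multiple-pool Lorentzian fit mentioned above models the Z-spectrum as one minus a sum of Lorentzian lines, one per pool (water, semi-solid MT, amide, NOE(-1.6 ppm), NOE(-3.5 ppm), and so on), each with its own amplitude, center offset, and width; fitting all pools requires a densely sampled Z-spectrum, which is what makes the acquisition slow. A hedged sketch of such a model function, suitable for a least-squares fitter such as scipy.optimize.curve_fit (pool parameters and names are illustrative, not the study's values):

```python
import numpy as np

def lorentzian(offset_ppm, amplitude, center_ppm, fwhm_ppm):
    """Single Lorentzian line as a function of the saturation offset (ppm)."""
    half_width = fwhm_ppm / 2.0
    return amplitude * half_width**2 / (half_width**2 + (offset_ppm - center_ppm) ** 2)

def multi_pool_z(offset_ppm, *pool_params):
    """Z-spectrum modeled as 1 minus a sum of Lorentzian pools.

    pool_params is a flat sequence of (amplitude, center_ppm, fwhm_ppm)
    triples, e.g., water at 0 ppm, NOE at -1.6 and -3.5 ppm, amide at
    +3.5 ppm, and a broad MT pool."""
    offsets = np.asarray(offset_ppm, dtype=float)
    z = np.ones_like(offsets)
    for amplitude, center_ppm, fwhm_ppm in np.reshape(pool_params, (-1, 3)):
        z = z - lorentzian(offsets, amplitude, center_ppm, fwhm_ppm)
    return z
```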

Purpose: This study aims to assess the feasibility of employing asymmetry analysis, a rapid CEST data acquisition and analysis method, for quantifying the NOE(-1.6 ppm) effect in an animal model of ischemic stroke.

Methods: We examined potential contaminations from guanidinium/amine CEST, NOE(-3.5 ppm), and asymmetric magnetization transfer (MT) effects, which could reduce the specificity of the asymmetry analysis of NOE(-1.6 ppm). First, a Lorentzian difference (LD) analysis was used to mitigate direct water saturation and MT effects, providing separate estimations of the contributions from the guanidinium/amine CEST and NOE effects. Then, the asymmetry analysis of the LD fitted spectrum was compared with the asymmetry analysis of the raw CEST Z-spectrum to evaluate the contribution of the asymmetric MT effect at -1.6 ppm.
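As a point of reference for the asymmetry analysis described above: a common convention for estimating an upfield effect such as NOE(-1.6 ppm) is to subtract the Z-spectrum value at the negative (label) offset from the value at the mirrored positive (reference) offset. The sketch below follows that convention on a sampled Z-spectrum; it is a simplification and does not reproduce the study's LD preprocessing:

```python
import numpy as np

def noe_asymmetry(offsets_ppm, z_values, delta_ppm=1.6):
    """Asymmetry estimate of an upfield effect at -delta_ppm:
    Z(+delta_ppm) - Z(-delta_ppm), with both values obtained by linear
    interpolation of the sampled Z-spectrum (S/S0)."""
    order = np.argsort(offsets_ppm)            # np.interp requires increasing x
    x = np.asarray(offsets_ppm, dtype=float)[order]
    z = np.asarray(z_values, dtype=float)[order]
    z_reference = np.interp(+delta_ppm, x, z)  # downfield, mirrored offset
    z_label = np.interp(-delta_ppm, x, z)      # upfield, NOE(-1.6 ppm) side
    return z_reference - z_label
```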

Results: Results show that the variations of the LD-quantified NOE(-1.6 ppm) in stroke lesions are much greater than those of the CEST signals at +1.6 ppm and NOE(-3.5 ppm), suggesting that NOE(-1.6 ppm) makes the dominant contribution to the asymmetry analysis at -1.6 ppm, compared with the guanidinium/amine CEST and NOE(-3.5 ppm) effects, in ischemic stroke. The NOE(-1.6 ppm) variations in the asymmetry analysis of the raw CEST Z-spectrum are close to those in the asymmetry analysis of the LD-fitted spectrum, revealing that NOE(-1.6 ppm) dominates over the asymmetric MT effects.

Conclusion: Our study demonstrates that the asymmetry analysis can quantify the NOE(-1.6 ppm) contrast in ischemic stroke with high specificity, thus presenting a viable alternative for rapid mapping of ischemic stroke.

{"title":"Asymmetry analysis of nuclear Overhauser enhancement effect at -1.6 ppm in ischemic stroke.","authors":"Yu Zhao, Aqeela Afzal, Zhongliang Zu","doi":"10.1002/mp.17677","DOIUrl":"https://doi.org/10.1002/mp.17677","url":null,"abstract":"<p><strong>Background: </strong>The nuclear Overhauser enhancement (NOE)-mediated saturation transfer effect at -1.6 ppm, termed NOE(-1.6 ppm), has demonstrated potential for detecting ischemic stroke. However, the quantification of the NOE(-1.6 ppm) effect usually relies on a multiple-pool Lorentzian fit method, which necessitates a time-consuming acquisition of the entire chemical exchange saturation transfer (CEST) Z-spectrum with high-frequency resolution, thus hindering its clinical applications.</p><p><strong>Purpose: </strong>This study aims to assess the feasibility of employing asymmetry analysis, a rapid CEST data acquisition and analysis method, for quantifying the NOE(-1.6 ppm) effect in an animal model of ischemic stroke.</p><p><strong>Methods: </strong>We examined potential contaminations from guanidinium/amine CEST, NOE(-3.5 ppm), and asymmetric magnetization transfer (MT) effects, which could reduce the specificity of the asymmetry analysis of NOE(-1.6 ppm). First, a Lorentzian difference (LD) analysis was used to mitigate direct water saturation and MT effects, providing separate estimations of the contributions from the guanidinium/amine CEST and NOE effects. Then, the asymmetry analysis of the LD fitted spectrum was compared with the asymmetry analysis of the raw CEST Z-spectrum to evaluate the contribution of the asymmetric MT effect at -1.6 ppm.</p><p><strong>Results: </strong>Results show that the variations of the LD quantified NOE(-1.6 ppm) in stroke lesions are much greater than that of the CEST signals at +1.6 ppm and NOE(-3.5 ppm), suggesting that NOE(-1.6 ppm) has a dominating contribution to the asymmetry analysis at -1.6 ppm compared with the guanidinium/amine CEST and NOE(-3.5 ppm) in ischemic stroke. The NOE(-1.6 ppm) variations in the asymmetry analysis of the raw CEST Z-spectrum are close to those in the asymmetry analysis of the LD fitted spectrum, revealing that the NOE(-1.6 ppm) dominates over the asymmetric MT effects.</p><p><strong>Conclusion: </strong>Our study demonstrates that the asymmetry analysis can quantify the NOE(-1.6 ppm) contrast in ischemic stroke with high specificity, thus presenting a viable alternative for rapid mapping of ischemic stroke.</p>","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143400945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0