
Physics in medicine and biology: latest articles

Real-time dose reconstruction in proton therapy from in-beam PET measurements.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-03-12 | DOI: 10.1088/1361-6560/adbfd9
Victor Valladolid-Onecha, Andrea Espinosa Rodriguez, Cayetano Soneira Landín, Fernando Arias-Valcayo, Sara Gaitán-Dominguez, Victor Martinez-Nouvilas, Miguel Garcia-Diez, Paula Ibáñez, Samuel Espana, Daniel Sanchez-Parcerisa, Fernando Cerron-Campoo, Juan Antonio Vera Sánchez, Alejandro Mazal, José Manuel Udías, Luis Mario Fraile

Objective: Clinical implementation of in-beam PET monitoring in proton therapy requires the integration of a fast and reliable online dose calculation engine. This manuscript reports on the achievement of real-time reconstruction of 3D dose and activity maps, with proton range verification, from experimental in-beam PET measurements. Approach: Several homogeneous cylindrical PMMA phantoms were irradiated with a monoenergetic 70-MeV proton beam in a clinical facility. Additionally, PMMA range-shifting foils of varying thicknesses were placed at the proximal surface of the phantom to investigate range-shift prediction capabilities. PET activity was measured using a state-of-the-art, in-house-developed six-module PET scanner equipped with online PET reconstruction capabilities. For real-time dose estimation, this system was integrated with an in-beam dose estimation (IDE) algorithm, which combines a GPU-based 3D reconstruction algorithm with dictionary-based software capable of estimating deposited dose from the 3D PET activity images. Range-shift prediction performance was quantitatively studied in terms of the minimum dose to be delivered and the maximum acquisition time. Main results: With this framework, 3D dose maps were accurately reconstructed and displayed with a delay as short as one second. For a dose fraction of 8.4 Gy at the Bragg peak maximum, range shifts as small as 1 mm could be detected. The quantitative analysis shows that, by accumulating 20 seconds of statistics from the start of the irradiation, doses down to 1 Gy could be estimated online with total uncertainties smaller than 2 mm. Significance: The hardware and software combination employed in this work can deliver dose maps and accurately predict range shifts after short acquisition times and small doses, suggesting that real-time monitoring and dose reconstruction during proton therapy are within reach. Future work will focus on testing the methodology in more complex clinical scenarios and on upgrading the PET prototype for increased sensitivity.
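The abstract describes a dictionary-based step that maps measured PET activity to deposited dose. As an illustration of that general idea only, and not of the authors' IDE implementation, the sketch below fits a measured 1D activity profile as a non-negative combination of precomputed activity templates and reuses the same weights on paired dose templates; all depths, profile shapes, and noise levels are invented for the example.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical 1D depth axis (mm) and a small activity/dose dictionary.
depth = np.linspace(0, 40, 200)

def activity_template(r):
    # Toy beta+ activity profile: roughly flat, falling off a few mm before range r.
    return 1.0 / (1.0 + np.exp((depth - (r - 2.5)) / 0.8))

def dose_template(r):
    # Toy depth-dose curve with a Bragg-peak-like maximum near range r.
    return (0.3 + 0.7 * np.exp(-((depth - r) ** 2) / (2 * 1.2 ** 2))) * (depth < r + 3)

ranges = np.arange(20.0, 30.5, 0.5)                            # dictionary entries (mm)
A = np.column_stack([activity_template(r) for r in ranges])    # activity dictionary
D = np.column_stack([dose_template(r) for r in ranges])        # paired dose dictionary

# Simulated "measurement": activity for a 26.3 mm range plus noise.
rng = np.random.default_rng(0)
measured = activity_template(26.3) + rng.normal(0, 0.02, depth.size)

w, _ = nnls(A, measured)          # non-negative weights on activity templates
dose_estimate = D @ w             # same weights applied to the dose dictionary

peak_depth = depth[np.argmax(dose_estimate)]
print(f"Estimated Bragg peak depth: {peak_depth:.1f} mm")
```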

{"title":"Real-time dose reconstruction in proton therapy from in-beam PET measurements.","authors":"Victor Valladolid-Onecha, Andrea Espinosa Rodriguez, Cayetano Soneira Landín, Fernando Arias-Valcayo, Sara Gaitán-Dominguez, Victor Martinez-Nouvilas, Miguel Garcia-Diez, Paula Ibáñez, Samuel Espana, Daniel Sanchez-Parcerisa, Fernando Cerron-Campoo, Juan Antonio Vera Sánchez, Alejandro Mazal, José Manuel Udías, Luis Mario Fraile","doi":"10.1088/1361-6560/adbfd9","DOIUrl":"https://doi.org/10.1088/1361-6560/adbfd9","url":null,"abstract":"<p><strong>Objective: </strong>Clinical implementation of in-beam PET monitoring in proton therapy requires the integration of an online fast and reliable dose calculation engine. This manuscript reports on the achievement of real-time reconstruction of 3D dose and activity maps with proton range verification from experimental in-beam PET measurements. &#xD;&#xD;Approach: Several cylindrical homogeneous PMMA phantoms were irradiated with a monoenergetic 70-MeV proton beam in a clinical facility. Additionally, PMMA range-shifting foils of varying thicknesses were placed at the proximal surface of the phantom to investigate range shift prediction capabilities. PET activity was measured using a state-of-the-art in-house developed six-module PET scanner equipped with online PET reconstruction capabilities. For real-time dose estimation, we integrated this system with an in-beam dose estimation (IDE) algorithm, which combines a GPU-based 3D reconstruction algorithm with a dictionary-based software, capable of estimating deposited doses from the 3D PET activity images. The range shift prediction performance has been quantitatively studied in terms of the minimum dose to be delivered and the maximum acquisition time.&#xD;&#xD;Main results: With this framework, 3D dose maps were accurately reconstructed and displayed with a delay as short as one second. For a dose fraction of 8.4 Gy at the Bragg peak maximum, range shifts as small as 1 mm could be detected. The quantitative analysis shows that accumulating 20 seconds of statistics from the start of the irradiation, doses down to 1 Gy could be estimated online with total uncertainties smaller than 2 mm. &#xD;&#xD;Significance. The hardware and software combination employed in this work can deliver dose maps and accurately predict range shifts after short acquisition times and small doses, suggesting that real-time monitoring and dose reconstruction during proton therapy are within reach. Future work will focus on testing the methodology in more complex clinical scenarios and on upgrading the PET prototype for increased sensitivity.&#xD.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143616770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A feasibility study of automating radiotherapy planning with large language model agents.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-03-12 | DOI: 10.1088/1361-6560/adbff1
Qingxin Wang, Zhongqiu Wang, Minghua Li, Xinye Ni, Rong Tan, Wenwen Zhang, Maitudi Wubulaishan, Wei Wang, Zhiyong Yuan, Zhen Zhang, Cong Liu

Objective: Radiotherapy planning requires significant expertise to balance tumor control and organ-at-risk (OAR) sparing. Automated planning can improve both efficiency and quality. This study introduces GPT-Plan, a novel multi-agent system powered by the GPT-4 family of large language models (LLMs), for automating iterative radiotherapy plan optimization. Approach: GPT-Plan uses LLM-driven agents, mimicking the collaborative clinical workflow of a dosimetrist and a physicist, to iteratively generate and evaluate text-based radiotherapy plans based on predefined criteria. Supporting tools assist the agents by leveraging historical plans, mitigating LLM hallucinations, and balancing exploration and exploitation. Performance was evaluated on 12 lung (IMRT) and 5 cervical (VMAT) cancer cases, benchmarked against the ECHO auto-planning method and manual plans. The impact of historical plan retrieval on efficiency was also assessed. Results: For IMRT lung cancer cases, GPT-Plan generated high-quality plans, demonstrating superior target coverage and homogeneity compared to ECHO while maintaining comparable or better OAR sparing. For VMAT cervical cancer cases, plan quality was comparable to that of a senior physicist and consistently superior to that of a junior physicist, particularly for OAR sparing. Retrieving historical plans significantly reduced the number of required optimization iterations for lung cases (p < 0.01) and yielded iteration counts comparable to those of the senior physicist for cervical cases (p = 0.313). Occasional LLM hallucinations were mitigated by self-reflection mechanisms. One limitation was the inaccuracy of vision-based LLMs in interpreting dose images. Significance: This pioneering study demonstrates the feasibility of automating radiotherapy planning using LLM-powered agents for complex treatment decision-making tasks. While challenges remain in addressing LLM limitations, ongoing advancements hold potential for further refining and expanding GPT-Plan's capabilities.
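GPT-Plan is described as a loop in which a dosimetrist-like agent proposes text-based plan adjustments and a physicist-like agent scores them against predefined criteria. The sketch below illustrates only that generate-and-evaluate control flow: `propose_adjustment` is a stand-in for an actual LLM call, and the plan representation and scoring rule are invented for illustration; none of it reflects GPT-Plan's real prompts or tools.

```python
import random

# Hypothetical plan representation: optimization objective weights.
plan = {"ptv_weight": 1.0, "lung_weight": 0.5, "cord_weight": 0.5}

def evaluate_plan(p):
    # Stand-in "physicist" evaluation: pretend dose metrics depend on the weights.
    ptv_d95 = 58 + 2 * min(p["ptv_weight"], 2.0) + random.uniform(-0.2, 0.2)
    lung_v20 = 32 - 6 * min(p["lung_weight"], 2.0) + random.uniform(-0.5, 0.5)
    score = -abs(60 - ptv_d95) - max(0.0, lung_v20 - 25)   # higher is better
    return score, {"PTV_D95_Gy": ptv_d95, "Lung_V20_%": lung_v20}

def propose_adjustment(p, feedback):
    # Placeholder for the LLM "dosimetrist" agent: nudge weights toward unmet goals.
    new = dict(p)
    if feedback["PTV_D95_Gy"] < 60:
        new["ptv_weight"] *= 1.2
    if feedback["Lung_V20_%"] > 25:
        new["lung_weight"] *= 1.2
    return new

best_score, feedback = evaluate_plan(plan)
for iteration in range(10):                   # iterative generate-and-evaluate loop
    candidate = propose_adjustment(plan, feedback)
    score, fb = evaluate_plan(candidate)
    if score > best_score:                    # keep improvements only
        plan, best_score, feedback = candidate, score, fb
print(plan, best_score)
```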

{"title":"A feasibility study of automating radiotherapy planning with large language model agents.","authors":"Qingxin Wang, Zhongqiu Wang, Minghua Li, Xinye Ni, Rong Tan, Wenwen Zhang, Maitudi Wubulaishan, Wei Wang, Zhiyong Yuan, Zhen Zhang, Cong Liu","doi":"10.1088/1361-6560/adbff1","DOIUrl":"https://doi.org/10.1088/1361-6560/adbff1","url":null,"abstract":"<p><p><b>Objective</b>: Radiotherapy planning requires significant expertise to balance tumor control and organ-at-risk (OAR) sparing. Automated planning can improve both efficiency and quality. This study introduces GPT-Plan, a novel multi-agent system powered by the GPT-4 family of large language models (LLMs), for automating the iterative radiotherapy plan optimization.<b>Approach</b>: GPT-Plan uses LLM-driven agents, mimicking the collaborative clinical workflow of a dosimetrist and physicist, to iteratively generate and evaluate text-based radiotherapy plans based on predefined criteria. Supporting tools assist the agents by leveraging historical plans, mitigating LLM hallucinations, and balancing exploration and exploitation. Performance was evaluated on 12 lung (IMRT) and 5 cervical (VMAT) cancer cases, benchmarked against the ECHO auto-planning method and manual plans. The impact of historical plan retrieval on efficiency was also assessed.<b>Results</b>: For IMRT lung cancer cases, GPT-Plan generated high-quality plans, demonstrating superior target coverage and homogeneity compared to ECHO while maintaining comparable or better OAR sparing. For VMAT cervical cancer cases, plan quality was comparable to a senior physicist and consistently superior to a junior physicist, particularly for OAR sparing. Retrieving historical plans significantly reduced the number of required optimization iterations for lung cases (p < 0.01) and yielded iteration counts comparable to those of the senior physicist for cervical cases (p=0.313). Occasional LLM hallucinations have been mitigated by self-reflection mechanisms. One limitation was the inaccuracy of vision-based LLMs in interpreting dose images.<b>Significance</b>: This pioneering study demonstrates the feasibility of automating radiotherapy planning using LLM-powered agents for complex treatment decision-making tasks. While challenges remain in addressing LLM limitations, ongoing advancements hold potential for further refining and expanding GPT-Plan's capabilities.&#xD.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143616768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IPEM code of practice for proton therapy dosimetry based on the NPL primary standard proton calorimeter calibration service.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-03-12 | DOI: 10.1088/1361-6560/adad2e
Stuart Green, Ana Lourenço, Hugo Palmans, Nigel Lee, Richard A Amos, Derek D' Souza, Francesca Fiorini, Frank Van Den Heuvel, Andrzej Kacperek, Ranald Mackay, John Pettingell, Russell Thomas

Internationally, reference dosimetry for clinical proton beams largely follows the guidelines published by the International Atomic Energy Agency (IAEA TRS-398 Rev. 1, 2024). This approach yields a relative standard uncertainty of 1.7% (k = 1) on the absorbed dose to water determined under reference conditions. The new IPEM code of practice presented here enables the relative standard uncertainty on the absorbed dose to water measured under reference conditions to be reduced to 1.0% (k = 1). This improvement is based on the absorbed dose to water calibration service for proton beams provided by the National Physical Laboratory (NPL), the UK's primary standards laboratory. The significantly reduced uncertainty is achieved through the use of a primary-standard-level graphite calorimeter to derive absorbed dose to water directly in the clinical department's beam. This eliminates the need for the beam quality correction factors (k_Q,Q0) required by the IAEA TRS-398 approach. The portable primary-standard-level graphite calorimeter, developed over a number of years at the NPL, is sufficiently robust to be usable in the proton beams of clinical facilities both in the UK and overseas. The new code of practice involves performing reference dosimetry measurements directly traceable to the primary-standard-level graphite calorimeter in a clinical proton beam. Calibration of an ionisation chamber is performed at the centre of a standard test volume (STV) of dose, defined here as a 10 × 10 × 10 cm³ volume in water centred at a depth of 15 cm. Further STVs at reduced and increased depths are also utilised. The designated ionisation chambers are Roos-type plane-parallel chambers. This article provides all the necessary background material, formalism, and specifications of reference conditions required to implement reference dosimetry according to this new code of practice. The Annexes provide a detailed review of ion recombination and how it should be assessed (Annex A1) and detailed work instructions for creating and delivering the STVs (Annex A2).
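The practical difference between the two formalisms is whether the ionisation chamber calibration coefficient is obtained directly in the clinical proton beam (removing the need for k_Q,Q0) or in a cobalt-60 beam with a tabulated beam-quality correction. The snippet below is a generic worked example of the corrected-reading dose equation and of combining relative standard uncertainties in quadrature; every numerical value (readings, calibration coefficients, uncertainty components) is an illustrative placeholder, not the budget from the code of practice or from TRS-398.

```python
import math

# Corrected chamber reading: raw reading times influence-quantity corrections.
M_raw = 15.02e-9          # C, hypothetical raw charge
k_tp, k_s, k_pol, k_elec = 1.012, 1.002, 1.000, 1.000
M = M_raw * k_tp * k_s * k_pol * k_elec

# Route A (generic TRS-398 style): D_w = M * N_Dw_Co60 * k_Q,Q0
N_Dw_Co60 = 8.0e7         # Gy/C, hypothetical 60Co calibration coefficient
k_Q = 1.03                # hypothetical tabulated beam-quality correction
D_w_trs398 = M * N_Dw_Co60 * k_Q

# Route B (calorimeter-based): D_w = M * N_Dw_proton, calibrated in the proton beam itself.
N_Dw_proton = 8.3e7       # Gy/C, hypothetical in-beam calibration coefficient
D_w_cop = M * N_Dw_proton

def combined(*rel_uncertainties):
    # Relative standard uncertainties (k = 1) combined in quadrature.
    return math.sqrt(sum(u ** 2 for u in rel_uncertainties))

# Illustrative budgets only: a tabulated k_Q term typically dominates route A.
u_trs398 = combined(0.004, 0.003, 0.016)   # chamber calibration, reading, k_Q
u_cop = combined(0.008, 0.003, 0.004)      # calorimeter-based calibration, reading, positioning
print(f"D_w (TRS-398 style)   = {D_w_trs398:.3f} Gy, u = {u_trs398 * 100:.1f}%")
print(f"D_w (calorimeter CoP) = {D_w_cop:.3f} Gy, u = {u_cop * 100:.1f}%")
```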

{"title":"IPEM code of practice for proton therapy dosimetry based on the NPL primary standard proton calorimeter calibration service.","authors":"Stuart Green, Ana Lourenço, Hugo Palmans, Nigel Lee, Richard A Amos, Derek D' Souza, Francesca Fiorini, Frank Van Den Heuvel, Andrzej Kacperek, Ranald Mackay, John Pettingell, Russell Thomas","doi":"10.1088/1361-6560/adad2e","DOIUrl":"10.1088/1361-6560/adad2e","url":null,"abstract":"<p><p>Internationally, reference dosimetry for clinical proton beams largely follows the guidelines published by the International Atomic Energy Agency (IAEA TRS-398 Rev. 1 (2024). This approach yields a relative standard uncertainty of 1.7% (<i>k</i>= 1) on the absorbed dose to water determined under reference conditions. The new IPEM code of practice presented here, enables the relative standard uncertainty on the absorbed dose to water measured under reference conditions to be reduced to 1.0% (<i>k</i>= 1). This improvement is based on the absorbed dose to water calibration service for proton beams provided by the National Physical Laboratory (NPL), the UK's primary standards laboratory. This significantly reduced uncertainty is achieved through the use of a primary standard level graphite calorimeter to derive absorbed dose to water directly in the clinical department's beam. This eliminates the need for beam quality correction factors (kQ,Q0) as required by the IAEA TRS-398 approach. The portable primary standard level graphite calorimeter, developed over a number of years at the NPL, is sufficiently robust to be useable in the proton beams of clinical facilities both in the UK and overseas. The new code of practice involves performing reference dosimetry measurements directly traceable to the primary standard level graphite calorimeter in a clinical proton beam. Calibration of an ionisation chamber is performed in the centre of a standard test volume (STV) of dose, defined here to be a 10 × 10 × 10 cm volume in water, centred at a depth of 15 cm. Further STVs at reduced and increased depths are also utilised. The designated ionisation chambers are Roos-type plane-parallel chambers. This article provides all the necessary background material, formalism, and specifications of reference conditions required to implement reference dosimetry according to this new code of practice. The Annexes provide a detailed review of ion recombination and how this should be assessed (Annex A1) and detailed work instructions for creating and delivering the STVs (Annex A2).</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143023993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-institution investigations of online daily adaptive proton strategies for head and neck cancer patients.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-03-11 | DOI: 10.1088/1361-6560/adbb51
Evangelia Choulilitsa, Mislav Bobić, Brian Winey, Harald Paganetti, Antony J Lomax, Francesca Albertini

Objective. Fast computation of daily reoptimization is key to an efficient online adaptive proton therapy workflow. Various approaches aim to expedite this process, often compromising daily dose. This study compares Massachusetts General Hospital's (MGH's) online dose reoptimization approach, Paul Scherrer Institute's (PSI's) online replanning workflow, and a full reoptimization adaptive workflow for head and neck cancer (H&N) patients. Approach. Ten H&N patients (PSI: 5, MGH: 5) with daily cone-beam computed tomography (CBCT) scans were included. Synthetic CTs were created by deforming the planning CT to each CBCT. Targets and organs at risk (OARs) were deformed on the daily images. Three adaptive approaches were investigated: (i) an online dose reoptimization approach modifying the fluence of a subset of beamlets, (ii) a full reoptimization adaptive workflow modifying the fluence of all beamlets, and (iii) a full online replanning approach allowing the optimizer to modify both the fluence and the position of all beamlets. Two non-adapted (NA) scenarios were simulated by recalculating the original plan on the daily image, using Monte Carlo for NA_MGH and a ray-casting algorithm for NA_PSI. Main results. All adaptive scenarios from both institutions achieved the prescribed daily target dose, with further improvements from online replanning. For all patients, the low-dose CTV D98% shows mean daily deviations of -2.2%, -1.1%, and 0.4% for workflows (i), (ii), and (iii), respectively. For the online adaptive scenarios, plan optimization averages 2.2 min for (iii) and 2.4 min for (i), while the full dose reoptimization requires 72 min. The MGH 20% dose reoptimization approach (OA_MGH,20%) produced results comparable to online replanning for most patients and fractions. However, for one patient, differences of up to 11% in low-dose CTV D98% occurred. Significance. Despite significant anatomical changes, all three adaptive approaches ensure target coverage without compromising OAR sparing. Our data suggest that 20% dose reoptimization suffices for most cases, yielding results comparable to online replanning with a marginal time increase due to Monte Carlo. For optimal daily adaptation, a rapid online replanning is preferable.
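D98% (the minimum dose received by the best-covered 98% of a structure) is the coverage metric reported above. As a generic illustration of how such a dose-volume metric is computed from a dose grid and a structure mask, and not from the study's data, see the sketch below; the dose values, grid size, and prescription are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3D dose grid (Gy) and a CTV mask for illustration.
dose = rng.normal(2.0, 0.1, size=(60, 60, 40))       # ~2 Gy per fraction
ctv = np.zeros_like(dose, dtype=bool)
ctv[20:40, 20:40, 10:30] = True

ctv_dose = dose[ctv]
d98 = np.percentile(ctv_dose, 2)      # dose exceeded in 98% of the CTV volume
d2 = np.percentile(ctv_dose, 98)      # near-maximum dose
dmean = ctv_dose.mean()

prescription = 2.0
print(f"CTV D98% = {d98:.2f} Gy ({100 * d98 / prescription:.1f}% of prescription)")
print(f"CTV D2%  = {d2:.2f} Gy, Dmean = {dmean:.2f} Gy")
```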

{"title":"Multi-institution investigations of online daily adaptive proton strategies for head and neck cancer patients.","authors":"Evangelia Choulilitsa, Mislav Bobić, Brian Winey, Harald Paganetti, Antony J Lomax, Francesca Albertini","doi":"10.1088/1361-6560/adbb51","DOIUrl":"10.1088/1361-6560/adbb51","url":null,"abstract":"<p><p><i>Objective.</i>Fast computation of daily reoptimization is key for an efficient online adaptive proton therapy workflow. Various approaches aim to expedite this process, often compromising daily dose. This study compares Massachusetts General Hospital's (MGH's) online dose reoptimization approach, Paul Scherrer Institute's (PSI's) online replanning workflow and a full reoptimization adaptive workflow for head and neck cancer (H&N) patients.<i>Approach.</i>Ten H&N patients (PSI:5, MGH:5) with daily cone beam computed tomographys (CBCTs) were included. Synthetic CTs were created by deforming the planning CT to each CBCT. Targets and organs at risk (OARs) were deformed on daily images. Three adaptive approaches were investigated: (i) an online dose reoptimization approach modifying the fluence of a subset of beamlets, (ii) full reoptimization adaptive workflow modifying the fluence of all beamlets, and (iii) a full online replanning approach, allowing the optimizer to modify both fluence and position of all beamlets. Two non-adapted (NA) scenarios were simulated by recalculating the original plan on the daily image using: Monte Carlo for NA<sub>MGH</sub>and raycasting algorithm for NA<sub>PSI</sub>.<i>Main results.</i>All adaptive scenarios from both institutions achieved the prescribed daily target dose, with further improvements from online replanning. For all patients, low-dose CTV D<sub>98%</sub>shows mean daily deviations of -2.2%, -1.1%, and 0.4% for workflows (i), (ii), and (iii), respectively. For the online adaptive scenarios, plan optimization averages 2.2 min for (iii) and 2.4 for (i) while the full dose reoptimization requires 72 min. The OA<sub>MGH20%</sub>dose reoptimization approach produced results comparable to online replanning for most patients and fractions. However, for one patient, differences up to 11% in low-dose CTV D<sub>98%</sub>occurred.<i>Significance.</i>Despite significant anatomical changes, all three adaptive approaches ensure target coverage without compromising OAR sparing. Our data suggests 20% dose reoptimization suffices, for most cases, yielding comparable results to online replanning with a marginal time increase due to Monte Carlo. For optimal daily adaptation, a rapid online replanning is preferable.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143524138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Low-dose CT reconstruction using cross-domain deep learning with domain transfer module.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-03-11 | DOI: 10.1088/1361-6560/adb932
Yoseob Han

Objective. X-ray computed tomography employing a low-dose x-ray source is actively researched to reduce radiation exposure. However, the reduced photon count of low-dose x-ray sources leads to severe noise artifacts in analytic reconstruction methods such as filtered backprojection. Recently, deep learning (DL)-based approaches employing uni-domain networks, in either the image domain or the projection domain, have demonstrated remarkable effectiveness in reducing image noise and the Poisson noise caused by low-dose x-ray sources. Furthermore, dual-domain networks that integrate image-domain and projection-domain networks are being developed to surpass the performance of uni-domain networks. Despite this advancement, dual-domain networks require twice the computational resources of uni-domain networks, even though their underlying network architectures are not substantially different. Approach. The U-Net architecture, a type of Hourglass network, comprises encoder and decoder modules. The encoder extracts meaningful representations from the input data, while the decoder uses these representations to reconstruct the target data. In dual-domain networks, however, encoders and decoders are used redundantly because two networks are applied in sequence, increasing computational demands. To address this issue, this study proposes a cross-domain DL approach that leverages analytical domain transfer functions. These functions transfer the features extracted by an encoder trained in the input domain to the target domain, thereby reducing redundant computations. The target data is then reconstructed using a decoder trained in the corresponding domain, optimizing resource efficiency without compromising performance. Main results. The proposed cross-domain network, comprising a projection-domain encoder and an image-domain decoder, demonstrated effective performance by leveraging the domain transfer function, achieving comparable results with only half the trainable parameters of dual-domain networks. Moreover, the proposed method outperformed conventional iterative reconstruction techniques and existing DL approaches in reconstruction quality. Significance. The proposed network leverages the transfer function to bypass redundant encoder and decoder modules, enabling direct connections between different domains. This approach not only surpasses the performance of dual-domain networks but also significantly reduces the number of required parameters. By facilitating the transfer of primal representations across domains, the method achieves synergistic effects, delivering high-quality reconstructed images at reduced radiation doses.
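The "domain transfer function" here is an analytical operator (a backprojection-style reconstruction) that maps features computed in the projection domain into the image domain, so a projection-domain encoder can feed an image-domain decoder without a second full network. The toy sketch below only illustrates that coupling, using skimage's radon transform pair and a smoothing filter as a stand-in "encoder feature"; it is not the paper's architecture, and the filter choices and phantom are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Projection-domain data: a sinogram of a phantom (stand-in for low-dose projections).
image = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)

# "Encoder" in the projection domain: here just two hand-made feature channels.
features_proj = [gaussian_filter(sinogram, s) for s in (1.0, 3.0)]

# Domain transfer function: analytic filtered backprojection applied channel-wise,
# carrying projection-domain features into the image domain for an image-domain decoder.
features_img = [iradon(f, theta=theta, filter_name="ramp") for f in features_proj]

# "Decoder" placeholder: combine the transferred channels into one reconstruction.
recon = np.mean(features_img, axis=0)
print(recon.shape, float(np.abs(recon - image).mean()))
```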

{"title":"Low-dose CT reconstruction using cross-domain deep learning with domain transfer module.","authors":"Yoseob Han","doi":"10.1088/1361-6560/adb932","DOIUrl":"10.1088/1361-6560/adb932","url":null,"abstract":"<p><p><i>Objective</i>. X-ray computed tomography employing low-dose x-ray source is actively researched to reduce radiation exposure. However, the reduced photon count in low-dose x-ray sources leads to severe noise artifacts in analytic reconstruction methods like filtered backprojection. Recently, deep learning (DL)-based approaches employing uni-domain networks, either in the image-domain or projection-domain, have demonstrated remarkable effectiveness in reducing image noise and Poisson noise caused by low-dose x-ray source. Furthermore, dual-domain networks that integrate image-domain and projection-domain networks are being developed to surpass the performance of uni-domain networks. Despite this advancement, dual-domain networks require twice the computational resources of uni-domain networks, even though their underlying network architectures are not substantially different.<i>Approach</i>. The U-Net architecture, a type of Hourglass network, comprises encoder and decoder modules. The encoder extracts meaningful representations from the input data, while the decoder uses these representations to reconstruct the target data. In dual-domain networks, however, encoders and decoders are redundantly utilized due to the sequential use of two networks, leading to increased computational demands. To address this issue, this study proposes a cross-domain DL approach that leverages analytical domain transfer functions. These functions enable the transfer of features extracted by an encoder trained in input domain to target domain, thereby reducing redundant computations. The target data is then reconstructed using a decoder trained in the corresponding domain, optimizing resource efficiency without compromising performance.<i>Main results</i>. The proposed cross-domain network, comprising a projection-domain encoder and an image-domain decoder, demonstrated effective performance by leveraging the domain transfer function, achieving comparable results with only half the trainable parameters of dual-domain networks. Moreover, the proposed method outperformed conventional iterative reconstruction techniques and existing DL approaches in reconstruction quality.<i>Significance</i>. The proposed network leverages the transfer function to bypass redundant encoder and decoder modules, enabling direct connections between different domains. This approach not only surpasses the performance of dual-domain networks but also significantly reduces the number of required parameters. By facilitating the transfer of primal representations across domains, the method achieves synergistic effects, delivering high quality reconstruction images with reduced radiation doses.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143472829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Power absorption and temperature rise in deep learning based head models for local radiofrequency exposures.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-03-11 | DOI: 10.1088/1361-6560/adb935
Sachiko Kodera, Reina Yoshida, Essam A Rashed, Yinliang Diao, Hiroyuki Takizawa, Akimasa Hirata

Objective. The computational uncertainty and variability of power absorption and temperature rise in humans under radiofrequency (RF) exposure are critical factors in ensuring human protection. This aspect has been emphasized as a priority. However, accurately modeling head tissue composition and assigning tissue dielectric and thermal properties remains a challenging task. This study investigated the impact of segmentation-based versus segmentation-free models for assessing localized RF exposure. Approach. Two computational head models were compared: one employing traditional tissue segmentation and the other leveraging deep learning to estimate tissue dielectric and thermal properties directly from magnetic resonance images. The finite-difference time-domain method and the bioheat transfer equation were solved to assess the temperature rise for local exposure. Inter-subject variability and dosimetric uncertainties were analyzed across multiple frequencies. Main results. The comparison between the two head-modeling methods demonstrated strong consistency, with differences in peak temperature rise of 7.6 ± 6.4%. The segmentation-free model showed reduced inter-subject variability, particularly at higher frequencies, where superficial heating dominates. The maximum relative standard deviation of the inter-subject variability of the heating factor was 15.0% at 3 GHz and decreased with increasing frequency. Significance. This study highlights the advantages of segmentation-free deep-learning models for RF dosimetry, particularly in reducing inter-subject variability and improving computational efficiency. While the differences between the two models are relatively small compared to the overall dosimetric uncertainty, segmentation-free models offer a promising approach for refining individual-specific exposure assessments. These findings contribute to improving the accuracy and consistency of human protection guidelines against RF electromagnetic field exposure.
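The temperature rise is obtained by solving the (Pennes) bioheat transfer equation with the computed SAR as the heat source. The following is a minimal 1D explicit finite-difference sketch of that equation with generic, assumed tissue parameters and an assumed exponential SAR profile; it is not the solver, head model, or exposure scenario used in the study.

```python
import numpy as np

# Generic soft-tissue parameters (assumed values for illustration).
k = 0.5            # thermal conductivity, W/(m*K)
rho = 1050.0       # tissue density, kg/m^3
c = 3600.0         # specific heat, J/(kg*K)
w_b = 8.0e-3       # blood perfusion rate, 1/s (assumed)
rho_b, c_b = 1050.0, 3600.0

# 1D depth grid into tissue and a superficial SAR deposition profile (W/kg).
nx, dx = 200, 0.5e-3                      # 10 cm depth, 0.5 mm steps
x = np.arange(nx) * dx
sar = 10.0 * np.exp(-x / 0.01)            # decays with ~1 cm penetration depth

dt = 0.4 * dx**2 * rho * c / (2 * k)      # explicit stability limit with margin
T = np.zeros(nx)                          # temperature rise above baseline, K

for _ in range(int(360.0 / dt)):          # simulate 6 minutes of exposure
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    # Pennes bioheat: rho*c*dT/dt = k*lap - rho_b*c_b*w_b*T + rho*SAR
    dTdt = (k * lap - rho_b * c_b * w_b * T + rho * sar) / (rho * c)
    T += dt * dTdt
    T[0] = T[1]                           # insulated (zero-flux) surface boundary
    T[-1] = 0.0                           # deep tissue held at baseline

print(f"Peak temperature rise after 6 min: {T.max():.2f} K at {x[T.argmax()] * 1000:.1f} mm")
```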

{"title":"Power absorption and temperature rise in deep learning based head models for local radiofrequency exposures.","authors":"Sachiko Kodera, Reina Yoshida, Essam A Rashed, Yinliang Diao, Hiroyuki Takizawa, Akimasa Hirata","doi":"10.1088/1361-6560/adb935","DOIUrl":"10.1088/1361-6560/adb935","url":null,"abstract":"<p><p><i>Objective.</i>Computational uncertainty and variability of power absorption and temperature rise in humans for radiofrequency (RF) exposure is a critical factor in ensuring human protection. This aspect has been emphasized as a priority. However, accurately modeling head tissue composition and assigning tissue dielectric and thermal properties remains a challenging task. This study investigated the impact of segmentation-based versus segmentation-free models for assessing localized RF exposure.<i>Approach.</i>Two computational head models were compared: one employing traditional tissue segmentation and the other leveraging deep learning to estimate tissue dielectric and thermal properties directly from magnetic resonance images. The finite-difference time-domain method and the bioheat transfer equation was solved to assess temperature rise for local exposure. Inter-subject variability and dosimetric uncertainties were analyzed across multiple frequencies.<i>Main results.</i>The comparison between the two methods for head modeling demonstrated strong consistency, with differences in peak temperature rise of 7.6 ± 6.4%. The segmentation-free model showed reduced inter-subject variability, particularly at higher frequencies where superficial heating dominates. The maximum relative standard deviation in the inter-subject variability of heating factor was 15.0% at 3 GHz and decreased with increasing frequencies.<i>Significance.</i>This study highlights the advantages of segmentation-free deep-learning models for RF dosimetry, particularly in reducing inter-subject variability and improving computational efficiency. While the differences between the two models are relatively small compared to overall dosimetric uncertainty, segmentation-free models offer a promising approach for refining individual-specific exposure assessments. These findings contribute to improving the accuracy and consistency of human protection guidelines against RF electromagnetic field exposure.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143472644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning-based segmentation of head and neck organs at risk on CBCT images with dosimetric assessment for radiotherapy.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-03-11 | DOI: 10.1088/1361-6560/adbf63
Lucía Cubero, Cédric Hémon, Anaïs Barateau, Joel Castelli, Renaud de Crevoisier, Oscar Acosta, Javier Pascau

Objective: Cone-beam computed tomography (CBCT) has become an essential tool in head and neck cancer (HNC) radiotherapy (RT) treatment delivery. Automatic segmentation of the organs at risk (OARs) on CBCT can trigger and accelerate treatment replanning, but it remains a challenge due to the poor soft-tissue contrast, artifacts, and limited field of view of these images, alongside the lack of large, annotated datasets to train deep learning models. This study aims to develop a comprehensive framework to segment 25 head and neck OARs on CBCT to facilitate treatment replanning. Approach. The proposed framework was developed in three steps: (i) refining an in-house framework to segment the 25 OARs on computed tomography (CT); (ii) training a deep learning model to segment the same OARs on synthetic CT (sCT) images derived from CBCT, using contours propagated from CT as ground truth and integrating high-contrast information from CT with texture features of sCT; and (iii) validating the clinical relevance of the sCT segmentations through a dosimetric analysis on an external cohort. Main results. Most OARs achieved a Dice similarity coefficient above 70%, with mean average surface distances of 1.30 mm for CT and 1.27 mm for sCT. The dosimetric analysis demonstrated strong agreement in the mean dose and D2% values, with most OARs showing non-significant differences between automatic CT and sCT segmentations. Significance. These results support the feasibility and clinical relevance of using deep learning models for OAR segmentation on both CT and CBCT for HNC RT.
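The geometric agreement reported above is the Dice coefficient between automatic and reference OAR masks, and the dosimetric check compares the mean dose and D2% inside each pair of masks. A generic sketch of both computations on synthetic masks follows; the mask shapes, voxel counts, and dose values are invented and unrelated to the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

def dice(a, b):
    # Dice coefficient between two binary masks.
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Hypothetical reference (CT-propagated) and predicted (sCT) OAR masks.
ref = np.zeros((80, 80, 40), dtype=bool)
ref[30:50, 30:50, 10:25] = True
pred = np.roll(ref, shift=2, axis=0)           # predicted mask shifted by 2 voxels

dose = rng.normal(26.0, 3.0, size=ref.shape)   # hypothetical dose grid, Gy

for name, mask in (("reference", ref), ("predicted", pred)):
    d = dose[mask]
    print(f"{name}: Dmean = {d.mean():.1f} Gy, D2% = {np.percentile(d, 98):.1f} Gy")
print(f"DSC = {dice(ref, pred):.3f}")
```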

{"title":"Deep learning-based segmentation of head and neck organs at risk on CBCT images with dosimetric assessment for radiotherapy.","authors":"Lucía Cubero, Cédric Hémon, Anaïs Barateau, Joel Castelli, Renaud de Crevoisier, Oscar Acosta, Javier Pascau","doi":"10.1088/1361-6560/adbf63","DOIUrl":"https://doi.org/10.1088/1361-6560/adbf63","url":null,"abstract":"<p><strong>Objective: </strong>Cone beam computed tomography (CBCT) has become an essential tool in head and neck cancer (HNC) radiotherapy (RT) treatment delivery. Automatic segmentation of the organs at risk (OARs) on CBCT can trigger and accelerate treatment replanning but is still a challenge due to the poor soft tissue contrast, artifacts, and limited field-of-view of these images, alongside the lack of large, annotated datasets to train deep learning models. This study aims to develop a comprehensive framework to segment 25 HN OARs on CBCT to facilitate treatment replanning.&#xD;Approach. The proposed framework was developed in three steps: (i) refining an in-house framework to segment 25 OARs on computed tomography (CT); (ii) training a deep learning model to segment the same OARs on synthetic CT (sCT) images derived from CBCT using contours propagated from CT as ground truth, integrating high-contrast information from CT and texture features of sCT; and (iii) validating the clinical relevance of sCT segmentations through a dosimetric analysis on an external cohort. &#xD;Main results. Most OARs achieved a Dice Score Coefficient over 70%, with mean Average Surface Distances of 1.30 mm for CT and 1.27 mm for sCT. The dosimetric analysis demonstrated a strong agreement in the mean dose and D2 (%) values, with most OARs showing non-significant differences between automatic CT and sCT segmentations. &#xD;Significance. These results support the feasibility and clinical relevance of using deep learning models for OAR segmentation on both CT and CBCT for HNC RT.&#xD.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143605737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Validation of patient-specific deep learning markerless lung tumor tracking aided by 4DCBCT.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-03-10 | DOI: 10.1088/1361-6560/adb89c
L Huang, A Thummerer, C I Papadopoulou, S Corradini, C Belka, M Riboldi, C Kurz, G Landry

Objective. Tracking tumors with multi-leaf collimators and x-ray imaging can be a cost-effective motion management method to reduce internal target volume margins for lung cancer patients, sparing normal tissues while ensuring target coverage. To realize this, accurate tumor localization on x-ray images is essential. We aimed to develop a systematic method for automatically generating tumor segmentation ground truth (GT) on cone-beam computed tomography (CBCT) projections and to use it to refine and validate our patient-specific AI-based tumor localization model. Approach. To obtain the tumor segmentation GT on CBCT projections, we propose a 4DCBCT-aided GT generation pipeline consisting of three steps: breathing phase extraction and 10-phase 4DCBCT reconstruction; manual segmentation on the 50% phase followed by deformable contour propagation to the other phases; and forward projection of the 3D segmentation onto the CBCT projection of the corresponding phase. We then used the CBCT projections from one fraction in the angular ranges of [-10°, 10°] and [80°, 100°] to refine a Retina U-Net baseline model, which was pretrained on 1140231 digitally reconstructed radiographs generated from a public lung dataset for automatic tumor delineation on projections, and used later-fraction CBCT projections in the same angular ranges for testing. Six LMU University Hospital patient CBCT projection sets were reserved for validation and 11 for testing. Tracking accuracy was evaluated as the center-of-mass (COM) error and the Dice similarity coefficient (DSC) between the predicted and ground-truth segmentations. Main results. Over the 11 testing patients, each with around 40 CBCT projections tested, the patient-refined models had a mean COM error of 2.3 ± 0.9 mm / 4.2 ± 1.7 mm and a mean DSC of 0.83 ± 0.06 / 0.72 ± 0.13 for angles within [-10°, 10°] / [80°, 100°]. The mean inference time was 68 ms/frame. The patient-specific training segmentation loss was found to be correlated with the segmentation performance at [-10°, 10°]. Significance. Our proposed approach allows patient-specific real-time markerless lung tumor tracking, which could be validated thanks to the novel 4DCBCT-aided GT generation approach.
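Tracking accuracy is summarized as the centre-of-mass (COM) distance and the Dice coefficient between predicted and ground-truth tumor segmentations on each projection. The sketch below shows a generic way to compute the COM error in millimetres from two 2D masks and a pixel spacing; the masks, offset, and spacing are invented for the example.

```python
import numpy as np
from scipy.ndimage import center_of_mass

pixel_spacing = 0.388  # mm per pixel on the imaging panel (assumed value)

# Hypothetical ground-truth and predicted tumor masks on one kV projection.
gt = np.zeros((512, 512), dtype=bool)
gt[200:240, 260:310] = True
pred = np.roll(np.roll(gt, 3, axis=0), -5, axis=1)   # predicted mask, slightly offset

com_gt = np.array(center_of_mass(gt))
com_pred = np.array(center_of_mass(pred))
com_error_mm = np.linalg.norm(com_pred - com_gt) * pixel_spacing

inter = np.logical_and(gt, pred).sum()
dsc = 2.0 * inter / (gt.sum() + pred.sum())
print(f"COM error = {com_error_mm:.2f} mm, DSC = {dsc:.3f}")
```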

{"title":"Validation of patient-specific deep learning markerless lung tumor tracking aided by 4DCBCT.","authors":"L Huang, A Thummerer, C I Papadopoulou, S Corradini, C Belka, M Riboldi, C Kurz, G Landry","doi":"10.1088/1361-6560/adb89c","DOIUrl":"10.1088/1361-6560/adb89c","url":null,"abstract":"<p><p><i>Objective</i>. Tracking tumors with multi-leaf collimators and x-ray imaging can be a cost-effective motion management method to reduce internal target volume margins for lung cancer patients, sparing normal tissues while ensuring target coverage. To realize that, accurate tumor localization on x-ray images is essential. We aimed to develop a systematic method for automatically generating tumor segmentation ground truth (GT) on cone-beam computed tomography (CBCT) projections and use it to help refine and validate our patient-specific AI-based tumor localization model.<i>Approach</i>. To obtain the tumor segmentation GT on CBCT projections, we propose a 4DCBCT-aided GT generation pipeline consisting of three steps: breathing phase extraction and 10-phase 4DCBCT reconstruction, manual segmentation on phase 50% followed by deformable contour propagation to other phases, and forward projection of the 3D segmentation to the CBCT projection of the corresponding phase. We then used the CBCT projections from one fraction in the angular range of [-10∘, 10<sup>∘</sup>] and [80<sup>∘</sup>, 100<sup>∘</sup>] to refine a Retina U-Net baseline model, which was pretrained on 1140231 digitally reconstructed radiographs generated from a public lung dataset for automatic tumor delineation on projections, and used later-fraction CBCT projections in the same angular range for testing. Six LMU University Hospital patient CBCT projection sets were reserved for validation and 11 for testing. Tracking accuracy was evaluated as the center-of-mass (COM) error and the Dice similarity coefficient (DSC) between the predicted and ground-truth segmentations.<i>Main results</i>. Over the 11 testing patients, each with around 40 CBCT projections tested, the patient refined models had a mean COM error of 2.3 ± 0.9 mm/4.2 ± 1.7 mm and a mean DSC of 0.83 ± 0.06/0.72 ± 0.13 for angles within [-10∘, 10<sup>∘</sup>] / [80<sup>∘</sup>, 100<sup>∘</sup>]. The mean inference time was 68 ms/frame. The patient-specific training segmentation loss was found to be correlated to the segmentation performance at [-10∘, 10<sup>∘</sup>].<i>Significance</i>. Our proposed approach allows patient-specific real-time markerless lung tumor tracking, which could be validated thanks to the novel 4DCBCT-aided GT generation approach.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143468800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-time 3D synthetic MRI based on kV imaging for motion monitoring of abdominal radiotherapy in a conventional LINAC.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-03-10 | DOI: 10.1088/1361-6560/adbeb5
Paulo Quintero, Can Wu, Hao Zhang, Ricardo Otazo, Laura Cervino, Wendy Harris

Introduction. Real-time 2D kV-triggered images used to evaluate intra-fraction motion during abdominal radiotherapy provide only 2D information with poor soft-tissue contrast. The main goal of this research is to evaluate a novel method that generates synthetic 3D MRI from single 2D kV images for online motion monitoring in abdominal radiotherapy. Methods. Deformable image registration (DIR) is performed between one 4D-MRI reference phase and all other phases, and principal component analysis (PCA) is applied to the resulting deformation vectors. By sampling the PCA eigenvalues 1,000 times and applying the new deformations to a reference CT, 1,000 digitally reconstructed radiographs (DRRs) were generated to train a convolutional neural network (CNN) to predict the respective eigenvalues. The method was implemented and tested using a digital phantom (XCAT) and an MRI-compatible phantom (ZEUS) with five DRR angles (0°, 45°, 90°, 135°, 180°). Seven motion scenarios were tested. Model performance was reported as the mean absolute error (MAE) and root mean square error (RMSE). Image quality was evaluated with the structural similarity index (SSIM) and normalized RMSE (nRMSE), and target-volume variations were evaluated with the volumetric Dice coefficient (VDC) and Hausdorff distance (HD). Results. The model performance across the evaluated angles was MAE (XCAT, ZEUS) = (0.053 ± 0.003, 0.094 ± 0.003) and RMSE (XCAT, ZEUS) = (0.054 ± 0.007, 0.103 ± 0.002). Similarly, SSIM (XCAT, ZEUS) = (0.994 ± 0.001, 0.96 ± 0.02) and nRMSE (XCAT, ZEUS) = (0.13 ± 0.01, 0.17 ± 0.03). Across all motion scenarios for XCAT and ZEUS, SSIM was 0.98 ± 0.01 and 0.84 ± 0.02, nRMSE was 0.14 ± 0.01 and 0.27 ± 0.02, VDC was 0.98 ± 0.01 and 0.90 ± 0.01, and HD was 0.24 ± 0.02 mm and 2.3 ± 0.8 mm, respectively, averaged across all angles. Finally, the SSIM, nRMSE, VDC, and HD values for ZEUS, using the deformed images as ground truth, showed improvements of 13%, 28%, 4%, and 76%, respectively. Conclusion. Results from a digital and a physical phantom demonstrate a novel approach to generate real-time 3D synthetic MRI from onboard kV images on a conventional LINAC for intra-fraction monitoring in abdominal radiotherapy.
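The motion model described here builds a PCA basis from the deformation vector fields (DVFs) between a reference 4D-MRI phase and the other phases, so that any new deformation can be approximated by a handful of eigen-coefficients, which a CNN then predicts from a single kV image. The snippet below illustrates only the PCA part on synthetic DVFs; the grid size, phase count, and number of retained components are arbitrary choices for the example, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical DVFs: 9 respiratory phases, 3 displacement components on a 20x20x16 grid.
n_phases, shape = 9, (3, 20, 20, 16)
breathing = np.sin(np.linspace(0, 2 * np.pi, n_phases, endpoint=False))
base_mode = rng.normal(0, 1, size=shape)              # one dominant motion pattern
dvfs = np.stack([a * base_mode + 0.05 * rng.normal(0, 1, size=shape) for a in breathing])

# PCA via SVD on mean-centred, flattened DVFs.
X = dvfs.reshape(n_phases, -1)
mean_dvf = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean_dvf, full_matrices=False)

n_components = 2
eigenvectors = Vt[:n_components]                      # PCA motion modes
eigenvalues = (X - mean_dvf) @ eigenvectors.T         # per-phase coefficients

# A CNN would predict these coefficients from a single kV/DRR image; here we
# simply reconstruct phase 4 from its own coefficients to check the model.
recon = mean_dvf + eigenvalues[4] @ eigenvectors
err = np.abs(recon - X[4]).mean()
print(f"Mean reconstruction error of phase 4 DVF: {err:.4f} (arbitrary units)")
```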

{"title":"Real-time 3D synthetic MRI based on kV imaging for motion monitoring of abdominal radiotherapy in a conventional LINAC.","authors":"Paulo Quintero, Can Wu, Hao Zhang, Ricardo Otazo, Laura Cervino, Wendy Harris","doi":"10.1088/1361-6560/adbeb5","DOIUrl":"https://doi.org/10.1088/1361-6560/adbeb5","url":null,"abstract":"<p><p><b>Introduction.</b>Real-time 2D-kV-triggered images used to evaluate intra-fraction motion during abdominal radiotherapy only provides 2D information with poor soft-tissue contrast. The main goal of this research is to evaluate a novel method that generates synthetic 3D-MRI from single 2D-kV images for online motion monitoring in abdominal radiotherapy.&#xD;<b>Methods.</b>Deformable image registration (DIR) is performed between one 4D-MRI reference phase and all other phases, and principal-component-analysis (PCA) is implemented on their respective deformation vectors. By sampling 1,000 times the PCA eigenvalues and applying the new deformations over a reference CT, 1,000 digital reconstructed radiographs (DRRs) were generated to train a convolutional neural network (CNN) to predict their respective eigenvalues. The method was implemented and tested using a digital phantom (XCAT) and an MRI-compatible phantom (ZEUS) with five DRR angles (0°, 45°, 90°, 135°, 180°). Seven motion scenarios were tested. For model performance, mean absolute error (MAE) and root mean square error (RMSE) were reported. Image quality was evaluated with structure similarity index (SSIM) and normalized RMSE (nRMSE), and target-volume variations were evaluated with volumetric dice coefficient (VDC) and Hausdorff-distance (HD).&#xD;<b>Results.</b>The model performance across the evaluated angles were MAE<sub>(XCAT, ZEUS)</sub>=(0.053±0.003, 0.094±0.003), and RMSE<sub>(XCAT, ZEUS)</sub>=(0.054±0.007, 0.103±0.002). Similarly, SSIM<sub>(XCAT, ZEUS)</sub>=(0.994±0.001, 0.96±0.02), and nRMSE<sub>(XCAT, ZEUS)</sub>=(0.13±0.01, 0.17±0.03). For all motion scenarios for XCAT and ZEUS, SSIM were 0.98±0.01 and 0.84±0.02, nRMSE were 0.14±0.01 and 0.27±0.02, VDC were 0.98±0.01 and 0.90±0.01, and HD were 0.24±0.02 mm and 2.3±0.8 mm, respectively, averaged across all angles. Finally, SSIM, nRMSE, VDC and HU values for ZEUS using the deformed images as ground truth, presented an improvement of 13%, 28%, 4%, and 76%, respectively.&#xD;<b>Conclusion</b>. Results from a digital and physical phantom demonstrate a novel approach to generate real-time 3D synthetic MRI from onboard kV images on a conventional LINAC for intra-fraction monitoring in abdominal radiotherapy.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143597638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Patient-specific MRI super-resolution via implicit neural representations and knowledge transfer.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-03-10 | DOI: 10.1088/1361-6560/adbed4
Yunxiang Li, Yen-Peng Liao, Jing Wang, Weiguo Lu, You Zhang

Objective: Magnetic resonance imaging (MRI) is a non-invasive imaging technique that provides high soft-tissue contrast, playing a vital role in disease diagnosis and treatment planning. However, due to limitations in imaging hardware, scan time, and patient compliance, the resolution of MRI images is often insufficient. Super-resolution (SR) techniques can enhance MRI resolution, reveal more detailed anatomical information, and improve the identification of complex structures, while also reducing scan time and patient discomfort. However, traditional population-based models trained on large datasets may introduce artifacts or hallucinated structures, which compromise their reliability in clinical applications. Approach: To address these challenges, we propose a patient-specific Knowledge Transfer Implicit Neural Representation (KT-INR) super-resolution model. The KT-INR model integrates a dual-head implicit neural network (INR) with a generative adversarial network (GAN) model pre-trained on a large-scale dataset. Anatomical information from different MRI sequences of the same patient, combined with the super-resolution mappings learned by the GAN model on a population-based dataset, is transferred as prior knowledge to the INR. This integration enhances both the performance and the reliability of the super-resolution model. Main Results: We validated the effectiveness of the KT-INR model across three distinct clinical super-resolution tasks on the BRATS dataset. For Task 1, KT-INR achieved an average SSIM, PSNR, and LPIPS of 0.9813, 36.845, and 0.0186, respectively. In comparison, a state-of-the-art super-resolution technique, ArSSR, attained average values of 0.9689, 33.4557, and 0.0309 for the same metrics. The experimental results demonstrate that KT-INR outperforms all other methods across all tasks and evaluation metrics, with particularly remarkable performance in resolving fine anatomical details. Significance: The KT-INR model significantly enhances the reliability of super-resolution results, effectively addressing the hallucination effects commonly seen in traditional models. It provides a robust solution for patient-specific MRI super-resolution.
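An implicit neural representation (INR) models an image as a small network that maps spatial coordinates to intensity, which is what allows it to be queried at arbitrary, super-resolved positions. Below is a minimal 2D coordinate-MLP sketch in PyTorch fitted to a synthetic image, using Fourier-encoded coordinates; the architecture, encoding size, and training budget are arbitrary and unrelated to the dual-head KT-INR model or its GAN prior.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic low-resolution "slice" to fit: a smooth 2D pattern on a 32x32 grid.
n = 32
ys, xs = torch.meshgrid(torch.linspace(-1, 1, n), torch.linspace(-1, 1, n), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
target = (torch.sin(3 * xs) * torch.cos(2 * ys)).reshape(-1, 1)

def fourier_features(xy, n_freq=6):
    # Encode coordinates with sines/cosines so the MLP can represent fine detail.
    freqs = 2.0 ** torch.arange(n_freq).float() * torch.pi
    angles = xy[:, None, :] * freqs[None, :, None]          # (N, n_freq, 2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=1).reshape(xy.shape[0], -1)

inr = nn.Sequential(
    nn.Linear(2 * 2 * 6, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

enc = fourier_features(coords)
for step in range(500):                       # fit the INR to the low-res samples
    opt.zero_grad()
    loss = loss_fn(inr(enc), target)
    loss.backward()
    opt.step()

# "Super-resolve": query the same INR on a 4x denser coordinate grid.
ys2, xs2 = torch.meshgrid(torch.linspace(-1, 1, 4 * n), torch.linspace(-1, 1, 4 * n), indexing="ij")
hi_coords = torch.stack([xs2, ys2], dim=-1).reshape(-1, 2)
with torch.no_grad():
    hi_res = inr(fourier_features(hi_coords)).reshape(4 * n, 4 * n)
print(f"final fit loss: {loss.item():.5f}, upsampled shape: {tuple(hi_res.shape)}")
```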

Citations: 0