
Physics in medicine and biology: latest publications

Enhancing U-Net-based Pseudo-CT generation from MRI using CT-guided bone segmentation for radiation treatment planning in head & neck cancer patients.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-31 | DOI: 10.1088/1361-6560/adb124
Ama Katseena Yawson, Habiba Sallem, Katharina Seidensaal, Thomas Welzel, Sebastian Klüter, Katharina Paul, Stefan Dorsch, Cedric Beyer, Jürgen Debus, Oliver Jaekel, Julia Bauer, Kristina Giske

Objective: This study investigates the effects of various training protocols on enhancing the precision of MRI-only Pseudo-CT generation for radiation treatment planning and adaptation in head & neck cancer patients. It specifically tackles the challenge of differentiating bone from air, a limitation that frequently results in substantial deviations in the representation of bony structures on Pseudo-CT images.

Approach: The study included 25 patients, utilizing pre-treatment MRI-CT image pairs. Five cases were randomly selected for testing, with the remaining 20 used for model training and validation. A 3D U-Net deep learning model was employed, trained on patches of size 64³ with an overlap of 32³. MRI scans were acquired using the Dixon gradient echo (GRE) technique, and various contrasts were explored to improve Pseudo-CT accuracy, including in-phase, water-only, and combined water-only and fat-only images. Additionally, bone extraction from the fat-only image was integrated as an additional channel to better capture bone structures on Pseudo-CTs. The evaluation involved both image quality and dosimetric metrics.
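As an illustration of the patch-based training setup, a minimal sketch of tiling a volume into 64³ patches with a 32-voxel stride (i.e. a 32³ overlap) is given below; the array shapes, function name, and use of NumPy are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of tiling a 3D volume into 64^3 patches with a 32-voxel stride,
# matching the patch/overlap sizes quoted above. Shapes and names are assumptions.
import numpy as np

def extract_patches_3d(volume: np.ndarray, patch: int = 64, stride: int = 32):
    """Return (corner, patch) pairs covering the volume with overlapping tiles."""
    patches = []
    D, H, W = volume.shape
    for z in range(0, max(D - patch, 0) + 1, stride):
        for y in range(0, max(H - patch, 0) + 1, stride):
            for x in range(0, max(W - patch, 0) + 1, stride):
                patches.append(((z, y, x),
                                volume[z:z + patch, y:y + patch, x:x + patch]))
    return patches

# A dummy 128^3 water-only volume yields 3 x 3 x 3 = 27 overlapping patches.
dummy = np.zeros((128, 128, 128), dtype=np.float32)
print(len(extract_patches_3d(dummy)))  # 27
```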

Main results: The generated Pseudo-CTs were compared with their corresponding registered target CTs. The mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) for the base model using combined water-only and fat-only images were 19.20 ± 5.30 HU and 57.24 ± 1.44 dB, respectively. Following the integration of an additional channel using a CT-guided bone segmentation, the model's performance improved, achieving MAE and PSNR of 18.32 ± 5.51 HU and 57.82 ± 1.31 dB, respectively. The dosimetric assessment confirmed that radiation treatment planning on Pseudo-CT achieved accuracy comparable to conventional CT. The measured results are statistically significant, with a p-value < 0.05.
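For reference, the two image-quality metrics quoted above can be computed as in the sketch below; the PSNR dynamic range is an assumed convention, since the abstract does not state the one used.

```python
# Worked sketch of MAE (in HU) and PSNR (in dB) between a Pseudo-CT and its registered
# target CT. The data_range used for PSNR is an assumed convention, not from the paper.
import numpy as np

def mae_hu(pseudo_ct: np.ndarray, target_ct: np.ndarray) -> float:
    return float(np.mean(np.abs(pseudo_ct - target_ct)))

def psnr_db(pseudo_ct: np.ndarray, target_ct: np.ndarray, data_range: float = 4095.0) -> float:
    mse = float(np.mean((pseudo_ct - target_ct) ** 2))
    return 10.0 * np.log10(data_range ** 2 / mse)
```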

Significance: This study demonstrates improved accuracy in bone representation on Pseudo-CTs, achieved through a combination of water-only, fat-only and extracted bone images, thus enhancing the feasibility of MRI-based simulation for radiation treatment planning.

{"title":"Enhancing U-Net-based Pseudo-CT generation from MRI using CT-guided bone segmentation for radiation treatment planning in head & neck cancer patients.","authors":"Ama Katseena Yawson, Habiba Sallem, Katharina Seidensaal, Thomas Welzel, Sebastian Klüter, Katharina Paul, Stefan Dorsch, Cedric Beyer, Jürgen Debus, Oliver Jaekel, Julia Bauer, Kristina Giske","doi":"10.1088/1361-6560/adb124","DOIUrl":"https://doi.org/10.1088/1361-6560/adb124","url":null,"abstract":"<p><strong>Objective: </strong>This study investigates the effects of various training protocols on enhancing the precision of MRI-only Pseudo-CT generation for radiation treatment planning and adaptation in head & neck cancer patients. It specifically tackles the challenge of differentiating bone from air, a limitation that frequently results in substantial deviations in the representation of bony structures on Pseudo-CT images.</p><p><strong>Approach: </strong>The study included 25 patients, utilizing pre-treatment MRI-CT image pairs. Five cases were randomly selected for testing, with the remaining 20 used for model training and validation. A 3D U-Net deep learning model was employed, trained on patches of size 64<sup>3</sup>with an overlap of 32<sup>3</sup>. MRI scans were acquired using the Dixon gradient echo (GRE) technique, and various contrasts were explored to improve Pseudo-CT accuracy, including in-phase, water-only, and combined water-only and fat-only images. Additionally, bone extraction from the fat-only image was integrated as an additional channel to better capture bone structures on Pseudo-CTs. The evaluation involved both image quality and dosimetric metrics.</p><p><strong>Main results: </strong>The generated Pseudo-CTs were compared with their corresponding registered target CTs. The mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) for the base model using combined water-only and fat-only images were 19.20 ± 5.30 HU and 57.24 ± 1.44 dB, respectively. Following the integration of an additional channel using a CT-guided bone segmentation, the model's performance improved, achieving MAE and PSNR of 18.32 ± 5.51 HU and 57.82 ± 1.31 dB, respectively. The dosimetric assessment confirmed that radiation treatment planning on Pseudo-CT achieved accuracy comparable to conventional CT. The measured results are statistically significant, with a<i>p</i>-value < 0.05.</p><p><strong>Significance: </strong>This study demonstrates improved accuracy in bone representation on Pseudo-CTs achieved through a combination of water-only, fat-only and extracted bone images; thus, enhancing feasibility of MRI-based simulation for radiation treatment planning.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143080856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
CoReSi: a GPU-based software for Compton camera reconstruction and simulation in collimator-free SPECT.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-31 | DOI: 10.1088/1361-6560/adaacc
Vincent Lequertier, Étienne Testa, Voichiţa Maxim

Objective. Compton cameras (CCs) are imaging devices that may improve observation of sources of γ photons. The images are obtained by solving a difficult inverse problem. We present CoReSi, a Compton reconstruction and simulation software implemented in Python and powered by PyTorch to leverage multi-threading and to easily interface with image processing and deep learning algorithms. The code is mainly dedicated to medical imaging and near-field experiments where images are reconstructed in 3D. Approach. The code was developed over several years in C++, with the initial version being proprietary. We have since redesigned and translated it into Python, adding new features to improve its adaptability and performance. This paper reviews the literature on CC mathematical models, explains the implementation strategies we have adopted and presents the features of CoReSi. Main results. The code includes state-of-the-art mathematical models from the literature, from the simplest, which allow limited knowledge of the sources, to more sophisticated ones with a finer description of the physics involved. It offers flexibility in defining the geometry of the CC and the detector materials. Several identical cameras can be considered at arbitrary positions in space. The main functions of the code are dedicated to the computation of the system matrix, leading to the forward and backward projector operators. These are the cornerstones of any image reconstruction algorithm. A simplified Monte Carlo data simulation function is provided to facilitate code development and fast prototyping. Significance. As far as we know, there is no open source code for CC reconstruction, except for MEGAlib, which is mainly dedicated to astronomy applications. This code aims to facilitate research as more and more teams from different communities such as applied mathematics, electrical engineering, physics, and medical physics get involved in CC studies. Implementation with PyTorch will also facilitate interfacing with deep learning algorithms.
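Since the abstract highlights the system matrix and the resulting forward/backward projectors as the cornerstones of reconstruction, the toy sketch below shows how they enter a standard MLEM update. This is a hedged illustration in PyTorch with a dense random matrix, not the CoReSi API.

```python
# Toy MLEM update built from a system matrix A: forward projection A @ x and
# backward projection A.T @ r. Shapes and the dense random matrix are assumptions.
import torch

def mlem_step(A: torch.Tensor, x: torch.Tensor, y: torch.Tensor, eps: float = 1e-8):
    """One MLEM iteration: x <- x * A^T(y / (A x)) / (A^T 1)."""
    forward = A @ x                          # forward projection: image -> expected counts
    ratio = y / (forward + eps)              # compare with measured events y
    backward = A.t() @ ratio                 # backward projection of the ratio
    sensitivity = A.t() @ torch.ones_like(y)
    return x * backward / (sensitivity + eps)

# Toy example: 500 measured bins, 1000 image voxels.
A = torch.rand(500, 1000)                    # stand-in for a Compton-camera system matrix
x = torch.ones(1000)                         # uniform initial activity estimate
y = A @ torch.rand(1000)                     # synthetic measurements
for _ in range(10):
    x = mlem_step(A, x, y)
```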

Detection and correction of translational motion in SPECT with exponential data consistency conditions.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-30 | DOI: 10.1088/1361-6560/adb09a
My Hoang Hoa Bui, Antoine Robert, Ane Etxebeste, Simon Rit

Rigid patient motion can cause artifacts in single photon emission computed tomography (SPECT) images, compromising diagnosis and treatment planning. Exponential data consistency conditions (eDCCs) are mathematical equations describing the redundancy of exponential SPECT measurements. It has been recently shown that eDCCs can be used to detect patient motion in SPECT projections. This study aimed at developing a fully data-driven method based on eDCCs to estimate and correct for translational motion in SPECT. If all activity is encompassed inside a convex region K of constant attenuation, eDCCs can be derived from SPECT projections and can be used to verify the pair-wise consistency of these projections. Our method assumes a single patient translation between two detector gantry positions. The proposed method estimates both the three-dimensional shift and the motion index, i.e. the index of the first gantry position after motion occurred. The estimation minimizes the eDCCs between the subset of projections before the motion index and the subset of motion-corrected projections after the motion index. We evaluated the proposed method using Monte Carlo simulated and experimental data of a NEMA IEC phantom and simulated projections of a liver patient. The method's robustness was assessed by applying various motion vectors and motion indices. Motion detection and correction with eDCCs were sensitive to movements above 3 mm. The accuracy of the estimation was below the 2.39 mm pixel spacing with good precision in all studied cases. The proposed method led to a significant improvement in the quality of reconstructed SPECT images. The activity recovery coefficient relative to the SPECT image without motion was above 90% on average over the six spheres of the NEMA IEC phantom and 97% for the liver patient. For example, for a (2,2,2) cm translation in the middle of the liver acquisition, the activity recovery coefficient was improved from 35% (non-corrected projections) to 99% (motion-corrected projections). The study proposed and demonstrated the good performance of translational motion detection and correction with eDCCs in SPECT acquisition data.
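The estimation described above can be read as a joint search over the motion index and the 3D shift; the sketch below shows that search structure only. The eDCC residual itself is not reproduced here, so `edcc_residual` is a caller-supplied placeholder and the dummy cost in the usage example merely exercises the interface.

```python
# Structural sketch: test candidate motion indices and 3D shifts, keep the pair that
# minimizes the consistency residual between the two projection subsets.
import itertools
import numpy as np

def estimate_motion(projections, edcc_residual, shifts_mm):
    n_views = len(projections)
    best = (None, None, np.inf)
    for k in range(1, n_views):                                # candidate motion index
        for shift in itertools.product(shifts_mm, repeat=3):   # candidate 3D translation
            cost = edcc_residual(projections[:k], projections[k:], np.array(shift))
            if cost < best[2]:
                best = (k, np.array(shift), cost)
    return best  # (motion index, shift estimate, residual)

# Toy usage with a dummy residual, purely to exercise the interface (NOT the eDCCs).
dummy = [np.full((8, 8), i, dtype=float) for i in range(6)]
k_hat, shift_hat, _ = estimate_motion(
    dummy,
    lambda a, b, s: abs(np.mean(a) - np.mean(b) + s.sum()),
    shifts_mm=(-2.0, 0.0, 2.0))
print(k_hat, shift_hat)
```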

{"title":"Detection and correction of translational motion in SPECT with exponential data consistency conditions.","authors":"My Hoang Hoa Bui, Antoine Robert, Ane Etxebeste, Simon Rit","doi":"10.1088/1361-6560/adb09a","DOIUrl":"https://doi.org/10.1088/1361-6560/adb09a","url":null,"abstract":"<p><p>Rigid patient motion can cause artifacts in single photon emission computed tomography (SPECT) images, compromising the diagnosis and treatment planning. Exponential data consistency conditions (eDCCs) are mathematical equations describing the redundancy of exponential SPECT measurements. It has been recently shown that eDCCs can be used to detect patient motion in SPECT projections.&#xD; This study aimed at developing a fully data-driven method based on eDCCs to estimate and correct for translational motion in SPECT.&#xD; If all activity is encompassed inside a convex region K of constant attenuation, eDCCs can be derived from SPECT projections and can be used to verify the pair-wise consistency of these projections. Our method assumes a single patient translation between two detector gantry positions. The proposed method estimates both the three-dimensional shift and the motion index, i.e. the index of the first gantry position after motion occurred. The estimation minimizes the eDCCs between the subset of projections before the motion index and the subset of motion-corrected projections after the motion index. We evaluated the proposed method using Monte Carlo simulated and experimental data of a NEMA IEC phantom and simulated projections of a liver patient. The method's robustness was assessed by applying various motion vectors and motion indices.&#xD; Motion detection and correction with eDCCs were sensitive to movements above 3~mm.&#xD;The accuracy of the estimation was below the 2.39~mm pixel spacing with good precision in all studied cases. The proposed method led to a significant improvement in the quality of reconstructed SPECT images. The activity recovery coefficient relative to the SPECT image without motion was above 90% on average over the six spheres of the NEMA IEC phantom and 97% for the liver patient. For example, for a (2,2,2)~cm translation in the middle of the liver acquisition, the activity recovery coefficient was improved from 35% (non-corrected projections) to 99% (motion-corrected projections).&#xD;The study proposed and demonstrated the good performance of translational motion detection and correction with eDCCs in SPECT acquisition data.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143067296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Deep learning-based quick MLC sequencing for MRI-guided online adaptive radiotherapy: a feasibility study for pancreatic cancer patients.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-30 | DOI: 10.1088/1361-6560/adb099
Ahmet Efe Ahunbay, Eric S Paulson, Ergun Ahunbay, Ying Zhang

Objective: One bottleneck of MRI-guided Online Adaptive Radiotherapy (MRoART) is the time-consuming daily online replanning process. The current leaf sequencing method takes up to 10 minutes, with potential dosimetric degradation and small segment openings that increase delivery time. This work aims to replace this process with a fast deep learning-based method to provide deliverable MLC sequences almost instantaneously, potentially accelerating and enhancing online adaptation. Approach: Daily MRIs and plans from 242 daily fractions of 49 abdominal cancer patients on a 1.5T MR-Linac were used. The architecture included: 1) a recurrent conditional Generative Adversarial Network (rcGAN) model to predict segment shapes from a fluence map (FM), recurrently predicting each segment's shape; and 2) a linear matrix equation module to optimize the monitor unit (MU) weights of segments. Multiple models with different segment numbers per beam (4-7) were trained. The final MLC sequences with the smallest relative absolute errors were selected. The predicted MLC sequence was imported into the treatment planning system for dose calculation and compared with the original plans. Main results: The gamma passing rate for all fractions was 99.7 ± 0.2% (2%/2 mm criteria) and 92.7 ± 1.6% (1%/1 mm criteria). The average number of segments per beam in the proposed method was 6.0 ± 0.6, compared to 7.5 ± 0.3 in the original plan. The average total MUs were reduced from 1641 ± 262 to 1569.5 ± 236.7 in the predicted plans. The estimated delivery time was reduced from 9.7 minutes to 8.3 minutes, an average reduction of 14% and up to 25% for individual plans. Execution time for one plan was less than 10 seconds using a GTX 1660 Ti GPU. Significance: The developed models can quickly and accurately generate an optimized, deliverable leaf sequence from a FM with fewer segments. This can seamlessly integrate into the current online replanning workflow, greatly expediting the daily plan adaptation process.
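The "linear matrix equation module" for MU weights can be thought of as a least-squares fit of weighted segment apertures to the fluence map; the sketch below uses non-negative least squares as one plausible realization. The shapes, names, and the NNLS choice are assumptions, not the authors' implementation.

```python
# Fit non-negative MU weights so the weighted sum of binary apertures approximates
# the target fluence map. Aperture/fluence shapes are illustrative assumptions.
import numpy as np
from scipy.optimize import nnls

def fit_mu_weights(segments: np.ndarray, fluence: np.ndarray) -> np.ndarray:
    """segments: (n_segments, H, W) binary apertures; fluence: (H, W) target map."""
    A = segments.reshape(segments.shape[0], -1).T   # (H*W, n_segments) design matrix
    b = fluence.ravel()
    weights, _ = nnls(A, b)                         # non-negative MU weights
    return weights

# Toy example: two rectangular apertures reconstructing a stepped fluence map.
seg = np.zeros((2, 4, 4)); seg[0, :, :2] = 1; seg[1, :, 2:] = 1
fm = 3.0 * seg[0] + 1.5 * seg[1]
print(fit_mu_weights(seg, fm))   # approximately [3.0, 1.5]
```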

{"title":"Deep learning-based quick MLC sequencing for MRI-guided online adaptive radiotherapy: a feasibility study for pancreatic cancer patients.","authors":"Ahmet Efe Ahunbay, Eric S Paulson, Ergun Ahunbay, Ying Zhang","doi":"10.1088/1361-6560/adb099","DOIUrl":"https://doi.org/10.1088/1361-6560/adb099","url":null,"abstract":"<p><strong>Objective: </strong>One bottleneck of MRI-guided Online Adaptive Radiotherapy (MRoART) is the time-consuming daily online replanning process. The current leaf sequencing method takes up to 10 minutes, with potential dosimetric degradation and small segment openings that increase delivery time. This work aims to replace this process with a fast deep learning-based method to provide deliverable MLC sequences almost instantaneously, potentially accelerating and enhancing online adaption.&#xD;Approach: Daily MRIs and plans from 242 daily fractions of 49 abdomen cancer patients on a 1.5T MR-Linac were used. The architecture included: 1) a recurrent conditional Generative Adversarial Network (rcGAN) model to predict segment shapes from a fluence map (FM), recurrently predicting each segment's shape; and 2) a linear matrix equation module to optimize the monitor units (MU) weights of segments. Multiple models with different segment numbers per beam (4-7) were trained. The final MLC sequences with the smallest relative absolute errors were selected. The predicted MLC sequence was imported into treatment planning system for dose calculation and compared with the original plans.&#xD;Main results: The gamma passing rate for all fractions was 99.7±0.2% (2%/2mm criteria) and 92.7±1.6% (1%/1mm criteria). The average number of segments per beam in the proposed method was 6.0±0.6 compared to 7.5 ± 0.3 in the original plan. The average total MUs were reduced from 1641 ± 262 to 1569.5 ± 236.7 in the predicted plans. The estimated delivery time was reduced from 9.7 minutes to 8.3 minutes, an average reduction of 14% and up to 25% for individual plans. Execution time for one plan was less than 10 seconds using a GTX1660TIGPU.&#xD;Significance: The developed models can quickly and accurately generate an optimized, deliverable leaf sequence from a FM with fewer segments. This can seamlessly integrate into the current online replanning workflow, greatly expediting the daily plan adaptation process.&#xD;&#xD.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143067295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A robust auto-contouring and data augmentation pipeline for adaptive MRI-guided radiotherapy of pancreatic cancer with a limited dataset.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-30 | DOI: 10.1088/1361-6560/adabac
Mehdi Shojaei, Björn Eiben, Jamie R McClelland, Simeon Nill, Alex Dunlop, Arabella Hunt, Brian Ng-Cheng-Hin, Uwe Oelfke

Objective. This study aims to develop and evaluate a fast and robust deep learning-based auto-segmentation approach for organs at risk in MRI-guided radiotherapy of pancreatic cancer to overcome the problems of time-intensive manual contouring in online adaptive workflows. The research focuses on implementing novel data augmentation techniques to address the challenges posed by limited datasets. Approach. This study was conducted in two phases. In phase I, we selected and customized the best-performing segmentation model among ResU-Net, SegResNet, and nnU-Net, using 43 balanced 3DVane images from 10 patients with 5-fold cross-validation. Phase II focused on optimizing the chosen model through two advanced data augmentation approaches to improve performance and generalizability by increasing the effective input dataset: (1) a novel structure-guided deformation-based augmentation approach (sgDefAug) and (2) a generative adversarial network-based method using a cycleGAN (GANAug). These were compared with comprehensive conventional augmentations (ConvAug). The approaches were evaluated using geometric (Dice score, average surface distance (ASD)) and dosimetric (D2% and D50% from dose-volume histograms) criteria. Main results. The nnU-Net framework demonstrated superior performance (mean Dice: 0.78 ± 0.10, mean ASD: 3.92 ± 1.94 mm) compared to other models. The sgDefAug and GANAug approaches significantly improved model performance over ConvAug, with sgDefAug demonstrating slightly superior results (mean Dice: 0.84 ± 0.09, mean ASD: 3.14 ± 1.79 mm). The proposed methodology produced auto-contours in under 30 s, with 75% of organs showing less than 1% difference in D2% and D50% dose criteria compared to ground truth. Significance. The integration of the nnU-Net framework with our proposed novel augmentation technique effectively addresses the challenges of limited datasets and stringent time constraints in online adaptive radiotherapy for pancreatic cancer. Our approach offers a promising solution for streamlining online adaptive workflows and represents a substantial step forward in the practical application of auto-segmentation techniques in clinical radiotherapy settings.
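For context, the two geometric criteria quoted above (Dice and ASD) can be computed roughly as below; the surface extraction via erosion and the isotropic voxel spacing are simplifying assumptions.

```python
# Rough sketch of Dice overlap and a symmetric average surface distance for binary masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def average_surface_distance(a: np.ndarray, b: np.ndarray, spacing_mm: float = 1.0) -> float:
    surf_a = a.astype(bool) & ~binary_erosion(a.astype(bool))   # boundary voxels of a
    surf_b = b.astype(bool) & ~binary_erosion(b.astype(bool))   # boundary voxels of b
    d_to_b = distance_transform_edt(~surf_b) * spacing_mm       # distance to b's surface
    d_to_a = distance_transform_edt(~surf_a) * spacing_mm       # distance to a's surface
    return 0.5 * (d_to_b[surf_a].mean() + d_to_a[surf_b].mean())

# Toy check: two offset boxes.
a = np.zeros((32, 32, 32), bool); a[8:20, 8:20, 8:20] = True
b = np.zeros((32, 32, 32), bool); b[10:22, 8:20, 8:20] = True
print(dice(a, b), average_surface_distance(a, b))
```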

Augmenting motion artifacts to enhance auto-contouring of complex structures in cone-beam computed tomography imaging.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-30 | DOI: 10.1088/1361-6560/ada0a0
Angelo Genghi, Mário João Fartaria, Anna Siroki-Galambos, Simon Flückiger, Fernando Franco, Adam Strzelecki, Pascal Paysan, Julius Turian, Zhen Wu, Luca Boldrini, Giuditta Chiloiro, Thomas Costantino, Justin English, Tomasz Morgas, Thomas Coradi

Objective. To develop an augmentation method that simulates cone-beam computed tomography (CBCT) related motion artifacts, which can be used to generate training data to increase the performance of artificial intelligence models dedicated to auto-contouring tasks. Approach. The augmentation technique generates data that simulates artifacts typically present in CBCT imaging. The simulated pseudo-CBCT (pCBCT) is created using interleaved sequences of simulated breath-hold and free-breathing projections. Neural networks for auto-contouring of head and neck and bowel structures were trained with and without pCBCT data. Quantitative and qualitative assessment was done in two independent test sets containing CT and real CBCT data, focused on four anatomical regions: head, neck, abdomen, and pelvis. Qualitative analyses were conducted by five clinical experts from three different healthcare institutions. Main results. The generated pCBCT images demonstrate realistic motion artifacts comparable to those observed in real CBCT data. Training a neural network with CT and pCBCT data improved Dice similarity coefficient (DSC) and average contour distance (ACD) results on CBCT test sets. The results were statistically significant (p-value ⩽ .03) for bone-mandible (model without/with pCBCT: 0.91/0.92 DSC, p ⩽ .01; 0.74/0.66 mm ACD, p ⩽ .01), brain (0.34/0.93 DSC, p ⩽ 1 × 10⁻⁵; 17.5/2.79 mm ACD, p = 1 × 10⁻⁵), oral-cavity (0.81/0.83 DSC, p ⩽ .01; 5.11/4.61 mm ACD, p = .02), left-submandibular-gland (0.58/0.77 DSC, p ⩽ .001; 3.24/2.12 mm ACD, p ⩽ .001), right-submandibular-gland (0.00/0.75 DSC, p ⩽ 1 × 10⁻⁵; 17.5/2.26 mm ACD, p ⩽ 1 × 10⁻⁵), left-parotid (0.68/0.78 DSC, p ⩽ .001; 3.34/2.58 mm ACD, p ⩽ .01), large-bowel (0.60/0.75 DSC, p ⩽ .01; 6.14/4.56 mm ACD, p = .03) and small-bowel (3.08/2.65 mm ACD, p = .03). Visual evaluation showed fewer false positives, false negatives, and misclassifications in artifact-affected areas. Qualitative analyses demonstrated that auto-generated contours are clinically acceptable in over 90% of cases for most structures, with only a few requiring adjustments. Significance. The inclusion of pCBCT improves the performance of trainable auto-contouring approaches, particularly in cases where the images are prone to severe artifacts.

{"title":"Augmenting motion artifacts to enhance auto-contouring of complex structures in cone-beam computed tomography imaging.","authors":"Angelo Genghi, Mário João Fartaria, Anna Siroki-Galambos, Simon Flückiger, Fernando Franco, Adam Strzelecki, Pascal Paysan, Julius Turian, Zhen Wu, Luca Boldrini, Giuditta Chiloiro, Thomas Costantino, Justin English, Tomasz Morgas, Thomas Coradi","doi":"10.1088/1361-6560/ada0a0","DOIUrl":"https://doi.org/10.1088/1361-6560/ada0a0","url":null,"abstract":"<p><p><i>Objective</i>. To develop an augmentation method that simulates cone-beam computed tomography (CBCT) related motion artifacts, which can be used to generate training-data to increase the performance of artificial intelligence models dedicated to auto-contouring tasks.<i>Approach.</i>The augmentation technique generates data that simulates artifacts typically present in CBCT imaging. The simulated pseudo-CBCT (pCBCT) is created using interleaved sequences of simulated breath-hold and free-breathing projections. Neural networks for auto-contouring of head and neck and bowel structures were trained with and without pCBCT data. Quantitative and qualitative assessment was done in two independent test sets containing CT and real CBCT data focus on four anatomical regions: head, neck, abdomen, and pelvis. Qualitative analyses were conducted by five clinical experts from three different healthcare institutions.<i>Main results.</i>The generated pCBCT images demonstrate realistic motion artifacts comparable to those observed in real CBCT data. Training a neural network with CT and pCBCT data improved Dice similarity coefficient (DSC) and average contour distance (ACD) results on CBCT test sets. The results were statistically significant (<i>p</i>-value ⩽.03) for bone-mandible (model without/with pCBCT: 0.91/0.92 DSC,<i>p</i>⩽ .01; 0.74/0.66 mm ACD,<i>p</i>⩽.01), brain (0.34/0.93 DSC,<i>p</i>⩽ 1 × 10<sup>-5</sup>; 17.5/2.79 mm ACD,<i>p</i>= 1 × 10<sup>-5</sup>), oral-cavity (0.81/0.83 DSC,<i>p</i>⩽.01; 5.11/4.61 mm ACD,<i>p</i>= .02), left-submandibular-gland (0.58/0.77 DSC,<i>p</i>⩽.001; 3.24/2.12 mm ACD,<i>p</i>⩽ .001), right-submandibular-gland (0.00/0.75 DSC,<i>p</i>⩽.1 × 10<sup>-5</sup>; 17.5/2.26 mm ACD,<i>p</i>⩽ 1 × 10<sup>-5</sup>), left-parotid (0.68/0.78 DSC,<i>p</i>⩽ .001; 3.34/2.58 mm ACD,<i>p</i>⩽.01), large-bowel (0.60/0.75 DSC,<i>p</i>⩽ .01; 6.14/4.56 mm ACD,<i>p</i>= .03) and small-bowel (3.08/2.65 mm ACD,<i>p</i>= .03). Visual evaluation showed fewer false positives, false negatives, and misclassifications in artifact-affected areas. Qualitative analyses demonstrated that, auto-generated contours are clinically acceptable in over 90% of cases for most structures, with only a few requiring adjustments.<i>Significance.</i>The inclusion of pCBCT improves the performance of trainable auto-contouring approaches, particularly in cases where the images are prone to severe artifacts.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":"70 3","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143067298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Automated estimation of individualized organ-specific dose and noise from clinical CT scans.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-29 | DOI: 10.1088/1361-6560/ada67f
Sen Wang, Maria Jose Medrano, Abdullah Al Zubaer Imran, Wonkyeong Lee, Jennie Jiayi Cao, Grant M Stevens, Justin Ruey Tse, Adam S Wang

Objective. Radiation dose and diagnostic image quality are opposing constraints in x-ray computed tomography (CT). Conventional methods do not fully account for organ-level radiation dose and noise when considering radiation risk and clinical task. In this work, we develop a pipeline to generate individualized organ-specific dose and noise at desired dose levels from clinical CT scans. Approach. To estimate organ-specific dose and noise, we compute dose maps, noise maps at desired dose levels and organ segmentations. In our pipeline, dose maps are generated using Monte Carlo simulation. The noise map is obtained by scaling the inserted noise in synthetic low-dose emulation in order to avoid anatomical structures, where the scaling coefficients are empirically calibrated. Organ segmentations are generated by a deep learning-based method (TotalSegmentator). The proposed noise model is evaluated on a clinical dataset of 12 CT scans, a phantom dataset of 3 uniform phantom scans, and a cross-site dataset of 26 scans. The accuracy of deep learning-based segmentations for organ-level dose and noise estimates was tested using a dataset of 41 cases with expert segmentations of six organs: lungs, liver, kidneys, bladder, spleen, and pancreas. Main results. The empirical noise model performs well, with an average RMSE of approximately 1.5 HU and an average relative RMSE of approximately 5% across different dose levels. The segmentation from TotalSegmentator yielded a mean Dice score of 0.8597 across the six organs (max = 0.9315 in liver, min = 0.6855 in pancreas). The resulting error in organ-level dose and noise estimation was less than 2% for most organs. Significance. The proposed pipeline can output individualized organ-specific dose and noise estimates accurately for personalized protocol evaluation and optimization. It is fully automated and can be scaled to large clinical datasets. This pipeline can be used to optimize image quality for specific organs and thus clinical tasks, without adversely affecting overall radiation dose.
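The noise-insertion idea can be illustrated with a textbook quantum-noise scaling, where the standard deviation of the added noise grows as sqrt(1/f - 1) for a target dose fraction f. This Gaussian, image-domain sketch is a simplification; the paper calibrates its scaling coefficients empirically.

```python
# Simplified low-dose emulation: add zero-mean noise scaled for the target dose fraction.
# The sqrt(1/f - 1) scaling and the Gaussian model are textbook simplifications.
import numpy as np

def emulate_low_dose(ct_hu: np.ndarray, dose_fraction: float,
                     full_dose_noise_sd_hu: float, seed: int = 0) -> np.ndarray:
    """Return a noisier image approximating acquisition at `dose_fraction` of full dose."""
    rng = np.random.default_rng(seed)
    extra_sd = full_dose_noise_sd_hu * np.sqrt(1.0 / dose_fraction - 1.0)
    return ct_hu + rng.normal(0.0, extra_sd, size=ct_hu.shape)

# At 25% dose, the added noise SD is 10 * sqrt(3) ~ 17.3 HU on top of existing noise.
low_dose = emulate_low_dose(np.zeros((64, 64)), dose_fraction=0.25, full_dose_noise_sd_hu=10.0)
print(low_dose.std())
```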

{"title":"Automated estimation of individualized organ-specific dose and noise from clinical CT scans.","authors":"Sen Wang, Maria Jose Medrano, Abdullah Al Zubaer Imran, Wonkyeong Lee, Jennie Jiayi Cao, Grant M Stevens, Justin Ruey Tse, Adam S Wang","doi":"10.1088/1361-6560/ada67f","DOIUrl":"https://doi.org/10.1088/1361-6560/ada67f","url":null,"abstract":"<p><p><i>Objective</i>. Radiation dose and diagnostic image quality are opposing constraints in x-ray computed tomography (CT). Conventional methods do not fully account for organ-level radiation dose and noise when considering radiation risk and clinical task. In this work, we develop a pipeline to generate individualized organ-specific dose and noise at desired dose levels from clinical CT scans.<i>Approach</i>. To estimate organ-specific dose and noise, we compute dose maps, noise maps at desired dose levels and organ segmentations. In our pipeline, dose maps are generated using Monte Carlo simulation. The noise map is obtained by scaling the inserted noise in synthetic low-dose emulation in order to avoid anatomical structures, where the scaling coefficients are empirically calibrated. Organ segmentations are generated by a deep learning-based method (TotalSegmentator). The proposed noise model is evaluated on a clinical dataset of 12 CT scans, a phantom dataset of 3 uniform phantom scans, and a cross-site dataset of 26 scans. The accuracy of deep learning-based segmentations for organ-level dose and noise estimates was tested using a dataset of 41 cases with expert segmentations of six organs: lungs, liver, kidneys, bladder, spleen, and pancreas.<i>Main results</i>. The empirical noise model performs well, with an average RMSE approximately 1.5 HU and an average relative RMSE approximately 5% across different dose levels. The segmentation from TotalSegmentator yielded a mean Dice score of 0.8597 across the six organs (max = 0.9315 in liver, min = 0.6855 in pancreas). The resulting error in organ-level dose and noise estimation was less than 2% for most organs.<i>Significance</i>. The proposed pipeline can output individualized organ-specific dose and noise estimates accurately for personalized protocol evaluation and optimization. It is fully automated and can be scalable to large clinical datasets. This pipeline can be used to optimize image quality for specific organs and thus clinical tasks, without adversely affecting overall radiation dose.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":"70 3","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143056100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Dual-modality flow phantom for ultrasound and optical flow measurements.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-29 | DOI: 10.1088/1361-6560/ada5a3
Chris M Kallweit, Adrian J Y Chee, Billy Y S Yiu, Sean D Peterson, Alfred C H Yu

As ultrasound-compatible flow phantoms are devised for performance testing and calibration, there is a practical need to obtain independent flow measurements for validation using a gold-standard technique such as particle image velocimetry (PIV). In this paper, we present the design of a new dual-modality flow phantom that allows ultrasound and PIV measurements to be simultaneously performed. Our phantom's tissue-mimicking material is based on a novel hydrogel formula that uses propylene glycol to lower the freezing temperature of an ultrasound-compatible poly(vinyl) alcohol cryogel and, in turn, maintain the solution's optical transparency after thermocycling. The hydrogel's optical attenuation {1.56 dB cm⁻¹ with 95% confidence interval (CI) of [1.512 1.608]}, refractive index {1.337, CI: [1.340 1.333]}, acoustic attenuation {0.038 dB/(cm × MHz^b), CI: [0.0368 0.0403]; frequency-dependent factor of 1.321, CI: [1.296 1.346]}, and speed of sound {1523.6 m s⁻¹, CI: [1523.8 1523.4]} were found to be suitable for PIV and ultrasound flow measurements. As an application demonstration, a bimodal flow phantom with a spiral lumen was fabricated and used in simultaneous flow measurements with PIV and ultrasound color flow imaging (CFI). Velocity fields and profiles were compared between the two modalities under a constant flow rate (2.5 ml s⁻¹). CFI was found to overestimate flow speed compared to the PIV measurements, with a 14%, 10%, and 6% difference between PIV and ultrasound for the 60°, 45°, and 30° angles measured. These results demonstrate the new phantom's feasibility in enabling performance validation of ultrasound flow mapping tools.
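The quoted acoustic attenuation follows the usual power-law form alpha(f) = alpha0 * f^b, with alpha0 = 0.038 dB/(cm × MHz^b) and b = 1.321; evaluating it at a particular frequency (5 MHz here, our own choice) gives a feel for the numbers.

```python
# Worked example of the power-law attenuation implied by the reported coefficients.
alpha0_db_cm = 0.038     # dB/(cm * MHz^b), from the abstract
b = 1.321                # frequency-dependent exponent, from the abstract
f_mhz = 5.0              # illustrative frequency choice (not from the paper)
alpha = alpha0_db_cm * f_mhz ** b
print(f"{alpha:.3f} dB/cm at {f_mhz} MHz")   # ~0.318 dB/cm
```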

GMmorph: dynamic spatial matching registration model for 3D medical image based on gated Mamba.
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-29 | DOI: 10.1088/1361-6560/adaacd
Hao Lin, Yonghong Song, Qi Zhang

Objective. Deformable registration aims to achieve nonlinear alignment of image space by estimating a dense displacement field. It is commonly used as a preprocessing step in clinical and image analysis applications, such as surgical planning, diagnostic assistance, and surgical navigation. We aim to overcome these challenges: deep learning-based registration methods often struggle with complex displacements and lack effective interaction between global and local feature information. They also neglect the spatial position matching process, leading to insufficient registration accuracy and reduced robustness when handling abnormal tissues. Approach. We propose a dual-branch interactive registration model architecture from the perspective of spatial matching. Implicit regularization is achieved through a consistency loss, enabling the network to balance high accuracy with a low folding rate. We introduced the dynamic matching module between the two branches of the registration, which generates learnable offsets based on all the tokens across the entire resolution range of the base branch features. Using trilinear interpolation, the model adjusts its feature expression range according to the learned offsets, capturing highly flexible positional differences. To facilitate the spatial matching process, we designed the gated mamba layer to globally model pixel-level features by associating all voxel information, while the detail enhancement module, which is based on channel and spatial attention, enhances the richness of local feature details. Main results. Our study explores the model's performance in single-modal and multi-modal image registration, including normal brain, brain tumor, and lung images. We propose unsupervised and semi-supervised registration modes and conduct extensive validation experiments. The results demonstrate that the model achieves state-of-the-art performance across multiple datasets. Significance. By introducing a novel perspective of position matching, the model achieves precise registration of various types of medical data, offering significant clinical value in medical applications.
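The trilinear-interpolation step that samples features at learned offset positions can be sketched with torch.nn.functional.grid_sample, as below; tensor shapes and the identity-grid construction are assumptions, and this is not the GMmorph implementation.

```python
# Sample a 3D feature volume at positions displaced by learned offsets using
# trilinear interpolation (grid_sample with "bilinear" mode acts trilinearly on 5D input).
import torch
import torch.nn.functional as F

def warp_trilinear(volume: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
    """volume: (N, C, D, H, W); offsets: (N, D, H, W, 3) in normalized [-1, 1] units."""
    N, C, D, H, W = volume.shape
    zs, ys, xs = torch.meshgrid(torch.linspace(-1, 1, D), torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing="ij")
    identity = torch.stack((xs, ys, zs), dim=-1).expand(N, D, H, W, 3)  # grid order: x, y, z
    return F.grid_sample(volume, identity + offsets, mode="bilinear", align_corners=True)

moving = torch.rand(1, 1, 16, 16, 16)
flow = 0.05 * torch.randn(1, 16, 16, 16, 3)   # small learned offsets (toy values)
warped = warp_trilinear(moving, flow)
print(warped.shape)                            # torch.Size([1, 1, 16, 16, 16])
```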

Revisiting the safety limit in magnetic nanoparticle hyperthermia: insights from eddy current induced heating. 重新审视磁性纳米粒子热疗的安全限制:来自涡流感应加热的见解。
IF 3.3 | CAS Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-29 | DOI: 10.1088/1361-6560/adaad0
Konstantinos Pilpilidis, George Tsanidis, Maria Anastasia Rouni, John Markakis, Theodoros Samaras

Objective. Magnetic nanoparticle hyperthermia (MNH) emerges as a promising therapeutic strategy for cancer treatment, leveraging alternating magnetic fields (AMFs) to induce localized heating through magnetic nanoparticles. However, the interaction of AMFs with biological tissues leads to non-specific heating caused by eddy currents, triggering thermoregulatory responses and complex thermal gradients throughout the body of the patient. While previous studies have implemented the Atkinson-Brezovich limit to mitigate potential harm, recent research underscores discrepancies between this threshold and clinical outcomes, necessitating a re-evaluation of this safety limit. Therefore, in this study, the complex interaction between AMFs and anatomical models was investigated through electromagnetic (EM) simulations. Approach. In particular, we considered a circular coil configuration placed at different positions along the craniocaudal axis of various anatomical human models. The excitation current was normalized, at different frequencies, to meet the basic restriction of local 10 g-averaged specific energy absorption rate (SAR) in the human models, as defined by the exposure guidelines of the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the standard IEC 60601-2-33 of the International Electrotechnical Commission (IEC). Main results. The resulting permissible magnetic field strength values, for the reference levels set by the ICNIRP 2020 guidelines for occupational and general public exposure, were found to be up to approximately 1.4 and 3 times lower, respectively, than that defined by the Atkinson-Brezovich limit. The widely used limit was found to align more closely with the first level of controlled operating mode defined in the IEC 60601-2-33 standard. Significance. The results indicate that the permissible magnetic field amplitude during MNH treatment should be much lower than the Atkinson-Brezovich limit. This study offers valuable insights into the role of computational simulations in advancing the potential to establish a reliable metric for safety evaluation and monitoring within the clinical framework of MNH.
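For scale, the Atkinson-Brezovich criterion is commonly quoted as a field-frequency product H·f ⩽ 4.85 × 10⁸ A m⁻¹ s⁻¹; the sketch below evaluates it at an assumed 300 kHz and applies the roughly threefold reduction the study reports for the most restrictive ICNIRP-normalized case.

```python
# Worked example of the field-frequency product criterion. The 4.85e8 constant is the
# commonly cited Atkinson-Brezovich value; the 300 kHz frequency and the factor-of-3
# comparison are illustrative choices based on the figures summarized above.
AB_PRODUCT = 4.85e8          # A m^-1 s^-1 (commonly cited value)

def h_max_atkinson_brezovich(freq_hz: float) -> float:
    """Permissible field amplitude (A/m) under the product criterion."""
    return AB_PRODUCT / freq_hz

f = 300e3                                     # 300 kHz, a typical MNH frequency
h_ab = h_max_atkinson_brezovich(f)            # ~1617 A/m
h_study = h_ab / 3.0                          # up to ~3x lower per the ICNIRP-based results
print(f"{h_ab:.0f} A/m (Atkinson-Brezovich) vs ~{h_study:.0f} A/m (study, worst case)")
```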
