Pub Date: 2025-01-31 | DOI: 10.1088/1361-6560/adaacc
CoReSi: a GPU-based software for Compton camera reconstruction and simulation in collimator-free SPECT
Vincent Lequertier, Étienne Testa, Voichiţa Maxim
Objective. Compton cameras (CCs) are imaging devices that may improve the observation of sources of γ photons. The images are obtained by solving a difficult inverse problem. We present CoReSi, a Compton reconstruction and simulation software implemented in Python and powered by PyTorch to leverage multi-threading and to interface easily with image processing and deep learning algorithms. The code is mainly dedicated to medical imaging and near-field experiments where images are reconstructed in 3D. Approach. The code was developed over several years in C++, with the initial version being proprietary. We have since redesigned and translated it into Python, adding new features to improve its adaptability and performance. This paper reviews the literature on CC mathematical models, explains the implementation strategies we have adopted and presents the features of CoReSi. Main results. The code includes state-of-the-art mathematical models from the literature, from the simplest, which provide only limited knowledge of the sources, to more sophisticated ones with a finer description of the physics involved. It offers flexibility in defining the geometry of the CC and the detector materials. Several identical cameras can be considered at arbitrary positions in space. The main functions of the code are dedicated to the computation of the system matrix, which yields the forward and backward projection operators. These are the cornerstones of any image reconstruction algorithm. A simplified Monte Carlo data simulation function is provided to facilitate code development and fast prototyping. Significance. As far as we know, there is no open-source code for CC reconstruction except MEGAlib, which is mainly dedicated to astronomy applications. This code aims to facilitate research as more and more teams from different communities, such as applied mathematics, electrical engineering, physics and medical physics, get involved in CC studies. Implementation with PyTorch will also facilitate interfacing with deep learning algorithms.
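To illustrate the kind of reconstruction that system-matrix-based forward and backward projectors enable, the sketch below shows a generic MLEM update written with PyTorch tensors. It is an editorial illustration only: the tensor names, shapes and the dense system matrix are assumptions, not CoReSi's actual API.

```python
# Illustrative sketch only: a generic MLEM iteration with PyTorch tensors, where the
# system matrix T maps a flattened 3D image to measured events. Not CoReSi's API.
import torch

def mlem(T: torch.Tensor, y: torch.Tensor, n_iter: int = 20) -> torch.Tensor:
    """T: (n_events, n_voxels) system matrix; y: (n_events,) measured counts."""
    eps = 1e-12
    x = torch.ones(T.shape[1], device=T.device)    # uniform initial image
    sens = T.sum(dim=0).clamp_min(eps)             # sensitivity = backprojection of ones
    for _ in range(n_iter):
        proj = (T @ x).clamp_min(eps)              # forward projection
        x = x * (T.t() @ (y / proj)) / sens        # multiplicative EM update
    return x

# Toy usage with random data, on GPU if available:
dev = "cuda" if torch.cuda.is_available() else "cpu"
T = torch.rand(1000, 16**3, device=dev)
y = torch.poisson(T @ torch.rand(16**3, device=dev))
img = mlem(T, y)
```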
{"title":"CoReSi: a GPU-based software for Compton camera reconstruction and simulation in collimator-free SPECT.","authors":"Vincent Lequertier, Étienne Testa, Voichiţa Maxim","doi":"10.1088/1361-6560/adaacc","DOIUrl":"10.1088/1361-6560/adaacc","url":null,"abstract":"<p><p><i>Objective.</i>Compton cameras (CCs) are imaging devices that may improve observation of sources of<i>γ</i>photons. The images are obtained by solving a difficult inverse problem. We present CoReSi, a Compton reconstruction and simulation software implemented in Python and powered by PyTorch to leverage multi-threading and to easily interface with image processing and deep learning algorithms. The code is mainly dedicated to medical imaging and near-field experiments where images are reconstructed in 3D.<i>Approach.</i>The code was developed over several years in C++, with the initial version being proprietary. We have since redesigned and translated it into Python, adding new features to improve its adaptability and performances. This paper reviews the literature on CC mathematical models, explains the implementation strategies we have adopted and presents the features of CoReSi.<i>Main results.</i>The code includes state-of-the-art mathematical models from the literature, from the simplest, which allow limited knowledge of the sources, to more sophisticated ones with a finer description of the physics involved. It offers flexibility in defining the geometry of the CC and the detector materials. Several identical cameras can be considered at arbitrary positions in space. The main functions of the code are dedicated to the computation of the system matrix, leading to the forward and backward projector operators. These are the cornerstones of any image reconstruction algorithm. A simplified Monte Carlo data simulation function is provided to facilitate code development and fast prototyping.<i>Significance.</i>As far as we know, there is no open source code for CC reconstruction, except for MEGAlib, which is mainly dedicated to astronomy applications. This code aims to facilitate research as more and more teams from different communities such as applied mathematics, electrical engineering, physics, medical physics get involved in CC studies. Implementation with PyTorch will also facilitate interfacing with deep learning algorithms.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143009981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-30 | DOI: 10.1088/1361-6560/adabac | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11783596/pdf/
A robust auto-contouring and data augmentation pipeline for adaptive MRI-guided radiotherapy of pancreatic cancer with a limited dataset
Mehdi Shojaei, Björn Eiben, Jamie R McClelland, Simeon Nill, Alex Dunlop, Arabella Hunt, Brian Ng-Cheng-Hin, Uwe Oelfke
Objective. This study aims to develop and evaluate a fast and robust deep learning-based auto-segmentation approach for organs at risk in MRI-guided radiotherapy of pancreatic cancer to overcome the problems of time-intensive manual contouring in online adaptive workflows. The research focuses on implementing novel data augmentation techniques to address the challenges posed by limited datasets. Approach. This study was conducted in two phases. In phase I, we selected and customized the best-performing segmentation model among ResU-Net, SegResNet, and nnU-Net, using 43 balanced 3DVane images from 10 patients with 5-fold cross-validation. Phase II focused on optimizing the chosen model through two advanced data augmentation approaches to improve performance and generalizability by increasing the effective input dataset: (1) a novel structure-guided deformation-based augmentation approach (sgDefAug) and (2) a generative adversarial network-based method using a cycleGAN (GANAug). These were compared with comprehensive conventional augmentations (ConvAug). The approaches were evaluated using geometric (Dice score, average surface distance (ASD)) and dosimetric (D2% and D50% from dose-volume histograms) criteria. Main results. The nnU-Net framework demonstrated superior performance (mean Dice: 0.78 ± 0.10, mean ASD: 3.92 ± 1.94 mm) compared to other models. The sgDefAug and GANAug approaches significantly improved model performance over ConvAug, with sgDefAug demonstrating slightly superior results (mean Dice: 0.84 ± 0.09, mean ASD: 3.14 ± 1.79 mm). The proposed methodology produced auto-contours in under 30 s, with 75% of organs showing less than 1% difference in D2% and D50% dose criteria compared to ground truth. Significance. The integration of the nnU-Net framework with our proposed novel augmentation technique effectively addresses the challenges of limited datasets and stringent time constraints in online adaptive radiotherapy for pancreatic cancer. Our approach offers a promising solution for streamlining online adaptive workflows and represents a substantial step forward in the practical application of auto-segmentation techniques in clinical radiotherapy settings.
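As a point of reference for the augmentation comparison above, the sketch below shows a conventional random elastic deformation applied jointly to a 3D image and its label map. It illustrates generic deformation-based augmentation of the ConvAug kind, not the paper's sgDefAug or GANAug methods; the parameter values are arbitrary.

```python
# Minimal sketch of a conventional random elastic deformation for a 3D image/label pair.
# Generic deformation-based augmentation, not the paper's sgDefAug or GANAug methods.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_augment(image, label, alpha=15.0, sigma=4.0, rng=None):
    rng = rng or np.random.default_rng()
    shape = image.shape
    # Smooth random displacement field, one component per axis
    disp = [gaussian_filter(rng.standard_normal(shape), sigma) * alpha for _ in range(3)]
    grid = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, disp)]
    warped_img = map_coordinates(image, coords, order=1, mode="nearest")
    warped_lbl = map_coordinates(label, coords, order=0, mode="nearest")  # nearest for labels
    return warped_img, warped_lbl

img = np.random.rand(64, 64, 32).astype(np.float32)
lbl = (img > 0.7).astype(np.uint8)
aug_img, aug_lbl = elastic_augment(img, lbl)
```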
{"title":"A robust auto-contouring and data augmentation pipeline for adaptive MRI-guided radiotherapy of pancreatic cancer with a limited dataset.","authors":"Mehdi Shojaei, Björn Eiben, Jamie R McClelland, Simeon Nill, Alex Dunlop, Arabella Hunt, Brian Ng-Cheng-Hin, Uwe Oelfke","doi":"10.1088/1361-6560/adabac","DOIUrl":"10.1088/1361-6560/adabac","url":null,"abstract":"<p><p><i>Objective.</i>This study aims to develop and evaluate a fast and robust deep learning-based auto-segmentation approach for organs at risk in MRI-guided radiotherapy of pancreatic cancer to overcome the problems of time-intensive manual contouring in online adaptive workflows. The research focuses on implementing novel data augmentation techniques to address the challenges posed by limited datasets.<i>Approach.</i>This study was conducted in two phases. In phase I, we selected and customized the best-performing segmentation model among ResU-Net, SegResNet, and nnU-Net, using 43 balanced 3DVane images from 10 patients with 5-fold cross-validation. Phase II focused on optimizing the chosen model through two advanced data augmentation approaches to improve performance and generalizability by increasing the effective input dataset: (1) a novel structure-guided deformation-based augmentation approach (sgDefAug) and (2) a generative adversarial network-based method using a cycleGAN (GANAug). These were compared with comprehensive conventional augmentations (ConvAug). The approaches were evaluated using geometric (Dice score, average surface distance (ASD)) and dosimetric (D2% and D50% from dose-volume histograms) criteria.<i>Main results.</i>The nnU-Net framework demonstrated superior performance (mean Dice: 0.78 ± 0.10, mean ASD: 3.92 ± 1.94 mm) compared to other models. The sgDefAug and GANAug approaches significantly improved model performance over ConvAug, with sgDefAug demonstrating slightly superior results (mean Dice: 0.84 ± 0.09, mean ASD: 3.14 ± 1.79 mm). The proposed methodology produced auto-contours in under 30 s, with 75% of organs showing less than 1% difference in D2% and D50% dose criteria compared to ground truth.<i>Significance.</i>The integration of the nnU-Net framework with our proposed novel augmentation technique effectively addresses the challenges of limited datasets and stringent time constraints in online adaptive radiotherapy for pancreatic cancer. Our approach offers a promising solution for streamlining online adaptive workflows and represents a substantial step forward in the practical application of auto-segmentation techniques in clinical radiotherapy settings.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11783596/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143009976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-30 | DOI: 10.1088/1361-6560/ada0a0
Augmenting motion artifacts to enhance auto-contouring of complex structures in cone-beam computed tomography imaging
Angelo Genghi, Mário João Fartaria, Anna Siroki-Galambos, Simon Flückiger, Fernando Franco, Adam Strzelecki, Pascal Paysan, Julius Turian, Zhen Wu, Luca Boldrini, Giuditta Chiloiro, Thomas Costantino, Justin English, Tomasz Morgas, Thomas Coradi
Objective. To develop an augmentation method that simulates cone-beam computed tomography (CBCT) related motion artifacts, which can be used to generate training data to increase the performance of artificial intelligence models dedicated to auto-contouring tasks. Approach. The augmentation technique generates data that simulates artifacts typically present in CBCT imaging. The simulated pseudo-CBCT (pCBCT) is created using interleaved sequences of simulated breath-hold and free-breathing projections. Neural networks for auto-contouring of head and neck and bowel structures were trained with and without pCBCT data. Quantitative and qualitative assessment was done on two independent test sets containing CT and real CBCT data, focusing on four anatomical regions: head, neck, abdomen, and pelvis. Qualitative analyses were conducted by five clinical experts from three different healthcare institutions. Main results. The generated pCBCT images demonstrate realistic motion artifacts comparable to those observed in real CBCT data. Training a neural network with CT and pCBCT data improved Dice similarity coefficient (DSC) and average contour distance (ACD) results on CBCT test sets. The results were statistically significant (p-value ⩽ .03) for bone-mandible (model without/with pCBCT: 0.91/0.92 DSC, p ⩽ .01; 0.74/0.66 mm ACD, p ⩽ .01), brain (0.34/0.93 DSC, p ⩽ 1 × 10⁻⁵; 17.5/2.79 mm ACD, p = 1 × 10⁻⁵), oral-cavity (0.81/0.83 DSC, p ⩽ .01; 5.11/4.61 mm ACD, p = .02), left-submandibular-gland (0.58/0.77 DSC, p ⩽ .001; 3.24/2.12 mm ACD, p ⩽ .001), right-submandibular-gland (0.00/0.75 DSC, p ⩽ 1 × 10⁻⁵; 17.5/2.26 mm ACD, p ⩽ 1 × 10⁻⁵), left-parotid (0.68/0.78 DSC, p ⩽ .001; 3.34/2.58 mm ACD, p ⩽ .01), large-bowel (0.60/0.75 DSC, p ⩽ .01; 6.14/4.56 mm ACD, p = .03) and small-bowel (3.08/2.65 mm ACD, p = .03). Visual evaluation showed fewer false positives, false negatives, and misclassifications in artifact-affected areas. Qualitative analyses demonstrated that auto-generated contours are clinically acceptable in over 90% of cases for most structures, with only a few requiring adjustments. Significance. The inclusion of pCBCT improves the performance of trainable auto-contouring approaches, particularly in cases where the images are prone to severe artifacts.
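The two geometric metrics reported above (DSC and ACD) can be computed for binary masks as in the sketch below; this is a generic formulation on a voxel grid with known spacing, not the authors' exact evaluation code.

```python
# Minimal sketch of Dice similarity coefficient (DSC) and average contour distance (ACD)
# for binary 3D masks with a given voxel spacing. Generic metrics, not the authors' code.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def surface(mask: np.ndarray) -> np.ndarray:
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)          # boundary voxels of the mask

def average_contour_distance(a, b, spacing=(1.0, 1.0, 1.0)) -> float:
    sa, sb = surface(a), surface(b)
    # Distance from each surface voxel of one mask to the nearest surface voxel of the other,
    # averaged symmetrically over both surfaces.
    da = distance_transform_edt(~sb, sampling=spacing)[sa]
    db = distance_transform_edt(~sa, sampling=spacing)[sb]
    return float(np.concatenate([da, db]).mean())
```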
{"title":"Augmenting motion artifacts to enhance auto-contouring of complex structures in cone-beam computed tomography imaging.","authors":"Angelo Genghi, Mário João Fartaria, Anna Siroki-Galambos, Simon Flückiger, Fernando Franco, Adam Strzelecki, Pascal Paysan, Julius Turian, Zhen Wu, Luca Boldrini, Giuditta Chiloiro, Thomas Costantino, Justin English, Tomasz Morgas, Thomas Coradi","doi":"10.1088/1361-6560/ada0a0","DOIUrl":"https://doi.org/10.1088/1361-6560/ada0a0","url":null,"abstract":"<p><p><i>Objective</i>. To develop an augmentation method that simulates cone-beam computed tomography (CBCT) related motion artifacts, which can be used to generate training-data to increase the performance of artificial intelligence models dedicated to auto-contouring tasks.<i>Approach.</i>The augmentation technique generates data that simulates artifacts typically present in CBCT imaging. The simulated pseudo-CBCT (pCBCT) is created using interleaved sequences of simulated breath-hold and free-breathing projections. Neural networks for auto-contouring of head and neck and bowel structures were trained with and without pCBCT data. Quantitative and qualitative assessment was done in two independent test sets containing CT and real CBCT data focus on four anatomical regions: head, neck, abdomen, and pelvis. Qualitative analyses were conducted by five clinical experts from three different healthcare institutions.<i>Main results.</i>The generated pCBCT images demonstrate realistic motion artifacts comparable to those observed in real CBCT data. Training a neural network with CT and pCBCT data improved Dice similarity coefficient (DSC) and average contour distance (ACD) results on CBCT test sets. The results were statistically significant (<i>p</i>-value ⩽.03) for bone-mandible (model without/with pCBCT: 0.91/0.92 DSC,<i>p</i>⩽ .01; 0.74/0.66 mm ACD,<i>p</i>⩽.01), brain (0.34/0.93 DSC,<i>p</i>⩽ 1 × 10<sup>-5</sup>; 17.5/2.79 mm ACD,<i>p</i>= 1 × 10<sup>-5</sup>), oral-cavity (0.81/0.83 DSC,<i>p</i>⩽.01; 5.11/4.61 mm ACD,<i>p</i>= .02), left-submandibular-gland (0.58/0.77 DSC,<i>p</i>⩽.001; 3.24/2.12 mm ACD,<i>p</i>⩽ .001), right-submandibular-gland (0.00/0.75 DSC,<i>p</i>⩽.1 × 10<sup>-5</sup>; 17.5/2.26 mm ACD,<i>p</i>⩽ 1 × 10<sup>-5</sup>), left-parotid (0.68/0.78 DSC,<i>p</i>⩽ .001; 3.34/2.58 mm ACD,<i>p</i>⩽.01), large-bowel (0.60/0.75 DSC,<i>p</i>⩽ .01; 6.14/4.56 mm ACD,<i>p</i>= .03) and small-bowel (3.08/2.65 mm ACD,<i>p</i>= .03). Visual evaluation showed fewer false positives, false negatives, and misclassifications in artifact-affected areas. Qualitative analyses demonstrated that, auto-generated contours are clinically acceptable in over 90% of cases for most structures, with only a few requiring adjustments.<i>Significance.</i>The inclusion of pCBCT improves the performance of trainable auto-contouring approaches, particularly in cases where the images are prone to severe artifacts.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":"70 3","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143067298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-29 | DOI: 10.1088/1361-6560/ada67f
Automated estimation of individualized organ-specific dose and noise from clinical CT scans
Sen Wang, Maria Jose Medrano, Abdullah Al Zubaer Imran, Wonkyeong Lee, Jennie Jiayi Cao, Grant M Stevens, Justin Ruey Tse, Adam S Wang
Objective. Radiation dose and diagnostic image quality are opposing constraints in x-ray computed tomography (CT). Conventional methods do not fully account for organ-level radiation dose and noise when considering radiation risk and clinical task. In this work, we develop a pipeline to generate individualized organ-specific dose and noise at desired dose levels from clinical CT scans. Approach. To estimate organ-specific dose and noise, we compute dose maps, noise maps at desired dose levels and organ segmentations. In our pipeline, dose maps are generated using Monte Carlo simulation. The noise map is obtained by scaling the noise inserted in synthetic low-dose emulation so as to avoid anatomical structures, where the scaling coefficients are empirically calibrated. Organ segmentations are generated by a deep learning-based method (TotalSegmentator). The proposed noise model is evaluated on a clinical dataset of 12 CT scans, a phantom dataset of 3 uniform phantom scans, and a cross-site dataset of 26 scans. The accuracy of deep learning-based segmentations for organ-level dose and noise estimates was tested using a dataset of 41 cases with expert segmentations of six organs: lungs, liver, kidneys, bladder, spleen, and pancreas. Main results. The empirical noise model performs well, with an average RMSE of approximately 1.5 HU and an average relative RMSE of approximately 5% across different dose levels. The segmentation from TotalSegmentator yielded a mean Dice score of 0.8597 across the six organs (max = 0.9315 in liver, min = 0.6855 in pancreas). The resulting error in organ-level dose and noise estimation was less than 2% for most organs. Significance. The proposed pipeline accurately outputs individualized organ-specific dose and noise estimates for personalized protocol evaluation and optimization. It is fully automated and scalable to large clinical datasets. This pipeline can be used to optimize image quality for specific organs and thus clinical tasks, without adversely affecting overall radiation dose.
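The final aggregation step of such a pipeline, combining co-registered dose and noise maps with an organ label volume into per-organ statistics, could look like the sketch below. The label values, array shapes and units are placeholders, not the authors' implementation.

```python
# Minimal sketch of organ-level aggregation: given co-registered dose and noise maps plus an
# organ label volume, report mean dose and mean noise per organ. Label values are hypothetical.
import numpy as np

ORGANS = {1: "lungs", 2: "liver", 3: "kidneys", 4: "bladder", 5: "spleen", 6: "pancreas"}

def organ_stats(dose_map: np.ndarray, noise_map: np.ndarray, labels: np.ndarray) -> dict:
    stats = {}
    for value, name in ORGANS.items():
        mask = labels == value
        if mask.any():
            stats[name] = {
                "mean_dose_mGy": float(dose_map[mask].mean()),
                "mean_noise_HU": float(noise_map[mask].mean()),
            }
    return stats

dose = np.random.rand(128, 128, 64) * 20.0      # placeholder dose map [mGy]
noise = np.random.rand(128, 128, 64) * 30.0     # placeholder noise map [HU]
seg = np.random.randint(0, 7, size=dose.shape)  # placeholder organ labels
print(organ_stats(dose, noise, seg))
```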
{"title":"Automated estimation of individualized organ-specific dose and noise from clinical CT scans.","authors":"Sen Wang, Maria Jose Medrano, Abdullah Al Zubaer Imran, Wonkyeong Lee, Jennie Jiayi Cao, Grant M Stevens, Justin Ruey Tse, Adam S Wang","doi":"10.1088/1361-6560/ada67f","DOIUrl":"https://doi.org/10.1088/1361-6560/ada67f","url":null,"abstract":"<p><p><i>Objective</i>. Radiation dose and diagnostic image quality are opposing constraints in x-ray computed tomography (CT). Conventional methods do not fully account for organ-level radiation dose and noise when considering radiation risk and clinical task. In this work, we develop a pipeline to generate individualized organ-specific dose and noise at desired dose levels from clinical CT scans.<i>Approach</i>. To estimate organ-specific dose and noise, we compute dose maps, noise maps at desired dose levels and organ segmentations. In our pipeline, dose maps are generated using Monte Carlo simulation. The noise map is obtained by scaling the inserted noise in synthetic low-dose emulation in order to avoid anatomical structures, where the scaling coefficients are empirically calibrated. Organ segmentations are generated by a deep learning-based method (TotalSegmentator). The proposed noise model is evaluated on a clinical dataset of 12 CT scans, a phantom dataset of 3 uniform phantom scans, and a cross-site dataset of 26 scans. The accuracy of deep learning-based segmentations for organ-level dose and noise estimates was tested using a dataset of 41 cases with expert segmentations of six organs: lungs, liver, kidneys, bladder, spleen, and pancreas.<i>Main results</i>. The empirical noise model performs well, with an average RMSE approximately 1.5 HU and an average relative RMSE approximately 5% across different dose levels. The segmentation from TotalSegmentator yielded a mean Dice score of 0.8597 across the six organs (max = 0.9315 in liver, min = 0.6855 in pancreas). The resulting error in organ-level dose and noise estimation was less than 2% for most organs.<i>Significance</i>. The proposed pipeline can output individualized organ-specific dose and noise estimates accurately for personalized protocol evaluation and optimization. It is fully automated and can be scalable to large clinical datasets. This pipeline can be used to optimize image quality for specific organs and thus clinical tasks, without adversely affecting overall radiation dose.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":"70 3","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143056100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-29 | DOI: 10.1088/1361-6560/ada5a3
Dual-modality flow phantom for ultrasound and optical flow measurements
Chris M Kallweit, Adrian J Y Chee, Billy Y S Yiu, Sean D Peterson, Alfred C H Yu
As ultrasound-compatible flow phantoms are devised for performance testing and calibration, there is a practical need to obtain independent flow measurements for validation using a gold-standard technique such as particle image velocimetry (PIV). In this paper, we present the design of a new dual-modality flow phantom that allows ultrasound and PIV measurements to be simultaneously performed. Our phantom's tissue-mimicking material is based on a novel hydrogel formula that uses propylene glycol to lower the freezing temperature of an ultrasound-compatible poly(vinyl) alcohol cryogel and, in turn, maintain the solution's optical transparency after thermocycling. The hydrogel's optical attenuation {1.56 dB cm⁻¹ with 95% confidence interval (CI) of [1.512 1.608]}, refractive index {1.337, CI: [1.340 1.333]}, acoustic attenuation {0.038 dB/(cm × MHzᵇ), CI: [0.0368 0.0403]; frequency-dependent factor of 1.321, CI: [1.296 1.346]}, and speed of sound {1523.6 m s⁻¹, CI: [1523.8 1523.4]} were found to be suitable for PIV and ultrasound flow measurements. As an application demonstration, a bimodal flow phantom with a spiral lumen was fabricated and used in simultaneous flow measurements with PIV and ultrasound color flow imaging (CFI). Velocity fields and profiles were compared between the two modalities under a constant flow rate (2.5 ml s⁻¹). CFI was found to overestimate flow speed compared to the PIV measurements, with a 14%, 10%, and 6% difference between PIV and ultrasound for the 60°, 45°, and 30° angles measured. These results demonstrate the new phantom's feasibility in enabling performance validation of ultrasound flow mapping tools.
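The reported acoustic attenuation fit corresponds to a power-law model α(f) = a·fᵇ with a = 0.038 dB/(cm·MHzᵇ) and b = 1.321; the short sketch below evaluates that model at a few illustrative frequencies (the frequencies themselves are not from the paper).

```python
# Worked example of the power-law attenuation model implied by the reported fit,
# alpha(f) = a * f**b in dB/cm, with a = 0.038 dB/(cm*MHz^b) and b = 1.321.
# The evaluation frequencies are arbitrary illustrations, not values from the paper.
a_coeff = 0.038   # dB/(cm * MHz^b)
b_exp = 1.321     # frequency-dependent exponent

def attenuation_db_per_cm(freq_mhz: float) -> float:
    return a_coeff * freq_mhz ** b_exp

for f in (2.0, 5.0, 7.5):
    print(f"{f:.1f} MHz -> {attenuation_db_per_cm(f):.3f} dB/cm")  # e.g. ~0.32 dB/cm at 5 MHz
```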
{"title":"Dual-modality flow phantom for ultrasound and optical flow measurements.","authors":"Chris M Kallweit, Adrian J Y Chee, Billy Y S Yiu, Sean D Peterson, Alfred C H Yu","doi":"10.1088/1361-6560/ada5a3","DOIUrl":"10.1088/1361-6560/ada5a3","url":null,"abstract":"<p><p>As ultrasound-compatible flow phantoms are devised for performance testing and calibration, there is a practical need to obtain independent flow measurements for validation using a gold-standard technique such as particle image velocimetry (PIV). In this paper, we present the design of a new dual-modality flow phantom that allows ultrasound and PIV measurements to be simultaneously performed. Our phantom's tissue mimicking material is based on a novel hydrogel formula that uses propylene glycol to lower the freezing temperature of an ultrasound-compatible poly(vinyl) alcohol cryogel and, in turn, maintain the solution's optical transparency after thermocycling. The hydrogel's optical attenuation {1.56 dB cm<sup>-1</sup>with 95% confidence interval (CI) of [1.512 1.608]}, refractive index {1.337, CI: [1.340 1.333]}, acoustic attenuation {0.038 dB/(cm × MHz<i><sup>b</sup></i>), CI: [0.0368 0.0403]; frequency dependent factor of 1.321, CI: [1.296 1.346]}, and speed of sound {1523.6 m s<sup>-1</sup>, CI: [1523.8 1523.4]} were found to be suitable for PIV and ultrasound flow measurements. As an application demonstration, a bimodal flow phantom with spiral lumen was fabricated and used in simultaneous flow measurements with PIV and ultrasound color flow imaging (CFI). Velocity fields and profiles were compared between the two modalities under a constant flow rate (2.5 ml s<sup>-1</sup>). CFI was found to overestimate flow speed compared to the PIV measurements, with a 14%, 10%, and 6% difference between PIV and ultrasound for the 60°, 45°, and 30° angles measured. These results demonstrate the new phantom's feasibility in enabling performance validation of ultrasound flow mapping tools.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142927769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-29 | DOI: 10.1088/1361-6560/adaacd
GMmorph: dynamic spatial matching registration model for 3D medical image based on gated Mamba
Hao Lin, Yonghong Song, Qi Zhang
Objective. Deformable registration aims to achieve nonlinear alignment of image space by estimating a dense displacement field. It is commonly used as a preprocessing step in clinical and image analysis applications, such as surgical planning, diagnostic assistance, and surgical navigation. We aim to overcome the following challenges: deep learning-based registration methods often struggle with complex displacements and lack effective interaction between global and local feature information, and they neglect the spatial position matching process, leading to insufficient registration accuracy and reduced robustness when handling abnormal tissues. Approach. We propose a dual-branch interactive registration model architecture from the perspective of spatial matching. Implicit regularization is achieved through a consistency loss, enabling the network to balance high accuracy with a low folding rate. We introduce a dynamic matching module between the two registration branches, which generates learnable offsets based on all the tokens across the entire resolution range of the base branch features. Using trilinear interpolation, the model adjusts its feature expression range according to the learned offsets, capturing highly flexible positional differences. To facilitate the spatial matching process, we designed the gated Mamba layer to globally model pixel-level features by associating all voxel information, while the detail enhancement module, which is based on channel and spatial attention, enhances the richness of local feature details. Main results. Our study explores the model's performance in single-modal and multi-modal image registration, including normal brain, brain tumor, and lung images. We propose unsupervised and semi-supervised registration modes and conduct extensive validation experiments. The results demonstrate that the model achieves state-of-the-art performance across multiple datasets. Significance. By introducing a novel perspective of position matching, the model achieves precise registration of various types of medical data, offering significant clinical value in medical applications.
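The core operation of sampling features at learned offsets with trilinear interpolation can be expressed with torch.nn.functional.grid_sample, as in the sketch below. This illustrates only that generic resampling step, not the paper's dynamic matching module; the offset magnitude and tensor shapes are assumptions.

```python
# Sketch of offset-based trilinear feature resampling with grid_sample on a 5D feature map.
# Generic operation only; shapes and offset scaling are assumptions, not the paper's module.
import torch
import torch.nn.functional as F

def sample_with_offsets(feat: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
    """feat: (N, C, D, H, W); offsets: (N, D, H, W, 3) in normalized [-1, 1] units."""
    N, C, D, H, W = feat.shape
    # Base identity grid in grid_sample's (x, y, z) ordering, normalized to [-1, 1]
    zs = torch.linspace(-1, 1, D, device=feat.device)
    ys = torch.linspace(-1, 1, H, device=feat.device)
    xs = torch.linspace(-1, 1, W, device=feat.device)
    z, y, x = torch.meshgrid(zs, ys, xs, indexing="ij")
    base = torch.stack((x, y, z), dim=-1).unsqueeze(0).expand(N, -1, -1, -1, -1)
    grid = base + offsets                        # shift each sampling location by its offset
    return F.grid_sample(feat, grid, mode="bilinear", align_corners=True)  # trilinear in 3D

feat = torch.randn(1, 8, 16, 16, 16)
offsets = 0.05 * torch.tanh(torch.randn(1, 16, 16, 16, 3))  # small, bounded offsets
resampled = sample_with_offsets(feat, offsets)
```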
{"title":"GMmorph: dynamic spatial matching registration model for 3D medical image based on gated Mamba.","authors":"Hao Lin, Yonghong Song, Qi Zhang","doi":"10.1088/1361-6560/adaacd","DOIUrl":"10.1088/1361-6560/adaacd","url":null,"abstract":"<p><p><i>Objective.</i>Deformable registration aims to achieve nonlinear alignment of image space by estimating a dense displacement field. It is commonly used as a preprocessing step in clinical and image analysis applications, such as surgical planning, diagnostic assistance, and surgical navigation. We aim to overcome these challenges: Deep learning-based registration methods often struggle with complex displacements and lack effective interaction between global and local feature information. They also neglect the spatial position matching process, leading to insufficient registration accuracy and reduced robustness when handling abnormal tissues.<i>Approach.</i>We propose a dual-branch interactive registration model architecture from the perspective of spatial matching. Implicit regularization is achieved through a consistency loss, enabling the network to balance high accuracy with a low folding rate. We introduced the dynamic matching module between the two branches of the registration, which generates learnable offsets based on all the tokens across the entire resolution range of the base branch features. Using trilinear interpolation, the model adjusts its feature expression range according to the learned offsets, capturing highly flexible positional differences. To facilitate the spatial matching process, we designed the gated mamba layer to globally model pixel-level features by associating all voxel information, while the detail enhancement module, which is based on channel and spatial attention, enhances the richness of local feature details.<i>Main results.</i>Our study explores the model's performance in single-modal and multi-modal image registration, including normal brain, brain tumor, and lung images. We propose unsupervised and semi-supervised registration modes and conduct extensive validation experiments. The results demonstrate that the model achieves state-of-the-art performance across multiple datasets.<i>Significance.</i>By introducing a novel perspective of position matching, the model achieves precise registration of various types of medical data, offering significant clinical value in medical applications.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143010000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-29 | DOI: 10.1088/1361-6560/adaad0
Revisiting the safety limit in magnetic nanoparticle hyperthermia: insights from eddy current induced heating
Konstantinos Pilpilidis, George Tsanidis, Maria Anastasia Rouni, John Markakis, Theodoros Samaras
Objective. Magnetic nanoparticle hyperthermia (MNH) is emerging as a promising therapeutic strategy for cancer treatment, leveraging alternating magnetic fields (AMFs) to induce localized heating through magnetic nanoparticles. However, the interaction of AMFs with biological tissues leads to non-specific heating caused by eddy currents, triggering thermoregulatory responses and complex thermal gradients throughout the body of the patient. While previous studies have implemented the Atkinson-Brezovich limit to mitigate potential harm, recent research underscores discrepancies between this threshold and clinical outcomes, necessitating a re-evaluation of this safety limit. Therefore, in this study, the complex interaction between AMFs and anatomical models was investigated through electromagnetic (EM) simulations. Approach. In particular, we considered a circular coil configuration placed at different positions along the craniocaudal axis of various anatomical human models. The excitation current was normalized, at different frequencies, to meet the basic restriction on the local 10 g-averaged specific energy absorption rate (SAR) in the human models, as defined by the exposure guidelines of the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the standard IEC 60601-2-33 of the International Electrotechnical Commission (IEC). Main results. The resulting permissible magnetic field strength values, for the reference levels set by the ICNIRP 2020 guidelines, were found to be up to approximately 1.4 and 3 times lower than the value defined by the Atkinson-Brezovich limit. The widely used limit was found to align more closely with the first level controlled operating mode defined in the IEC 60601-2-33 standard. Significance. The results indicate that the permissible magnetic field amplitude during MNH treatment should be much lower than the Atkinson-Brezovich limit. This study offers valuable insights into the role of computational simulations in advancing the potential to establish a reliable metric for safety evaluation and monitoring within the clinical framework of MNH.
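Because local SAR scales with the square of the coil current (and hence of the field amplitude), the normalization step described above reduces to a square-root rescaling, as in the hedged sketch below. The numerical values are placeholders, not the paper's simulation results or regulatory limits.

```python
# Sketch of the field-amplitude normalization step: local SAR scales with the square of the
# coil current (and hence of H), so the permissible amplitude follows from square-root scaling.
# The SAR limit and simulated values below are placeholders, not the paper's numbers.
import math

def permissible_field(h_sim_a_per_m: float, peak_sar10g_sim: float, sar10g_limit: float) -> float:
    """Scale a simulated field amplitude so the peak 10 g-averaged SAR meets the basic restriction."""
    scale = math.sqrt(sar10g_limit / peak_sar10g_sim)
    return h_sim_a_per_m * scale

# Example: a run at 1000 A/m produced a peak local SAR of 80 W/kg (placeholder values)
h_limit = permissible_field(h_sim_a_per_m=1000.0, peak_sar10g_sim=80.0, sar10g_limit=10.0)
print(f"Permissible H amplitude: {h_limit:.0f} A/m")
```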
{"title":"Revisiting the safety limit in magnetic nanoparticle hyperthermia: insights from eddy current induced heating.","authors":"Konstantinos Pilpilidis, George Tsanidis, Maria Anastasia Rouni, John Markakis, Theodoros Samaras","doi":"10.1088/1361-6560/adaad0","DOIUrl":"10.1088/1361-6560/adaad0","url":null,"abstract":"<p><p><i>Objective.</i>Magnetic nanoparticle hyperthermia (MNH) emerges as a promising therapeutic strategy for cancer treatment, leveraging alternating magnetic fields (AMFs) to induce localized heating through magnetic nanoparticles. However, the interaction of AMFs with biological tissues leads to non-specific heating caused by eddy currents, triggering thermoregulatory responses and complex thermal gradients throughout the body of the patient. While previous studies have implemented the Atkinson-Brezovich limit to mitigate potential harm, recent research underscores discrepancies between this threshold and clinical outcomes, necessitating a re-evaluation of this safety limit. Therefore, in this study, through electromagnetic (EM) simulations, the complex interaction between AMFs and anatomical models was investigated.<i>Approach.</i>In particular, we considered a circular coil configuration placed at different positions along the craniocaudal axis of various anatomical human models. The excitation current was normalized, at different frequencies, to meet the basic restriction of local 10 g-averaged specific energy absorption rate (SAR) in the human models, as defined by the exposure guidelines of the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the standard IEC 60601-2-33 of the International Electrotechnical Commission (IEC).<i>Main results.</i>The resulting permissible magnetic field strength values, for the reference levels set by the ICNIRP 2020 guidelines, emerged to be up to approximately 1.4 and 3 times less than that defined in the Atkinson-Brezovich limit. The widely used limit was found to align more closely with the first level of controlled operating mode defined in the IEC 60601-2-33 standard.<i>Significance.</i>The results indicate that the permissible magnetic field amplitude during MNH treatment should be much lower than that in the Atkinson-Brezovich limit. This study offers valuable insights into the role of computational simulations in advancing the potential to establish a reliable metric for safety evaluation and monitoring within the clinical framework of MNH.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143010050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-29 | DOI: 10.1088/1361-6560/adabaf
Crystal-level timing calibration using cascaded photons of ⁶⁰Co point source for long axial animal PET system
Qing Wei, Daowu Li, Xianchao Huang, Long Wei, Zhiming Zhang, Xiaorou Han, Yingjie Wang
Objective. Timing calibration is essential for a positron emission tomography (PET) system, as it enhances timing resolution and thereby improves image quality. Traditionally, positron sources are employed for timing calibration. However, the photons emitted by these sources travel in opposite directions, necessitating that positrons annihilate at multiple locations to collect coincidence data across a greater number of lines of response. To overcome this limitation, this study proposes a timing calibration method utilising a ⁶⁰Co point source. Approach. The ⁶⁰Co source emits cascaded photons without angular correlation, allowing the collection of coincidence events throughout the field of view (FOV) with a single ⁶⁰Co point source positioned at the centre of the FOV to determine the timing offsets of the pixels. Leveraging the properties of ⁶⁰Co, we propose a calibration method and implement it on a long axial animal PET system. Initially, we calibrated the timing offsets of the pixels within two blocks to establish reference detectors, and subsequently employed a ⁶⁰Co point source to determine the timing offsets of all the pixels in the system relative to these reference detectors. In addition, we evaluated the system's timing resolution before and after the calibration to validate the efficacy of the proposed method. Main results. We measured the timing offsets of the pixels across the entire system, ranging from -5.0 to 2.0 ns. After implementing the timing offset lookup table, the system timing resolution improved from 6.30 ns before calibration to 1.04 ns. Significance. In this study, the ⁶⁰Co source is employed for timing calibration, offering the advantages of operational simplicity and broad applicability, with potential application in time-of-flight PET.
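One simple way to estimate per-crystal offsets relative to calibrated reference pixels is to average the residual coincidence time differences, as in the sketch below. The data layout and averaging scheme are assumptions for illustration, not the authors' exact procedure.

```python
# Sketch of per-crystal timing-offset estimation against calibrated reference pixels.
# Each coincidence (pixel, ref, dt) contributes dt - offset[ref] to that pixel's estimate.
# Data layout and averaging are assumptions, not the authors' exact procedure.
from collections import defaultdict

def estimate_offsets(coincidences, ref_offsets):
    """coincidences: iterable of (pixel_id, ref_pixel_id, dt_ns); ref_offsets: {ref_pixel_id: ns}."""
    sums, counts = defaultdict(float), defaultdict(int)
    for pixel, ref, dt in coincidences:
        residual = dt - ref_offsets[ref]           # remove the known reference offset
        sums[pixel] += residual
        counts[pixel] += 1
    return {p: sums[p] / counts[p] for p in sums}  # mean residual = estimated timing offset

events = [(42, 0, 1.8), (42, 1, 2.1), (7, 0, -3.9), (7, 1, -4.2)]
offsets = estimate_offsets(events, ref_offsets={0: 0.0, 1: 0.2})
print(offsets)   # e.g. {42: 1.85 ns, 7: -4.15 ns}
```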
{"title":"Crystal-level timing calibration using cascaded photons of<sup>60</sup>Co point source for long axial animal PET system.","authors":"Qing Wei, Daowu Li, Xianchao Huang, Long Wei, Zhiming Zhang, Xiaorou Han, Yingjie Wang","doi":"10.1088/1361-6560/adabaf","DOIUrl":"10.1088/1361-6560/adabaf","url":null,"abstract":"<p><p><i>Objective.</i>Timing calibration is essential for positron emission tomography (PET) system as it enhances timing resolution to improve image quality. Traditionally, positron sources are employed for timing calibration. However, the photons emitted by these sources travel in opposite directions, necessitating that positrons annihilate at multiple locations to collect coincidence data across a greater number of lines of response. To overcome this limitation, this study proposes a timing calibration method utilising a<sup>60</sup>Co point source.<i>Approach.</i>The<sup>60</sup>Co source emits cascaded photons without angular correlation, allowing the collection of coincidence events throughout the field of view (FOV) with a single<sup>60</sup>Co point source positioned at the centre of the FOV to determine the timing offsets of the pixels. Leveraging the properties of<sup>60</sup>Co, we propose a calibration method and implement it on a long axial animal PET system. Initially, we calibrated the timing offsets of the pixels within two blocks to establish reference detectors, and subsequently employed a<sup>60</sup>Co point source to determine the timing offsets of all the pixels in the system relative to these reference detectors. In addition, we evaluated the system's timing resolution before and after the calibration to validate the efficacy of the proposed method.<i>Main results.</i>We measured the timing offsets of the pixels across the entire system, ranging from -5.0 to 2.0 ns. After implementing the timing offset lookup table, the system timing resolution was improved from 6.30 ns before calibration to 1.04 ns.<i>Significance</i>. In this study, the<sup>60</sup>Co source is employed for timing calibration, offering the advantages of operational simplicity, broad applicability, and potential application in time-of-flight PET.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143009993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-29 | DOI: 10.1088/1361-6560/adaacf
Assessing fetal radiation dose from iodine-125 seeds in pregnant breast cancer patients: an updated model
J M E Pluim, J B van de Kamer, E Heeling, I M C van der Ploeg, D J W Hulsen
Objective. The treatment of breast cancer during pregnancy requires careful consideration of the consequences for both maternal and fetal health. In non-pregnant patients, the use of radioactive iodine-125 (¹²⁵I) seeds is standard practice for localising non-palpable breast tumors before breast-conserving surgery. However, the use of ¹²⁵I seeds in pregnant patients has been avoided due to concerns about fetal radiation exposure. Approach. In this study, a mathematical model was developed to estimate the fetal absorbed dose based on several factors: the radioactivity of the ¹²⁵I seed, the duration of implantation, and the distance between the ¹²⁵I seed and the fetus as a function of maternal anatomy, gestational age, and fetal development. Three scenarios, representing a range of maternal and fetal anatomy, were evaluated, including a worst-case scenario from a radiation safety perspective. Main results. The results show that the fetal absorbed dose varies across the three scenarios, with ranges of 0.0-0.4 mGy, 0.0-1.0 mGy, and 0.0-1.6 mGy, depending on when the ¹²⁵I seed was implanted and when it was removed. These dose ranges are similar to those of conventional diagnostic x-ray scans. The maximum calculated absorbed dose (1.6 mGy) is unlikely to be reached in practice and is well below the 100 mGy threshold associated with possible fetal malformations. The associated theoretical increase in cancer risk (0.016%) is minimal. Significance. The use of ¹²⁵I seeds as a localisation method for breast tumors in pregnant patients results in low fetal radiation doses and should not be avoided due to dose concerns.
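The general ingredients of such an estimate, time-integrated activity from the exponential decay of ¹²⁵I (half-life of about 59.4 days) and a distance-dependent geometric term, are sketched below. This is a heavily simplified point-source illustration with a placeholder dose coefficient; it is not the authors' updated model, which additionally accounts for maternal anatomy, gestational age and tissue attenuation.

```python
# Heavily simplified sketch: time-integrated activity from iodine-125 decay (half-life ~59.4 d)
# combined with an inverse-square distance term. The dose-rate coefficient k is a placeholder;
# this is NOT the authors' model, which also includes anatomy, gestation and attenuation.
import math

HALF_LIFE_DAYS = 59.4
LAMBDA = math.log(2) / HALF_LIFE_DAYS           # decay constant [1/day]

def time_integrated_activity(a0_mbq: float, days_implanted: float) -> float:
    """Cumulated activity in MBq*day for initial activity a0 over the implantation period."""
    return a0_mbq / LAMBDA * (1.0 - math.exp(-LAMBDA * days_implanted))

def fetal_dose_mgy(a0_mbq, days_implanted, distance_cm, k=1.0):
    """k is a placeholder dose-rate coefficient [mGy*cm^2 / (MBq*day)], not a published value."""
    return k * time_integrated_activity(a0_mbq, days_implanted) / distance_cm ** 2

print(fetal_dose_mgy(a0_mbq=5.0, days_implanted=7.0, distance_cm=20.0))
```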
{"title":"Assessing fetal radiation dose from iodine-125 seeds in pregnant breast cancer patients: an updated model.","authors":"J M E Pluim, J B van de Kamer, E Heeling, I M C van der Ploeg, D J W Hulsen","doi":"10.1088/1361-6560/adaacf","DOIUrl":"10.1088/1361-6560/adaacf","url":null,"abstract":"<p><p><i>Objective.</i>The treatment of breast cancer during pregnancy requires careful consideration of consequences for both maternal and fetal health. In non-pregnant patients, the use of radioactive iodine-125 (<sup>125</sup>I)-seeds is standard practice for localising non-palpable breast tumors before breast-conserving surgery. However, the use of<sup>125</sup>I-seeds in pregnant patients has been avoided due to concerns about fetal radiation exposure.<i>Approach.</i>In this study a mathematical model was developed to estimate the fetal absorbed dose based on several factors: the radioactivity of the<sup>125</sup>I-seed, the duration of implantation, and the distance between the<sup>125</sup>I-seed and fetus as a function of maternal anatomy, gestational age, and fetal development. Three scenarios, representing a range of maternal and fetal anatomy, were evaluated, including a worst-case scenario from a radiation safety perspective.<i>Main results.</i>The results show that the fetal absorbed dose varies across the three scenarios, with ranges of 0.0-0.4 mGy, 0.0-1.0 mGy, and 0.0-1.6 mGy, depending on when the<sup>125</sup>I-seed was implanted and when it was removed. These dose ranges are similar to conventional diagnostic x-ray scans. The maximum calculated absorbed dose (1.6 mGy) is unlikely to be reached in practice and is well below the 100 mGy threshold associated with possible fetal malformations. The associated theoretical cancer risk increase (0.016%) is minimal.<i>Significance.</i>The use of<sup>125</sup>I-seeds as localisation method of breast tumors in pregnant patients results in low fetal radiation doses and should not be avoided due to dose concerns.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143009978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-27 | DOI: 10.1088/1361-6560/ada719
Real-time lung extraction from synthesized x-rays improves pulmonary image-guided radiotherapy
Xinyi Fu, Katelyn Hasse, Di Xu, Qifan Xu, Martina Descovich, Dan Ruan, Ke Sheng
Objective. Lung tumors can be obscured in x-rays, preventing accurate and robust localization. To improve lung conspicuity for image-guided procedures, we isolate the lungs in anterior-posterior (AP) x-rays using a lung extraction network (LeX-net) that virtually removes overlapping thoracic structures, including ribs, diaphragm, liver, heart, and trachea. Approach. 73 965 thoracic 3DCTs and 106 thoracic 4DCTs were included. The 3D lung volume was extracted using an open-source lung volume segmentation model. AP digitally reconstructed radiographs (DRRs) of the full-anatomy CT and of the extracted lungs were computed as the input and reference to train a network (LeX-net), which adopts a Swin-UNet model with a conditional GAN, to generate lung-extracted DRRs (LeX-net DRRs) from full-anatomy DRRs. Subsequently, the LeX-net trained on 3DCT was applied to 4DCT-derived DRRs. Lung tumor tracking was then performed on DRRs using a template-matching method on a held-out 4DCT test set of 79 patients whose gross tumor volumes were smaller than 20 cm³. Main results. LeX-net successfully isolated the lungs in DRRs, achieving an SSIM of 0.9581 ± 0.0151 and a PSNR of 30.78 ± 2.50 on the testing set of 3DCT-derived DRRs. Its performance declined slightly when applied to the 4DCT but maintained usable lung-only 2D views. On the challenging test set, which included cases of organ overlap, high tumor mobility, and small tumor size, the individual tumor tracking error for LeX-net DRRs was 0.97 ± 0.86 mm, significantly lower than the 3.13 ± 5.82 mm obtained using the full-anatomy DRRs. LeX-net improved the success rates of 5 mm, 3 mm, and 1 mm tracking windows from 88.1%, 80.0%, and 58.7% to 98.1%, 94.2%, and 73.8%, respectively. Significance. LeX-net removes overlapping anatomies and enhances visualization of the lungs in x-rays. The model trained using 3DCTs generalizes to 4DCT-derived DRRs, achieving significantly improved tumor tracking outcomes.
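Template matching of the kind used for the tracking step can be done with normalized cross-correlation over a search window, as in the naive sketch below; it is a generic 2D matcher for illustration, not the authors' real-time implementation.

```python
# Naive sketch of template matching by normalized cross-correlation (NCC) on a 2D image,
# returning the (row, col) of the best match. Not the authors' tracking implementation.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_template(image: np.ndarray, template: np.ndarray):
    th, tw = template.shape
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):         # brute-force sliding window
        for c in range(image.shape[1] - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

drr = np.random.rand(64, 64)
tmpl = drr[20:36, 24:40].copy()          # take a patch as the tumor template
pos, score = match_template(drr, tmpl)
print(pos, round(score, 3))              # expect (20, 24) with score ~1.0
```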
{"title":"Real-time lung extraction from synthesized x-rays improves pulmonary image-guided radiotherapy.","authors":"Xinyi Fu, Katelyn Hasse, Di Xu, Qifan Xu, Martina Descovich, Dan Ruan, Ke Sheng","doi":"10.1088/1361-6560/ada719","DOIUrl":"10.1088/1361-6560/ada719","url":null,"abstract":"<p><p><i>Objective.</i>Lung tumors can be obscured in x-rays, preventing accurate and robust localization. To improve lung conspicuity for image-guided procedures, we isolate the lungs in the anterior-posterior (AP) x-rays using a lung extraction network (LeX-net) that virtually removes overlapping thoracic structures, including ribs, diaphragm, liver, heart, and trachea.<i>Approach.</i>73 965 thoracic 3DCTs and 106 thoracic 4DCTs were included. The 3D lung volume was extracted using an open-source lung volume segmentation model. AP digitally reconstructed radiographs (DRRs) of the full anatomy CT and extracted lungs were computed as the input and reference to train a network (LeX-net) to generate lung-extracted DRRs (LeX-net DRRs) from full anatomy DRRs, which adopted a Swin-UNet model with conditional GAN. Subsequently, the trained LeX-net on 3DCT was applied to 4DCT-derived DRRs. Lung tumor tracking was then performed on DRRs using a template-matching method on a holdoff 4DCT test set of 79 patients whose gross tumor volumes were smaller than 20 cm<sup>3</sup>.<i>Main results</i>. LeX-net successfully isolated the lungs in DRRs, achieving an SSIM of 0.9581 ± 0.0151 and a PSNR of 30.78 ± 2.50 on the testing set of 3DCT-derived DRRs. Its performance declined slightly when applied to the 4DCT but maintained useable lung-only 2D views. On the challenging test set including cases of organ overlap, high tumor mobility, and small tumor size, the individual tumor tracking error for LeX-net DRRs was 0.97 ± 0.86 mm, significantly lower than that of 3.13 ± 5.82 mm using the full anatomy DRRs. LeX-net improved success rates of using 5 mm, 3 mm, and 1 mm tracking windows from 88.1%, 80.0%, and 58.7% to 98.1%, 94.2%, and 73.8%, respectively.<i>Significance</i>. LeX-net removes overlapping anatomies and enhances visualization of the lungs in x-rays. The model trained using 3DCTs is generalizable to 4DCT-derived DRRs, achieving significantly improved tumor tracking outcome.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142952911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}