Pub Date: 2024-12-20 | DOI: 10.1088/2057-1976/ad992e
Abdellah Khallouqi, Hamza Sekkat, Omar El Rhazouani, Abdellah Halimi
The primary objective of this study was to compare organ doses measured using optically stimulated luminescent dosimeters (OSLDs) with those estimated by the CT-EXPO software for common CT protocols. An anthropomorphic ATOM phantom was used to measure organ doses for head, chest, and abdominal CT scans performed on a Hitachi Supria 16-slice CT scanner. OSLDs were placed in the adult anthropomorphic phantom, with calibration performed through a comprehensive process involving multiple tube potentials and sensitivity corrections. Results from three CT acquisitions per protocol were compared with the estimates provided by the widely used CT-EXPO software. Findings reveal significant discrepancies between measured and estimated organ doses, with p-values consistently below 0.05 across all organs. For head CT, measured eye-lens doses averaged 33.51 mGy, 6.0% lower than the estimated 35.65 mGy. In chest CT, the thyroid dose was 9.82 mGy, 13.5% higher than the estimated 8.65 mGy. For abdominal CT, the liver dose measured 12.11 mGy, 9.6% higher than the estimated 11.05 mGy. Measured doses for the remaining organs were generally lower than those predicted by CT-EXPO, revealing limitations in current estimation models and underscoring the importance of precise dosimetry. This study highlights the potential of OSLD measurements as a complementary method for organ dose assessment in CT imaging, emphasizing the need for more accurate organ dose measurement to optimize patient care.
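The percentage discrepancies quoted above follow directly from the measured and estimated doses; a minimal check using the values from the abstract:

```python
def relative_difference(measured, estimated):
    """Signed percent difference of a measured dose relative to the software estimate."""
    return (measured - estimated) / estimated * 100.0

# Dose pairs quoted in the abstract (mGy): measured OSLD vs CT-EXPO estimate
eye_lens = relative_difference(33.51, 35.65)   # head CT: about -6.0%
thyroid = relative_difference(9.82, 8.65)      # chest CT: about +13.5%
liver = relative_difference(12.11, 11.05)      # abdominal CT: about +9.6%
```

Each value reproduces the discrepancy stated in the abstract to one decimal place.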
Title: "Investigation of organs dosimetry precision using ATOM phantom and optically stimulated luminescence detectors in computed tomography." Biomedical Physics & Engineering Express (IF 1.3).
Pub Date: 2024-12-20 | DOI: 10.1088/2057-1976/ad992d
Sunita Bhatt, Richa Gupta, Vijay R N Prabhakar, Prashant Kumar Shukla, Sudip Kumar Datta, Satish Kumar Dubey
Smartphone-assisted urine analyzers estimate urinary albumin by quantifying color changes at the sensor pad of test strips. These strips change color in response to the total protein present in the sample, making it difficult to attribute the color change to a specific analyte. We address this using a lateral flow assay (LFA) device for automatic detection and quantification of urinary albumin. LFAs are specific to individual analytes, allowing color changes to be linked to the target analyte and minimizing interference. The proposed reader performs automatic segmentation of the region of interest (ROI) using YOLOv5, a deep learning-based model. Concentrations of urinary albumin in clinical samples were classified using customized machine learning algorithms; an accuracy of 96% was achieved on the test data using the k-nearest neighbour (k-NN) algorithm. The performance of the model was also evaluated under different illumination conditions and with different smartphone cameras, and validated against a standard nephelometer.
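The classification step (k-NN over ROI color features) can be sketched as follows. The RGB feature vectors and albumin classes below are hypothetical placeholders, not the study's data or its exact feature set:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs; distance is Euclidean,
    mirroring a k-NN step over ROI color features."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical mean-RGB features of the LFA test line for three albumin levels
train = [((200, 180, 190), "low"), ((205, 178, 188), "low"),
         ((160, 120, 150), "medium"), ((158, 118, 148), "medium"),
         ((110, 60, 100), "high"), ((112, 62, 104), "high")]
label = knn_classify(train, (159, 119, 149))  # falls nearest the "medium" samples
```

In practice the features would come from the YOLOv5-segmented ROI rather than hand-entered tuples.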
Title: "Quantification of urinary albumin in clinical samples using smartphone enabled LFA reader incorporating automated segmentation."
Pub Date: 2024-12-20 | DOI: 10.1088/2057-1976/ad9c7d
Nada Yousif, Peter G Bain, Dipankar Nandi, Roman Borisyuk
Conventional deep brain stimulation (DBS) for movement disorders is a well-established clinical treatment. Over the last few decades, more than 200,000 people worldwide have been treated with DBS for several neurological conditions, including Parkinson's disease and essential tremor. DBS involves implanting electrodes into disorder-specific targets in the brain and applying an electric current. Although the hardware has developed in recent years, the clinically used stimulation pattern has remained a regular-frequency square pulse. Recent studies have suggested that phase-locking, coordinated reset, or irregular patterns may be as effective or more effective at desynchronising the pathological neural activity. Such studies have shown efficacy using detailed neuron models or highly simplified networks, and have considered a single frequency band. We previously described a population-level model that generates oscillatory activity in both the beta band (20 Hz) and the tremor band (4 Hz). Here we use this model to examine the impact of applying regular, irregular, and phase-dependent bursts of stimulation, and show how these influence both tremor- and beta-band activity. We found that bursts are as effective as, or more effective than, continuous DBS at suppressing the pathological oscillations. Importantly, however, at higher amplitudes the stimulus drove the network activity, as seen previously. Strikingly, this suppression was most apparent for the tremor-band oscillations, with beta-band pathological activity being more resistant to burst stimulation than to continuous, conventional DBS. Furthermore, our simulations showed that phase-locked bursts of stimulation conveyed little improvement over regular bursts of stimulation. Using a genetic algorithm optimisation approach to find the best stimulation parameters for regular, irregular, and phase-locked bursts, we confirmed that tremor-band oscillations could be more readily suppressed.
Our results allow exploration of stimulation mechanisms at the network level to formulate testable predictions regarding parameter settings in DBS.
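The regular and phase-locked burst patterns compared in the study can be sketched as simple pulse-train generators. The rates and phase tolerance below are illustrative, not the paper's settings:

```python
import math

def regular_bursts(duration_s, burst_rate_hz, pulses_per_burst, pulse_rate_hz):
    """Onset times (s) of stimulation pulses delivered in regular bursts:
    bursts repeat at burst_rate_hz, each containing a short run of pulses
    at the higher intra-burst pulse rate."""
    times = []
    t = 0.0
    while t < duration_s:
        for i in range(pulses_per_burst):
            times.append(t + i / pulse_rate_hz)
        t += 1.0 / burst_rate_hz
    return times

def phase_locked_trigger(phase_rad, target_phase_rad, tolerance_rad=0.2):
    """Fire a burst when the tracked oscillation phase is within `tolerance_rad`
    of the target phase -- the phase-locked variant of burst delivery."""
    return abs(math.remainder(phase_rad - target_phase_rad, 2 * math.pi)) < tolerance_rad

# Bursts at an illustrative tremor-band rate (4 Hz), 5 pulses per burst at 130 Hz
pulse_times = regular_bursts(1.0, burst_rate_hz=4, pulses_per_burst=5, pulse_rate_hz=130)
```

A phase-locked schedule would replace the fixed burst clock with `phase_locked_trigger` evaluated against the tremor oscillation's instantaneous phase.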
Title: "Non-conventional deep brain stimulation in a network model of movement disorders."
Pub Date: 2024-12-20 | DOI: 10.1088/2057-1976/ada1db
Jihun Bae, Hunmin Lee, Jinglu Hu
Recent studies on graph representation learning for brain tumor analysis have garnered significant interest by encoding and learning the inherent relationships among the geometric features of tumors. Brain tumor MRI datasets, however, suffer from serious class imbalance. CNN- and Transformer-based deep learning models can address this problem through complex architectures with large numbers of parameters, but graph-based networks are not suited to this approach because of chronic over-smoothing and oscillating-convergence problems. To address these challenges, we propose novel graph spectral convolutional networks called HeatGSNs, which incorporate eigenfilters and learnable low-pass graph heat kernels to capture geometric similarities within tumor classes. They operate through a continuous feature propagation mechanism derived from the forward finite difference of graph heat kernels, approximated by a cosine form of the shift-scaled Chebyshev polynomials and modified Bessel functions, leading to fast and accurate performance. Our experimental results show a best average Dice score of 90%, an average Hausdorff distance (95%) of 5.45 mm, and an average accuracy of 80.11% on the BRATS2021 dataset. Moreover, HeatGSNs require significantly fewer parameters (1.79M on average) than existing methods, demonstrating their efficiency and effectiveness.
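The operator at the heart of a graph heat kernel is exp(-tL) for graph Laplacian L, which acts as a low-pass filter on node features. A minimal sketch computes it by a truncated Taylor series on a two-node graph; this illustrates only the low-pass behaviour, not the paper's Chebyshev/Bessel-function approximation or its learnable filters:

```python
def matmul(A, B):
    """Plain dense matrix product for small lists-of-lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def heat_kernel(L, t, terms=30):
    """Approximate exp(-t*L) by its truncated Taylor series
    sum_k (-t)^k L^k / k!  (adequate for small t * spectral radius)."""
    n = len(L)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # identity = k=0 term
    term = [row[:] for row in H]
    for k in range(1, terms):
        term = matmul(term, L)                                  # term *= L
        term = [[(-t / k) * x for x in row] for row in term]    # term *= -t/k
        H = [[h + s for h, s in zip(hr, sr)] for hr, sr in zip(H, term)]
    return H

# Two-node graph Laplacian; exact kernel entries are (1 +/- e^{-2t}) / 2
H = heat_kernel([[1.0, -1.0], [-1.0, 1.0]], t=0.5)
```

Each row of the kernel sums to one (the Laplacian annihilates constants), so applying it to node features smooths them toward the neighbourhood mean, which is the low-pass effect the eigenfilters shape per tumor class.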
Title: "HeatGSNs: Integrating Eigenfilters and Low-Pass Graph Heat Kernels into Graph Spectral Convolutional Networks for Brain Tumor Segmentation and Classification."
Pub Date: 2024-12-20 | DOI: 10.1088/2057-1976/ad9c7f
Yadi Zhu, Chao Lian, Xiang Ji, Xiaoxiang Zhang, Chunjing Li, Yunqing Bai, Jun Gao
In this paper, we propose extended collimator designs aimed at reducing the radiation dose received by normal tissues and protecting organs at risk in boron neutron capture therapy (BNCT). Three types of extended collimators are studied: Type 1, a traditional design; Type 2, which builds on Type 1 by incorporating additional polyethylene containing lithium fluoride (PE(LiF)); and Type 3, which adds lead (Pb) to Type 1. We evaluated the dose distribution characteristics of these extended collimators using Monte Carlo simulations under different configurations: in air, in a homogeneous phantom, and in a humanoid phantom model. First, the neutron and gamma-ray fluxes at the collimator outlet showed no significant differences among the three designs, so their therapeutic effects on tumors can be expected to be similar. Next, the dose distribution outside the irradiation field was studied. Compared with Type 1, Type 2 achieved a maximum reduction of 57.14% in neutron leakage dose, and Type 3 a maximum reduction of 21.88% in gamma-ray leakage dose, which helps reduce the radiation dose to the local skin. Finally, the doses to different organs were simulated. The neutron dose of Type 2 was relatively low, especially for the skin, thyroid, spinal cord, and left lung, reduced by approximately 20.34%, 16.18%, 26.05%, and 18.91%, respectively, compared with Type 1. The Type 3 collimator reduced the gamma-ray dose to the thyroid, esophagus, and left lung by around 10.81%, 9.45%, and 10.42%, respectively. This indicates that attaching PE(LiF) or Pb to a standard collimator can suppress the dose to patient organs, providing valuable insights for the design of extended collimators in BNCT.
Title: "Dose optimization of extended collimators in boron neutron capture therapy."
Pub Date: 2024-12-20 | DOI: 10.1088/2057-1976/ad960d
Yoshiaki Yasumoto, Hiromitsu Daisaki, Mitsuru Sato
Introduction. The SIMIND Monte Carlo simulation code, which models medical imaging nuclear detectors, is a notable tool for simulating nuclear medicine experiments. This study aimed to confirm the usability of SIMIND as an alternative to physical nuclear medicine experiments with the cardiac phantom HL, which simulates human body structures, by comparison with actual experimental data. Methods. A cardiac phantom HL simulating myocardial scintigraphy with 123I-meta-iodobenzylguanidine was prepared, and single-photon emission computed tomography/computed tomography imaging was performed on a Discovery NM/CT 670 scanner. In addition to the main energy window (159 keV ± 10%), windows were set on the low- (137.5 keV ± 4%) and high- (180.5 keV ± 3%) energy sides. The simulations were performed under the same conditions as the experiments. Regions of interest (ROIs) were set on each organ in the experimental and simulated data, and a polar map of the myocardium was generated. The mean, maximum (max), and minimum (min) counts within each ROI, as well as the relative errors of each segment in the polar map, were calculated to evaluate the accuracy of the simulation. Results. Overall, the results were favorable, with relative errors of <10% except in some areas, based on the data from the main energy window after reconstruction. In contrast, relative errors of >10% were found in both the low and high sub-energy windows. The smallest error occurred when assessing mean values within the ROIs. The relative error was high at the cardiac base in the polar map evaluation; however, it remained <10% from the mid to apical sections. Conclusion. SIMIND can be considered an alternative to physical nuclear medicine experiments using the myocardial phantom HL, which closely resembles human body structures. However, caution is warranted, as accuracy may decrease under specific conditions.
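Flanking low- and high-energy sub-windows of this kind are the configuration typically used for triple-energy-window (TEW) scatter correction; the abstract does not state that purpose explicitly, so treating them as TEW windows is an assumption here. The window widths follow from the abstract's center ± percent notation; the counts are hypothetical:

```python
def window_width(center_kev, pct):
    """Full width (keV) of an energy window specified as center +/- pct%."""
    return 2 * center_kev * pct / 100.0

def tew_scatter(c_low, c_high, w_low, w_high, w_main):
    """Triple-energy-window scatter estimate for the main window:
    trapezoidal interpolation between the two flanking sub-windows."""
    return (c_low / w_low + c_high / w_high) * w_main / 2.0

w_main = window_width(159.0, 10.0)   # 31.8 keV
w_low = window_width(137.5, 4.0)     # 11.0 keV
w_high = window_width(180.5, 3.0)    # 10.83 keV

# Hypothetical sub-window counts; primary counts = main-window counts - scatter
scatter = tew_scatter(c_low=500.0, c_high=200.0,
                      w_low=w_low, w_high=w_high, w_main=w_main)
```

The same window-width arithmetic applies whether the sub-windows are used for scatter correction or only for comparing simulated and measured counts.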
Title: "Validation of the SIMIND simulation code using the myocardial phantom HL."
Pub Date: 2024-12-20 | DOI: 10.1088/2057-1976/ad9c7c
Indra J Das, Ahtesham U Khan, Sara Lim, Poonam Yadav, Eric Donnelley, Bharat B Mittal
Highlight. Electron beam treatment often requires bolus to raise the surface dose to nearly 100%. No optimum bolus material exists, so a high-Z, cloth-like material is investigated to reduce the air column in treatment and provide optimum surface dose. The material is well suited to clinical use because it can be reused and sanitized. Characteristics of the tungsten-silicone (W-Si) material are provided. Purpose/Objective(s). Electron beams are frequently used for superficial tumors. However, owing to electron beam characteristics, the surface dose is 75-95% of the prescribed dose depending on beam energy, requiring placement of a bolus to augment the surface dose. Various types of bolus are commonly used in clinics, each with its own limitations. Most bolus devices do not conform to the skin contour and create air gaps that are known to produce dose perturbations, creating hot and cold spots. Cloth-like high-Z materials, tungsten (Z = 74) and bismuth (Z = 83), impregnated in silicone gel are investigated as electron bolus. Materials/Methods. Super-soft, silicone-gel-based, submillimeter-thin tungsten and bismuth sheets were investigated as bolus for 6-12 MeV beams. Parallel-plate ion chamber measurements were performed in a solid water phantom on a Varian machine. Depth dose characteristics were measured to optimize the thickness yielding 100% surface dose for the selected electron energies, and validated with Monte Carlo simulations. Results. Silicone-gel tungsten and bismuth sheets produce significant secondary electrons, increasing the surface dose. Based on measured depth dose, our data showed that tungsten sheets of 0.14 mm, 0.18 mm, and 0.2 mm and bismuth sheets of 0.42 mm, 0.18 mm, and 0.2 mm provide 100% surface dose for 6, 9, and 12 MeV beams, respectively, without any significant change in depth dose other than the increased surface dose. Conclusions. The new high-Z cloth-like sheets are extremely soft yet high-tensile metallic bolus materials that can fit flawlessly on any skin contour. Sheets only 0.2 mm thick are needed for 100% surface dose without degradation of the depth dose characteristics. These materials are reusable and ideal for bolus in electron beam treatment. This investigation opens a new frontier in designing bolus materials optimal for patient treatment.
Title: "An investigation of high-Z material for bolus in electron beam therapy."
Pub Date: 2024-12-20 | DOI: 10.1088/2057-1976/ad98a3
Yiming Liu, Yuehua Liang, Ting Yu, Xiang Tao, Xin Wu, Yan Wang, Qingli Li
Quantitative measurement of the placenta during gross examination is a crucial step in evaluating the health status of both the mother and the fetus. In current clinical practice, however, manual measurement and subjective determination of placental characteristics make the process time-consuming and observer-dependent. We therefore propose a quantitative assessment system for placental gross examination that efficiently and accurately measures placental characteristics according to the Amsterdam Consensus, including the weight and thickness of the placenta, the length and width of the placental disc, the length and diameter of the umbilical cord, and the distance from the umbilical cord insertion point to the placental edges. The proposed system consists of (1) an instrument designed for standardized acquisition of the image, weight, and thickness of the placenta and (2) an algorithm for quantitative morphological assessment based on precise segmentation of the placental disc and umbilical cord and localization of the umbilical cord insertion point. Considering the complex spatial distribution and ambiguous texture of the insertion point, we design an Umbilical Cord Insertion Point Candidate Generator that provides reliable insertion-point locations by exploiting prior structural knowledge of the umbilical cord. We integrate this generator with a Base Detector so that an insertion point is still provided when the Base Detector fails to generate high-scoring candidate points. Experimental results on our self-collected placenta dataset demonstrate the effectiveness of the proposed algorithm. The morphological measurements are calculated from the segmentation and localization results. The proposed system, with its instrument and algorithm, automatically extracts numerical measurements to improve the standardization and efficiency of placental gross examination.
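Deriving physical measurements from a segmentation can be sketched as a bounding-box calculation with a pixel-to-millimeter calibration factor. The toy mask and `mm_per_px` value below are hypothetical placeholders, not the system's actual pipeline:

```python
def disc_dimensions(mask, mm_per_px):
    """Length and width (mm) of a segmented placental disc, taken as the
    bounding-box extents of the foreground pixels in a binary mask.
    `mm_per_px` is the calibration a fixed-camera instrument would supply."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    height_px = rows[-1] - rows[0] + 1
    width_px = max(cols) - min(cols) + 1
    # Report the larger extent as length, the smaller as width
    return (max(height_px, width_px) * mm_per_px,
            min(height_px, width_px) * mm_per_px)

# Toy 4x6 binary mask standing in for a segmentation output
mask = [[0, 0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1, 0],
        [0, 1, 1, 1, 1, 0],
        [0, 0, 0, 0, 0, 0]]
length_mm, width_mm = disc_dimensions(mask, mm_per_px=0.5)
```

A real disc would call for a rotated bounding box or principal-axis fit rather than axis-aligned extents, but the calibration step is the same.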
{"title":"Quantitative assessment system for placental gross examination with precise localization of umbilical cord insertion point.","authors":"Yiming Liu, Yuehua Liang, Ting Yu, Xiang Tao, Xin Wu, Yan Wang, Qingli Li","doi":"10.1088/2057-1976/ad98a3","DOIUrl":"10.1088/2057-1976/ad98a3","url":null,"abstract":"<p><p>A quantitative assessment for measuring the placenta during gross examination is a crucial step in evaluating the health status of both the mother and the fetus. However, in the current clinical practice, time-consuming and observer-variant drawbacks are caused due to manual measurement and subjective determination of placental characteristics. Therefore, we propose a quantitative assessment system for placenta gross examination to efficiently and accurately measuring placental characteristics according to Amsterdam Consensus, including weight and thickness of placenta, length and width of placental disc, length and diameter of umbilical cord, distance from umbilical cord insertion point to placental edges, etc. The proposed system consists of (1) an instrument designed for standard acquisition of image, weight and thickness of placenta and (2) an algorithm for quantitative morphological assessment based on precise segmentation of placental disc and umbilical cord and localization of umbilical cord insertion point. Considering the complexity of spatial distribution and ambiguous texture of umbilical cord insertion point, we design Umbilical Cord Insertion Point Candidate Generator to provide reliable umbilical cord insertion point location by employing prior structural knowledge of umbilical cord. Therefore, we integrate the Umbilical Cord Insertion Point Candidate Generator with a Base Detector to ensure umbilical cord insertion point is provided when the Base Detector fails to generate high-scoring candidate points. Experimental results on our self-collected placenta dataset demonstrate the effectiveness of our proposed algorithm. 
The measurements of placental morphological assessment are calculated based on segmentation and localization results. Our proposed quantitative assessment system, along with its associated instrument and algorithm, can automatically extract numerical measurements to boost the standardization and efficiency of placental gross examination.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142754534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-20 DOI: 10.1088/2057-1976/ada1da
Aurelle Tchagna Kouanou, Issa Karambal, Yae Gaba, Christian Tchito Tchapga, Alain Marcel Simo Dikande, Clemence Alla Takam, Daniel Tchiotsop
Background and objective: Auto-encoders have demonstrated outstanding performance in computer vision tasks in biomedical imaging, including classification, segmentation, and denoising. Many current techniques for image denoising in biomedical applications involve training an autoencoder or convolutional neural network (CNN) on pairs of clean and noisy images. However, these approaches are not realistic because the autoencoder or CNN is trained on known noise and does not generalize well to new noise distributions. This paper proposes a novel approach for biomedical image denoising using a variational network based on a Bayesian model and deep learning.
Method: In this study, we aim to denoise biomedical images using a Bayesian approach. In our dataset, every image exhibits the same noise distribution. To achieve this, we first estimate the noise distribution via Bayesian probability by calculating the posterior distributions, and then proceed with denoising. A loss function that combines the Bayesian prior and autoencoder objectives is used to train the variational network. The proposed method was tested on CT scan biomedical image datasets and compared with state-of-the-art denoising techniques.
Results: The experimental results demonstrate that our method outperforms the existing methods in terms of denoising accuracy, visual quality, and computational efficiency. For instance, we obtained a PSNR of 39.18 dB and an SSIM of 0.9941 with noise intensity std = 10. Our approach can potentially improve the accuracy and reliability of biomedical image analysis, which can have significant implications for clinical diagnosis and treatment planning.
Conclusion: The proposed method combines the advantages of Bayesian modeling and variational networks to effectively denoise biomedical images.
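The reported figures (PSNR of 39.18 dB, SSIM of 0.9941 at noise std = 10) follow standard image-quality metrics. A minimal sketch of the PSNR computation, assuming 8-bit images with peak value 255 (the abstract does not state the intensity range):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference image and a
    test (e.g. denoised) image on a [0, max_val] intensity scale."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For context, leaving Gaussian noise of std 10 untouched gives an MSE near 100, i.e. roughly 28 dB on an 8-bit scale, so a denoised PSNR of 39.18 dB represents a substantial gain.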
{"title":"A Variational Network for Biomedical Images Denoising using Bayesian model and Auto-Encoder.","authors":"Aurelle Tchagna Kouanou, Issa Karambal, Yae Gaba, Christian Tchito Tchapga, Alain Marcel Simo Dikande, Clemence Alla Takam, Daniel Tchiotsop","doi":"10.1088/2057-1976/ada1da","DOIUrl":"https://doi.org/10.1088/2057-1976/ada1da","url":null,"abstract":"<p><strong>Background and objective: </strong>Auto-encoders have demonstrated outstanding performance in computer vision tasks such as biomedical imaging, including classification, segmentation, and denoising. Many of the current techniques for image denoising in biomedical applications involve training an autoencoder or convolutional neural network (CNN) using pairs of clean and noisy images. However, these approaches are not realistic because the autoencoder or CNN is trained on known noise and does not generalize well to new noisy distributions. This paper proposes a novel approach for biomedical image denoising using a variational network based on a Bayesian model and deep learning.
Method: In this study, we aim to denoise biomedical images using a Bayesian approach. In our dataset, every image exhibits the same noise distribution. To achieve this, we first estimate the noise distribution via Bayesian probability by calculating the posterior distributions, and then proceed with denoising. A loss function that combines the Bayesian prior and autoencoder objectives is used to train the variational network. The proposed method was tested on CT scan biomedical image datasets and compared with state-of-the-art denoising techniques.
Results: The experimental results demonstrate that our method outperforms the existing methods in terms of denoising accuracy, visual quality, and computational efficiency. For instance, we obtained a PSNR of 39.18 dB and an SSIM of 0.9941 with noise intensity std = 10. Our approach can potentially improve the accuracy and reliability of biomedical image analysis, which can have significant implications for clinical diagnosis and treatment planning.
Conclusion: The proposed method combines the advantages of Bayesian modeling and variational networks to effectively denoise biomedical images.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142869360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-20 DOI: 10.1088/2057-1976/ad9bb6
Sayedu Khasim Noorbasha, Arun Kumar
The diagnosis of neurological disorders often involves analyzing EEG data, which can be contaminated by artifacts from eye movements or blinking (EOG). To improve the accuracy of EEG-based analysis, we propose a novel framework, VME-EFD, which combines Variational Mode Extraction (VME) and Empirical Fourier Decomposition (EFD) for effective EOG artifact removal. In this approach, the EEG signal is first decomposed by VME into two segments: the desired EEG signal and the EOG artifact. The EOG component is further processed by EFD, where decomposition levels are analyzed based on energy and skewness. The level with the highest energy and skewness, corresponding to the artifact, is discarded, while the remaining levels are reintegrated with the desired EEG. Simulations on both synthetic and real EEG datasets demonstrate that VME-EFD outperforms existing methods, with lower RRMSE (0.1358 versus 0.1557, 0.1823, 0.2079, 0.2748), lower ΔPSD in the α band (0.10 ± 0.01 and 0.17 ± 0.04 versus 0.89 ± 0.91 and 0.22 ± 0.19, 1.32 ± 0.23 and 1.10 ± 0.07, 2.86 ± 1.30 and 1.19 ± 0.07, 3.96 ± 0.56 and 2.42 ± 2.48), and a higher correlation coefficient (CC: 0.9732 versus 0.9695, 0.9514, 0.8994, 0.8730). The framework effectively removes EOG artifacts and preserves critical EEG features, particularly in the α band, making it highly suitable for brain-computer interface (BCI) applications.
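The level-selection step (discard the decomposition level with the highest energy and skewness, reintegrate the rest) can be sketched as follows. This is a minimal illustration, not the authors' code: the VME and EFD decompositions themselves are assumed to have produced `levels`, and the combined normalized score is a hypothetical way of ranking on both criteria at once:

```python
import numpy as np

def _abs_skewness(x: np.ndarray) -> float:
    """Absolute Fisher-Pearson skewness (0 for a flat signal)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    m2 = np.mean(d ** 2)
    return abs(np.mean(d ** 3) / m2 ** 1.5) if m2 > 0 else 0.0

def remove_artifact_level(levels):
    """Drop the decomposition level with the highest energy and skewness
    and reintegrate the remaining levels (selection rule only)."""
    energy = np.array([np.sum(np.square(l)) for l in levels])
    sk = np.array([_abs_skewness(l) for l in levels])
    # normalize each criterion so both contribute equally to the ranking
    score = energy / energy.max() + sk / max(sk.max(), 1e-12)
    artifact = int(np.argmax(score))
    return np.sum([l for i, l in enumerate(levels) if i != artifact], axis=0)
```

An EOG blink level, being a large asymmetric deflection, scores high on both energy and skewness, while oscillatory EEG levels score low on both, which is what makes this joint criterion discriminative.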
{"title":"VME-EFD : A novel framework to eliminate the Electrooculogram artifact from single-channel EEGs.","authors":"Sayedu Khasim Noorbasha, Arun Kumar","doi":"10.1088/2057-1976/ad9bb6","DOIUrl":"10.1088/2057-1976/ad9bb6","url":null,"abstract":"<p><p>The diagnosis of neurological disorders often involves analyzing EEG data, which can be contaminated by artifacts from eye movements or blinking (EOG). To improve the accuracy of EEG-based analysis, we propose a novel framework, VME-EFD, which combines Variational Mode Extraction (VME) and Empirical Fourier Decomposition (EFD) for effective EOG artifact removal. In this approach, the EEG signal is first decomposed by VME into two segments: the desired EEG signal and the EOG artifact. The EOG component is further processed by EFD, where decomposition levels are analyzed based on energy and skewness. The level with the highest energy and skewness, corresponding to the artifact, is discarded, while the remaining levels are reintegrated with the desired EEG. Simulations on both synthetic and real EEG datasets demonstrate that VME-EFD outperforms existing methods, with lower RRMSE (0.1358 versus 0.1557, 0.1823, 0.2079, 0.2748), lower ΔPSD in the <i>α</i> band (0.10 ± 0.01 and 0.17 ± 0.04 versus 0.89 ± 0.91 and 0.22 ± 0.19, 1.32 ± 0.23 and 1.10 ± 0.07, 2.86 ± 1.30 and 1.19 ± 0.07, 3.96 ± 0.56 and 2.42 ± 2.48), and higher correlation coefficient (CC: 0.9732 versus 0.9695, 0.9514, 0.8994, 0.8730).
The framework effectively removes EOG artifacts and preserves critical EEG features, particularly in the <i>α</i> band, making it highly suitable for brain-computer interface (BCI) applications.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142799481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}