Pub Date: 2024-11-21, DOI: 10.1088/2057-1976/ad9152
Miriam Schwarze, Gerhard Hilgers, Hans Rabus
Objective. A previous study reported nanodosimetric measurements of therapeutic-energy carbon ions penetrating simulated tissue. The results are incompatible with the predicted mean energy of the carbon ions in the nanodosimeter and with previous experiments using lower-energy monoenergetic beams. The purpose of this study is to explore the origin of these discrepancies. Approach. Detailed simulations using the Geant4 toolkit were performed to investigate the radiation field in the nanodosimeter and to provide input data for track structure simulations, which were performed with a developed version of the PTra code. Main results. The Geant4 simulations show that with the narrow-beam geometry employed in the experiment, only a small fraction of the carbon ions traverse the nanodosimeter, and their mean energy is between 12% and 30% lower than the values estimated using the SRIM software. Only about one-third or fewer of these carbon ions hit the trigger detector. The track structure simulations indicate that the observed enhanced ionization cluster sizes are mainly due to coincidences with events in which carbon ions miss the trigger detector. In addition, the discrepancies observed for carbon ions traversing the target volume at high absorber thicknesses could be explained by assuming an increase in thickness or interaction cross-sections on the order of 1%. Significance. The results show that even with strong collimation of the radiation field, future nanodosimetric measurements of clinical carbon ion beams will require large trigger detectors to register all events in which carbon ions traverse the nanodosimeter. Energy-loss calculations of the primary beam in the absorbers are insufficient and should be replaced by detailed simulations when planning such experiments. Uncertainties in the interaction cross-sections of simulation codes may shift the predicted Bragg peak position.
Title: Nanodosimetric investigation of the track structure of therapeutic carbon ion radiation part 2: detailed simulation.
Pub Date: 2024-11-21, DOI: 10.1088/2057-1976/ad91ba
V M Raja Sankari, Snekhalatha Umapathy
Retinopathy of Prematurity (ROP) is a retinal disorder affecting preterm babies that can lead to permanent blindness without treatment. Early-stage ROP diagnosis is vital for providing optimal therapy to neonates. The proposed study predicts early-stage ROP from neonatal fundus images using Machine Learning (ML) classifiers and pre-trained Convolutional Neural Network (CNN) models. The characteristic demarcation lines and ridges of early-stage ROP are segmented using a novel Swin U-Net. A total of 2000 Scale-Invariant Feature Transform (SIFT) descriptors were extracted from the segmented ridges and dimensionally reduced to 50 features using Principal Component Analysis (PCA). Seven ROP-specific features, comprising six Gray-Level Co-occurrence Matrix (GLCM) features and a ridge-length feature, were extracted from the segmented image and fused with the 50 PCA-reduced SIFT features. Finally, three ML classifiers, namely Support Vector Machine (SVM), Random Forest (RF), and k-Nearest Neighbor (k-NN), were used to classify the resulting features and distinguish early-stage ROP from normal images. In parallel, the raw retinal images were classified directly into normal and early-stage ROP using six pre-trained networks: ResNet50, ShuffleNet V2, EfficientNet, MobileNet, VGG16, and DarkNet19. The ResNet50 network outperformed all other networks in predicting early-stage ROP, with 89.5% accuracy, 87.5% sensitivity, 91.5% specificity, 91.1% precision, 88% NPV, and an Area Under the Curve (AUC) of 0.92. The Swin U-Net segmented the ridges and demarcation lines with an accuracy of 89.7%, precision of 80.5%, recall of 92.6%, IoU of 75.76%, and a Dice coefficient of 0.86. The SVM classifier using the 57 features from the segmented images achieved a classification accuracy of 88.75%, sensitivity of 90%, specificity of 87.5%, and an AUC of 0.91.
The system can be utilised as a point-of-care diagnostic tool for ROP diagnosis of neonates in remote areas.
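The feature-fusion step described above (PCA-reduced SIFT statistics concatenated with the seven handcrafted GLCM/ridge features) can be sketched in plain NumPy; the array shapes and function names here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def pca_reduce(X, n_components=50):
    """Project rows of X onto the top principal components (plain-NumPy PCA)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are principal axes
    return Xc @ Vt[:n_components].T

def fuse_features(sift_vectors, handcrafted):
    """Concatenate PCA-reduced SIFT statistics with handcrafted features.

    sift_vectors: (n_images, 2000) aggregated SIFT descriptor statistics
    handcrafted:  (n_images, 7) GLCM and ridge-length features
    """
    reduced = pca_reduce(sift_vectors, 50)    # (n_images, 50)
    return np.hstack([reduced, handcrafted])  # (n_images, 57)

rng = np.random.default_rng(0)
fused = fuse_features(rng.normal(size=(60, 2000)), rng.normal(size=(60, 7)))
print(fused.shape)  # (60, 57)
```

The fused 57-dimensional vectors are what a classifier such as an SVM would then be trained on.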
Title: Computer-aided diagnosis of early-stage Retinopathy of Prematurity in neonatal fundus images using artificial intelligence.
Lung cancer is one of the most common life-threatening cancers worldwide, affecting both male and female populations. The appearance of nodules in a scan image is an early indication of the development of cancer cells in the lung. Low-Dose Computed Tomography screening is used for the early detection of cancerous nodules. With a growing number of Computed Tomography (CT) lung profiles, an automated lung nodule analysis system can therefore be built using image processing techniques and neural network algorithms. A CT image of the lung contains many elements, such as blood vessels, ribs, the sternum, bronchi, and nodules. Nodules can be benign or malignant, and the latter lead to lung cancer. Detecting them at an earlier stage can increase life expectancy by up to 5 to 10 years. To analyse only the nodules in the profile, the relevant features are extracted using image processing techniques. Based on this review, textural features are among the most promising for medical image analysis and for solving computer vision problems. Uncovering such hidden features allows Deep Learning (DL) algorithms to perform better, especially in medical imaging, where they have improved accuracy. Earlier detection of cancerous lung nodules is possible through the combination of multi-feature extraction and classification techniques applied to image data. One of the greatest challenges is that incorrect identification of malignant nodules results in a higher false-positive rate during prediction; suitable features make the system more precise in prognosis. This paper provides an overview of lung cancer along with the publicly available datasets for research purposes.
It focuses mainly on recent research that combines feature extraction and deep learning algorithms to reduce the false-positive rate in the automated detection of lung nodules. The primary objective of the paper is to highlight the importance of textural features when combined with different deep learning models, giving insights into their advantages, disadvantages, and limitations with regard to possible research gaps. The reviewed studies compare deep learning models with and without feature extraction and conclude that DL models that include feature extraction outperform the others.
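As a concrete illustration of the textural features this review emphasizes, a gray-level co-occurrence matrix and a few Haralick-style statistics can be computed in a few lines of NumPy (a didactic sketch; real pipelines typically use library implementations such as scikit-image's `graycomatrix`):

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Build a normalized gray-level co-occurrence matrix for one pixel offset."""
    q = (image * levels / (image.max() + 1e-9)).astype(int).clip(0, levels - 1)
    M = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[q[y, x], q[y + dy, x + dx]] += 1  # count co-occurring level pairs
    return M / M.sum()

def glcm_features(P):
    """Classic textural statistics derived from a normalized GLCM."""
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    energy = np.sum(P ** 2)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

img = np.tile(np.arange(8.0), (8, 1))  # smooth horizontal gradient image
P = glcm(img, dx=1, dy=0)
contrast, energy, homogeneity = glcm_features(P)
print(round(contrast, 3))  # 1.0 -- every horizontal neighbor differs by one gray level
```

Such statistics, computed over a candidate nodule region, are the kind of handcrafted input the reviewed hybrid models fuse with learned deep features.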
Title: A systematic review on feature extraction methods and deep learning models for detection of cancerous lung nodules at an early stage - the recent trends and challenges.
Authors: Mathumetha Palani, Sivakumar Rajagopal, Anantha Krishna Chintanpalli
Pub Date: 2024-11-20, DOI: 10.1088/2057-1976/ad9154
Pub Date: 2024-11-20, DOI: 10.1088/2057-1976/ad90e7
Sina Taghipour, Farid Vakili-Tahami, Tajbakhsh Navid Chakherlou
Orthopedic injuries, such as femoral shaft fractures, often require surgical intervention to promote healing and functional recovery. Metal plate implants are widely used because of their mechanical strength and biocompatibility. Biodegradable metal plate implants, including those made from magnesium, zinc, and iron alloys, offer distinct advantages over non-biodegradable materials such as stainless steel, titanium, and cobalt alloys: they are gradually replaced by native bone tissue, reducing the need for additional surgeries and improving patient recovery. However, non-biodegradable implants remain popular because of their stability, corrosion resistance, and biocompatibility. This study focuses on designing an implant plate for treating transverse femoral shaft fractures under walking-cycle loading. The primary objective is to conduct a comprehensive finite element analysis (FEA) of the stabilization of a fractured femur using various biodegradable and non-biodegradable materials. The study assesses the efficacy of the different implant materials, discusses implant design, and identifies the optimal materials for femoral stabilization. The results indicate that magnesium alloy is superior among the biodegradable materials, while titanium alloy is preferred among the non-biodegradable options. The findings suggest magnesium alloy as the recommended material for bone implants because of its advantages over non-degradable alternatives.
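The stiffness argument behind these material choices can be illustrated with a back-of-the-envelope load-sharing estimate for a plate-bone construct loaded in parallel; the Young's moduli are typical literature values and the cross-sectional areas are arbitrary assumptions, not quantities from the paper's FEA model:

```python
# Each parallel member carries axial load in proportion to its stiffness E*A.
# A less stiff plate leaves more load in the bone, reducing stress shielding.
E = {"Mg alloy": 45.0, "Ti alloy": 110.0, "stainless steel": 200.0}  # GPa, typical values

def bone_load_share(E_plate, E_bone=17.0, A_plate=60.0, A_bone=400.0):
    """Fraction of axial load carried by bone (areas in mm^2, illustrative only)."""
    k_bone, k_plate = E_bone * A_bone, E_plate * A_plate
    return k_bone / (k_bone + k_plate)

for name, modulus in E.items():
    print(f"{name}: bone carries {bone_load_share(modulus):.0%} of the load")
```

Under these assumptions the magnesium plate leaves the largest share of load in the bone, which is one mechanical rationale for preferring it among the biodegradable options.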
Title: Comparing the performance of a femoral shaft fracture fixation using implants with biodegradable and non-biodegradable materials.
Due to the inherent variability of EEG signals across individuals, domain adaptation and adversarial learning strategies are increasingly used to develop subject-specific classification models by leveraging data from other subjects. These approaches primarily focus on domain alignment and tend to overlook the critical task-specific class boundaries, which can result in a weak correlation between the extracted features and the categories. To address these challenges, we propose a novel model that uses the known information from multiple subjects to bolster EEG classification for an individual subject through adversarial learning strategies. Our method begins by extracting both shallow and attention-driven deep features from EEG signals. Subsequently, we employ a class discriminator to encourage same-class features from different domains to converge while ensuring that different-class features diverge. This is achieved with our proposed discrimination loss function, which is designed to minimize the feature distance for samples of the same class across different domains while maximizing it for samples of different classes. Additionally, our model incorporates two parallel classifiers that are harmonious yet distinct and jointly contribute to decision-making. Extensive testing on two publicly available EEG datasets validates our model's efficacy and superiority.
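The idea behind such a discrimination loss (pulling same-class features from different domains together while pushing different-class features apart) can be sketched in NumPy as a contrastive-style objective; this is an illustrative reconstruction of the concept, not the authors' exact loss:

```python
import numpy as np

def class_alignment_loss(feat_src, y_src, feat_tgt, y_tgt, margin=1.0):
    """Pull same-class feature pairs from two domains together and push
    different-class pairs at least `margin` apart (contrastive-style sketch)."""
    # pairwise squared distances between every source/target feature pair
    d2 = ((feat_src[:, None, :] - feat_tgt[None, :, :]) ** 2).sum(-1)
    same = y_src[:, None] == y_tgt[None, :]
    pull = d2[same].mean() if same.any() else 0.0
    push = np.maximum(0.0, margin - np.sqrt(d2[~same])).mean() if (~same).any() else 0.0
    return float(pull + push)

# toy check: two domains with identical, well-separated class features
feats = np.array([[0.0, 0.0], [2.0, 2.0]])
labels = np.array([0, 1])
loss = class_alignment_loss(feats, labels, feats, labels)
print(loss)  # 0.0 -- classes already aligned across domains and separated by > margin
```

In a real model this term would be minimized jointly with the classification loss during adversarial training.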
Title: A class alignment network based on self-attention for cross-subject EEG classification.
Authors: Sufan Ma, Dongxiao Zhang, Jiayi Wang, Jialiang Xie
Pub Date: 2024-11-20, DOI: 10.1088/2057-1976/ad90e8
Purpose. This study aimed to develop a new method for automated contrast-to-noise ratio (CNR) measurement using the low-contrast object in the ACR computed tomography (CT) phantom. Methods. The proposed method for CNR measurement is based on statistical criteria. A region of interest (ROI) was placed at a specific radial location and then rotated through 360° in increments of 2°. At each position, the average CT number within the ROI was calculated. After one complete rotation, a profile of the average CT number over the full rotation was obtained, and the center coordinate of the low-contrast object was determined from the maximum of this profile. The CNR was calculated from the average CT number and noise within the ROI in the low-contrast object and within a background ROI at the center of the phantom. The proposed method was used to evaluate CNR from images scanned with various phantom rotations, images with various noise levels (tube currents), and images from 25 CT scanners, and the results were compared with a previous method based on a threshold approach. Results. The proposed method placed the ROI properly in the center of the low-contrast object for all variations of phantom rotation and tube current, whereas the previous method did not locate it properly. In addition, across 325 image samples from the 25 CT scanners, the proposed method successfully (100%) located the ROI within the low-contrast objects of all images, while the success rate of the previous method was only 58%. Conclusion. A new method for measuring CNR in the ACR CT phantom has been proposed and implemented; it is more robust than the previous threshold-based method.
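The rotating-ROI search and the CNR calculation can be sketched as follows on a synthetic phantom image; this is a simplified illustration of the idea (square image, Gaussian noise, hypothetical geometry), not the published implementation:

```python
import numpy as np

def find_object_and_cnr(image, center, radius, roi_r=5, step_deg=2):
    """Rotate a circular ROI around `center` at `radius` pixels, take the angle
    with the maximum mean value as the low-contrast object location, and
    compute CNR against a background ROI at the phantom center."""
    yy, xx = np.indices(image.shape)

    def roi_mean_std(cy, cx):
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= roi_r ** 2
        return image[mask].mean(), image[mask].std()

    angles = np.deg2rad(np.arange(0, 360, step_deg))
    means = [roi_mean_std(center[0] + radius * np.sin(a),
                          center[1] + radius * np.cos(a))[0] for a in angles]
    best = angles[int(np.argmax(means))]                 # angle of the object
    obj_mean, _ = roi_mean_std(center[0] + radius * np.sin(best),
                               center[1] + radius * np.cos(best))
    bg_mean, bg_std = roi_mean_std(*center)              # background at phantom center
    return (obj_mean - bg_mean) / bg_std

# synthetic phantom: noisy uniform background plus one brighter disk
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (128, 128))
yy, xx = np.indices(img.shape)
img[(yy - 64) ** 2 + (xx - 94) ** 2 <= 36] += 5.0       # object at radius 30, angle 0
cnr = find_object_and_cnr(img, (64, 64), 30)
print(cnr > 3.0)  # True for this synthetic object
```

Because the search uses the maximum of the full angular profile rather than a fixed intensity threshold, it is insensitive to phantom rotation, which is the robustness property the study reports.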
Title: A statistical-based automatic detection of a low-contrast object in the ACR CT phantom for measuring contrast-to-noise ratio of CT images.
Authors: Choirul Anam, Riska Amilia, Ariij Naufal, Toshioh Fujibuchi, Geoff Dougherty
Pub Date: 2024-11-20, DOI: 10.1088/2057-1976/ad90e9
Pub Date: 2024-11-20, DOI: 10.1088/2057-1976/ad9157
Mahbubunnabi Tamal, Murad Althobaiti, Maryam Alhashim, Maram Alsanea, Tarek M Hegazi, Mohamed Deriche, Abdullah M Alhashem
Introduction. The lung CT images of COVID-19 patients are typically characterized by three different findings: Ground-Glass Opacity (GGO), consolidation, and pleural effusion. GGOs have been shown to precede consolidations and have a different, heterogeneous appearance. Conventional severity scoring uses only the total area of lung involvement, ignoring the appearance of the affected regions. This study proposes a baseline for selecting heterogeneity/radiomic features that can distinguish these three pathological lung findings. Methods. Four approaches were implemented to select features from a pool of 44 features. The first is a manual feature selection method; the rest are automatic feature selection methods based on a Genetic Algorithm (GA) coupled with (1) K-Nearest Neighbor (GA-KNN), (2) a binary decision tree (GA-BDT), and (3) an Artificial Neural Network (GA-ANN). For validation, an ANN was trained using the selected features and tested on a completely independent data set. Results. Manual selection of nine radiomic features provided the most accurate results, with the highest sensitivity, specificity, and accuracy (85.7% overall accuracy and 0.90 area under the receiver operating characteristic curve), followed by GA-BDT, GA-KNN, and GA-ANN (accuracies of 78%, 77.5%, and 76.8%). Conclusion. The nine manually selected radiomic features can be used for accurate severity scoring, allowing the clinician to plan more effective personalized treatment.
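A GA-wrapper feature selector of the GA-KNN kind can be sketched compactly; this toy version (bit-mask individuals, leave-one-out 1-NN fitness, tournament selection, uniform crossover, elitism, synthetic data) illustrates the mechanism, not the paper's specific GA configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_accuracy(X, y, mask):
    """Leave-one-out 1-NN accuracy using only the features selected by `mask`."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    d = ((Xs[:, None] - Xs[None, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)          # a sample may not be its own neighbor
    return float((y[d.argmin(1)] == y).mean())

def ga_select(X, y, pop=20, gens=15, p_mut=0.05):
    """Bit-mask GA with tournament selection, uniform crossover and elitism."""
    n = X.shape[1]
    P = rng.random((pop, n)) < 0.5       # random initial population of masks
    for _ in range(gens):
        fit = np.array([knn_accuracy(X, y, ind) for ind in P])
        children = [P[fit.argmax()].copy()]              # keep the best (elitism)
        while len(children) < pop:
            i, j = rng.choice(pop, 2, replace=False)     # binary tournaments
            a = P[i] if fit[i] >= fit[j] else P[j]
            k, l = rng.choice(pop, 2, replace=False)
            b = P[k] if fit[k] >= fit[l] else P[l]
            child = np.where(rng.random(n) < 0.5, a, b)  # uniform crossover
            child ^= rng.random(n) < p_mut               # bit-flip mutation
            children.append(child)
        P = np.array(children)
    fit = np.array([knn_accuracy(X, y, ind) for ind in P])
    return P[fit.argmax()]

# toy data: features 0 and 1 carry the class signal, the rest are noise
y = rng.integers(0, 2, 60)
X = rng.normal(size=(60, 12))
X[:, 0] += 4.0 * y
X[:, 1] -= 4.0 * y
mask = ga_select(X, y)
print(mask.sum(), knn_accuracy(X, y, mask))
```

Swapping the KNN fitness for a decision tree or a small ANN gives the GA-BDT and GA-ANN variants the study compares.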
Title: Radiomic features based automatic classification of CT lung findings for COVID-19 patients.
Pub Date : 2024-11-19DOI: 10.1088/2057-1976/ad947b
Agustin Bernardo, German Mato, Matías Calandrelli, Jorgelina Maria Medus, Ariel Hernan Curiale
Purpose:
This paper introduces a deep learning method for myocardial strain analysis and evaluates its efficacy for cardiac pathology discrimination across public and private datasets.
Methods:
We measure global and regional myocardial strain in cSAX CMR images by first identifying an ROI centered on the LV, obtaining the cardiac structures (LV, RV and Myo), and estimating the motion of the myocardium. Finally, we compute the strain in the heart coordinate system and report the global and regional strain.
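One concrete interpretation of the strain step: given an estimated displacement field, a pixelwise Green-Lagrange strain tensor can be computed from the deformation gradient, E = ½(FᵀF − I) with F = I + ∇u. The sketch below is generic and omits the projection onto radial/circumferential heart coordinates; function names and the test field are illustrative, not the paper's code.

```python
import numpy as np

def green_lagrange_strain(ux, uy, spacing=1.0):
    """2-D Green-Lagrange strain tensor field from a displacement field (ux, uy)."""
    dux_dy, dux_dx = np.gradient(ux, spacing)   # axis 0 = y (rows), axis 1 = x (cols)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    # deformation gradient F = I + grad(u), stored per pixel
    F = np.empty(ux.shape + (2, 2))
    F[..., 0, 0] = 1 + dux_dx
    F[..., 0, 1] = dux_dy
    F[..., 1, 0] = duy_dx
    F[..., 1, 1] = 1 + duy_dy
    C = np.einsum('...ki,...kj->...ij', F, F)   # right Cauchy-Green tensor F^T F
    return 0.5 * (C - np.eye(2))                # E = 1/2 (C - I)

# sanity check: a uniform 1% stretch along x, u_x = 0.01 * x,
# gives E_xx = 0.5 * (1.01**2 - 1) = 0.01005 everywhere
x, y = np.meshgrid(np.arange(32, dtype=float), np.arange(32, dtype=float))
E = green_lagrange_strain(0.01 * x, np.zeros_like(x))
```

Global strain would then be the mean of E over the myocardial mask; regional strain, the mean per AHA segment.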
Results:
We validated our method on two public datasets (ACDC, 80 subjects, and CMAC, 16 subjects) and a private dataset (SSC, 75 subjects) containing healthy and pathological cases (acute myocardial infarction, DCM and HCM). We measured the mean Dice coefficient and Hausdorff distance for segmentation accuracy and the absolute end-point error for motion accuracy, and we studied the discrimination power of strain and strain rate between populations of healthy and pathological subjects. The results demonstrate that our method effectively quantifies myocardial strain and strain rate, showing distinct patterns across different cardiac conditions that reach statistical significance. They also show that the method's accuracy is on par with iterative non-parametric registration methods and that it can estimate regional strain values.
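The two segmentation metrics used above can be written down directly. This is a naive, exact NumPy version (brute-force Hausdorff) intended for small binary masks; it illustrates the definitions, not the evaluation code used in the study.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between foreground pixels of two masks."""
    pa = np.argwhere(a)                     # (N, 2) foreground coordinates
    pb = np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    # farthest nearest-neighbor distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# two overlapping 4x4 squares, offset by one pixel diagonally
a = np.zeros((10, 10), bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), bool); b[3:7, 3:7] = True
# dice(a, b) -> 2*9/32 = 0.5625; hausdorff(a, b) -> sqrt(2)
```

In practice a distance-transform-based Hausdorff (or its 95th-percentile variant) is preferred for large masks, but the definition is the same.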
Conclusion:
Our method proves to be a powerful tool for cardiac strain analysis, achieving results comparable to other state-of-the-art methods with improved computational efficiency over traditional approaches.
{"title":"A novel Deep Learning based method for Myocardial Strain Quantification.","authors":"Agustin Bernardo, German Mato, Matı As Calandrelli, Jorgelina Maria Medus, Ariel Hernan Curiale","doi":"10.1088/2057-1976/ad947b","DOIUrl":"https://doi.org/10.1088/2057-1976/ad947b","url":null,"abstract":"<p><strong>Purpose: </strong>
This paper introduces a deep learning method for myocardial strain analysis and evaluates its efficacy for cardiac pathology discrimination across public and private datasets.
Methods:
We measure global and regional myocardial strain in cSAX CMR images by first identifying an ROI centered on the LV, obtaining the cardiac structures (LV, RV and Myo), and estimating the motion of the myocardium. Finally, we compute the strain in the heart coordinate system and report the global and regional strain.
Results:
We validated our method on two public datasets (ACDC, 80 subjects, and CMAC, 16 subjects) and a private dataset (SSC, 75 subjects) containing healthy and pathological cases (acute myocardial infarction, DCM and HCM). We measured the mean Dice coefficient and Hausdorff distance for segmentation accuracy and the absolute end-point error for motion accuracy, and we studied the discrimination power of strain and strain rate between populations of healthy and pathological subjects. The results demonstrate that our method effectively quantifies myocardial strain and strain rate, showing distinct patterns across different cardiac conditions that reach statistical significance. They also show that the method's accuracy is on par with iterative non-parametric registration methods and that it can estimate regional strain values.
Conclusion:
Our method proves to be a powerful tool for cardiac strain analysis, achieving results comparable to other state-of-the-art methods with improved computational efficiency over traditional approaches.
</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142680707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-14DOI: 10.1088/2057-1976/ad927f
Ummay Mowshome Jahan, Brianna Blevins, Sergiy Minko, Vladimir Reukov
Reactive oxygen species (ROS), which are expressed at high levels in many diseases, can be scavenged by cerium oxide nanoparticles (CeO2NPs). CeO2NPs can cause significant cytotoxicity when administered directly to cells, but this cytotoxicity can be reduced if the CeO2NPs are encapsulated in biocompatible polymers. In this study, CeO2NPs were synthesized using a one-stage process, purified, characterized, and then encapsulated into an electrospun poly-ε-caprolactone (PCL) scaffold. Direct administration of CeO2NPs to RAW 264.7 macrophages reduced ROS levels but lowered cell viability. Conversely, encapsulation of nanoceria in a PCL scaffold lowered ROS levels and improved cell survival. The study demonstrates an effective technique for encapsulating nanoceria in PCL fibers and confirms their biocompatibility and efficacy. This system has the potential to be used for tissue engineering scaffolds, targeted delivery of therapeutic CeO2NPs, wound healing, and other biomedical applications.
{"title":"Advancing biomedical applications: antioxidant and biocompatible cerium oxide nanoparticle-integrated poly-ε-caprolactone fibers.","authors":"Ummay Mowshome Jahan, Brianna Blevins, Sergiy Minko, Vladimir Reukov","doi":"10.1088/2057-1976/ad927f","DOIUrl":"https://doi.org/10.1088/2057-1976/ad927f","url":null,"abstract":"<p><p>Reactive oxygen species (ROS), which are expressed at high levels in many diseases, can be scavenged by cerium oxide nanoparticles (CeO2NPs). CeO2NPs can cause significant cytotoxicity when administered directly to cells, but this cytotoxicity can be reduced if CeO2NPs can be encapsulated in biocompatible polymers. In this study, CeO2NPs were synthesized using a one-stage process, then purified, characterized, and then encapsulated into an electrospun poly-ε-caprolactone (PCL) scaffold. The direct administration of CeO2NPs to RAW 264.7 Macrophages resulted in reduced ROS levels but lower cell viability. Conversely, the encapsulation of nanoceria in a PCL scaffold was shown to lower ROS levels and improve cell survival. The study demonstrated an effective technique for encapsulating nanoceria in PCL fiber and confirmed its biocompatibility and efficacy. This system has the potential to be utilized for developing tissue engineering scaffolds, targeted delivery of therapeutic CeO2NPs, wound healing, and other biomedical applications.
</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142614188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-14DOI: 10.1088/2057-1976/ad9281
Silambarasan Anbumani, Garrett Godfrey, William Hall, Jainil Shah, Paul Knechtges, Beth Erickson, X Allen Li, George Noid
Precise identification of pancreatic tumors is challenging for radiotherapy planning due to the anatomical variability of the tumor and its poor visualization on 3D cross-sectional imaging. A low extracellular volume fraction (ECVf) correlates with poor vascular uptake and possible necrosis or hypoxia in pancreatic tumors. This work investigates the feasibility of delineating pancreatic tumors using ECVf spatial distribution maps derived from contrast-enhanced dual-energy CT (DECT). Data acquired during radiotherapy simulation of 12 pancreatic cancer patients, using a dual-source DECT scanner, were analyzed. For each patient, an ECVf distribution of the pancreas was computed from the simultaneously acquired low- and high-energy DECT series during the late arterial contrast phase, combined with the patient's hematocrit level. Volumes of interest (VECVf) in the ECVf distribution of the pancreas were identified by applying a threshold condition and a connected-components clustering algorithm. The obtained VECVf was compared with the clinical gross tumor volume (GTV) using the positive predictive value (PPV), Dice similarity coefficient (DSC), mean distance to agreement (MDA) and true positive rate (TPR). As a proof of concept, our threshold condition based on the first-quartile separation of the ECVf distribution was used to find the VECVf of the pancreas and elucidate the tumor volume within it. Notably, in 7 of the 12 cases studied, the VECVf matched the GTV well, with a mean PPV of 0.83±0.12. The mean MDA (2.83±1.0) of these cases confirms that the VECVf lies within tolerance for comparison with the pancreatic GTV. In the remaining 5 cases, the VECVf was substantially affected by confounding factors, e.g., large cysts and dilated ducts, and thus did not align with the GTVs. This work demonstrates the promising application of the ECVf map, derived from contrast-enhanced DECT, to help delineate the tumor target for RT planning of pancreatic cancer.
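The ECVf map and the threshold-plus-clustering step can be sketched as follows. This assumes the commonly used ECV formulation ECVf = (1 − Hct) · ΔHU_tissue / ΔHU_blood and a pure-NumPy 4-connected flood fill in place of a library labeling routine; the enhancement values and the first-quartile threshold below are synthetic illustrations, not the study's data or exact processing.

```python
from collections import deque
import numpy as np

def ecv_fraction(delta_hu_tissue, delta_hu_blood, hematocrit):
    """ECVf map: (1 - Hct) * contrast enhancement of tissue relative to blood."""
    return (1.0 - hematocrit) * delta_hu_tissue / delta_hu_blood

def largest_component(mask):
    """Largest 4-connected component of a 2-D boolean mask (BFS flood fill)."""
    visited = np.zeros_like(mask, bool)
    best = np.zeros_like(mask, bool)
    for seed in zip(*np.nonzero(mask)):
        if visited[seed]:
            continue
        comp, q = [], deque([seed])
        visited[seed] = True
        while q:
            r, c = q.popleft()
            comp.append((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not visited[nr, nc]):
                    visited[nr, nc] = True
                    q.append((nr, nc))
        if len(comp) > best.sum():
            best[:] = False
            rows, cols = zip(*comp)
            best[rows, cols] = True
    return best

# synthetic late-arterial enhancement map: a hypo-enhancing (low-ECVf) blob
# embedded in normally enhancing pancreas tissue
rng = np.random.default_rng(1)
dhu = rng.normal(40.0, 2.0, size=(32, 32))              # tissue enhancement, HU
dhu[10:18, 10:18] = rng.normal(15.0, 2.0, size=(8, 8))  # tumor-like region
ecv = ecv_fraction(dhu, delta_hu_blood=120.0, hematocrit=0.42)
low = ecv <= np.quantile(ecv, 0.25)    # first-quartile separation threshold
tumor_like = largest_component(low)    # keep the largest contiguous cluster
```

In a real pipeline the clustering would run in 3-D on the pancreas contour, and the resulting VECVf would be compared against the GTV via PPV, DSC, MDA and TPR.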
{"title":"Enhancing pancreatic tumor delineation using dual-energy CT-derived extracellular volume fraction map.","authors":"Silambarasan Anbumani, Garrett Godfrey, William Hall, Jainil Shah, Paul Knechtges, Beth Erickson, X Allen Li, George Noid","doi":"10.1088/2057-1976/ad9281","DOIUrl":"https://doi.org/10.1088/2057-1976/ad9281","url":null,"abstract":"<p><p>Precise identification of pancreatic tumors is challenging for radiotherapy planning due to the anatomical variability of the tumor and poor visualization of the tumor on 3D cross-sectional imaging. Low extracellular volume fraction (ECVf) correlates with poor vasculature uptake and possible necrosis or hypoxia in pancreatic tumors. This work investigates the feasibility of delineating pancreatic tumors using ECVf spatial distribution maps derived from contrast enhanced dual-energy CT (DECT). Data acquired from radiotherapy simulation of 12 pancreatic cancer patients, using a dual source DECT scanner, were analyzed. For each patient, an ECVf distribution of the pancreas was computed from the simultaneously acquired low and high energy DECT series during the late arterial contrast phase combined with the patient's hematocrit level. Volume of interest (VECVf) maps in ECVf distribution of pancreas were identified by applying an appropriate threshold condition and a connected components clustering algorithm. The obtained VECVf was compared with the clinical gross tumor volume (GTV) using the positive predictive value (PPV), Dice similarity coefficient (DSC), mean distance to agreement (MDA) and true positive rate (TPR). As a proof of concept, our hypothetical threshold condition based on the first quartile separation of the ECVf distribution to find VECVf of the pancreas elucidates the tumor volume within the pancreas. Notably, 7 out of 12 cases studied for VECVf matched well with the GTV and the mean PPV of 0.83±0.12. 
The mean MDA (2.83±1.0) of the cases confirms that VECVf lies within the tolerance for comparing to the pancreatic GTV. For the remaining 5 cases, the VECVf is substantially affected by other compounding factors, e.g., large cysts, dilate ducts, and thus did not align with the GTVs. This work demonstrated the promising application of the ECVf map, derived from contrast enhanced DECT, to help delineate tumor target for RT planning of pancreatic cancer.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142614210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}