Pub Date: 2024-06-22 | DOI: 10.1007/s40846-024-00880-w
Liang-wei Xu, Kang-jie Cheng
Purpose
The objective of this study was to evaluate the effect of thread shape outline, thread pitch, and preload on central screw loosening and fracture using the 3D finite element (FE) method.
Methods
Using a commercial implant system as the prototype, nine central screw models and matching implant components were created from combinations of two parameters: three thread shapes (triangle, buttress, and reverse buttress) and three thread pitches (0.2 mm, 0.3 mm, and 0.4 mm). These models were inserted into a bone block. After a pre-tightening torque was applied to each central screw, a 100 N load tilted at 45 degrees was applied to the abutment to simulate an occlusal load. The stability performance of the central screws was evaluated in FE software.
Results
For the triangle and buttress thread shapes, the highest stress was always located at the level of the first thread, whereas for the reverse buttress group it was always located at the level of the second thread. Triangle (average 650 MPa) and reverse buttress (average 729 MPa) threads were more effective at reducing central screw stress than the buttress thread (average 920 MPa). Moreover, the parameter combinations of buttress shape with 0.3 mm pitch and buttress shape with 0.4 mm pitch should be avoided.
Conclusion
Thread shape outline and thread pitch significantly influenced central screw stability performance. The central screw with a triangle thread shape and a pitch of 0.2 mm presented the best mechanical properties.
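Since the study generates screw preload by applying a pre-tightening torque, the standard short-form bolted-joint relation T = K·F·d gives a rough feel for the numbers involved. This is a generic engineering approximation, not the paper's FE model; the nut factor K and screw diameter d below are assumed illustrative values.

```python
def preload_from_torque(torque_nmm, k=0.2, d_mm=2.0):
    """Estimate axial preload F (N) from tightening torque T (N*mm)
    using the short-form relation T = K * F * d.
    K (nut factor, ~0.2 for dry threads) and d (nominal screw
    diameter, mm) are illustrative assumptions, not study values."""
    return torque_nmm / (k * d_mm)

# e.g. a 200 N*mm tightening torque on a 2 mm screw with K = 0.2
f = preload_from_torque(200.0)  # -> 500.0 N
```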
Published in Journal of Medical and Biological Engineering as: "Effect of Thread Design Parameters on Central Screw Loosening: A 3D Finite Element Analysis".
Pub Date: 2024-06-20 | DOI: 10.1007/s40846-024-00874-8
Naily Rehab, Yahia Siwar, Zaied Mourad
Purpose
This work focuses on automated epileptic seizure diagnosis (ESD) and prediction (ESP) to clarify the expanding role of machine learning (ML) in epilepsy analysis. It outlines the current approaches and challenges in the diagnosis and prognosis of epilepsy and examines the convergence of magnetic resonance imaging (MRI), electroencephalogram (EEG), and ML.
Methods
This paper lists current methods for segmentation, localization, feature extraction, diagnosis, and prognosis after providing a brief medical review that distinguishes between different forms of epilepsy. A particular focus is on applying ML to EEG and MRI data, describing classification techniques that differentiate normal and epileptic activity.
Results
We highlight the potential of ML-driven methods for computer-aided epilepsy diagnosis and prognosis. We discuss achievements, challenges, and future directions, including devising novel techniques for automated alerts and seizure frequency estimation with minimal computational burden.
Conclusion
ML interfaces offer new possibilities for real-time seizure diagnosis in refractory epilepsy patients through wearables and implants. These advances open the door to improved diagnostic precision and individualized treatment plans in this field by leveraging ML's capabilities.
Graphical Abstract
The graphical abstract presents the machine learning (ML) workflow for epileptic seizure diagnosis in detail. It begins with data collection, such as magnetic resonance imaging (MRI) and electroencephalogram (EEG) data. Features are then extracted from the MRI and EEG data and used to train and evaluate ML models, and the trained models are applied to seizure classification. ML algorithms have the potential to revolutionize the diagnosis and treatment of epilepsy: by enabling early detection and personalized treatment, they can help improve patient outcomes and quality of life.
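The feature-extraction step of the workflow above can be sketched minimally: compute a band-power feature from a 1 s EEG window and compare seizure-like 3 Hz spike-wave activity against normal background rhythm. The sampling rate, synthetic signals, and band choices are illustrative assumptions, not the review's pipeline.

```python
import numpy as np

def bandpower(segment, fs, lo, hi):
    """Average spectral power of `segment` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2 / len(segment)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 256  # assumed sampling rate (Hz), one 1 s window
t = np.arange(fs) / fs
noise = 0.1 * np.random.default_rng(0).standard_normal(fs)
normal = np.sin(2 * np.pi * 10 * t) + noise           # alpha-band background
spike_wave = 3.0 * np.sin(2 * np.pi * 3 * t) + noise  # 3 Hz spike-wave-like

# A seizure-like window shows far more 2-4 Hz power than a normal one
assert bandpower(spike_wave, fs, 2, 4) > bandpower(normal, fs, 2, 4)
```

In a full ML pipeline such band-power values over several frequency bands would form the feature vector fed to a classifier.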
Published in Journal of Medical and Biological Engineering as: "Machine Learning for Epilepsy: A Comprehensive Exploration of Novel EEG and MRI Techniques for Seizure Diagnosis".
Pub Date: 2024-06-18 | DOI: 10.1007/s40846-024-00877-5
Wei-Chi Tsai, Zong-Rong Chen, Jui-Tse Hsu, Chen-Yi Song
Purpose
To investigate the differences in foot kinematics during gait between adults with asymptomatic and symptomatic flatfoot.
Methods
The study included 10 participants (six males and four females, aged 25.7 ± 6.5 years) with symptomatic flatfoot and 10 participants (eight males and two females, aged 21.2 ± 1.0 years) with asymptomatic flatfoot. Multi-segment foot kinematics were captured during barefoot gait analysis using 3D software. Angles were calculated for the calcaneus with respect to the shank (Sha-Cal), the midfoot with respect to the calcaneus (Cal-Mid), and the metatarsus with respect to the midfoot (Mid-Met) during the stance phase.
Results
Some between-group differences with medium-to-large effect sizes were noted. The symptomatic group had a decreased Mid-Met dorsiflexion angle from initial contact to 50% of the stance phase compared with the asymptomatic group. The symptomatic group also showed decreased Mid-Met abduction at initial contact, larger Sha-Cal eversion angles at 10% of the stance phase, and larger Cal-Mid eversion angles at 50% and 70% of the stance phase compared to the asymptomatic group. The symptomatic group also had a larger peak Sha-Cal eversion angle than the asymptomatic group.
Conclusion
Compared with adults with asymptomatic flatfoot, adults with symptomatic flatfoot exhibit significant differences in foot kinematics during gait: decreased forefoot dorsiflexion from initial contact to mid-stance, decreased forefoot abduction at initial contact, and increased rearfoot eversion during the stance phase. Pain may impair intersegmental motion.
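The intersegment angles reported above (e.g. Sha-Cal) are derived from segment orientations. As a simplified planar sketch, assuming two segment direction vectors projected into one anatomical plane (real multi-segment foot models use 3D joint coordinate systems and Euler angle decompositions):

```python
import numpy as np

def intersegment_angle(v_proximal, v_distal):
    """Planar angle (degrees) between two segment direction vectors,
    e.g. shank vs. calcaneus axes projected into the frontal plane.
    A simplification of the 3D joint-coordinate-system approach."""
    v1 = np.asarray(v_proximal, float)
    v2 = np.asarray(v_distal, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# 10 degrees of relative tilt between a vertical shank axis and a
# calcaneus axis everted by 10 degrees
a = intersegment_angle([0.0, 1.0],
                       [np.sin(np.radians(10)), np.cos(np.radians(10))])
```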
Published in Journal of Medical and Biological Engineering as: "Multi-Segment Foot Kinematics during Gait in Adults with Asymptomatic and Symptomatic Flatfoot".
Pub Date: 2024-06-05 | DOI: 10.1007/s40846-024-00872-w
Juan Pablo Moreno, Miguel A. Sepúlveda, Esteban J. Pino
Purpose
The presence of motion artifacts (MA) in cardiac signals negatively impacts the reliability of higher-level information such as heart rate (HR), and therefore the correct diagnosis of pathologies. This paper proposes an MA detection method, based on one-dimensional convolutional neural networks (1D CNNs), that labels noisy zones of signals as unreliable so that they can be excluded from metric calculations.
Methods
To validate the concept, we first design a CNN to detect MAs in electrocardiogram (ECG) recordings from the MIT-BIH Arrhythmia and Noise Stress Test databases. This network extracts features from 1 s data segments and classifies them as clean or noisy. We then train a tuned version of the model with semi-synthetic ballistocardiogram (BCG) signals.
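The 1 s segmentation step described above is straightforward to sketch; the sampling rate below is the MIT-BIH Arrhythmia Database's 360 Hz, and the zero signal is a placeholder for a real recording:

```python
import numpy as np

def segment_1s(signal, fs):
    """Split a 1-D signal into non-overlapping 1 s windows (fs samples
    each), dropping any trailing partial window. Each window is the
    unit a clean/noisy classifier would label."""
    n = (len(signal) // fs) * fs
    return signal[:n].reshape(-1, fs)

fs = 360                         # MIT-BIH ECG sampling rate (Hz)
ecg = np.zeros(10 * fs + 123)    # 10 s of signal plus a partial window
windows = segment_1s(ecg, fs)    # shape (10, 360)
```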
Results
The ECG classifier achieves an accuracy of 95.9% and the BCG classifier an accuracy of 91.1%. When both classifiers are incorporated into beat detection systems, the sensitivity of the detection algorithms increases from 75% to 98.5% for ECG and from 72.1% to 94.5% for BCG, for signals contaminated at 0 dB SNR.
Conclusion
We propose that this method will improve the accuracy of any processing algorithm applied to BCG signals by identifying useful segments where high accuracy can be achieved.
Published in Journal of Medical and Biological Engineering as: "1D Convolutional Neural Network Impact on Heart Rate Metrics for ECG and BCG Signals".
Pub Date: 2024-06-01 | DOI: 10.1007/s40846-024-00869-5
Hsin-Ya Su, Chung-Yueh Lien, Pai-Jung Huang, Woei-Chyn Chu
Purpose
In this paper, we propose an open-source deep learning-based computer-aided diagnosis system for breast ultrasound images based on the Breast Imaging Reporting and Data System (BI-RADS).
Methods
Our dataset comprises 8,026 region-of-interest images, preprocessed with tenfold data augmentation. We compared the classification performance of VGG-16, ResNet-50, and DenseNet-121, as well as two ensemble methods that integrate the single models.
Results
The ensemble model achieved the best performance, with 81.8% accuracy. Our results show that the model performs well enough to classify Category 2 and Category 4/5 lesions, and that data augmentation can improve classification performance for Category 3.
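The abstract does not specify the two ensemble methods; soft voting (averaging per-model class probabilities) is one common way to integrate single CNNs, sketched here with hypothetical scores:

```python
import numpy as np

def soft_vote(prob_lists):
    """Average per-model class probabilities and pick the argmax class.
    `prob_lists` has shape (n_models, n_samples, n_classes)."""
    return np.mean(prob_lists, axis=0).argmax(axis=1)

# Three hypothetical models scoring one lesion over the BI-RADS groups
# (index 0 = Category 2, 1 = Category 3, 2 = Category 4/5):
probs = np.array([
    [[0.2, 0.3, 0.5]],
    [[0.1, 0.2, 0.7]],
    [[0.4, 0.4, 0.2]],
])
label = soft_vote(probs)[0]  # averaged probabilities favour Category 4/5
```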
Conclusion
Our main contribution is classifying breast ultrasound lesions into BI-RADS assessment classes, with emphasis on adhering to the BI-RADS medical recommendations: routine follow-up tracing (Category 2), short-term follow-up tracing (Category 3), and biopsy (Category 4/5).
Published in Journal of Medical and Biological Engineering as: "A Practical Computer Aided Diagnosis System for Breast Ultrasound Classifying Lesions into the ACR BI-RADS Assessment".
Purpose
Segmentation of nuclei and cytoplasm in cellular images is essential for estimating the prognosis of lung cancer. The detection of these organelles in unstained brightfield microscopic images is challenging due to poor contrast and the lack of separation of structures with irregular morphology. This work aims to carry out semantic segmentation of nuclei and cytoplasm in lung cancer brightfield images using the Swin-Unet Transformer.
Methods
For this study, publicly available brightfield images of lung cancer cells are pre-processed and fed to the Swin-Unet for semantic segmentation. Model-specific hyperparameters are identified after detailed analysis, and the segmentation performance is validated using standard evaluation metrics.
Results
The hyperparameter analysis identifies the optimum settings as focal loss, a learning rate of 0.0001, the Adam optimizer, and a Swin Transformer patch size of 4. With these parameters, the Swin-Unet Transformer accurately segmented the nuclei and cytoplasm in the brightfield images, with pixel-F1 scores of 90.71% and 79.29% respectively.
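The pixel-F1 metric quoted above is a per-pixel harmonic mean of precision and recall over binary masks; a minimal sketch with toy 2x2 masks:

```python
import numpy as np

def pixel_f1(pred, target):
    """Pixel-wise F1 score between binary masks (1 = class of interest)."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    tp = np.logical_and(pred, target).sum()   # true positives
    fp = np.logical_and(pred, ~target).sum()  # false positives
    fn = np.logical_and(~pred, target).sum()  # false negatives
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

pred   = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [1, 0]])
# tp=1, fp=1, fn=1  ->  F1 = 2*1 / (2*1 + 1 + 1) = 0.5
score = pixel_f1(pred, target)
```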
Conclusion
It is observed that the model could identify nuclei and cytoplasm with varied morphologies. The detection of cytoplasm with weak and subtle edge details indicates the effectiveness of the shifted-window self-attention mechanism of Swin-Unet in capturing global, long-distance pixel interactions in the brightfield images. Thus, the adopted methodology can be employed for the precise segmentation of nuclei and cytoplasm when assessing the malignancy of lung cancer.
Published in Journal of Medical and Biological Engineering (2024-05-29, DOI: 10.1007/s40846-024-00873-9) as: "An Approach to Segment Nuclei and Cytoplasm in Lung Cancer Brightfield Images Using Hybrid Swin-Unet Transformer", by Sreelekshmi Palliyil Sreekumar, Rohini Palanisamy, Ramakrishnan Swaminathan.
Pub Date: 2024-05-28 | DOI: 10.1007/s40846-024-00870-y
Xiaozheng Yang, Rongchang Fu, Pengju Li, Kun Wang, Huiran Chen, Fu
Purpose
This paper aims to analyze the influence of mechanical force on bone regeneration from macro and micro perspectives, to investigate the mechanical response of bone tissues at various scales after surgery, and to provide a theoretical basis for further research and clinical practice.
Methods
An effective postoperative lumbar model was constructed, and the bone regeneration area was established at the osteotomy site. The area was divided into five stages, from 10 MPa to 100 MPa. The osteon and bone lacuna-osteocyte models were then constructed, and their biomechanical characteristics under different working conditions were studied.
Results
From the first stage to the fifth stage, the proportion of macroscopic bone tissue with strain above 3000 µε decreased by about 40%, the maximum stress ratio n of the macro- and micro-scale bone tissues approximated k (E_O/E_T), and the area of osteocytes with strain below 3000 µε increased by about 45%. In the second stage, 41.7% of the osteocytes had strains of 1000 µε to 3000 µε, and this percentage increased to 66.7%-72.2% after the fourth stage.
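The percentages above are tallies of elements falling in a microstrain band; a minimal sketch of that tally, with hypothetical element strains (not the study's FE output):

```python
import numpy as np

def strain_band_fraction(strains_ue, lo, hi):
    """Fraction of elements whose strain (in microstrain) lies in
    [lo, hi) -- the kind of count behind figures such as '41.7% of
    osteocytes at 1000-3000 microstrain'."""
    s = np.asarray(strains_ue, float)
    return float(np.mean((s >= lo) & (s < hi)))

strains = np.array([500, 1500, 2500, 3500])  # hypothetical element strains
frac = strain_band_fraction(strains, 1000, 3000)  # 2 of 4 elements -> 0.5
```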
Conclusion
The macro-meso stress ratio is related to the tissue strength around the osteon. In the first stage, the patient should lie flat and rest rather than stand upright. By the beginning of the fourth stage, the rate of bone regeneration is much faster than the rate of lesion progression, making upright recovery suitable, and the recovery speed increases.
Published in Journal of Medical and Biological Engineering as: "Biomechanical Finite Element Analysis of Bone Tissues with Different Scales in the Bone Regeneration Area after Scoliosis Surgery".
Pub Date: 2024-05-09 | DOI: 10.1007/s40846-024-00868-6
Samara Acosta-Jiménez, Susana Aideé González-Chávez, Javier Camarillo-Cisneros, César Pacheco-Tena, Mirelle Barcenas-López, Laura Esther González-Lozada, Claudia Hernández-Orozco, Jesús Humberto Burboa-Delgado, Rosa Elena Ochoa-Albíztegui
Purpose
Mammography is the modality of choice for the early detection of breast cancer. Deep learning, specifically using convolutional neural networks (CNNs), has achieved extraordinary results in the classification of diseases, including breast cancer, on imaging. The images used to train a CNN vary based on several factors, such as imaging technique, imaging equipment, and study population; these factors significantly affect the accuracy of CNN models. The aim of this study was to develop a novel CNN for classifying mammograms as benign or malignant and to compare its utility with that of popular pre-trained CNNs from the literature using transfer learning. All CNNs were trained to detect breast cancer on mammograms from a newly created database of Mexican women (MAMMOMX-PABIOM) and from a public database of UK women (MIAS).
Methods
A database (MAMMOMX-PABIOM) was built comprising 1,070 mammography images of 235 Mexican patients from 4 hospitals in Mexico. The study also used mammographic images from the Mammographic Image Analysis Society (MIAS) public database, which comprises mammography images from the UK National Breast Screening Programme. A novel CNN was developed and trained based on different configurations of training data; the accuracy of the models resulting from the novel CNN was compared with that of models resulting from more advanced pre-trained CNNs (DenseNet121, MobileNetV2, ResNet-50, VGG16) built using transfer learning.
Results
Of the models resulting from pre-trained CNNs using transfer learning, the model based on MobileNetV2 and training data from the MAMMOMX-PABIOM database achieved the highest validation accuracy of 70.10%. In comparison, the novel CNN, when trained with the data configuration A6, which comprises data from both the MAMMOMX-PABIOM database and the MIAS database, produced a much higher accuracy of 99.14%.
Conclusion
Although transfer learning is a widely used technique when training data is scarce, the novel CNN produced much higher accuracy values across all configurations of training data than the pre-trained CNNs using transfer learning. In addition, this study addresses two gaps: no national database of mammograms of Mexican women previously existed, nor did a deep learning tool for classifying mammograms as benign or malignant focused on this population.
{"title":"Preliminary Results: Comparison of Convolutional Neural Network Architectures as an Auxiliary Clinical Tool Applied to Screening Mammography in Mexican Women","authors":"Samara Acosta-Jiménez, Susana Aideé González-Chávez, Javier Camarillo-Cisneros, César Pacheco-Tena, Mirelle Barcenas-López, Laura Esther González-Lozada, Claudia Hernández-Orozco, Jesús Humberto Burboa-Delgado, Rosa Elena Ochoa-Albíztegui","doi":"10.1007/s40846-024-00868-6","journal":"Journal of Medical and Biological Engineering","publicationDate":"2024-05-09"}
Pub Date : 2024-05-09 | DOI: 10.1007/s40846-024-00859-7
Pragya Pragya, Praveen Kumar Govarthan, Malay Nayak, Sudip Mukherjee, Jac Fredo Agastinose Ronickom
Purpose
Pancreatic ductal adenocarcinoma (PDAC) is the most prevalent form of pancreatic cancer, accounting for about 85% of all occurrences. PDAC is highly challenging to treat because of its extreme aggressiveness and the lack of therapeutic options. Identifying new gene markers can help in the design of novel targeted therapeutics.
Methods
In this study, we identified three gene prognostic markers in PDAC using a machine learning approach. First, the differentially expressed gene (DEG) profile with accession number GSE183795 was downloaded from the Gene Expression Omnibus database of the National Center for Biotechnology Information (NCBI); it consists of the expression profiles of 244 patients with PDAC (139 pancreatic tumors, 102 adjacent non-tumors, and 3 normal). The expression dataset was then preprocessed using R packages such as GEOquery, affy, and limma. Next, DEGs were identified by machine learning algorithms, namely random forest (RF) and extreme gradient boosting (XGBoost). Finally, survival analysis of the identified DEGs was performed using the GEPIA software (TCGA database).
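The feature-selection step described above can be sketched as follows: train two tree ensembles on an expression matrix, rank genes by feature importance, and keep the genes both models agree on. This is a hedged illustration, not the authors' exact pipeline: the data are synthetic, the gene names are placeholders, and scikit-learn's `GradientBoostingClassifier` stands in for XGBoost.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

rng = np.random.default_rng(42)

# Toy expression matrix: 120 samples x 25 "genes"; only the first two
# genes carry the tumor / non-tumor signal.
n_samples, n_genes = 120, 25
X = rng.normal(size=(n_samples, n_genes))
y = (X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=n_samples) > 0).astype(int)
genes = [f"GENE{i}" for i in range(n_genes)]

def top_k(model, k=6):
    """Fit the model and return the k genes with the highest importance."""
    model.fit(X, y)
    order = np.argsort(model.feature_importances_)[::-1][:k]
    return {genes[i] for i in order}

rf_top = top_k(RandomForestClassifier(n_estimators=200, random_state=0))
gb_top = top_k(GradientBoostingClassifier(random_state=0))

shared = rf_top & gb_top  # genes both ensembles rank as important
print(sorted(shared))
```

Intersecting the two rankings, as the study does for its 6 common DEGs, guards against importance scores that are an artifact of a single algorithm.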
Results
Our results revealed that 6 of the 25 DEGs (ERCC3, ACY3, ATP2A3, MW-TW1879, MW-TW3829, and ZBTB7A) identified by the RF and XGBoost algorithms were common to both, indicating their feature importance. Moreover, three genes, ATP2A3 (p = 0.029), NRL (p = 0.012), and FBXO45 (p = 0.013), were statistically significant in the survival analysis and may be utilized as prognostic marker genes for PDAC.
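Survival p-values like those reported above (e.g. ATP2A3, p = 0.029) typically come from a log-rank test comparing high- and low-expression groups. Below is a minimal two-group log-rank test written from its textbook definition; the survival times are made-up illustration data, not values from the study or from GEPIA.

```python
import numpy as np
from scipy.stats import chi2

def logrank_test(time1, event1, time2, event2):
    """Two-group log-rank test. time*: follow-up times; event*: 1 if the
    event (death) was observed, 0 if censored. Returns (statistic, p)."""
    times = np.concatenate([time1, time2])
    events = np.concatenate([event1, event2])
    group = np.concatenate([np.zeros(len(time1)), np.ones(len(time2))])
    O1 = E1 = V = 0.0
    for t in np.unique(times[events == 1]):        # each distinct event time
        at_risk = times >= t
        n = at_risk.sum()                          # subjects still at risk
        n1 = (at_risk & (group == 0)).sum()        # at risk in group 1
        d = ((times == t) & (events == 1)).sum()   # deaths at t, both groups
        d1 = ((times == t) & (events == 1) & (group == 0)).sum()
        O1 += d1                                   # observed deaths, group 1
        E1 += d * n1 / n                           # expected under the null
        if n > 1:                                  # hypergeometric variance
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = (O1 - E1) ** 2 / V
    return stat, chi2.sf(stat, df=1)

# High-expression group dies earlier than low-expression group.
t_high = np.array([2., 3., 4., 5., 6., 7., 8., 9.])
e_high = np.ones(8)
t_low = np.array([9., 10., 11., 12., 13., 14., 15., 16.])
e_low = np.array([1, 1, 1, 1, 1, 0, 0, 0])  # last three censored
stat, p = logrank_test(t_high, e_high, t_low, e_low)
print(f"log-rank statistic = {stat:.2f}, p = {p:.4f}")
```

A small p here means the two expression groups have distinguishable survival curves, which is the criterion behind calling a gene a candidate prognostic marker.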
Conclusion
These findings provide valuable insights into the molecular characteristics of PDAC and can potentially guide future research on cancer theranostics interventions for this devastating disease.
{"title":"Establishment of Three Gene Prognostic Markers in Pancreatic Ductal Adenocarcinoma Using Machine Learning Approach","authors":"Pragya Pragya, Praveen Kumar Govarthan, Malay Nayak, Sudip Mukherjee, Jac Fredo Agastinose Ronickom","doi":"10.1007/s40846-024-00859-7","journal":"Journal of Medical and Biological Engineering","publicationDate":"2024-05-09"}
Pub Date : 2024-05-04 | DOI: 10.1007/s40846-024-00860-0
Yasaman Zakeri, Babak Karasfi, Afsaneh Jalalian
Purpose
A brain tumor is defined as any group of atypical cells occupying space in the brain; more than 120 types have been described. MRI is the modality of choice for brain tumor diagnosis because it provides detailed, three-dimensional images. Accurate localization and segmentation of the tumor region increase patients' survival rates. To this end, we present a systematic review of the latest developments in brain tumor segmentation from MRI.
Methods
To find related articles, we searched keywords such as "brain tumors" and "segmentation by MRI". The searches covered Elsevier, Springer, Wiley, and the leading conferences in the field of medical image processing. A total of 79 publications on tumor segmentation from 2019 to 2023 were selected and grouped into four categories: non-artificial-intelligence, machine learning, deep learning, and hybrid deep learning methods.
Results
We reviewed the trending techniques of tumor segmentation and provide a unified, integrated overview of the current state of the art. The article presents the capabilities and shortcomings of each approach and identifies the restrictions that keep automated medical image segmentation techniques out of clinical practice.
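Segmentation methods like those surveyed here are conventionally compared with overlap metrics; the Dice similarity coefficient is the de facto standard in brain-tumor benchmarks such as BraTS. A minimal implementation on binary masks (the toy 8x8 "slice" below is illustrative, not from the review):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|). Returns 1.0 when both masks are empty."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 2D masks: a 16-voxel ground-truth tumor vs. a prediction
# shifted down by one row.
truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1

print(f"Dice = {dice(pred, truth):.2f}")  # overlap 12 voxels -> 2*12/32 = 0.75
```

Dice rewards overlap relative to the combined mask size, so it stays informative even when the tumor occupies a tiny fraction of the volume, where plain voxel accuracy would be misleadingly high.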
Conclusion
In this study, the advancement of brain tumor segmentation by MRI is discussed, focusing more on recent articles. It identified the restrictions of the presented techniques regarding the four mentioned categories, which prevent them from being used in clinical practice. The literature will guide the researchers to become familiar with both the leading techniques and the potential problems that need to be addressed.
{"title":"A Review of Brain Tumor Segmentation Using MRIs from 2019 to 2023 (Statistical Information, Key Achievements, and Limitations)","authors":"Yasaman Zakeri, Babak Karasfi, Afsaneh Jalalian","doi":"10.1007/s40846-024-00860-0","journal":"Journal of Medical and Biological Engineering","publicationDate":"2024-05-04"}