Pub Date: 2025-11-10. DOI: 10.1109/OJEMB.2025.3630901
Jingting Yao;Shidong Xu;Isabela G. G. Choi;Otavio Henrique Pinhata-Baptista;Jerome L. Ackerman
Objective: Oral implant procedures necessitate assessment of alveolar bone, a vital tooth-supporting structure. While micro-computed tomography (micro-CT) is the gold standard for bone volume fraction assessment owing to its high spatial resolution and bone/soft-tissue contrast, its substantial radiation exposure limits its use to specimens or small animals. This study evaluates the accuracy of 1.5T magnetic resonance imaging (MRI) in determining bone volume fraction, a surrogate of bone density, using micro-CT as the reference. Methods: Twenty-one alveolar bone biopsy specimens, which had undergone cone beam CT, micro-CT, and 14T MRI in a previous study, were subjected to 1.5T MRI. Results: Bone volume fraction measured by 1.5T MRI showed a statistically significant correlation with micro-CT (r = 0.70, p < 0.0001). Consistency was assessed through repeated scans and through repeated scan-and-analysis trials. Conclusion: 1.5T MRI may be an effective, radiation-free tool for alveolar bone volume fraction assessment.
Title: Assessing Alveolar Bone Volume Fraction in Dental Implantology Using 1.5 Tesla Magnetic Resonance Imaging: An Ex Vivo Cross-Sectional Study (IEEE Open Journal of Engineering in Medicine and Biology, vol. 7, pp. 7-13)
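The bone volume fraction endpoint above can be sketched numerically. The sketch below assumes a simple global intensity threshold for segmenting bone voxels (the abstract does not state the segmentation procedure) and uses Pearson's r for the MRI-vs-micro-CT comparison:

```python
import numpy as np

def bone_volume_fraction(volume, threshold):
    """BV/TV: fraction of voxels at or above a (hypothetical) global
    intensity threshold marking bone."""
    volume = np.asarray(volume, dtype=float)
    mask = volume >= threshold
    return mask.sum() / mask.size

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Toy volume: half the voxels are "bone".
vol = np.zeros((4, 4, 4))
vol[:2] = 1.0
print(bone_volume_fraction(vol, 0.5))  # 0.5
```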
Pub Date: 2025-10-23. DOI: 10.1109/OJEMB.2025.3624566
Jenn-Kaie Lain;Chung-An Wang;Jun-Hao Xu;Chen-Wei Lee
Goal: This study presents an enhanced stacked U-Net deep learning model for cuffless blood pressure estimation using only photoplethysmogram signals, aiming to improve the accuracy of non-invasive measurements. Methods: To address the challenges of systolic blood pressure estimation, the model incorporates velocity plethysmogram input and employs additive spatial and channel attention mechanisms. These enhancements improve feature extraction and mitigate decoder mismatches in the U-Net architecture. Results: The model satisfies the Grade A criteria established by the British Hypertension Society and meets the accuracy standards of the Association for the Advancement of Medical Instrumentation, achieving mean absolute errors of 3.921 mmHg for systolic and 2.441 mmHg for diastolic blood pressure. It outperforms PPG-only spectro-temporal methods and achieves comparable performance to the joint photoplethysmogram and electrocardiogram one-dimensional squeeze-and-excitation network with long short-term memory architecture. Conclusions: The proposed model shows strong potential as a practical, low-cost, and non-invasive solution for continuous, cuffless blood pressure monitoring.
Title: Development of an Improved Stacked U-Net Model for Cuffless Blood Pressure Estimation Based on PPG Signals (IEEE Open Journal of Engineering in Medicine and Biology, vol. 6, pp. 584-590)
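The BHS and AAMI criteria mentioned above are simple functions of the error distribution. A minimal sketch of the mean absolute error and the standard BHS cumulative-error grading (thresholds are from the published BHS protocol; the study's own evaluation code is not shown here):

```python
import numpy as np

def mae(pred, ref):
    """Mean absolute error in mmHg."""
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(ref, float))))

def bhs_grade(pred, ref):
    """BHS grade from cumulative error percentages.
    Grade A: >=60% of errors <=5 mmHg, >=85% <=10 mmHg, >=95% <=15 mmHg
    (Grade B: 50/75/90, Grade C: 40/65/85, otherwise D)."""
    err = np.abs(np.asarray(pred, float) - np.asarray(ref, float))
    p5, p10, p15 = [(err <= t).mean() * 100 for t in (5, 10, 15)]
    if p5 >= 60 and p10 >= 85 and p15 >= 95:
        return "A"
    if p5 >= 50 and p10 >= 75 and p15 >= 90:
        return "B"
    if p5 >= 40 and p10 >= 65 and p15 >= 85:
        return "C"
    return "D"
```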
Pub Date: 2025-10-23. DOI: 10.1109/OJEMB.2025.3624582
Chien-Yu Chiou;Wei-Li Chen;Chun-Rong Huang;Yang C. Fann;Lawrence L. Latour;Pau-Choo Chung
Goal: Pathology images collected from different hospitals often have large appearance variability caused by different scanners, patients, or hospital protocols. Deep learning-based pathology segmentation models are highly dependent on the distribution of training data. Therefore, the models often suffer from the domain shift problem when applied to new target domains of different hospitals. Methods: To address this issue, we propose a hierarchical cross-consistency (HCC) network to hierarchically adapt models across pathology images of various domains with three consistency-based modules: the consistency module, the pair module, and the mixture module. The consistency module enhances the prediction consistency of each target image under various perturbations. The pair module improves consistency among different target images. Finally, the mixture module enhances consistency across different domains. Results: The experimental results on pathology image datasets scanned using three different scanners show the superiority of the proposed HCC network compared to state-of-the-art unsupervised domain adaptation methods. Conclusions: The proposed method can successfully adapt trained pathology image segmentation models to new target domains, which is useful when introducing the models to different hospitals.
Title: Hierarchical Cross-Consistency Network Based Unsupervised Domain Adaptation for Pathology Whole Slide Image Segmentation (IEEE Open Journal of Engineering in Medicine and Biology, vol. 6, pp. 598-604)
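The consistency idea underlying all three modules can be illustrated with a toy loss that penalizes disagreement between the class-probability maps produced for two views (e.g., perturbed copies) of the same target image. This is a generic formulation; the paper's exact losses may differ:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = np.asarray(z, dtype=float)
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(logits_a, logits_b):
    """Mean squared disagreement between the class-probability outputs
    for two views of the same image."""
    return float(np.mean((softmax(logits_a) - softmax(logits_b)) ** 2))
```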
Pub Date: 2025-10-23. DOI: 10.1109/OJEMB.2025.3624591
L. Feld;S. Hellmers;L. Schell-Majoor;J. Koschate-Storm;T. Zieschang;A. Hein;B. Kollmeier
Objective: Older adults face a heightened fall risk, which can severely impact their health. Individual responses to unexpected gait perturbations (e.g., slips) are potential predictors of this risk. This study examines automatic detection of treadmill-generated gait perturbations using acceleration and angular velocity from everyday wearables. Detection is achieved using a deep convolutional long short-term memory (DeepConvLSTM) algorithm. Results: An F1 score of at least 0.68 and a recall of at least 0.86 were obtained for all data sources, i.e., hearing aids, smartphones at various positions, and professional sensors at the lumbar spine and sternum. Performance did not change significantly when combining data from different sensor positions or when using only acceleration data. Conclusion: Results suggest that hearing aids and smartphones can monitor gait perturbations with performance similar to professional equipment, highlighting the potential of everyday wearables for continuous fall risk monitoring.
Title: Automatic Detection of Gait Perturbations With Everyday Wearable Technology (IEEE Open Journal of Engineering in Medicine and Biology, vol. 6, pp. 570-575)
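Two building blocks of such a detection pipeline are easy to sketch: slicing the wearable's acceleration/angular-velocity stream into fixed-length windows for a classifier such as DeepConvLSTM, and scoring per-window detections with precision, recall, and F1. Window length and hop below are illustrative, not the study's settings:

```python
import numpy as np

def sliding_windows(signal, win, hop):
    """Slice a (T, C) sensor stream into overlapping (win, C) windows."""
    signal = np.asarray(signal)
    starts = range(0, signal.shape[0] - win + 1, hop)
    return np.stack([signal[s:s + win] for s in starts])

def precision_recall_f1(y_true, y_pred):
    """Binary detection metrics over per-window labels (1 = perturbation)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(((y_true == 1) & (y_pred == 1)).sum())
    fp = int(((y_true == 0) & (y_pred == 1)).sum())
    fn = int(((y_true == 1) & (y_pred == 0)).sum())
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```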
Pub Date: 2025-10-23. DOI: 10.1109/OJEMB.2025.3624977
Kosmia Loizidou;Galateia Skouroumouni;Gabriella Savvidou;Anastasia Constantinidou;Eleni Orphanidou Vlachou;Anneza Yiallourou;Costas Pitris;Christos Nikolaou
Background: This study evaluates the performance of an automated method for detecting and classifying breast masses as Breast Imaging Reporting and Data System (BI-RADS) benign or biopsy-confirmed malignant using subtraction of temporally sequential mammograms. Mammograms from 100 women across two screening rounds (400 images: 2 views × 2 rounds × 100 cases) were retrospectively collected. The prior mammographic views were subtracted from the most recent ones, 98 image features were extracted from regions of interest, and the features were ranked using 8 feature selection methods. Results: Machine learning reduced false positives and detected masses with 97.06% accuracy and 0.92 AUC. True masses were classified as benign or malignant with 94.82% accuracy and 0.95 AUC, a significant improvement compared with state-of-the-art methods reported in the literature (0.95 vs. 0.90 AUC). Conclusions: The proposed approach demonstrates that temporal subtraction can improve diagnostic accuracy by up to 5%, supporting earlier detection of malignancies and enabling more personalized treatment strategies.
Title: Subtraction of Temporally Sequential Digital Mammograms: Enhancing the Detection and Classification of Malignant Masses in Breast Imaging (IEEE Open Journal of Engineering in Medicine and Biology, vol. 6, pp. 591-597)
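The core temporal-subtraction step can be sketched as a clipped image difference, so that newly appearing density stands out. Registration and intensity normalization are assumed to have been done upstream; the paper's full pipeline additionally extracts 98 ROI features:

```python
import numpy as np

def temporal_subtraction(current, prior):
    """Difference image between the current and prior screening rounds.
    Keeps only positive change (newly appearing density), where a new or
    growing mass would show up as a bright region.
    Assumes both views are already co-registered and intensity-normalized."""
    diff = np.asarray(current, dtype=float) - np.asarray(prior, dtype=float)
    return np.clip(diff, 0.0, None)
```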
Pub Date: 2025-10-02. DOI: 10.1109/OJEMB.2025.3617224
Nikhil V. Divekar;Alicia Baxter;Robert D. Gregg
Goal: This work customizes and validates a task-agnostic bilateral knee exoskeleton controller for targeted assistance of primary neuromuscular deficits in highly impaired individuals. Methods: We leveraged the biomechanics-based structure of the default controller to implement specialized modifications, targeting primary deficits in a participant with post-polio syndrome (PPS) and a participant with multiple sclerosis (MS). We also developed a clinician-friendly Android interface to tune important gait parameters. Results: Customized assistance improved the participants' primary mobility deficits as identified by the clinician, decreasing five-times-sit-to-stand time from 18.9 s to 11.8 s for the PPS participant, and restoring normative knee flexion range of motion and reducing compensatory circumduction for the MS participant. The exoskeleton induced mixed effects on secondary outcomes. Conclusions: A biomechanics-based task-agnostic exoskeleton controller can be effectively customized through specialized modifications of the intuitive basis functions and interface-based tuning to provide targeted improvements in the unique mobility deficits of highly impaired individuals.
Title: Customizable Task-Agnostic Exoskeleton Control for Targeted Neuromuscular Assistance: Case Series (IEEE Open Journal of Engineering in Medicine and Biology, vol. 6, pp. 564-569)
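The "biomechanics-based structure" can be illustrated as a torque command formed from a small set of kinematic basis functions with tunable weights. The basis terms below (gravity-like, damping, stiffness) and the weight vector are purely illustrative stand-ins, not the controller's actual formulation or the clinician-tuned gait parameters:

```python
import numpy as np

def knee_assist_torque(q, dq, weights):
    """Assistive knee torque as a weighted sum of kinematic basis functions.
    q: knee angle (rad), dq: knee angular velocity (rad/s).
    Basis terms (gravity-like sin(q), damping -dq, stiffness -q) are
    illustrative; `weights` stands in for per-participant tuning."""
    basis = np.array([np.sin(q), -dq, -q], dtype=float)
    return float(np.asarray(weights, dtype=float) @ basis)
```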
Pub Date: 2025-09-29. DOI: 10.1109/OJEMB.2025.3615394
David Anderson Lloyd;Andrei Dragomir;Bulent Ozpolat;Biykem Bozkurt;Yasemin Akay;Metin Akay
Goal: Cardiovascular disease is the leading cause of death in the USA. Coronary Artery Disease (CAD) in particular is responsible for over 40% of cardiovascular disease deaths. Early detection and treatment are critical in the reduction of deaths associated with CAD. Methods: Sound signatures of CAD vary for individual patients depending on where the blockage is and how severe it is. We propose the use of artificial intelligence (AI; specifically, the DeepSets architecture) to learn patient-specific acoustic biomarkers which distinguish heart sounds before and after percutaneous coronary intervention (PCI) in 12 human patients. Initially, Matching Pursuit was used to decompose the sound recordings into more granular representations called 'atoms'. Then we used AI to classify whether a group of atoms from a single segment is from before or after PCI. Leveraging the model's learned latent representation, we can then identify groups of atoms which represent CAD-associated sounds within the original recording. Results: Our deep learning approach achieves a test-set classification accuracy of 88.06% using sounds from the full cardiac cycle. The same deep learning architecture achieves 71.43% accuracy using the isolated diastolic window sound segment alone. Conclusions: This preliminary study shows that individualized clusters of atoms represent distinct parts of heart sounds associated with occlusions, and that these clusters differentially change their spectral energy signature after PCI. We believe that using this approach with recordings from individual patients over many time points during disease and treatment progression will allow for precise, non-invasive monitoring of an individual patient's condition based on unique heart sound characteristics learned using AI.
Title: AI-Based Detection of Coronary Artery Occlusion Using Acoustic Biomarkers Before and After Stent Placement (IEEE Open Journal of Engineering in Medicine and Biology, vol. 6, pp. 557-563)
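The Matching Pursuit decomposition step works by greedily projecting the signal onto a dictionary of unit-norm atoms and subtracting each projection from the residual. A generic sketch over an arbitrary unit-norm dictionary (the study uses time-frequency atoms):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy Matching Pursuit: repeatedly select the unit-norm atom
    (row of `dictionary`) most correlated with the residual, record the
    (atom index, coefficient) pair, and subtract its projection."""
    residual = np.asarray(signal, dtype=float).copy()
    dictionary = np.asarray(dictionary, dtype=float)
    picks = []
    for _ in range(n_atoms):
        corr = dictionary @ residual
        k = int(np.argmax(np.abs(corr)))
        picks.append((k, float(corr[k])))
        residual = residual - corr[k] * dictionary[k]
    return picks, residual
```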
Pub Date: 2025-09-29. DOI: 10.1109/OJEMB.2025.3615395
Aliya Hasan;Mohammad Karim
Objective: Heart sound analysis is essential for cardiovascular disorder classification. Traditional auscultation and rule-based methods require manual feature engineering and clinical expertise. This work proposes a CNN-based model for automated multiclass heart sound classification. Results: Using MFCC features extracted from segmented real-world recordings, the model classifies heart sounds into murmur, extrasystole, extrahls, artifact, and normal. It achieves 98.7% training accuracy and 91% validation accuracy, with strong precision and recall for normal and murmur classes, and a weighted F1-score of 0.91. Conclusions: The results show that the proposed MFCC-CNN framework is robust, generalizable, and suitable for automated auscultation and early cardiac screening.
Title: Robust Heart Sound Analysis With MFCC and Light Weight Convolutional Neural Network (IEEE Open Journal of Engineering in Medicine and Biology, vol. 6, pp. 549-556)
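The MFCC front end referenced above follows a standard recipe: power spectrum, triangular mel filterbank, log compression, DCT-II. A compact single-frame sketch (real pipelines add pre-emphasis, overlapping frames, and liftering; parameter values here are illustrative, not the paper's):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, sr, n_mels=20, n_ceps=13):
    """MFCCs for one windowed frame: power spectrum -> triangular mel
    filterbank -> log -> DCT-II."""
    frame = np.asarray(frame, dtype=float)
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame)) ** 2
    # Filterbank edges spaced uniformly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, len(spec)))
    for i in range(n_mels):
        lo, center, hi = bins[i], bins[i + 1], bins[i + 2]
        if center > lo:
            fb[i, lo:center] = (np.arange(lo, center) - lo) / (center - lo)
        if hi > center:
            fb[i, center:hi] = (hi - np.arange(center, hi)) / (hi - center)
    logmel = np.log(fb @ spec + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return dct @ logmel
```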
Pub Date: 2025-09-15. DOI: 10.1109/OJEMB.2025.3610160
Zeyu Tang;Xiaodan Xing;Gang Wang;Guang Yang
Deep learning-based generative models have the potential to convert low-resolution CT images into high-resolution counterparts without the long acquisition times and increased radiation exposure of thin-slice CT imaging. However, procuring appropriate training data for these Super-Resolution (SR) models is challenging. Previous SR research has simulated thick-slice CT images from thin-slice CT images to create training pairs. However, these methods either rely on simplistic interpolation techniques that lack realism or on sinogram reconstruction, which requires the release of raw data and complex reconstruction algorithms. Thus, we introduce a simple yet realistic method to generate thick-slice CT images from thin-slice CT images, facilitating the creation of training pairs for SR algorithms. The training pairs produced by our method closely resemble real data distributions (PSNR = 49.74 vs. 40.66, p < 0.05). A multivariate Cox regression analysis involving thick-slice CT images with lung fibrosis revealed that only the radiomics features extracted using our method demonstrated a significant correlation with mortality (HR = 1.19 and HR = 1.14, p < 0.005). This paper is the first to identify and address the challenge of generating appropriate paired training data for deep learning-based CT SR models, which enhances the efficacy and applicability of SR models in real-world scenarios.
Title: Enhancing Super-Resolution Network Efficacy in CT Imaging: Cost-Effective Simulation of Training Data (IEEE Open Journal of Engineering in Medicine and Biology, vol. 6, pp. 576-583)
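One plausible reading of "simple yet realistic" thick-slice simulation is averaging groups of adjacent thin slices along the z axis; the paper's exact recipe may differ. A sketch of that operation, together with the PSNR metric used for the distribution comparison:

```python
import numpy as np

def simulate_thick_slices(thin_volume, factor):
    """Average each group of `factor` adjacent thin slices along z to
    mimic a thicker-slice acquisition; trailing slices that do not fill
    a full group are dropped."""
    thin_volume = np.asarray(thin_volume, dtype=float)
    z = (thin_volume.shape[0] // factor) * factor
    grouped = thin_volume[:z].reshape(-1, factor, *thin_volume.shape[1:])
    return grouped.mean(axis=1)

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio between two images/volumes."""
    mse = float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))
    return float("inf") if mse == 0.0 else float(10.0 * np.log10(data_range ** 2 / mse))
```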
Pub Date : 2025-09-09 DOI: 10.1109/OJEMB.2025.3607816
Ruijie Sun;Giles Hamilton-Fletcher;Sahil Faizal;Chen Feng;Todd E. Hudson;John-Ross Rizzo;Kevin C. Chan
Goal: Persons with blindness or low vision (pBLV) face challenges in completing activities of daily living (ADLs/IADLs). Semantic segmentation techniques on smartphones, like DeepLabV3+, can quickly assist in identifying key objects, but their performance across different indoor settings and lighting conditions remains unclear. Methods: Using the MIT ADE20K SceneParse150 dataset, we trained and evaluated AI models for specific indoor scenes (kitchen, bedroom, bathroom, living room) and compared them with a generic indoor model. Performance was assessed using mean accuracy and intersection-over-union metrics. Results: Scene-specific models outperformed the generic model, particularly in identifying ADL/IADL objects. Models focusing on rooms with more unique objects showed the greatest improvements (bedroom, bathroom). Scene-specific models were also more resilient to low-light conditions. Conclusions: These findings highlight how using scene-specific models can boost key performance indicators for assisting pBLV across different functional environments. We suggest that a dynamic selection of the best-performing models on mobile technologies may better facilitate ADLs/IADLs for pBLV.
{"title":"Training Indoor and Scene-Specific Semantic Segmentation Models to Assist Blind and Low Vision Users in Activities of Daily Living","authors":"Ruijie Sun;Giles Hamilton-Fletcher;Sahil Faizal;Chen Feng;Todd E. Hudson;John-Ross Rizzo;Kevin C. Chan","doi":"10.1109/OJEMB.2025.3607816","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3607816","url":null,"abstract":"<italic>Goal:</i> Persons with blindness or low vision (pBLV) face challenges in completing activities of daily living (ADLs/IADLs). Semantic segmentation techniques on smartphones, like DeepLabV3+, can quickly assist in identifying key objects, but their performance across different indoor settings and lighting conditions remains unclear. <italic>Methods:</i> Using the MIT ADE20K SceneParse150 dataset, we trained and evaluated AI models for specific indoor scenes (kitchen, bedroom, bathroom, living room) and compared them with a generic indoor model. Performance was assessed using mean accuracy and intersection-over-union metrics. <italic>Results:</i> Scene-specific models outperformed the generic model, particularly in identifying ADL/IADL objects. Models focusing on rooms with more unique objects showed the greatest improvements (bedroom, bathroom). Scene-specific models were also more resilient to low-light conditions. <italic>Conclusions:</i> These findings highlight how using scene-specific models can boost key performance indicators for assisting pBLV across different functional environments. 
We suggest that a dynamic selection of the best-performing models on mobile technologies may better facilitate ADLs/IADLs for pBLV.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"533-539"},"PeriodicalIF":2.9,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11153825","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145141639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
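The segmentation study above reports mean accuracy and intersection-over-union (IoU). As a reference point for those metrics, here is a minimal sketch of the standard confusion-matrix computation over label maps; it is the textbook definition of the metrics named in the abstract, not the authors' evaluation code, and the class count and label encoding are assumptions.

```python
import numpy as np

def segmentation_metrics(pred, target, num_classes):
    """Mean accuracy and mean IoU from integer label maps.

    `pred` and `target` are arrays of class indices in [0, num_classes).
    IoU per class is TP / (TP + FP + FN); accuracy per class is
    TP / (true pixels of that class). Classes absent from both maps
    contribute 0 rather than NaN.
    """
    pred = np.asarray(pred).ravel()
    target = np.asarray(target).ravel()
    # confusion[t, p] counts pixels whose true class is t, predicted as p.
    confusion = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(confusion, (target, pred), 1)
    tp = np.diag(confusion).astype(float)
    union = confusion.sum(0) + confusion.sum(1) - tp
    row = confusion.sum(1).astype(float)
    iou = np.divide(tp, union, out=np.zeros_like(tp), where=union > 0)
    acc = np.divide(tp, row, out=np.zeros_like(tp), where=row > 0)
    return {"mean_iou": iou.mean(), "mean_acc": acc.mean(),
            "per_class_iou": iou}
```

Comparing a scene-specific model against a generic one, as the study does, then reduces to computing these metrics for each model on the same held-out label maps (e.g. the ADE20K SceneParse150 validation split restricted to kitchen, bedroom, bathroom, or living-room scenes).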