Pub Date : 2025-09-29 DOI: 10.1109/OJEMB.2025.3615394
David Anderson Lloyd;Andrei Dragomir;Bulent Ozpolat;Biykem Bozkurt;Yasemin Akay;Metin Akay
Goal: Cardiovascular disease is the leading cause of death in the USA. Coronary Artery Disease (CAD) in particular is responsible for over 40% of cardiovascular disease deaths. Early detection and treatment are critical to reducing CAD-associated deaths. Methods: Sound signatures of CAD vary across patients depending on the location and severity of the blockage. We propose using artificial intelligence (AI), specifically the DeepSets architecture, to learn patient-specific acoustic biomarkers which distinguish heart sounds before and after percutaneous coronary intervention (PCI) in 12 human patients. Initially, Matching Pursuit was used to decompose the sound recordings into more granular representations called ‘atoms’. We then used the AI model to classify whether a group of atoms from a single segment is from before or after PCI. Leveraging the model's learned latent representation, we can then identify groups of atoms which represent CAD-associated sounds within the original recording. Results: Our deep learning approach achieves a test-set classification accuracy of 88.06% using sounds from the full cardiac cycle. The same deep learning architecture achieves 71.43% accuracy using the isolated diastolic window sound segment alone. Conclusions: This preliminary study shows that individualized clusters of atoms represent distinct parts of heart sounds associated with occlusions, and that these clusters differentially change their spectral energy signature after PCI. We believe that applying this approach to recordings from individual patients at many time points during disease and treatment progression will allow for precise, non-invasive monitoring of an individual patient's condition based on unique heart sound characteristics learned using AI.
{"title":"AI-Based Detection of Coronary Artery Occlusion Using Acoustic Biomarkers Before and After Stent Placement","authors":"David Anderson Lloyd;Andrei Dragomir;Bulent Ozpolat;Biykem Bozkurt;Yasemin Akay;Metin Akay","doi":"10.1109/OJEMB.2025.3615394","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3615394","url":null,"abstract":"<italic>Goal:</i> Cardiovascular disease is the leading cause of death in the USA. Coronary Artery Disease (CAD) in particular is responsible for over 40% of cardiovascular disease deaths. Early detection and treatment are critical in the reduction of deaths associated with CAD. <italic>Methods:</i> Sound signatures of CAD vary for individual patients depending on where and how severe the blockage is. We propose the use of the artificial intelligence (AI, specifically the DeepSets architecture) to learn patient-specific acoustic biomarkers which distinguish heart sounds before and after percutaneous coronary intervention (PCI) in 12 human patients. Initially, Matching Pursuit was used to decompose the sound recordings into more granular representations called ‘atoms’. Then we used AI to classify whether a group of atoms from a single segment are from before or after PCI. Leveraging the model's learned latent representation, we can then identify groups of atoms which represent CAD-associated sounds within the original recording. <italic>Results:</i> Our deep learning approach achieves a test-set classification accuracy of 88.06% using sounds from the full cardiac cycle. The same deep learning architecture achieves 71.43% accuracy using the isolated diastolic window sound segment alone. <italic>Conclusions:</i> This preliminary study shows that individualized clusters of atoms represent distinct parts of heart sounds associated with occlusions, and that these clusters differentially change their spectral energy signature after PCI. We believe that using this approach with recordings from individual patients over many time points during disease and treatment progression will allow for a precise, non-invasive monitoring of an individual patient's condition based on unique heart sound characteristics learned using AI.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"557-563"},"PeriodicalIF":2.9,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11184180","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-09-29 DOI: 10.1109/OJEMB.2025.3615395
Aliya Hasan;Mohammad Karim
Objective: Heart sound analysis is essential for cardiovascular disorder classification. Traditional auscultation and rule-based methods require manual feature engineering and clinical expertise. This work proposes a CNN-based model for automated multiclass heart sound classification. Results: Using MFCC features extracted from segmented real-world recordings, the model classifies heart sounds into murmur, extrasystole, extrahls, artifact, and normal. It achieves 98.7% training accuracy and 91% validation accuracy, with strong precision and recall for normal and murmur classes, and a weighted F1-score of 0.91. Conclusions: The results show that the proposed MFCC-CNN framework is robust, generalizable, and suitable for automated auscultation and early cardiac screening.
{"title":"Robust Heart Sound Analysis With MFCC and Light Weight Convolutional Neural Network","authors":"Aliya Hasan;Mohammad Karim","doi":"10.1109/OJEMB.2025.3615395","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3615395","url":null,"abstract":"<italic>Objective:</i> Heart sound analysis is essential for cardiovascular disorder classification. Traditional auscultation and rule-based methods require manual feature engineering and clinical expertise. This work proposes a CNN-based model for automated multiclass heart sound classification. <italic>Results:</i> Using MFCC features extracted from segmented real-world recordings, the model classifies heart sounds into murmur, extrasystole, extrahls, artifact, and normal. It achieves 98.7% training accuracy and 91% validation accuracy, with strong precision and recall for normal and murmur classes, and a weighted F1-score of 0.91. <italic>Conclusions:</i> The results show that the proposed MFCC-CNN framework is robust, generalizable, and suitable for automated auscultation and early cardiac screening.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"549-556"},"PeriodicalIF":2.9,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11184173","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-09-15 DOI: 10.1109/OJEMB.2025.3610160
Zeyu Tang;Xiaodan Xing;Gang Wang;Guang Yang
Deep learning-based Generative Models have the potential to convert low-resolution CT images into high-resolution counterparts without the long acquisition times and increased radiation exposure of thin-slice CT imaging. However, procuring appropriate training data for these Super-Resolution (SR) models is challenging. Previous SR research has simulated thick-slice CT images from thin-slice CT images to create training pairs. However, these methods either rely on simplistic interpolation techniques that lack realism or on sinogram reconstruction, which requires the release of raw data and complex reconstruction algorithms. Thus, we introduce a simple yet realistic method to generate thick CT images from thin-slice CT images, facilitating the creation of training pairs for SR algorithms. The training pairs produced by our method closely resemble real data distributions (PSNR = 49.74 vs. 40.66, p < 0.05). A multivariate Cox regression analysis involving thick-slice CT images with lung fibrosis revealed that only the radiomics features extracted using our method demonstrated a significant correlation with mortality (HR = 1.19 and HR = 1.14, p < 0.005). This paper is the first to identify and address the challenge of generating appropriate paired training data for Deep Learning-based CT SR models, which enhances the efficacy and applicability of SR models in real-world scenarios.
{"title":"Enhancing Super-Resolution Network Efficacy in CT Imaging: Cost-Effective Simulation of Training Data","authors":"Zeyu Tang;Xiaodan Xing;Gang Wang;Guang Yang","doi":"10.1109/OJEMB.2025.3610160","DOIUrl":"10.1109/OJEMB.2025.3610160","url":null,"abstract":"Deep learning-based Generative Models have the potential to convert low-resolution CT images into high-resolution counterparts without long acquisition times and increased radiation exposure in thin-slice CT imaging. However, procuring appropriate training data for these Super-Resolution (SR) models is challenging. Previous SR research has simulated thick-slice CT images from thin-slice CT images to create training pairs. However, these methods either rely on simplistic interpolation techniques that lack realism or on sinogram reconstruction, which requires the release of raw data and complex reconstruction algorithms. Thus, we introduce a simple yet realistic method to generate thick CT images from thin-slice CT images, facilitating the creation of training pairs for SR algorithms. The training pairs produced by our method closely resemble real data distributions (PSNR = 49.74 vs. 40.66, p <inline-formula><tex-math>$< $</tex-math></inline-formula> 0.05). A multivariate Cox regression analysis involving thick slice CT images with lung fibrosis revealed that only the radiomics features extracted using our method demonstrated a significant correlation with mortality (HR = 1.19 and HR = 1.14, p <inline-formula><tex-math>$< $</tex-math></inline-formula> 0.005). This paper represents the first to identify and address the challenge of generating appropriate paired training data for Deep Learning-based CT SR models, which enhances the efficacy and applicability of SR models in real-world scenarios.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"576-583"},"PeriodicalIF":2.9,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12599898/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145497010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-09-09 DOI: 10.1109/OJEMB.2025.3607816
Ruijie Sun;Giles Hamilton-Fletcher;Sahil Faizal;Chen Feng;Todd E. Hudson;John-Ross Rizzo;Kevin C. Chan
Goal: Persons with blindness or low vision (pBLV) face challenges in completing activities of daily living (ADLs/IADLs). Semantic segmentation techniques on smartphones, like DeepLabV3+, can quickly assist in identifying key objects, but their performance across different indoor settings and lighting conditions remains unclear. Methods: Using the MIT ADE20K SceneParse150 dataset, we trained and evaluated AI models for specific indoor scenes (kitchen, bedroom, bathroom, living room) and compared them with a generic indoor model. Performance was assessed using mean accuracy and intersection-over-union metrics. Results: Scene-specific models outperformed the generic model, particularly in identifying ADL/IADL objects. Models focusing on rooms with more unique objects showed the greatest improvements (bedroom, bathroom). Scene-specific models were also more resilient to low-light conditions. Conclusions: These findings highlight how using scene-specific models can boost key performance indicators for assisting pBLV across different functional environments. We suggest that a dynamic selection of the best-performing models on mobile technologies may better facilitate ADLs/IADLs for pBLV.
{"title":"Training Indoor and Scene-Specific Semantic Segmentation Models to Assist Blind and Low Vision Users in Activities of Daily Living","authors":"Ruijie Sun;Giles Hamilton-Fletcher;Sahil Faizal;Chen Feng;Todd E. Hudson;John-Ross Rizzo;Kevin C. Chan","doi":"10.1109/OJEMB.2025.3607816","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3607816","url":null,"abstract":"<italic>Goal:</i> Persons with blindness or low vision (pBLV) face challenges in completing activities of daily living (ADLs/IADLs). Semantic segmentation techniques on smartphones, like DeepLabV3+, can quickly assist in identifying key objects, but their performance across different indoor settings and lighting conditions remains unclear. <italic>Methods:</i> Using the MIT ADE20K SceneParse150 dataset, we trained and evaluated AI models for specific indoor scenes (kitchen, bedroom, bathroom, living room) and compared them with a generic indoor model. Performance was assessed using mean accuracy and intersection-over-union metrics. <italic>Results:</i> Scene-specific models outperformed the generic model, particularly in identifying ADL/IADL objects. Models focusing on rooms with more unique objects showed the greatest improvements (bedroom, bathroom). Scene-specific models were also more resilient to low-light conditions. <italic>Conclusions:</i> These findings highlight how using scene-specific models can boost key performance indicators for assisting pBLV across different functional environments. We suggest that a dynamic selection of the best-performing models on mobile technologies may better facilitate ADLs/IADLs for pBLV.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"533-539"},"PeriodicalIF":2.9,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11153825","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145141639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-09-09 DOI: 10.1109/OJEMB.2025.3607556
Ruojun Li;Samuel Chibuoyim Uche;Emmanuel Agu;Kristin Grimone;Debra S. Herman;Jane Metrik;Ana M. Abrantes;Michael D. Stein
Goal: To investigate whether machine learning analysis of smartphone sensor data can discriminate, from gait, whether a subject consumed alcohol or marijuana. Methods: Using first-of-a-kind impaired gait datasets, we propose MariaGait, a novel deep learning approach to distinguish between marijuana and alcohol impairment. Subjects' time-series smartphone accelerometer and gyroscope gait data are first encoded into Gramian Angular Field (GAF) images that are then classified using a tiled Convolutional Neural Network (CNN) with TICA pooling. To mitigate the scarcity of positively labeled alcohol and marijuana instances, the tiled CNN was pre-trained on sober gait samples, which were more abundant. Results: MariaGait achieved an accuracy of 94.61%, an F1 score of 88.61%, and a 94.33% ROC AUC score in classifying whether the subject consumed alcohol or marijuana, outperforming baseline models including Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM), Multi-head CNN, Multi-head LSTM, Random Forest, and Support Vector Machine (SVM) models. Conclusions: Our results demonstrate that MariaGait could be a practical, non-invasive approach to determine from gait which substance a subject is impaired by.
{"title":"Discriminating Between Marijuana and Alcohol Gait Impairments Using Tile CNN With TICA Pooling","authors":"Ruojun Li;Samuel Chibuoyim Uche;Emmanuel Agu;Kristin Grimone;Debra S. Herman;Jane Metrik;Ana M. Abrantes;Michael D. Stein","doi":"10.1109/OJEMB.2025.3607556","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3607556","url":null,"abstract":"<italic>Goal:</i> To investigate whether machine learning analyses of smartphone sensor data can discriminate whether a subject consumed alcohol or marijuana from their gait. <italic>Methods:</i> Using first-of-a-kind impaired gait datasets, we propose <italic>MariaGait</i>, a novel deep learning approach to distinguish between marijuana and alcohol impairment. Subjects' time-series smartphone accelerometer and gyroscope sensor gait data are first encoded into Gramian Angular Field (GAF) images that are then classified using a tiled Convolutional Neural Network (CNN) with TICA pooling. To mitigate the insufficiency of positively labeled alcohol and marijuana instances, the tiled CNN was pre-trained on sober gait samples that were more abundant. <italic>Results:</i> <italic>MariaGait</i> achieved an accuracy of 94.61%, F1 score of 88.61%, and 94.33% ROC AUC score in classifying whether the subject consumed alcohol or marijuana, outperforming baseline models including Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), Multi-head CNN and Multi-head LSTM, Random Forest and Support Vector Machines (SVM)). <italic>Conclusions:</i> Our results demonstrate that <italic>MariaGait</i> could be a practical, non-invasive approach to determine which substance a subject is impaired by from their gait.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"540-548"},"PeriodicalIF":2.9,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11153826","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-28 DOI: 10.1109/OJEMB.2025.3593083
Matteo B. Lodi;Nicola Curreli;Giuseppe Mazzarella;Alessandro Fanti
Goal: Magnetic scaffolds (MagS), obtained by loading polymers with magnetic nanoparticles (MNPs) or by chemical doping of bio-ceramics, can be implanted and used as thermo-seeds for interstitial cancer therapy when exposed to radiofrequency (RF) magnetic fields. MagS have the potential to pave new therapeutic routes for the treatment of deep-seated tumors, such as bone cancers or biliary tumors. However, studies of their fundamental RF magnetic properties and understanding of their heat dissipation mechanisms remain underdeveloped. Therefore, in this work an in-depth analysis of the magnetic susceptibility spectra of several representative nanocomposite thermo-seeds from the literature is performed. Methods: A Cole-Cole model, instead of the Debye formulation, is proposed and analyzed to interpret the experimentally observed differences in power dissipation, due to hindered Brownian relaxation and strong dipole-dipole and particle-particle interactions. To this aim, a fitting procedure based on a genetic algorithm is used to derive the Cole-Cole model parameters. Results: The proposed Cole-Cole model can interpret the MNP response both when the particles are dispersed in solution and when they are embedded in the biomaterial. Significant differences in the equilibrium susceptibility, relaxation times and, especially, the broadening parameter are observed between the ferrofluid and MagS systems. The fitting errors are below 3% on average. Non-linear relationships between the dimensionless dipole-dipole interaction number and the Cole-Cole parameters are found. Conclusions: The findings can foster MagS design and help plan their use for RF hyperthermia treatment, ensuring a high-quality therapy.
{"title":"Modeling the Complex Susceptibility of Magnetic Nanocomposites for Deep-Seated Tumor Hyperthermia","authors":"Matteo B. Lodi;Nicola Curreli;Giuseppe Mazzarella;Alessandro Fanti","doi":"10.1109/OJEMB.2025.3593083","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3593083","url":null,"abstract":"<italic>Goal:</i> Magnetic scaffolds (MagS), obtained by loading polymers with magnetic nanoparticles (MNPs) or by chemical doping of bio-ceramics, can be implanted and used as thermo-seeds for interstitial cancer therapy if exposed to radiofrequency (RF) magnetic fields. MagS have the potential to pave new therapeutic routes for the treatment of deep-seated tumors, such as bone cancers or biliary tumors. However, the studies of their fundamental RF magnetic properties and the understanding of the heat dissipation mechanism are underdeveloped. Therefore, in this work an in-depth analysis of the magnetic susceptibility spectra of several representative nanocomposites thermoseeds found in the literature is performed. <italic>Methods:</i> A Cole-Cole model, instead of the Debye formulation, is proposed and analyzed to interpret the experimentally observed different power dissipation, due to hindered Brownian relaxation and large dipole-dipole and particle-particle interactions. To this aim, a fitting procedure based on genetic algorithm is used to derive the Cole-Cole model parameters. <italic>Results:</i> The proposed Cole-Cole model can interpret the MNPs response when dispersed in solution and when embedded in the biomaterial. Significant differences in the equilibrium susceptibility, relaxation times and, especially, the broadening parameter are observed between the ferrofluid and MagS systems. The fitting errors are below 3%, on average. Non-linear relationships between the dipole-dipole interaction dimensionless number and the Cole-Cole parameters are found. <italic>Conclusions:</i> The findings can foster MagS design and help planning their use for RF hyperthermia treatment, ensuring a high-quality therapy.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"523-532"},"PeriodicalIF":2.9,"publicationDate":"2025-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11097358","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144896779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-18 DOI: 10.1109/OJEMB.2025.3590580
Julian Shanbhag;Sophie Fleischmann;Iris Wechsler;Heiko Gassner;Jürgen Winkler;Bjoern M. Eskofier;Anne D. Koelewijn;Sandro Wartzack;Jörg Miehling
Postural instability represents one of the cardinal symptoms of Parkinson's disease (PD). Still, the internal processes leading to this instability are not fully understood. Simulations using neuromusculoskeletal human models can help elucidate the internal processes underlying PD-associated postural deficits. In this paper, we investigated whether reduced reactivity amplitudes resulting from PD-related impairments can explain postural instability as well as the increased muscle tone often observed in individuals with PD. To simulate reduced reactivity, we gradually decreased previously optimized gain factors within the postural control circuitry of our model while it performed a quiet upright standing task. After each reduction step, the model was optimized again. Simulation results were compared to experimental data collected from 31 individuals with PD and 31 age- and sex-matched healthy control participants. Our simulations showed that both muscle activations and joint angle ranges of motion (ROMs) increased as the model's reactivity was reduced. However, sway parameters such as center of pressure (COP) path length and COP range did not increase as observed in our experimental data. These results suggest that reduced reactivity does not directly lead to increased sway parameters, but could cause increased muscle tone and subsequent postural control alterations. To further investigate postural stability using neuromusculoskeletal models, analyzing additional internal model parameters and tasks such as perturbed upright standing, which requires comparable reaction patterns, could provide promising results. By enhancing such models and deepening the understanding of the internal processes of postural control, these models may be used to assess and evaluate rehabilitation interventions in the future.
{"title":"Does Reduced Reactivity Explain Altered Postural Control in Parkinson's Disease? A Predictive Simulation Study","authors":"Julian Shanbhag;Sophie Fleischmann;Iris Wechsler;Heiko Gassner;Jürgen Winkler;Bjoern M. Eskofier;Anne D. Koelewijn;Sandro Wartzack;Jörg Miehling","doi":"10.1109/OJEMB.2025.3590580","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3590580","url":null,"abstract":"Postural instability represents one of the cardinal symptoms of Parkinson's disease (PD). Still, internal processes leading to this instability are not fully understood. Simulations using neuromusculoskeletal human models can help understand these internal processes leading to PD-associated postural deficits. In this paper, we investigated whether reduced reactivity amplitudes resulting from impairments due to PD can explain postural instability as well as increased muscle tone as often observed in individuals with PD. To simulate reduced reactivity, we gradually decreased previously optimized gain factors within the postural control circuitry of our model performing a quiet upright standing task. After each reduction step, the model was again optimized. Simulation results were compared to experimental data collected from 31 individuals with PD and 31 age- and sex-matched healthy control participants. Analyzing our simulation results, we showed that muscle activations increased with a model's reduced reactivity, as well as joint angles' ranges of motion (ROMs). However, sway parameters such as center of pressure (COP) path lengths and COP ranges did not increase as observed in our experimental data. These results suggest that a reduced reactivity does not directly lead to increased sway parameters, but could cause increased muscle tone leading to subsequent postural control alterations. To further investigate postural stability using neuromusculoskeletal models, analyzing additional internal model parameters and tasks such as perturbed upright standing requiring comparable reaction patterns could provide promising results. By enhancing such models and deepening the understanding of internal processes of postural control, these models may be used to assess and evaluate rehabilitation interventions in the future.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"515-522"},"PeriodicalIF":2.9,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11083745","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144868195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-10 DOI: 10.1109/OJEMB.2025.3587947
Chia-En Wong;Yu-Chen Kuo;Da-Wei Huang;Pei-Wen Chen;Heng-Jui Hsu;Wei-Ting Lee;Shang-Yu Hung;Jung-Shun Lee;Sheng-Fu Liang
Objective: This study aimed to develop and validate a computer vision (CV)-based system to quantitatively analyze surgical exposure in the endonasal endoscopic approach (EEA). Results: The number of pixels spanning the length or area of interest in selected frames of the EEA videos was measured using a reference instrument. The measured lengths and areas were calibrated by training the current algorithm on EEA videos. A total of 50 EEA operative videos were analyzed, with 95.1%, 95.8%, and 96.2% accuracies in the training, test-1, and test-2 datasets, respectively. The CV-based model was validated using intercarotid distance and sellar height. Compared to neuronavigation, CV-based analysis reduced the time required for area measurement by 89% (p < 0.001). Our CV-based analysis showed that smaller lateral (p = 0.001) and area (p = 0.024) surgical exposure was associated with residual tumors. Conclusions: CV-based analysis can accurately measure surgical exposure in EEA videos and reduce the time required to measure surgical areas. The application of AI and CV can expedite quantitative analysis of surgical exposure in EEA surgeries.
{"title":"Human–Computer Vision Collaborative Measurement of Surgical Exposure and Length in Endonasal Endoscopic Skull Base Surgery","authors":"Chia-En Wong;Yu-Chen Kuo;Da-Wei Huang;Pei-Wen Chen;Heng-Jui Hsu;Wei-Ting Lee;Shang-Yu Hung;Jung-Shun Lee;Sheng-Fu Liang","doi":"10.1109/OJEMB.2025.3587947","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3587947","url":null,"abstract":"<italic>Objective:</i> This study aimed to develop and validate a computer vision (CV)-based system to quantitatively analyze surgical exposure in endonasal endoscopic approach (EEA). <italic>Results:</i> The number of pixels of the length or area of interest in the selected frame in the EEA video was measured using a reference instrument. The measured length and area were calibrated by training the current algorithm using EEA videos. A total of 50 EEA operative videos were analyzed, with 95.1%, 95.8%, and 96.2% accuracies in the training, test-1 and test-2 datasets, respectively. The CV-base model was validated using intercarotid distance and sellar height. Compared to neuronavigation, CV-based analysis reduced the time required for area measurement by 89% (p < 0.001). Our CV-based analysis showed that a smaller lateral (p = 0.001) and area (p = 0.024) surgical exposure were associated with residual tumors. <italic>Conclusions:</i> CV-based analysis can accurately measure the surgical exposure in EEA videos and reduce the time required to measure surgical areas. The application of AI and CV can expedite quantitative analysis of surgical exposure in EEA surgeries.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"480-487"},"PeriodicalIF":2.7,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11077379","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144687804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-10 DOI: 10.1109/OJEMB.2025.3587993
MHD Jafar Mortada;Agnese Sbrollini;Ilaria Marcantoni;Erica Iammarino;Laura Burattini;Peter Van Dam
CineECG, a vectorcardiography-based method, uses standard 12-lead electrocardiography and 3D heart and torso models to depict the electrical activation path during the heart cycle, offering detailed visualization of cardiac electrical activity without numerical quantification. Our research aims to quantify CineECG outputs by defining 54 features that describe the route, shape, and direction of electrical activation. These features were used to develop a multinomial regression model classifying electrocardiography signals into normal sinus rhythm, left bundle branch block, right bundle branch block, and undetermined abnormalities. Trained and tested on 6,860 signals from the PhysioNet/Computing in Cardiology Challenge 2020 and THEW project, the model achieved an F1 score over 84% (normal sinus rhythm: 93%, left bundle branch block: 93%, right bundle branch block: 90%, undetermined abnormalities: 84%). The results suggest CineECG's potential in enhancing electrocardiography interpretation and aiding in the accurate diagnosis of various abnormalities.
{"title":"Quantifying CineECG Output for Enhancing Electrocardiography Signals Classification","authors":"MHD Jafar Mortada;Agnese Sbrollini;Ilaria Marcantoni;Erica Iammarino;Laura Burattini;Peter Van Dam","doi":"10.1109/OJEMB.2025.3587993","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3587993","url":null,"abstract":"CineECG, a vectorcardiography-based method, uses standard 12-lead electrocardiography and 3D heart and torso models to depict the electrical activation path during the heart cycle, offering detailed visualization of cardiac electrical activity without numerical quantification. Our research aims to quantify CineECG outputs by defining 54 features that describe the route, shape, and direction of electrical activation. These features were used to develop a multinomial regression model classifying electrocardiography signals into normal sinus rhythm, left bundle branch block, right bundle branch block, and undetermined abnormalities. Trained and tested on 6,860 signals from the PhysioNet/Computing in Cardiology Challenge 2020 and THEW project, the model achieved an F1 score over 84% (normal sinus rhythm: 93%, left bundle branch block: 93%, right bundle branch block: 90%, undetermined abnormalities: 84%). The results suggest CineECG's potential in enhancing electrocardiography interpretation and aiding in the accurate diagnosis of various abnormalities.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"488-498"},"PeriodicalIF":2.9,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11077371","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144758368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-10 DOI: 10.1109/OJEMB.2025.3587954
Suhas M.V;N. Mariyappa;Karunakar Kotegar;Ravindranadh Chowdary M;Raghavendra K;Ajay Asranna;Viswanathan L.G;Sanjib Sinha;Anitha H
Goal: This study aims to explore the temporal dynamics of functional connectivity in drug-resistant focal epilepsy, focusing on Temporal Lobe Epilepsy (TLE) and Extra-Temporal Lobe Epilepsy (ETLE), using magnetoencephalography (MEG). Methods: Temporal metrics such as Change Between States, Entropy of Transition Patterns, Entropy of Transition Probabilities, Dwell Time, Stability, and Max L1 Distance, derived from dynamic functional connectivity matrices, were analyzed across eight frequency bands (delta, theta, alpha, beta, low gamma, mid gamma, high gamma, and broadband) in TLE and ETLE patients. Results: Significant differences were observed between TLE and ETLE. ETLE exhibited more widespread and unpredictable connectivity transitions, while TLE demonstrated localized and structured patterns. Entropy metrics indicated higher randomness in ETLE, and dwell time analysis revealed shorter state persistence in ETLE compared to TLE. Conclusions: The findings highlight the potential of MEG-based temporal connectivity metrics in characterizing network disruptions in focal epilepsy.
{"title":"Temporal Dynamics of Functional Connectivity in Temporal and Extra-Temporal Lobe Epilepsy: A Magnetoencephalography-Based Study","authors":"Suhas M.V;N. Mariyappa;Karunakar Kotegar;Ravindranadh Chowdary M;Raghavendra K;Ajay Asranna;Viswanathan L.G;Sanjib Sinha;Anitha H","doi":"10.1109/OJEMB.2025.3587954","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3587954","url":null,"abstract":"<italic>Goal:</i> This study aims to explore the temporal dynamics of functional connectivity in drug-resistant focal epilepsy, focusing on Temporal Lobe Epilepsy (TLE) and Extra-Temporal Lobe Epilepsy (ETLE), using magnetoencephalography (MEG). <italic>Methods:</i> Temporal metrics such as Change Between States, Entropy of Transition Patterns, Entropy of Transition Probabilities, Dwell Time, Stability, and Max L1 Distance derived from dynamic functional connectivity matrices were analyzed across eight frequency bands (delta, theta, alpha, beta, low gamma, mid gamma, high gamma and broadband) in TLE and ETLE patients. <italic>Results:</i> Significant differences were observed between TLE and ETLE. ETLE exhibited more widespread and unpredictable connectivity transitions, while TLE demonstrated localized and structured patterns. Entropy metrics indicated higher randomness in ETLE, and dwell time analysis revealed shorter state persistence in ETLE compared to TLE. <italic>Conclusions:</i> The findings highlight the potential of MEG-based temporal connectivity metrics in characterizing network disruptions in focal epilepsy.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"507-514"},"PeriodicalIF":2.9,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11077383","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144831868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}