Annual International Conference of the IEEE Engineering in Medicine and Biology Society — Latest Publications
Current research on steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) predominantly focuses on utilizing the frequency- and phase-locking characteristics of SSVEP for encoding purposes. In this study, we propose an innovative paradigm wherein SSVEP serves as a marker, integrated with different types of motion animations to identify distinct neural processing pathways associated with these animations. This approach enables the classification of SSVEP-based BCIs without relying on frequency features. We designed six distinct animations corresponding to six behaviors commonly observed in daily life. Each animation was tagged with a uniform 6 Hz stimulus frequency, forming a six-target classification task. Offline testing was conducted with 10 participants. Despite identical frequency components, significant differences in spatial distribution corresponding to the animations were observed, likely due to the behavioral variations in the animations. Classification analysis demonstrated an accuracy of 0.93 within a 6-second window, validating the practical feasibility of this approach. This paradigm offers a novel direction for the advancement of SSVEP-based BCIs, potentially enabling the integration of multi-sensory information.
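The claim above is that targets sharing one tag frequency can still be separated by the spatial distribution of the response across channels. A minimal sketch of that idea on synthetic data (the channel count, sampling rate, and template-matching classifier are all assumptions for illustration, not the paper's pipeline):

```python
import numpy as np

FS = 250          # assumed sampling rate (Hz)
F_STIM = 6.0      # common 6 Hz tag frequency from the paper
N_CH = 8          # hypothetical channel count

def spatial_pattern(eeg, fs=FS, f=F_STIM):
    """Per-channel power at the tag frequency -> normalized spatial feature."""
    t = np.arange(eeg.shape[1]) / fs
    ref = np.exp(-2j * np.pi * f * t)        # complex sinusoid at 6 Hz
    coef = eeg @ ref / eeg.shape[1]          # Fourier coefficient per channel
    p = np.abs(coef)
    return p / np.linalg.norm(p)             # scale-invariant pattern

rng = np.random.default_rng(0)
t = np.arange(int(6 * FS)) / FS              # 6-second window, as in the paper

def simulate(class_id):
    """Same 6 Hz component in every class; only the channel weights differ."""
    w = np.zeros(N_CH)
    w[class_id] = 1.0
    w += 0.2                                 # common baseline activation
    sig = np.outer(w, np.sin(2 * np.pi * F_STIM * t))
    return sig + 0.3 * rng.standard_normal((N_CH, len(t)))

# one spatial template per class, matched by inner product at test time
templates = [spatial_pattern(simulate(c)) for c in range(6)]

def classify(eeg):
    p = spatial_pattern(eeg)
    return int(np.argmax([tp @ p for tp in templates]))

print(classify(simulate(3)))  # → 3
```

Despite every class containing an identical 6 Hz component, the classifier separates them purely from the channel-weight pattern, which is the core of the frequency-free encoding argument.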
"Beyond Frequency: Leveraging Spatial Features in SSVEP-Based Brain-Computer Interfaces with Visual Animations." Yike Sun, Ziyu Zhang, Qi Qi, Xiaoyang Li, Jingnan Sun, Kemeng Zhang, Jiaxiang Zhuang, Xiaogang Chen, Xiaorong Gao. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2025, pp. 1-5. DOI: 10.1109/EMBC58623.2025.11254745
Pub Date: 2025-07-01 · DOI: 10.1109/EMBC58623.2025.11252818
Xinlei Zhang, Junwei Ma, Keifei Liu, Wanqi Chen, Kang Ding, Shuangyuan Yang, Fan Li, Fengyu Cong
Automatic sleep staging typically requires multi-channel EEG data, limiting its application in portable devices. To address this, we propose a hybrid deep learning model that utilizes multi-domain features from single-channel EEG data collected via polysomnography (PSG). Our model employs two feature extractors to capture time-domain and time-frequency-domain features, which are fused for the final predictions. Validated on the Haaglanden Medisch Centrum Sleep Centre Database (HMC), with EEG data from 151 subjects, the model achieves an accuracy of 0.747 and an F1 score of 0.742. Compared to state-of-the-art methods, it shows improved multi-class performance, particularly in N3-stage detection. This study highlights the potential of single-channel EEG for accurate sleep staging and for the development of portable sleep-monitoring systems. Clinical Relevance: This study develops a deep learning model for automatic sleep staging using only a single EEG channel; it could help sleep physicians automatically classify sleep stages.
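A toy illustration of the multi-domain fusion idea: hand-crafted time-domain statistics and FFT band powers stand in for the paper's two learned feature extractors, and fusion is plain concatenation. This is an assumption-level sketch, not the published model:

```python
import numpy as np

FS = 100  # assumed EEG sampling rate (Hz)

def time_features(x):
    # simple time-domain descriptors of one 30 s epoch
    return np.array([x.mean(), x.std(), np.mean(np.abs(np.diff(x)))])

def tf_features(x, fs=FS):
    # band powers from an FFT (delta/theta/alpha/beta), a stand-in for the
    # paper's learned time-frequency branch
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]
    return np.array([spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

def fused_features(x):
    # fusion = concatenation of the two branches before the classifier head
    return np.concatenate([time_features(x), tf_features(x)])

# fake 30 s epoch dominated by 10 Hz "alpha" activity
epoch = np.sin(2 * np.pi * 10 * np.arange(30 * FS) / FS)
feats = fused_features(epoch)
print(feats.shape)  # → (7,)
```

For this synthetic epoch, the alpha band (8-13 Hz) dominates the time-frequency branch, as expected for a 10 Hz sinusoid.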
"A Hybrid Deep Learning Model for Sleep Staging with Multi-Domain Feature Fusion from Single-Channel EEG." Xinlei Zhang, Junwei Ma, Keifei Liu, Wanqi Chen, Kang Ding, Shuangyuan Yang, Fan Li, Fengyu Cong. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2025, pp. 1-4. DOI: 10.1109/EMBC58623.2025.11252818
Pub Date: 2025-07-01 · DOI: 10.1109/EMBC58623.2025.11252945
Angkon Deb, Celia Shahnaz, Mohammad Saquib
Sleep stage classification is a critical task in sleep research, with significant implications for diagnosing and treating sleep disorders. Traditional methods rely on manual scoring of polysomnography (PSG) data, which is time-consuming and prone to human error. While recent advances in deep learning have enabled automated sleep stage classification, challenges persist in handling the complex, non-linear patterns of physiological signals. Existing models are often computationally expensive, require sophisticated feature extraction, and are unsuitable for real-time implementation. To address these limitations, we propose a lightweight and efficient dual-branch deep learning model that leverages the feature extraction capabilities of CNNs and the channel-wise attention mechanisms of Transformers. Unlike conventional Transformers, it avoids excessive computational complexity while effectively capturing both local and global dependencies in physiological signals. The model is validated on four benchmark datasets (SleepEDF-20, SleepEDF-78, SleepEDFx, and SHHS) and outperforms several baseline algorithms, achieving state-of-the-art results across all datasets and highlighting its robustness and scalability for real-world applications. By combining the strengths of CNNs and Transformers, it offers a promising solution for accurate and efficient sleep stage classification, paving the way for improved diagnosis and treatment of sleep disorders. The code is publicly available at https://github.com/ang-frozen/embc2025, enabling reproducibility and further research.
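The channel-wise attention the dual-branch model relies on can be illustrated with a squeeze-and-excitation-style weighting: one descriptor per channel, softmax-normalized into channel weights. This is a simplified stand-in; the paper's Transformer branch is more elaborate:

```python
import numpy as np

def channel_attention(features):
    """Minimal channel-wise attention: squeeze each channel's feature map to a
    scalar, softmax over channels, then reweight the maps (an illustrative
    simplification of the paper's channel-wise Transformer)."""
    # features: (channels, time) feature maps from a CNN branch
    desc = features.mean(axis=1)                     # squeeze: one scalar/channel
    scores = desc - desc.max()                       # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over channels
    return features * weights[:, None], weights

rng = np.random.default_rng(1)
fmap = rng.standard_normal((4, 16))
fmap[2] += 2.0                                       # channel 2 carries the signal
reweighted, w = channel_attention(fmap)
print(int(np.argmax(w)))  # → 2
```

The informative channel receives the largest weight, which is the behavior channel-wise attention is meant to provide ahead of the classification head.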
"A Joint Optimization Guided Deep Learning Model based on CNN and Channel-Wise Transformers for Robust Sleep Stage Classification from EEG Signal." Angkon Deb, Celia Shahnaz, Mohammad Saquib. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2025, pp. 1-7. DOI: 10.1109/EMBC58623.2025.11252945
Pub Date: 2025-07-01 · DOI: 10.1109/EMBC58623.2025.11252620
Diletta Guberti, Zongheng Guo, Antoine Herpain, Marta Carrara, Manuela Ferrario
The morphology of the arterial blood pressure (ABP) waveform has been shown to be a significant indicator of a patient's condition and a marker of impending changes. The wave separation analysis (WSA) approach relies on invasive, concomitant measurements of arterial blood flow (ABF) and ABP. Other methods have been developed as well, but their analyses were limited to waveforms with the typical physiological shape, namely Type A. This study introduces a bidirectional long short-term memory (BiLSTM) deep learning model to classify ABP beats into Type A versus Type B/C, the latter group reflecting altered vascular compliance and resistance. The models were developed using both central (aortic) and peripheral (femoral) waveforms. The best models achieved accuracies of 96% and 90% for aortic and femoral signals, respectively. The ultimate objective of this research is to enhance non-invasive cardiovascular monitoring and facilitate the early detection of arterial alterations.
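One practical detail a beat-level sequence classifier must handle is that beats vary in length with heart rate. A common remedy, assumed here rather than stated in the abstract, is to resample each beat to a fixed length before feeding the BiLSTM:

```python
import numpy as np

def resample_beat(beat, n=128):
    """Linearly resample one variable-length ABP beat to a fixed length
    (a common preprocessing step before sequence models; an assumption here,
    not a detail given in the abstract)."""
    src = np.linspace(0.0, 1.0, len(beat))
    dst = np.linspace(0.0, 1.0, n)
    return np.interp(dst, src, beat)

# two hypothetical beats of different lengths, e.g. different heart rates
fast = np.sin(np.linspace(0, np.pi, 90)) * 40 + 80    # ~fast-HR beat, mmHg
slow = np.sin(np.linspace(0, np.pi, 140)) * 40 + 80   # ~slow-HR beat, mmHg
batch = np.stack([resample_beat(fast), resample_beat(slow)])
print(batch.shape)  # → (2, 128)
```

After resampling, beats of any duration share one input shape, so they can be batched for training without padding or masking.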
"A Bidirectional Long Short-Term Memory Deep Learning Model for Classification of Pulse Waveform." Diletta Guberti, Zongheng Guo, Antoine Herpain, Marta Carrara, Manuela Ferrario. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2025, pp. 1-5. DOI: 10.1109/EMBC58623.2025.11252620
The increasing elderly population has made falls due to frailty a critical issue, with physical and cognitive factors interacting to elevate the risks of fractures and solitary deaths. Falls are challenging to predict because multiple factors are involved, necessitating the development of continuous gait monitoring and fall-detection technologies. Although various fall-detection methods have been proposed, many rely on batteries that require maintenance, and suffer from circuit complexity, higher costs, or both. This study aims to develop an unintentional-fall detection system by utilizing triboelectric nanogenerator (TENG) technology to create a battery-free insole device. The device was tested to analyze the features of the voltage signals produced by unintentional falls. The results suggest that the generated signals are sufficiently distinguishable based on their frequency-amplitude characteristics and the ratio of the maximum to minimum voltage within one gait cycle, indicating the feasibility of a battery-free system capable of detecting unintentional falls.
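The abstract's discriminative feature, the ratio of maximum to minimum voltage within one gait cycle, can be computed directly; the traces below are hypothetical stand-ins for real TENG output:

```python
import numpy as np

FS = 100                               # assumed sampling rate (Hz)
t = np.arange(FS) / FS                 # one ~1 s gait cycle

def mm_ratio(v):
    """Ratio of maximum to minimum voltage magnitude over one gait cycle,
    one of the distinguishing features reported for the TENG insole."""
    return abs(v.max()) / abs(v.min())

walking = np.sin(2 * np.pi * 2 * t)    # hypothetical symmetric walking trace
fall = walking.copy()
fall[50] += 6.0                        # hypothetical sharp impact spike from a fall

print(round(mm_ratio(walking), 2), round(mm_ratio(fall), 2))  # walking ≈ 1, fall ≫ 1
```

A symmetric gait signal yields a ratio near 1, while an asymmetric impact spike pushes the ratio well above it, so even a simple threshold could separate the two in a battery-free circuit.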
"A Battery-Free Unintentional-Fall Detection System Utilizing TENG Insoles." Haruki Higoshi, Enzo Osumi, Tamon Miyake, Ryosuke Tsumura, Shigeki Sugano, Hiroki Shigemune. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2025, pp. 1-5. DOI: 10.1109/EMBC58623.2025.11251584
Pub Date: 2025-07-01 · DOI: 10.1109/EMBC58623.2025.11254563
Anna Corti, Sarah Galante, Katia Chiappetta, Mattia Loppini, Valentina D A Corino
The rising number of total knee arthroplasty (TKA) revisions, combined with their inferior outcomes compared to primary TKA, highlights the critical need for early detection of primary TKA failure. The present work proposes a radiomics-based machine learning model to automatically detect TKA failure from radiographs. The dataset comprised radiographs from 44 failed and 51 non-failed TKA patients. After preprocessing, 465 radiomic features were extracted. A cross-validation procedure consisting of 100 repeated training-validation splits was implemented. The training phase encompassed feature selection, data balancing, and classifier training. Four feature selection approaches were evaluated in combination with several classifiers. Based on the average performance metrics on the validation set, Least Absolute Shrinkage and Selection Operator (LASSO) feature selection combined with a Logistic Regression (LR) classifier achieved the best performance, with an F1-score of 0.701, a balanced accuracy of 0.710, and an area under the curve (AUC) of 0.783. The results demonstrate the potential of the developed radiomics-based approach for automatically detecting TKA failure from plain radiographs. Clinical Relevance: The increasing number of revision procedures poses significant challenges for healthcare systems, highlighting the critical need for automated early detection of primary TKA failure.
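The evaluation protocol, 100 repeated training-validation splits with feature selection and classifier training inside each split, can be sketched as below. Synthetic data replaces the radiomics table, and a univariate correlation filter plus nearest-centroid classifier stand in for LASSO and logistic regression:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic stand-in for the radiomics table: 95 patients x 465 features
# (labels: 1 = failed TKA, matching the 44/51 class split)
n, p = 95, 465
y = np.array([1] * 44 + [0] * 51)
X = rng.standard_normal((n, p))
X[:, :5] += y[:, None] * 1.5                 # a few informative features

def select_k(Xtr, ytr, k=10):
    """Univariate correlation filter: a simple stand-in for LASSO selection."""
    Xc = Xtr - Xtr.mean(axis=0)
    yc = ytr - ytr.mean()
    r = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(r)[-k:]

def nearest_centroid(Xtr, ytr, Xva):
    """Stand-in classifier for the paper's logistic regression."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    d0 = ((Xva - c0) ** 2).sum(1)
    d1 = ((Xva - c1) ** 2).sum(1)
    return (d1 < d0).astype(int)

accs = []
for _ in range(100):                         # 100 repeated splits, as in the paper
    idx = rng.permutation(n)
    tr, va = idx[:70], idx[70:]
    cols = select_k(X[tr], y[tr])            # selection fit on training data only
    pred = nearest_centroid(X[tr][:, cols], y[tr], X[va][:, cols])
    accs.append((pred == y[va]).mean())
print(round(float(np.mean(accs)), 2))
```

The key design point mirrored here is that feature selection runs inside each split, on training data only; fitting it on the full dataset would leak validation information and inflate the reported metrics.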
"A Radiomics-Based Machine Learning Model to Predict Total Knee Arthroplasty Failure from Plain Radiographs." Anna Corti, Sarah Galante, Katia Chiappetta, Mattia Loppini, Valentina D A Corino. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2025, pp. 1-4. DOI: 10.1109/EMBC58623.2025.11254563
Pub Date: 2025-07-01 · DOI: 10.1109/EMBC58623.2025.11253485
E B Dijkema, C M A Pennartz, U Olcese
Closed-loop brain-computer interfaces (BCIs) hold promise for restoring function after neurological damage by dynamically processing neural signals and delivering targeted brain stimulation. To achieve clinically meaningful outcomes, such systems must operate with high spatiotemporal precision. This work aims to demonstrate a proof-of-concept neuromorphic BCI that processes neural spike events in near real time, with no preprocessing beyond signal filtering and spike detection. Methods - We developed a system that acquires neural signals and streams spike events into a spiking neural network (SNN) running on SpiNNaker neuromorphic hardware. We evaluated the system's performance using both in vivo recordings from mouse visual cortex and simulated neural waveforms. We measured the roundtrip latency, defined as the time from spike detection to an output spike generated by the SNN. Results - Under baseline conditions with no hidden SNN layers, mean roundtrip latency was 4.69 ms (±1.70 ms). Adding hidden layers increased latency by approximately 3.65 ms per layer, reflecting the computational overhead of deeper networks. The system successfully detected and processed spikes in near real time, demonstrating that neuromorphic hardware can manage spike-based input at speeds suitable for closed-loop intervention. Discussion - These findings indicate that neuromorphic SNNs can rapidly process neural signals, providing a foundation for closed-loop BCIs capable of bypassing damaged neural pathways. Future efforts will involve implementing stimulation protocols and functional SNNs.
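The reported numbers imply a simple linear latency model: roughly 4.69 ms baseline plus roughly 3.65 ms per hidden layer. Treating that as an extrapolation (not a measurement) gives a quick way to budget latency for deeper networks:

```python
# Roundtrip latency model from the reported figures: ~4.69 ms baseline plus
# ~3.65 ms per hidden SNN layer. A linear extrapolation for budgeting only;
# real measurements would be needed for any specific network depth.
BASELINE_MS = 4.69
PER_LAYER_MS = 3.65

def expected_latency_ms(hidden_layers: int) -> float:
    return BASELINE_MS + PER_LAYER_MS * hidden_layers

for layers in range(4):
    print(layers, round(expected_latency_ms(layers), 2))
```

For example, a three-hidden-layer network would be expected to exceed 15 ms roundtrip, which may already matter for fast closed-loop stimulation targets.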
"A Proof-of-Concept Spike Based Neuromorphic Brain-Computer Interface." E B Dijkema, C M A Pennartz, U Olcese. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2025, pp. 1-7. DOI: 10.1109/EMBC58623.2025.11253485
Pub Date: 2025-07-01 · DOI: 10.1109/EMBC58623.2025.11253249
Frank Kulwa, Doreen S Sarwatt, Mojisola G Asogbon, Jiaming Huang, Rami N Khushaba, Tolulope T Oyemakinde, Guanglin Li, Oluwarotimi W Samuel, Hai Li, Yongcheng Li
Motor intent (MI)-based brain-computer interfaces (BCIs) have been extensively studied to improve the performance and clinical realization of assistive robots for motor recovery in stroke patients. However, their decoding performance remains low. This can be attributed to the low spatial resolution and signal-to-noise ratio of electroencephalography (EEG), which hinder accurate decoding of hand movements and reduce classification performance. We therefore developed a novel feature extraction technique that exploits Levant's differentiators to extract distinct patterns in EEG signals and employs symmetric positive definite (SPD) matrices to effectively leverage the spatial-temporal properties of the EEG signal. Results from nine post-stroke patients and fifteen normal subjects showed improved decoding accuracies of 99.16±0.64% and 99.30±0.69%, respectively, in classifying twenty-four hand motor intents, significantly outperforming existing related methods. The proposed technique thus has the potential to greatly enhance the reliability and effectiveness of EEG-based control systems for post-stroke rehabilitation. Clinical Relevance: The outcome of this study can lead to better control of rehabilitation robots and faster recovery for stroke patients.
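The SPD representation mentioned above is typically the trial's spatial covariance matrix, compared with a metric suited to the SPD manifold. A sketch using the log-Euclidean distance (the paper's exact construction and metric are not specified here, so treat this as one common choice):

```python
import numpy as np

def spd_cov(eeg, eps=1e-6):
    """Spatial covariance of one EEG trial, regularized to be strictly SPD."""
    X = eeg - eeg.mean(axis=1, keepdims=True)
    C = X @ X.T / X.shape[1]
    return C + eps * np.eye(C.shape[0])

def logm(C):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(A, B):
    """Log-Euclidean distance between SPD matrices: a standard way to compare
    spatial-covariance features on the SPD manifold."""
    return np.linalg.norm(logm(A) - logm(B))

rng = np.random.default_rng(7)
trial_a = rng.standard_normal((8, 500))   # 8 channels x 500 samples, synthetic
trial_b = rng.standard_normal((8, 500))
trial_b[0] *= 3.0                          # different spatial power profile

d_same = log_euclidean_dist(spd_cov(trial_a), spd_cov(trial_a))
d_diff = log_euclidean_dist(spd_cov(trial_a), spd_cov(trial_b))
print(d_same < d_diff)  # → True
```

Trials with matching spatial statistics sit close together under this metric while trials with different channel-power profiles are far apart, which is what makes SPD features usable by a nearest-neighbor or tangent-space classifier.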
"A Novel Levant's Differentiator-Based Descriptor for EEG-Based Motor Intent Decoding." Frank Kulwa, Doreen S Sarwatt, Mojisola G Asogbon, Jiaming Huang, Rami N Khushaba, Tolulope T Oyemakinde, Guanglin Li, Oluwarotimi W Samuel, Hai Li, Yongcheng Li. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2025, pp. 1-6. DOI: 10.1109/EMBC58623.2025.11253249
Pub Date: 2025-07-01 · DOI: 10.1109/EMBC58623.2025.11254466
Xinyu Zhang, Ming Xia, Dongmin Huang, Guanghang Liao, Wenjin Wang
Newborns communicate with the outside world primarily by crying. Infant cry-based verification can reduce the risk of mix-ups in hospital obstetrics. Recent studies have explored the potential of using infant cries for identity verification. Yet, model performance remains limited by training with variable-length clips and evaluating the complete audio recording from a single view. To this end, we propose a novel unified training and evaluation framework that uses fixed-length segments during training to ensure input consistency and incorporates a multi-view joint evaluation strategy by associating the audio recording with its local segments. Extensive experiments conducted on the public CryCeleb2023 dataset show that our framework leads to consistent improvements on different verification models. Specifically, the Equal Error Rate (EER) exhibited a reduction of 10.29% for the whisper-PMFA model, 6.63% for the X-Vector model, and 5.91% for the ECAPA-TDNN model. These results demonstrate the effectiveness of our fixed-length segment training and slice-based multi-view evaluation strategy in enhancing the model stability and evaluation accuracy, providing a more robust framework for newborn voice verification. The source code is released at https://github.com/contactless-healthcare/Unified-Infant-Cry-Verification.
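The Equal Error Rate used to compare the verification models can be computed by sweeping a decision threshold over the pair scores until false accepts balance false rejects:

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER: the operating point where the false accept rate (FAR) meets the
    false reject rate (FRR) as the threshold sweeps the verification scores."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)   # 1 = same-infant pair, 0 = impostor
    order = np.argsort(-scores)              # high score = "same infant"
    labels = labels[order]
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # accepting the top-k scores, for k = 0..n
    far = np.concatenate([[0.0], np.cumsum(labels == 0) / n_neg])
    frr = np.concatenate([[1.0], 1.0 - np.cumsum(labels == 1) / n_pos])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

# toy verification trial: scores and ground-truth pair labels
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
print(equal_error_rate(scores, labels))  # → 0.25
```

A reported EER reduction (e.g. the 10.29% for whisper-PMFA) means the crossover point of these two error curves moved lower after applying the unified training and evaluation framework.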
A Unified Learning and Evaluation Framework for Infant Cry-based Verification. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2025, pp. 1-4.
Pub Date: 2025-07-01. DOI: 10.1109/EMBC58623.2025.11254864
Vincenzo Ronca, Gianluca Di Flumeri, Leonardo Lungarini, Rossella Capotorto, Daniele Germano, Andrea Giorgi, Gianluca Borghini, Fabio Babiloni, Pietro Arico
Ocular artifacts, particularly blinks, significantly compromise the integrity of electroencephalographic (EEG) signals, posing a challenge for real-time applications. Traditional correction methods often require a calibration phase or additional electrooculogram (EOG) channels, limiting their applicability in mobile and real-world settings. This study presents CFo-CLEAN, a novel method for online ocular artifact detection and correction that requires no prior calibration. The method integrates an Enhanced Adaptive Data-driven Algorithm (eADA) for real-time identification and correction of ocular artifacts directly from EEG signals. Unlike conventional approaches, it adapts dynamically to ongoing EEG variations, enhancing flexibility and performance. CFo-CLEAN was evaluated on EEG data recorded from 38 participants during real-world driving scenarios, with performance compared against established correction techniques, including Independent Component Analysis (ICA), regression-based methods, and subspace reconstruction approaches. The evaluation considered both artifact removal efficiency and EEG signal preservation across different experimental conditions. Results demonstrated that the method effectively reduced ocular artifact contamination while preserving neurophysiological content. Specifically, two implementations, using 60-second and 90-second time windows, were analyzed, revealing that longer windows provided superior EEG signal preservation, particularly in higher frequency bands. These findings validate the effectiveness of CFo-CLEAN for real-time applications, making it a valuable tool for brain-computer interfaces (BCIs), neuroergonomics, and cognitive state monitoring.
By avoiding the need for a calibration phase and incorporating adaptive processing, this method represents a significant advancement in real-time EEG artifact correction, facilitating its deployment in dynamic, real-world environments.
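Of the baselines compared against above, the regression-based approach is the simplest to illustrate: it estimates, by least squares, how strongly a reference ocular channel propagates into each EEG channel and subtracts that contribution. A minimal NumPy sketch under assumed shapes (the function name is illustrative; note that this baseline, unlike the calibration-free method described above, requires a dedicated EOG reference channel):

```python
import numpy as np

def regress_out_eog(eeg, eog):
    """Classic regression-based ocular correction.
    eeg: array of shape (n_channels, n_samples)
    eog: reference ocular channel of shape (n_samples,)
    Returns EEG with the EOG contribution subtracted per channel."""
    eog = eog - eog.mean()                          # center the reference
    eeg_c = eeg - eeg.mean(axis=1, keepdims=True)   # center each channel
    b = eeg_c @ eog / (eog @ eog)                   # per-channel propagation weights
    return eeg - np.outer(b, eog)                   # remove scaled EOG from each channel
```

Because the weights are estimated from the contaminated EEG itself, this baseline can also remove genuine neural activity that happens to correlate with the EOG channel, which is one motivation for the adaptive, calibration-free alternatives discussed in the abstract.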
A Novel Multi-Stage Algorithm for Real-Time Detection and Correction of Ocular Artifacts in EEG: A Calibration-Free Approach. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2025, pp. 1-7.