Pub Date: 2026-01-14 | DOI: 10.1109/TNSRE.2026.3654400
Kimji N. Pellano;Inga Strümke;Daniel Groos;Lars Adde;Pål Haugen;Espen Alexander F. Ihlen
Cerebral Palsy (CP) is a prevalent motor disability in children, for which early detection can significantly improve treatment outcomes. While skeleton-based Graph Convolutional Network (GCN) models have shown promise in automatically predicting CP risk from infant videos, their “black-box” nature raises concerns about clinical explainability. To address this, we introduce a perturbation framework tailored for infant movement features and use it to compare two explainable AI (XAI) methods: Class Activation Mapping (CAM) and Gradient-weighted Class Activation Mapping (Grad-CAM). First, we identify significant and non-significant body keypoints in very low and very high risk infant video snippets based on the XAI attribution scores. We then conduct targeted velocity and angular perturbations, both individually and in combination, on these keypoints to assess how the GCN model’s risk predictions change. Our results indicate that velocity-driven features of the arms, hips, and legs appear to have a dominant influence on CP risk predictions, while angular perturbations have a more modest impact. Furthermore, CAM and Grad-CAM show partial convergence in their explanations for both low and high CP risk groups. Our findings demonstrate the use of XAI-driven movement analysis for early CP prediction, and offer insights into potential movement-based biomarker discovery that warrant further clinical validation.
{"title":"Toward Biomarker Discovery for Early Cerebral Palsy Detection: Evaluating Explanations Through Kinematic Perturbations","authors":"Kimji N. Pellano;Inga Strümke;Daniel Groos;Lars Adde;Pål Haugen;Espen Alexander F. Ihlen","doi":"10.1109/TNSRE.2026.3654400","DOIUrl":"10.1109/TNSRE.2026.3654400","url":null,"abstract":"Cerebral Palsy (CP) is a prevalent motor disability in children, for which early detection can significantly improve treatment outcomes. While skeleton-based Graph Convolutional Network (GCN) models have shown promise in automatically predicting CP risk from infant videos, their “black-box” nature raises concerns about clinical explainability. To address this, we introduce a perturbation framework tailored for infant movement features and use it to compare two explainable AI (XAI) methods: Class Activation Mapping (CAM) and Gradient-weighted Class Activation Mapping (Grad-CAM). First, we identify significant and non-significant body keypoints in very low and very high risk infant video snippets based on the XAI attribution scores. We then conduct targeted velocity and angular perturbations, both individually and in combination, on these keypoints to assess how the GCN model’s risk predictions change. Our results indicate that velocity-driven features of the arms, hips, and legs appear to have a dominant influence on CP risk predictions, while angular perturbations have a more modest impact. Furthermore, CAM and Grad-CAM show partial convergence in their explanations for both low and high CP risk groups. Our findings demonstrate the use of XAI-driven movement analysis for early CP prediction, and offer insights into potential movement-based biomarker discovery that warrant further clinical validation.","PeriodicalId":13419,"journal":{"name":"IEEE Transactions on Neural Systems and Rehabilitation Engineering","volume":"34 ","pages":"750-766"},"PeriodicalIF":5.2,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11352985","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145984698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-13 | DOI: 10.1109/TNSRE.2026.3653788
Xianwei Meng;Jianjun Meng;Guohong Chai;Xinjun Sheng;Xiangyang Zhu
In multiple vision-demanding tasks, accurately controlling a prosthetic hand to approach a target object is particularly challenging for amputees, as visual attention diverted by other tasks forces them to rely heavily on peripheral vision. This study aims to provide an initial validation that functionally effective sensory feedback can enhance the control of prosthetic hands during object approach under divided visual attention. To quantify prosthesis users’ ability to approach and manipulate objects using central and peripheral vision in real-life scenarios, we conducted two experimental tasks (APPROACHING and PINCH) under two visual feedback modes: full-vision and partial-vision. During the approaching process, we compared four feedback conditions: no supplementary sensory feedback (PURE), traditional continuous feedback (CONT), evenly distributed discrete feedback (ADIS), and a novel discrete strategy based on Weber’s law (WDIS) proposed in this study. Task performance was evaluated using metrics such as position error, dispersion, task completion time, and pinch failures, while psychological factors were assessed through a questionnaire. Results show that WDIS enabled more accurate and stable object approach with shorter task completion times, leading to better subsequent manipulation performance. WDIS also provided participants with a better psychological experience, including reduced workload and increased intuitiveness. Overall, WDIS improved prosthetic control and user experience in simplified laboratory settings, providing a foundation for real-world applications.
Title: Discrete Tactile Feedback Based on Weber’s Law Enhances Prosthetic Hand Approaching Performance Under Divided Visual Attention. IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 34, pp. 674–685.
Pub Date: 2026-01-13 | DOI: 10.1109/TNSRE.2026.3653761
Keping Liu;Guang Liu;Zhifei Zhai;Baozhen Nie;Xiaoqin Duan;Changxian Xu;Zhongbo Sun
Assessment of motor function is an important component of a post-stroke rehabilitation program. The traditional assessment process mainly relies on clinical experience and lacks quantitative analysis. To objectively assess the upper limb motor status of post-stroke hemiplegic patients, this study proposes a novel assessment method based on multi-modal feature fusion of the upper limb for task-oriented movement. Features are extracted from each modality and input into corresponding base classifiers. Kinematic and muscle synergies are quantified using singular value decomposition (SVD) and a similarity metric index, and the results are integrated to construct an aggregated classifier for in-depth quantitative assessment of different movement modalities. To exploit the complementary nature of kinematic- and muscular-level assessment results, a multi-modal feature fusion scheme is proposed and a probability-based functional scoring mechanism is generated to comprehensively analyze upper extremity motor function. Experimental results show that integrating synergy analyses into the assessment system improves classification accuracy by 2.39% and 2.31%, respectively, which can be further improved to 90.75% by fusing the features extracted from different modalities. Furthermore, the assessment results of the multi-modal fusion framework are significantly correlated with standard clinical trial scores (r = -0.81, p = 0.0147). These promising results suggest that it is feasible to apply the proposed method to the clinical assessment of hemiplegic patients after stroke.
{"title":"Quantitative Assessment of Upper Limb Multi-Modal Feature Fusion Under Task-Oriented Movement","authors":"Keping Liu;Guang Liu;Zhifei Zhai;Baozhen Nie;Xiaoqin Duan;Changxian Xu;Zhongbo Sun","doi":"10.1109/TNSRE.2026.3653761","DOIUrl":"10.1109/TNSRE.2026.3653761","url":null,"abstract":"Assessment of motor function is an important component of a post-stroke rehabilitation program. The traditional assessment process mainly relies on clinical experience and lacks quantitative analysis. To objectively assess the upper limb motor status of post-stroke hemiplegic patients, this study proposes a novel assessment method based on multi-modal feature fusion of the upper limb for task-oriented movement. Features are extracted from each modal data and input into the corresponding base classifiers. The kinematic and muscle synergy are quantified by singular value decomposition (SVD) and similarity metric index, and the results are integrated to construct an aggregated classifier for in-depth quantitative assessment of different movement modalities. To exploit the complementary nature of kinematic and muscular level assessment results, a multi-modal feature fusion scheme is proposed and a probability-based functional scoring mechanism is generated to comprehensively analyze upper extremity motor function. Experimental results show that integrating synergy analyses into the assessment system improves the classification accuracy by 2.39% and 2.31%, respectively, which can be further improved to 90.75% by fusing the features extracted from different modalities. Furthermore, the assessment results of multi-modal fusion framework are significantly correlated with standard clinical trial scores (<inline-formula> <tex-math>$r$ </tex-math></inline-formula>=-0.81, <inline-formula> <tex-math>$p$ </tex-math></inline-formula>=0.0147). These promising results suggest that it is feasible to apply the proposed method to the clinical assessment of hemiplegic patients after stroke.","PeriodicalId":13419,"journal":{"name":"IEEE Transactions on Neural Systems and Rehabilitation Engineering","volume":"34 ","pages":"711-720"},"PeriodicalIF":5.2,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11348979","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145966030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | DOI: 10.1109/TNSRE.2026.3652812
Shuo Guan;Yuhang Li;Yuanyuan Gao;Ran Yin;Yuxi Luo;Jiuxing Liang;Juan Zhang;Yingchun Zhang;Rihui Li
Advancing neuroimaging modalities for motor cortex analysis is critical for understanding the neural mechanisms underlying fine motor tasks and for expanding clinical applications. Functional Near-Infrared Spectroscopy (fNIRS) is widely used for measuring cortical hemodynamic activity due to its portability and accessibility, but its inherent limitations in spatial resolution and noise sensitivity reduce its utility for precise neural mapping. Diffuse Optical Tomography (DOT) has emerged as a promising alternative with superior spatial resolution and sensitivity. In this study, we performed a systematic comparison of DOT and fNIRS in detecting task-evoked neural activation during a finger-tapping paradigm including four conditions varying by finger type (thumb vs. little finger) and frequency (high vs. low). Our results demonstrated that DOT consistently captured robust activation in motor-related brain regions, even during less demanding conditions, while fNIRS exhibited limited sensitivity. Temporal trace analyses revealed that DOT achieved higher contrast-to-noise ratio (CNR) and contrast-to-background ratio (CBR), validating its enhanced signal quality and ability to distinguish subtle hemodynamic responses. Furthermore, statistical comparisons highlighted significant differences in task-related activations detected by the two modalities, particularly in low-effort conditions. These findings underscore the advantages of DOT over fNIRS, particularly in applications requiring high spatial resolution and sensitivity to subtle neural processes. The results contribute to ongoing efforts to refine optical imaging techniques for motor neuroscience and reinforce DOT’s potential for clinical translation in motor deficit diagnosis, rehabilitation monitoring, and brain-computer interface development.
{"title":"Enhanced Mapping of Finger Movement Representations Using Diffuse Optical Tomography: A Systematic Comparison With fNIRS","authors":"Shuo Guan;Yuhang Li;Yuanyuan Gao;Ran Yin;Yuxi Luo;Jiuxing Liang;Juan Zhang;Yingchun Zhang;Rihui Li","doi":"10.1109/TNSRE.2026.3652812","DOIUrl":"10.1109/TNSRE.2026.3652812","url":null,"abstract":"Advancing neuroimaging modalities for motor cortex analysis is critical for understanding the neural mechanisms underlying fine motor tasks and for expanding clinical applications. Functional Near-Infrared Spectroscopy (fNIRS) is widely used for measuring cortical hemodynamic activity due to its portability and accessibility, but its inherent limitations in spatial resolution and noise sensitivity reduce its utility for precise neural mapping. Diffuse Optical Tomography (DOT) has emerged as a promising alternative with superior spatial resolution and sensitivity. In this study, we performed a systematic comparison of DOT and fNIRS in detecting task-evoked neural activation during a finger-tapping paradigm including four conditions varying by finger type (thumb vs. little finger) and frequency (high vs. low). Our results demonstrated that DOT consistently captured robust activation in motor-related brain regions, even during less demanding conditions, while fNIRS exhibited limited sensitivity. Temporal trace analyses revealed that DOT achieved higher contrast-to-noise ratio (CNR) and contrast-to-background ratio (CBR), validating its enhanced signal quality and ability to distinguish subtle hemodynamic responses. Furthermore, statistical comparisons highlighted significant differences in task-related activations detected by the two modalities, particularly in low-effort conditions. These findings underscore the advantages of DOT over fNIRS, particularly in applications requiring high spatial resolution and sensitivity to subtle neural processes. The results contribute to ongoing efforts to refine optical imaging techniques for motor neuroscience and reinforce DOT’s potential for clinical translation in motor deficit diagnosis, rehabilitation monitoring, and brain-computer interface development.","PeriodicalId":13419,"journal":{"name":"IEEE Transactions on Neural Systems and Rehabilitation Engineering","volume":"34 ","pages":"617-625"},"PeriodicalIF":5.2,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11345245","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145959425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | DOI: 10.1109/TNSRE.2026.3653138
Junbiao Zhu;Kendi Li;Sicong Chen;Haiyun Huang;Yupeng Zhang;Li Hu;Yuanqing Li
For patients with severe extremity motor function impairment, traditional smart ward control methods, such as those using joysticks and touchscreens, are frequently unsuitable due to their limited physical abilities. Consequently, developing an effective brain–computer interface (BCI) suitable for their operation has become an immediate concern. This paper presents a wearable multimodal BCI system for smart ward control, which employs a self-designed wearable headband to capture head rotation and blinking movements. By wearing the headband, users can control a computer cursor on the screen using only head rotation and blinking, and further control devices in a smart ward through self-designed graphical user interfaces (GUIs). The system decodes signals from an inertial measurement unit (IMU) to map head posture to the position of the cursor on the screen, and decodes electrooculography (EOG) and electroencephalography (EEG) signals to detect valid blinks for selecting and activating function buttons. Ten participants were recruited to perform two experimental tasks that simulate the daily needs of patients with extremity motor function issues. All participants fully accomplished the simulated tasks, achieving an average accuracy of 97.0 ± 3.9% and an average response time of 2.39 ± 0.53 s. Unlike traditional step-controlled BCI nursing beds, we designed a continuously controlled nursing bed and achieved satisfactory results. Furthermore, workload evaluation using the NASA Task Load Index (NASA-TLX) revealed that the participants experienced a low workload when using the system. The experimental results demonstrate the effectiveness of our proposed system, indicating significant potential for practical applications.
{"title":"Smart Ward Control Based on a Wearable Multimodal Brain–Computer Interface Mouse","authors":"Junbiao Zhu;Kendi Li;Sicong Chen;Haiyun Huang;Yupeng Zhang;Li Hu;Yuanqing Li","doi":"10.1109/TNSRE.2026.3653138","DOIUrl":"10.1109/TNSRE.2026.3653138","url":null,"abstract":"For patients with severe extremity motor function impairment, traditional smart ward control methods, such as those using joysticks and touchscreens, are frequently unsuitable due to their limited physical abilities. Consequently, developing an effective brain–computer interface (BCI) suitable for their operation has become an immediate concern. This paper presents a wearable multimodal BCI system for smart ward control, which employs a self-designed wearable headband to capture head rotation and blinking movement. By wearing the headband, users can control a computer cursor on the screen only with head rotation and blinking, and further control devices in a smart ward with self-designed graphical user interfaces (GUIs). The system decodes signals from an inertial measurement unit (IMU) to map the head posture to the position of the cursor on the screen and decodes electrooculography (EOG) and electroencephalography (EEG) signals to detect valid blinks for selecting and activating function buttons. Ten participants were recruited to perform two experimental tasks that simulate the daily needs of patients with extremity motor function issues. To our satisfaction, all the participants fully accomplished the simulated tasks, and an average accuracy of <inline-formula> <tex-math>$97.0pm 3.9$ </tex-math></inline-formula> % and an average response time of <inline-formula> <tex-math>$2.39pm 0.53$ </tex-math></inline-formula> s were achieved. Different from traditional step-controlled BCI nursing beds, we designed a continuous-controlled nursing bed and achieved satisfactory results. Furthermore, workload evaluation using NASA Task Load Index (NASA-TLX) revealed that the participants experienced a low workload when using the system. The experimental results demonstrate the effectiveness of our proposed system, indicating significant potential for practical applications.","PeriodicalId":13419,"journal":{"name":"IEEE Transactions on Neural Systems and Rehabilitation Engineering","volume":"34 ","pages":"638-649"},"PeriodicalIF":5.2,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11346927","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145959445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | DOI: 10.1109/TNSRE.2026.3651786
Mobeena Jamshed;Ahsan Shahzad;Kiseon Kim
Early detection of Mild Cognitive Impairment (MCI), a prodromal stage of dementia, plays a pivotal role in enabling timely clinical intervention and slowing cognitive decline. This paper presents a multi-sensor balance assessment framework designed to identify MCI-related postural instabilities using a wearable inertial measurement unit (IMU) network. The proposed system employs five synchronized IMUs placed at the waist, thighs, and shanks to capture balance dynamics across four static balance tasks: Eyes-Open, Eyes-Closed, Right-Leg Lift, and Left-Leg Lift. A three-stage feature selection strategy, comprising variance and correlation pruning, univariate filtering, and embedded model selection, is implemented within a Leave-One-Subject-Out (LOSO) cross-validation scheme to extract discriminative sway features. Classification using Support Vector Machines and tree-based ensemble models consistently yields superior results, achieving accuracies between 71.7% and 79.2%, with the highest performance observed in the Eyes-Open condition. A compact 10-feature subset demonstrates stable and robust discriminative power across all tasks. Compared to a single-sensor baseline, the multi-sensor configuration provides improved classification performance, underscoring the feasibility of compact, balance-driven, non-invasive MCI screening through wearable sensor systems.
{"title":"Early Detection of Mild Cognitive Impairment Through Balance Assessment Using Multi-Location Wearable Inertial Sensors","authors":"Mobeena Jamshed;Ahsan Shahzad;Kiseon Kim","doi":"10.1109/TNSRE.2026.3651786","DOIUrl":"10.1109/TNSRE.2026.3651786","url":null,"abstract":"Early detection of Mild Cognitive Impairment (MCI), a prodromal stage of dementia, plays a pivotal role in enabling timely clinical intervention and slowing cognitive decline. This paper presents a multi-sensor balance assessment framework designed to identify MCI-related postural instabilities using a wearable inertial measurement unit (IMU) network. The proposed system employs five synchronized IMUs placed at the waist, thighs, and shanks to capture balance dynamics across four static balance tasks: Eyes-Open, Eyes-Closed, Right-Leg Lift, and Left-Leg Lift. A three-stage feature selection strategy, comprising variance and correlation pruning, univariate filtering, and embedded model selection, is implemented within a Leave-One-Subject-Out (LOSO) cross-validation scheme to extract discriminative sway features. Classification using Support Vector Machines and tree-based ensemble models consistently yields superior results, achieving accuracies between 71.7% and 79.2%, with the highest performance observed in the Eyes-Open condition. A compact 10-feature subset demonstrates stable and robust discriminative power across all tasks. Compared to a single-sensor baseline, the multi-sensor configuration provides improved classification performance, underscoring the feasibility of compact, balance-driven, non-invasive MCI screening through wearable sensor systems.","PeriodicalId":13419,"journal":{"name":"IEEE Transactions on Neural Systems and Rehabilitation Engineering","volume":"34 ","pages":"552-562"},"PeriodicalIF":5.2,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11342299","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145959431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | DOI: 10.1109/TNSRE.2026.3651761
Bence Mark Halpern;Wen-Chin Huang;Lester Phillip Violeta;Tomoki Toda
This article presents a new pathological text-to-speech (TTS) synthesis system that can control speech severity using latent interpolations. Recognizing the difficulty of this task, our work uses a data augmentation technique to generate the single-speaker, multi-severity training dataset required for training such a model. Furthermore, we show that x-vectors already contain information about severity and leverage them as a conditioning variable for synthesis. Finally, we propose modifications to the GradTTS architecture to enhance the duration modeling of pathological speech. We carry out objective and subjective evaluations to demonstrate that the proposed GradTTS system performs well, producing more natural, controllable, and stable pathological speech samples than the baseline TransformerTTS system.
{"title":"Severity-Controllable Pathological Text-to-Speech Synthesis for Clinical Applications","authors":"Bence Mark Halpern;Wen-Chin Huang;Lester Phillip Violeta;Tomoki Toda","doi":"10.1109/TNSRE.2026.3651761","DOIUrl":"10.1109/TNSRE.2026.3651761","url":null,"abstract":"The article presents a new pathological text-to-speech (TTS) synthesis system that has the ability to control speech severity using latent interpolations. Recognizing the difficulty of this task, our work uses a data augmentation technique to generate a single-speaker multi-severity training dataset required for training such a model. Furthermore, we show how x-vectors already contain information about the severity and leverage it as a conditioning variable for the synthesis. Finally, we propose modifications to the GradTTS architecture to enhance the duration modeling of pathological speech. We carry out objective and subjective evaluations to demonstrate that the proposed GradTTS system works well, and produces more natural, controllable, and stable pathological speech samples than the baseline TransformerTTS system.","PeriodicalId":13419,"journal":{"name":"IEEE Transactions on Neural Systems and Rehabilitation Engineering","volume":"34 ","pages":"573-582"},"PeriodicalIF":5.2,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11342311","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145959443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | DOI: 10.1109/TNSRE.2026.3653182
Lijiang Luan;Roger Adams;Evangelos Pappas;Adrian Pranata;Gordon Waddington;Jie Lyu;Jia Han
Individual differences in the biomechanical characteristics of chronic ankle instability (CAI) and the heterogeneity in treatment responses suggest that CAI may have distinguishable subtypes. However, the existing selection criteria for CAI are limited, and the current CAI model groups various types of ankle instability without any precise differentiation of subtypes. This study aimed to apply clustering analysis to identify distinct CAI subtypes. An ordered dataset representing three CAI types (perceived ankle instability (PAI), functional ankle instability (FAI), and mechanical ankle instability (MAI)) was designed, and the K-means clustering algorithm was then applied to clinical data from 210 participants, including individuals with CAI, copers, and healthy people. Clustering analysis was performed using the Cumberland Ankle Instability Tool (CAIT), Identification of Functional Ankle Instability (IdFAI), and anterior drawer test (ADT) scores as indicators, followed by dimensionality reduction and cluster validation. The K-means clustering algorithm identified five distinct CAI subtypes: PAI, FAI, PAI+FAI, PAI+FAI+MAI, and Sub-coper. The clustering model based on clinical data confirmed the absence of pure MAI and showed that CAI patients could present with varying levels of instability. The most prevalent subtype might be a combination of PAI and FAI. This study demonstrates that, by using clustering analysis, CAI can be categorized into distinct subtypes, offering a more precise diagnostic framework. This approach supports the development of subgroup-based management strategies for CAI and highlights the need for updated selection criteria for CAI.
{"title":"Unraveling Chronic Ankle Instability: A Data-Driven Clustering Approach to Redefine Subtypes and Improve Diagnosis","authors":"Lijiang Luan;Roger Adams;Evangelos Pappas;Adrian Pranata;Gordon Waddington;Jie Lyu;Jia Han","doi":"10.1109/TNSRE.2026.3653182","DOIUrl":"10.1109/TNSRE.2026.3653182","url":null,"abstract":"Individual differences in the biomechanical characteristics of chronic ankle instability (CAI) and the heterogeneity in treatment responses suggest that CAI may have distinguishable subtypes. However, the existing selection criteria for CAI are limited, and the current CAI model groups various types of ankle instability without any precise differentiation of subtypes. This study aimed to apply clustering analysis to identify distinct CAI subtypes. An ordered dataset representing three CAI types (perceived ankle instability (PAI), functional ankle instability (FAI), and mechanical ankle instability (MAI)) was designed, and the K-means clustering algorithm was then applied to clinical data from 210 participants, including individuals with CAI, copers, and healthy people. Clustering analysis was performed using the Cumberland Ankle Instability Tool (CAIT), Identification of Functional Ankle Instability (IdFAI), and anterior drawer test (ADT) scores as indicators, followed by dimensionality reduction and cluster validation. The K-Means clustering algorithm identified five distinct CAI subtypes: PAI, FAI, PAI+FAI, PAI+FAI+MAI, and Sub-coper. The clustering model based on clinical data confirmed the absence of pure MAI and showed that CAI patients could present with varying levels of instability. The most prevalent subtype might be a combination of PAI and FAI. This study demonstrates that, by using clustering analysis, CAI can be categorized into distinct subtypes, offering a more precise diagnostic framework. This approach supports the development of subgroup-based management strategies for CAI and highlights the need for updated selection criteria for CAI.","PeriodicalId":13419,"journal":{"name":"IEEE Transactions on Neural Systems and Rehabilitation Engineering","volume":"34 ","pages":"894-905"},"PeriodicalIF":5.2,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11346862","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145958763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | DOI: 10.1109/TNSRE.2026.3652858
Imad Eddine Tibermacine;Samuele Russo;Christian Napoli
Electroencephalographic (EEG) decoding relies heavily on second-order (covariance) structure that lives on the manifold of symmetric positive-definite (SPD) matrices. Conventional deep networks in Euclidean space ignore this geometry, distorting geodesic relations between covariances; classical Riemannian pipelines respect SPD metrics but typically use fixed projections and a single global tangent embedding, which limits task adaptivity and incurs cubic costs in the channel dimension. We propose a fully geometry-consistent architecture that preserves manifold structure end-to-end while remaining trainable at scale. A compact depthwise-separable convolutional neural network (CNN) produces features whose regularized covariances lie on the SPD manifold. A learnable orthonormal projection, optimized on the Stiefel manifold via Riemannian stochastic gradient descent (SGD) with QR-factorization retraction, reduces dimensionality without breaking positive-definiteness and preserves an eigenvalue floor. We then perform tangent-space graph-SPD aggregation on a scalp k-nearest-neighbor graph: neighbor covariances are transported to the reference tangent space, attention-averaged, and mapped back via the exponential map, followed by a log-Euclidean mapping and linear softmax classification. This Stiefel → Graph-SPD → log chain explains why full geometric consistency matters: it avoids Euclidean shortcuts, keeps all intermediates SPD, and makes log/exp costs cubic in the reduced rank d. In cross-subject evaluation on three public datasets, the model attains 83.2%/81.5%/79.7% accuracy with improved macro-F1, strong separability (macro-AUROC ≈ 0.90), and well-calibrated probabilities (ECE ≤ 0.04), outperforming strong Euclidean CNNs and Riemannian baselines while remaining computationally pragmatic.
{"title":"Stiefel-SPD Manifold Graph Convolution for End-to-End EEG Learning","authors":"Imad Eddine Tibermacine;Samuele Russo;Christian Napoli","doi":"10.1109/TNSRE.2026.3652858","DOIUrl":"10.1109/TNSRE.2026.3652858","url":null,"abstract":"Electroencephalographic (EEG) decoding relies heavily on second-order (covariance) structure that lives on the manifold of symmetric positive-definite (SPD) matrices. Conventional deep networks in Euclidean space ignore this geometry, distorting geodesic relations between covariances; classical Riemannian pipelines respect SPD metrics but typically use fixed projections and a single global tangent embedding, which limits task adaptivity and incurs cubic costs in the channel dimension. We propose a fully geometry-consistent architecture that preserves manifold structure end-to-end while remaining trainable at scale. A compact depthwise-separable convolutional neural network (CNN) produces features whose regularized covariances lie on the SPD manifold. A learnable orthonormal projection, optimized on the Stiefel manifold via Riemannian stochastic gradient descent (SGD) with QR-factorization (QR) retraction, reduces dimensionality without breaking positive-definiteness and preserves an eigenvalue floor. We then perform tangent space graph-SPD aggregation on a scalp <inline-formula> <tex-math>$k$ </tex-math></inline-formula>-nearest-neighbor graph—neighbor covariances are transported to the reference tangent space, attention-averaged, and mapped back via the exponential—followed by a log-Euclidean mapping and linear softmax classification. This Stiefel<inline-formula> <tex-math>$!to $ </tex-math></inline-formula>Graph-SPD<inline-formula> <tex-math>$!to log $ </tex-math></inline-formula> chain explains why full geometric consistency matters: it avoids Euclidean shortcuts, keeps all intermediates SPD, and makes log/exp costs cubic in the reduced rank <inline-formula> <tex-math>$d$ </tex-math></inline-formula>. In cross-subject evaluation on three public datasets, the model attains <inline-formula> <tex-math>${83}.{2}%!/!{81}.{5}%!/!{79}.{7}%$ </tex-math></inline-formula> accuracy with improved macro-<inline-formula> <tex-math>${F}_{{1}}$ </tex-math></inline-formula>, strong separability (macro-AUROC <inline-formula> <tex-math>$approx {0}.{90}$ </tex-math></inline-formula>), and well-calibrated probabilities (ECE <inline-formula> <tex-math>$le {0}.{04}$ </tex-math></inline-formula>), outperforming strong Euclidean CNNs and Riemannian baselines while remaining computationally pragmatic.","PeriodicalId":13419,"journal":{"name":"IEEE Transactions on Neural Systems and Rehabilitation Engineering","volume":"34 ","pages":"595-606"},"PeriodicalIF":5.2,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11345236","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145959501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | DOI: 10.1109/TNSRE.2026.3651949
Ilaria Siviero;Nicola Valè;Gloria Menegaz;Ander Ramos-Murguialday;Silvia Francesca Storti
Non-invasive neural interfaces (NIs) are increasingly investigated in upper limb neurorehabilitation, where they exploit biosignals such as electroencephalography (EEG) and electromyography (EMG) to decode motor intentions using artificial intelligence (AI). Yet traditional systems are complex and difficult to use outside the clinic. Wearable devices hold potential for innovative neurorehabilitation solutions thanks to their comfort, ease of use, and suitability for long-term monitoring. However, current AI approaches require adaptation to the technical constraints of wearable devices, and the related state of the art has not been clearly summarized. In this work, a systematic literature review of 51 studies was conducted, analyzing them according to five key concepts: biosignals, wearable devices, AI-driven methods, upper limb, and clinical applications. The review highlights methodological heterogeneity, a variety of wearable sensor configurations, and open challenges related to accuracy, robustness, and clinical validation. Finally, we discuss how explainable AI (XAI) and generative AI (GenAI) may contribute to improving the interpretability and personalization of future neurorehabilitation systems.
Title: Artificial Intelligence and Wearable Technologies for Upper Limb Neurorehabilitation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 34, pp. 732–749.