Mechanical and thermal stimulation for studying the somatosensory system: a review on devices and methods
M Sperduti, N L Tagliamonte, F Taffoni, E Guglielmelli, L Zollo
Pub Date: 2024-09-03 | DOI: 10.1088/1741-2552/ad716d
The somatosensory system is widely studied to understand its functioning mechanisms. Multiple tests, based on different devices and methods, have been performed not only on humans but also on animals and ex-vivo models. Depending on the nature of the sample under analysis and on the scientific aims of interest, several solutions for experimental stimulation and for investigations on sensation or pain have been adopted. In this review paper, we report an overview of the available devices and methods, also analyzing the representative values adopted in literature experiments. Among the various physical stimulations used to study the somatosensory system, we focused only on mechanical and thermal ones. Based on the analysis of their main features and on literature studies, we point out the most suitable solutions for humans, rodents, and ex-vivo models, and for the different investigation aims (sensation and pain).
A simplified adversarial architecture for cross-subject silent speech recognition using electromyography
Qiang Cui, Xingyu Zhang, Yakun Zhang, Changyan Zheng, Liang Xie, Ye Yan, Edmond Q Wu, Erwei Yin
Pub Date: 2024-09-03 | DOI: 10.1088/1741-2552/ad7321
Objective. The decline in the performance of electromyography (EMG)-based silent speech recognition is widely attributed to disparities in speech patterns, articulation habits, and individual physiology among speakers. Feature alignment by learning a discriminative network that resolves domain offsets across speakers is an effective way to address this problem. However, the prevailing adversarial network, with a branching discriminator specializing in domain discrimination, contributes only indirectly to the categorical predictions of the classifier. Approach. To this end, we propose a simplified discrepancy-based adversarial network with a streamlined end-to-end structure for EMG-based cross-subject silent speech recognition. Highly aligned features across subjects are obtained by introducing a Nuclear-norm Wasserstein discrepancy metric at the back end of the classification network, which can be utilized for both classification and domain discrimination. Given the low-level and implicitly noisy nature of myoelectric signals, we devise a cascaded adaptive rectification network as the front-end feature extractor, adaptively reshaping the intermediate feature map with automatically learnable channel-wise thresholds. The resulting features effectively filter out domain-specific information between subjects while retaining the domain-invariant features critical for cross-subject recognition. Main results. Sentence-level classification experiments with 100 Chinese sentences demonstrate the efficacy of our method, which achieves an average accuracy of 89.46% on 40 new subjects when trained with data from 60 subjects. Notably, our method achieves a 10.07% improvement over the state-of-the-art model when tested on 10 new subjects with 20 subjects used for training, surpassing that model's result even when it is trained on three times as many subjects. Significance. Our study demonstrates the improved classification performance of the proposed adversarial architecture on cross-subject myoelectric signals, offering a promising prospect for EMG-based speech-interaction applications.
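The nuclear-norm idea mentioned in the abstract can be illustrated with a small sketch. This is not the authors' implementation (their metric combines the nuclear norm with a Wasserstein discrepancy); the hypothetical `batch_nuclear_norm` helper below only shows why the nuclear norm of the batch-by-class probability matrix is a useful alignment objective: it grows when predictions are simultaneously confident and class-diverse.

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with max subtraction for numerical stability.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def batch_nuclear_norm(logits):
    # Sum of singular values of the (batch x classes) probability matrix.
    # Confident, class-diverse predictions give a larger nuclear norm than
    # uniform (uncertain) ones, so maximizing it encourages discriminability.
    return np.linalg.svd(softmax(logits), compute_uv=False).sum()

rng = np.random.default_rng(0)
confident = np.eye(4)[rng.integers(0, 4, size=32)] * 10.0  # near one-hot rows
uncertain = np.zeros((32, 4))                              # uniform rows
print(batch_nuclear_norm(confident) > batch_nuclear_norm(uncertain))  # True
```

In a training loop this quantity would be computed on the classifier's outputs for a batch of target-subject data and maximized alongside the usual supervised loss on source subjects.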
Predicting cognitive load with EEG using Riemannian geometry-based features
Iris Kremer, Wissam Halimi, Andy Walshe, Moran Cerf, Pablo Mainar
Pub Date: 2024-09-03 | DOI: 10.1088/1741-2552/ad680b
Objective. We show that electroencephalography (EEG)-based cognitive load (CL) prediction using Riemannian geometry features outperforms existing models. Performance is estimated using Riemannian Procrustes Analysis (RPA) with a test set of subjects unseen during training. Approach. Performance is evaluated using a Minimum Distance to Riemannian Mean model trained on CL classification. Baseline performance is established using spatial covariance matrices of the signal as features. Various novel features are explored and analyzed in depth, including spatial covariance and correlation matrices computed on the EEG signal and on its first-order derivative. Furthermore, the effect of each RPA step on performance is investigated, and the generalization performance of RPA is compared against several other generalization methods. Main results. Performance is greatly improved by using the spatial covariance matrix of the first-order derivative of the signal as the feature. This work also highlights both the importance and the efficiency of RPA for CL prediction: it achieves good generalizability with small amounts of calibration data and largely outperforms all the comparison methods. Significance. CL prediction using RPA for cross-subject generalization is an approach worth exploring further, especially for real-world applications where calibration time is limited. Furthermore, the feature exploration uncovers new, promising features that can be used and further tested in any Riemannian geometry setting.
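The feature-and-classifier combination described above (covariance of the signal's first derivative, classified by minimum Riemannian distance to a class mean) can be sketched as follows. This is a simplified illustration, not the study's code: it uses the affine-invariant metric and, for brevity, an arithmetic mean of the class covariances in place of the true Riemannian (geometric) mean; all function names are illustrative.

```python
import numpy as np

def spatial_cov(X):
    # X: (channels, samples) EEG segment -> SPD spatial covariance matrix.
    Xc = X - X.mean(axis=1, keepdims=True)
    return Xc @ Xc.T / (Xc.shape[1] - 1)

def airm_distance(A, B):
    # Affine-invariant Riemannian distance between SPD matrices:
    # d(A, B) = sqrt(sum_i log(lambda_i)^2), lambda_i eigenvalues of A^-1 B.
    lam = np.linalg.eigvals(np.linalg.solve(A, B)).real
    return np.sqrt((np.log(lam) ** 2).sum())

def classify_mdm(trial, class_means):
    # Minimum-distance-to-mean on the covariance of the first derivative,
    # the feature the study found most effective.
    C = spatial_cov(np.diff(trial, axis=1))
    d = {label: airm_distance(M, C) for label, M in class_means.items()}
    return min(d, key=d.get)

# Toy data: "high" trials have larger variance on the first four channels.
rng = np.random.default_rng(1)
make = lambda scales: rng.standard_normal((8, 500)) * scales[:, None]
sA, sB = np.ones(8), np.r_[np.full(4, 3.0), np.ones(4)]
means = {label: np.mean([spatial_cov(np.diff(make(s), axis=1)) for _ in range(20)], axis=0)
         for label, s in [("low", sA), ("high", sB)]}
print(classify_mdm(make(sB), means))  # high
```

RPA would be applied before this step, recentering and rescaling each subject's covariance cloud so that the class means transfer across subjects.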
Emotion recognition of EEG signals based on contrastive learning graph convolutional model
Yiling Zhang, Yuan Liao, Wei Chen, Xiruo Zhang, Liya Huang
Pub Date: 2024-08-29 | DOI: 10.1088/1741-2552/ad7060
Objective. Electroencephalogram (EEG) signals offer invaluable insights into the complexities of emotion generation within the brain. Yet the variability of EEG signals across individuals presents a formidable obstacle for practical applications. Our research addresses these challenges by focusing on the commonalities within distinct subjects' EEG data. Approach. We introduce a novel approach named the Contrastive Learning Graph Convolutional Network (CLGCN). This method captures the distinctive features and crucial channel nodes related to individuals' emotional states. Specifically, CLGCN merges the dual benefits of contrastive learning's synchronous multisubject data learning and the GCN's proficiency in deciphering brain connectivity matrices. Understanding multifaceted brain functions and their information-interchange processes is realized as CLGCN generates a standardized brain network learning matrix during training on a dataset. Main results. Our model underwent rigorous testing on the Database for Emotion Analysis using Physiological Signals (DEAP) and the SEED datasets. In five-fold cross-validation under the subject-dependent experimental setting, it achieved an accuracy of 97.13% on the DEAP dataset and surpassed 99% on the SEED and SEED_IV datasets. In incremental learning experiments on the SEED dataset, merely 5% of the data was sufficient to fine-tune the model, resulting in an accuracy of 92.8% for the new subject. These findings validate the model's efficacy. Significance. This work combines contrastive learning with GCNs, improving the accuracy of decoding emotional states from EEG signals and offering valuable insights into the underlying mechanisms of emotional processes in the brain.
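The abstract does not detail the CLGCN architecture, but its graph-convolutional half rests on a standard propagation rule. As a generic sketch (a Kipf-Welling-style layer; treating EEG channels as graph nodes is our assumption, not the paper's stated design):

```python
import numpy as np

def gcn_layer(A, H, W):
    # One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    # Nodes are EEG channels; A is a symmetric channel-connectivity matrix.
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)       # ReLU

# Toy example: 4 channels in a ring, 5 input features -> 2 output features.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = np.random.default_rng(2).standard_normal((4, 5))
W = np.random.default_rng(3).standard_normal((5, 2))
out = gcn_layer(A, H, W)
print(out.shape)  # (4, 2)
```

In CLGCN the learned adjacency would play the role of the "standardized brain network learning matrix" the abstract describes, with the contrastive objective applied to embeddings from multiple subjects.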
Epileptic network identification: insights from dynamic mode decomposition of sEEG data
Alejandro Nieto Ramos, Balu Krishnan, Andreas V Alexopoulos, William Bingaman, Imad Najm, Juan C Bulacio, Demitre Serletis
Pub Date: 2024-08-29 | DOI: 10.1088/1741-2552/ad705f
Objective. For patients with medically refractory epilepsy, stereoelectroencephalography (sEEG) is a surgical method using intracranial electrode recordings to identify brain networks participating in early seizure organization and propagation (i.e. the epileptogenic zone, EZ). If identified, surgical treatment of the EZ via resection, ablation or neuromodulation can lead to seizure freedom. To date, quantification of sEEG data, including its visualization and interpretation, remains a clinical and computational challenge. Given the elusiveness of physical laws or governing equations modelling complex brain dynamics, data science offers unique insight into identifying unknown patterns within high-dimensional sEEG data. Here we apply an unsupervised data-driven algorithm, dynamic mode decomposition (DMD), to sEEG recordings from five focal epilepsy patients (three with temporal lobe and two with cingulate epilepsy) who underwent subsequent resective or ablative surgery and became seizure free. Approach. DMD obtains a linear approximation of nonlinear data dynamics, generating coherent structures ('modes') that define important signal features and are used to extract frequencies, growth rates and spatial structures. DMD was adapted to produce dynamic modal maps (DMMs) across frequency sub-bands, capturing the onset and evolution of epileptiform dynamics in sEEG data. Additionally, we developed a static estimate of EZ-localized electrode contacts, termed the higher-frequency mode-based norm index (MNI). DMM and MNI maps for representative patient seizures were validated against clinical sEEG results and seizure-free outcomes following surgery. Main results. DMD was most informative at higher frequencies, i.e. the gamma (including high-gamma) and beta ranges, successfully identifying EZ contacts. Combined interpretation of DMM/MNI plots best identified the spatiotemporal evolution of mode-specific network changes, with strong concordance to sEEG results and outcomes across all five patients. The method also identified network attenuation in contacts not implicated in the EZ. Significance. This is the first application of DMD to sEEG data analysis, supporting the integration of neuroengineering, mathematical and machine learning methods into traditional workflows for sEEG review and epilepsy surgical decision-making.
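The core of exact DMD is compact enough to sketch. The following is a generic implementation (not the authors' adapted DMM/MNI pipeline) showing how per-mode frequencies and growth rates fall out of the eigenvalues of the reduced operator:

```python
import numpy as np

def dmd(X, dt, r):
    # Exact DMD of a (channels x time) array X sampled every dt seconds.
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]                  # rank-r truncation
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s         # reduced linear operator
    lam, W = np.linalg.eig(A_tilde)
    modes = (X2 @ Vh.conj().T / s) @ W                  # exact DMD modes
    omega = np.log(lam) / dt                            # continuous-time eigenvalues
    return modes, omega.imag / (2 * np.pi), omega.real  # modes, freq (Hz), growth (1/s)

# A pure 10 Hz oscillation on two channels is recovered exactly at rank 2.
t = np.arange(0, 1, 0.001)
X = np.vstack([np.cos(2 * np.pi * 10 * t), np.sin(2 * np.pi * 10 * t)])
_, freqs, growth = dmd(X, dt=0.001, r=2)
print(np.round(sorted(freqs), 3))  # [-10.  10.]
```

Band-specific maps like the DMMs described above would then be built by grouping modes whose recovered frequencies fall in a given sub-band (e.g. beta or gamma) and inspecting the per-contact magnitudes of those modes.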
Neural subspaces of imagined movements in parietal cortex remain stable over several years in humans
L Bashford, I A Rosenthal, S Kellis, D Bjånes, K Pejsa, B W Brunton, R A Andersen
Pub Date: 2024-08-28 | DOI: 10.1088/1741-2552/ad6e19
Objective. A crucial goal in brain-machine interfacing is the long-term stability of neural decoding performance, ideally without regular retraining. Long-term stability has previously been demonstrated only in non-human primate experiments, and only in primary sensorimotor cortices. Here we extend previous methods to determine long-term stability in humans by identifying and aligning low-dimensional structures in neural data. Approach. Over periods of 1106 and 871 days, respectively, two participants completed an imagined center-out reaching task. Longitudinal accuracy between all day pairs was assessed by latent subspace alignment, using principal components analysis and canonical correlations analysis of multi-unit intracortical recordings from different brain regions (Brodmann Area 5, the Anterior Intraparietal Area, and the junction of the postcentral and intraparietal sulci). Main results. We show long-term stable representation of neural activity in subspaces of intracortical recordings from higher-order association areas in humans. Significance. These results can be practically applied to significantly extend the longevity and generalizability of brain-computer interfaces. Clinical trials: NCT01849822, NCT01958086, NCT01964261.
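A minimal sketch of the alignment idea above: reduce each day's recording to a low-dimensional latent space with PCA, then use CCA to find the rotation relating the two days' latents. The helper names are illustrative, not the study's code, and this omits trial averaging and the decoding step.

```python
import numpy as np

def pca_project(X, k):
    # Project (trials x units) data onto its top-k principal axes.
    Xc = X - X.mean(axis=0)
    _, _, Vh = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vh[:k].T

def cca(Y1, Y2):
    # Canonical correlations between two (trials x k) latent trajectories:
    # whiten each via SVD, then take singular values of the cross-product.
    def orthobasis(Y):
        U, _, _ = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
        return U
    U1, U2 = orthobasis(Y1), orthobasis(Y2)
    Uc, rho, Vch = np.linalg.svd(U1.T @ U2)
    return U1 @ Uc, U2 @ Vch.T, rho   # aligned trajectories + correlations

# If day 2 is just a rotated copy of day 1's latents, the canonical
# correlations are all ~1: the subspace content is unchanged.
rng = np.random.default_rng(4)
Y1 = pca_project(rng.standard_normal((200, 50)), 10)
R, _ = np.linalg.qr(rng.standard_normal((10, 10)))   # random rotation
_, _, rho = cca(Y1, Y1 @ R)
print(np.allclose(rho, 1.0))  # True
```

High canonical correlations between day pairs, as in the toy example, are the signature of a stable subspace: the neural code may rotate within it, but the subspace itself persists.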
Iterative alignment discovery of speech-associated neural activity
Qinwan Rabbani, Samyak Shah, Griffin Milsap, Matthew Fifer, Hynek Hermansky, Nathan Crone
Pub Date: 2024-08-28 | DOI: 10.1088/1741-2552/ad663c
Objective. Brain-computer interfaces (BCIs) have the potential to preserve or restore speech in patients with neurological disorders that weaken the muscles involved in speech production. However, successful training of low-latency speech synthesis and recognition models requires aligning neural activity with the intended phonetic or acoustic output with high temporal precision. This is particularly challenging in patients who cannot produce audible speech, as the ground truth with which to pinpoint neural activity synchronized with speech is not available. Approach. In this study, we present a new iterative algorithm for neural voice activity detection (nVAD), called iterative alignment discovery dynamic time warping (IAD-DTW), that integrates DTW into the loss function of a deep neural network (DNN). The algorithm is designed to discover the alignment between a patient's electrocorticographic (ECoG) neural responses and their attempts to speak during the collection of data for training BCI decoders for speech synthesis and recognition. Main results. To demonstrate the effectiveness of the algorithm, we tested its accuracy in predicting the onset and duration of acoustic signals produced by able-bodied patients with intact speech undergoing short-term diagnostic ECoG recordings for epilepsy surgery. We simulated a lack of ground truth by randomly perturbing the temporal correspondence between neural activity and an initial single estimate of all speech onsets and durations, and examined the model's ability to overcome these perturbations and recover the ground truth. IAD-DTW showed no notable degradation in performance (<1% absolute decrease in accuracy) in these simulations, even in the case of maximal misalignment between speech and silence. Significance. IAD-DTW is computationally inexpensive and can be easily integrated into existing DNN-based nVAD approaches, as it pertains only to the final loss computation. This approach makes it possible to train speech BCI algorithms using ECoG data from patients who are unable to produce audible speech, including those with locked-in syndrome.
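IAD-DTW's novelty lies in embedding alignment discovery in the training loss, but the underlying DTW recurrence is standard. A plain sketch on 1-D sequences (e.g. binary voice-activity labels), separate from any network or loss integration:

```python
import numpy as np

def dtw_cost(a, b):
    # Classical dynamic-time-warping cost between 1-D sequences a and b,
    # using absolute difference as the point-wise cost. A time-stretched
    # copy of a sequence aligns to the original with zero cost.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(a[i - 1] - b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

vad       = [0, 0, 1, 1, 0]            # reference on/off speech labels
stretched = [0, 0, 0, 1, 1, 1, 0, 0]   # same pattern, warped in time
print(dtw_cost(vad, stretched))  # 0.0
```

This invariance to monotonic time warps is what lets a rough initial estimate of speech onsets and durations be pulled into register with the neural data over training iterations.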
Pub Date : 2024-08-22DOI: 10.1088/1741-2552/ad6a8d
Ashley N Dalrymple, Lee E Fisher, Douglas J Weber
Objective. Phantom limb pain (PLP) is debilitating and affects over 70% of people with lower-limb amputation. Other neuropathic pain conditions correspond with increased spinal excitability, which can be measured using reflexes andF-waves. Spinal cord neuromodulation can be used to reduce neuropathic pain in a variety of conditions and may affect spinal excitability, but has not been extensively used for treating PLP. Here, we propose using a non-invasive neuromodulation method, transcutaneous spinal cord stimulation (tSCS), to reduce PLP and modulate spinal excitability after transtibial amputation.Approach. We recruited three participants, two males (5- and 9-years post-amputation, traumatic and alcohol-induced neuropathy) and one female (3 months post-amputation, diabetic neuropathy) for this 5 d study. We measured pain using the McGill Pain Questionnaire (MPQ), visual analog scale (VAS), and pain pressure threshold (PPT) test. We measured spinal reflex and motoneuron excitability using posterior root-muscle (PRM) reflexes andF-waves, respectively. We delivered tSCS for 30 min d-1for 5 d.Main Results. After 5 d of tSCS, MPQ scores decreased by clinically-meaningful amounts for all participants from 34.0 ± 7.0-18.3 ± 6.8; however, there were no clinically-significant decreases in VAS scores. Two participants had increased PPTs across the residual limb (Day 1: 5.4 ± 1.6 lbf; Day 5: 11.4 ± 1.0 lbf).F-waves had normal latencies but small amplitudes. PRM reflexes had high thresholds (59.5 ± 6.1μC) and low amplitudes, suggesting that in PLP, the spinal cord is hypoexcitable. After 5 d of tSCS, reflex thresholds decreased significantly (38.6 ± 12.2μC;p< 0.001).Significance. These preliminary results in this non-placebo-controlled study suggest that, overall, limb amputation and PLP may be associated with reduced spinal excitability and tSCS can increase spinal excitability and reduce PLP.
{"title":"A preliminary study exploring the effects of transcutaneous spinal cord stimulation on spinal excitability and phantom limb pain in people with a transtibial amputation.","authors":"Ashley N Dalrymple, Lee E Fisher, Douglas J Weber","doi":"10.1088/1741-2552/ad6a8d","DOIUrl":"10.1088/1741-2552/ad6a8d","url":null,"abstract":"<p><p><i>Objective</i>. Phantom limb pain (PLP) is debilitating and affects over 70% of people with lower-limb amputation. Other neuropathic pain conditions correspond with increased spinal excitability, which can be measured using reflexes and<i>F</i>-waves. Spinal cord neuromodulation can be used to reduce neuropathic pain in a variety of conditions and may affect spinal excitability, but has not been extensively used for treating PLP. Here, we propose using a non-invasive neuromodulation method, transcutaneous spinal cord stimulation (tSCS), to reduce PLP and modulate spinal excitability after transtibial amputation.<i>Approach</i>. We recruited three participants, two males (5- and 9-years post-amputation, traumatic and alcohol-induced neuropathy) and one female (3 months post-amputation, diabetic neuropathy) for this 5 d study. We measured pain using the McGill Pain Questionnaire (MPQ), visual analog scale (VAS), and pain pressure threshold (PPT) test. We measured spinal reflex and motoneuron excitability using posterior root-muscle (PRM) reflexes and<i>F</i>-waves, respectively. We delivered tSCS for 30 min d<sup>-1</sup>for 5 d.<i>Main Results</i>. After 5 d of tSCS, MPQ scores decreased by clinically-meaningful amounts for all participants from 34.0 ± 7.0-18.3 ± 6.8; however, there were no clinically-significant decreases in VAS scores. Two participants had increased PPTs across the residual limb (Day 1: 5.4 ± 1.6 lbf; Day 5: 11.4 ± 1.0 lbf).<i>F</i>-waves had normal latencies but small amplitudes. PRM reflexes had high thresholds (59.5 ± 6.1<i>μ</i>C) and low amplitudes, suggesting that in PLP, the spinal cord is hypoexcitable. After 5 d of tSCS, reflex thresholds decreased significantly (38.6 ± 12.2<i>μ</i>C;<i>p</i>< 0.001).<i>Significance</i>. These preliminary results in this non-placebo-controlled study suggest that, overall, limb amputation and PLP may be associated with reduced spinal excitability and tSCS can increase spinal excitability and reduce PLP.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11391861/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141880034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
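The PRM reflex thresholds above are reported as stimulus charge (μC), i.e. current amplitude integrated over pulse width. A minimal sketch of that conversion for a rectangular pulse (the function name and example values are illustrative, not taken from the study's stimulation parameters):

```python
def charge_per_pulse_uC(amplitude_mA: float, pulse_width_us: float) -> float:
    """Charge delivered by one rectangular stimulation pulse.

    Q = I * t. With I in mA and t in us the product is in nC,
    so divide by 1000 to express the result in microcoulombs.
    """
    return amplitude_mA * pulse_width_us / 1000.0

# Illustrative only: a 59.5 mA pulse lasting 1 ms (1000 us)
# delivers 59.5 uC, the order of magnitude of the reported thresholds.
print(charge_per_pulse_uC(59.5, 1000.0))  # → 59.5
```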
Pub Date: 2024-08-22 | DOI: 10.1088/1741-2552/ad692e
Robyn Meredith, Ethan Eddy, Scott Bateman, Erik Scheme
Objective. The use of electromyogram (EMG) signals recorded from the wrist is emerging as a desirable input modality for human-machine interaction (HMI). Although forearm-based EMG has been used for decades in prosthetics, there has been comparatively little prior work evaluating the performance of wrist-based control, especially in online, user-in-the-loop studies. Furthermore, despite different motivating use cases for wrist-based control, research has mostly adopted legacy prosthesis control evaluation frameworks. Approach. Drawing inspiration from rhythm games and Schmidt's law speed-accuracy tradeoff, this work proposes a new temporally constrained evaluation environment with linearly increasing difficulty to compare the online usability of wrist and forearm EMG. Compared to the more commonly used Fitts'-law-style testing, the proposed environment may offer different insights for emerging use cases of EMG, as it decouples the machine learning algorithm's performance from proportional control, is easily generalizable to different gesture sets, and enables the extraction of a wide set of usability metrics that describe a user's ability to successfully accomplish a task within a given time under different levels of induced stress. Main results. The results suggest that wrist EMG-based control is comparable to forearm EMG when using traditional prosthesis control gestures, and can even be better when using fine finger gestures. Additionally, the results suggest that as the difficulty of the environment increased, the online metrics and their correlation to the offline metrics decreased, highlighting the importance of evaluating myoelectric control in real time over a range of difficulties. Significance. This work provides valuable insights into the future design and evaluation of myoelectric control systems for emerging HMI applications.
{"title":"Comparing online wrist and forearm EMG-based control using a rhythm game-inspired evaluation environment.","authors":"Robyn Meredith, Ethan Eddy, Scott Bateman, Erik Scheme","doi":"10.1088/1741-2552/ad692e","DOIUrl":"10.1088/1741-2552/ad692e","url":null,"abstract":"<p><p><i>Objective.</i>The use of electromyogram (EMG) signals recorded from the wrist is emerging as a desirable input modality for human-machine interaction (HMI). Although forearm-based EMG has been used for decades in prosthetics, there has been comparatively little prior work evaluating the performance of wrist-based control, especially in online, user-in-the-loop studies. Furthermore, despite different motivating use cases for wrist-based control, research has mostly adopted legacy prosthesis control evaluation frameworks.<i>Approach.</i>Gaining inspiration from rhythm games and the Schmidt's law speed-accuracy tradeoff, this work proposes a new temporally constrained evaluation environment with a linearly increasing difficulty to compare the online usability of wrist and forearm EMG. Compared to the more commonly used Fitts' Law-style testing, the proposed environment may offer different insights for emerging use cases of EMG as it decouples the machine learning algorithm's performance from proportional control, is easily generalizable to different gesture sets, and enables the extraction of a wide set of usability metrics that describe a user's ability to successfully accomplish a task at a certain time with different levels of induced stress.<i>Main results.</i>The results suggest that wrist EMG-based control is comparable to that of forearm EMG when using traditional prosthesis control gestures and can even be better when using fine finger gestures. Additionally, the results suggest that as the difficulty of the environment increased, the online metrics and their correlation to the offline metrics decreased, highlighting the importance of evaluating myoelectric control in real-time evaluations over a range of difficulties.<i>Significance.</i>This work provides valuable insights into the future design and evaluation of myoelectric control systems for emerging HMI applications.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141857464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
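The Fitts'-law-style testing that this abstract contrasts itself with is typically summarized by an index of difficulty and a throughput. A minimal sketch of those standard metrics (the function names and the example trial are illustrative, not from the paper):

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def throughput(distance: float, width: float, movement_time_s: float) -> float:
    """Throughput in bits/s for a single target-acquisition trial."""
    return index_of_difficulty(distance, width) / movement_time_s

# A target 7 units away and 1 unit wide acquired in 1.5 s:
# ID = log2(8) = 3 bits, so throughput = 3 / 1.5 = 2 bits/s.
print(throughput(7.0, 1.0, 1.5))  # → 2.0
```

A rhythm-game-style environment instead fixes the allowed time per gesture and shrinks it as difficulty ramps up, which is why it yields time-pressure metrics rather than a single throughput number.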
Pub Date: 2024-08-21 | DOI: 10.1088/1741-2552/ad6793
Manuel Eder, Jiachen Xu, Moritz Grosse-Wentrup
Objective. To date, a comprehensive comparison of Riemannian decoding methods with deep convolutional neural networks for EEG-based brain-computer interfaces remains absent from published work. We address this research gap by using MOABB, the Mother Of All BCI Benchmarks, to compare novel convolutional neural networks to state-of-the-art Riemannian approaches across a broad range of EEG datasets, including motor imagery, P300, and steady-state visual evoked potential paradigms. Approach. We systematically evaluated the performance of convolutional neural networks, specifically EEGNet, shallow ConvNet, and deep ConvNet, against well-established Riemannian decoding methods using MOABB processing pipelines. This evaluation included within-session, cross-session, and cross-subject methods, to provide a practical analysis of model effectiveness and to find an overall solution that performs well across different experimental settings. Main results. We find no significant differences in decoding performance between convolutional neural networks and Riemannian methods for within-session, cross-session, and cross-subject analyses. Significance. The results show that, when using traditional brain-computer interface paradigms, the choice between CNNs and Riemannian methods may not heavily impact decoding performance in many experimental settings. These findings give researchers the flexibility to choose a decoding approach based on factors such as ease of implementation, computational efficiency, or individual preference.
{"title":"Benchmarking brain-computer interface algorithms: Riemannian approaches vs convolutional neural networks.","authors":"Manuel Eder, Jiachen Xu, Moritz Grosse-Wentrup","doi":"10.1088/1741-2552/ad6793","DOIUrl":"10.1088/1741-2552/ad6793","url":null,"abstract":"<p><p><i>Objective.</i>To date, a comprehensive comparison of Riemannian decoding methods with deep convolutional neural networks for EEG-based brain-computer interfaces remains absent from published work. We address this research gap by using MOABB, The Mother Of All BCI Benchmarks, to compare novel convolutional neural networks to state-of-the-art Riemannian approaches across a broad range of EEG datasets, including motor imagery, P300, and steady-state visual evoked potentials paradigms.<i>Approach.</i>We systematically evaluated the performance of convolutional neural networks, specifically EEGNet, shallow ConvNet, and deep ConvNet, against well-established Riemannian decoding methods using MOABB processing pipelines. This evaluation included within-session, cross-session, and cross-subject methods, to provide a practical analysis of model effectiveness and to find an overall solution that performs well across different experimental settings.<i>Main results.</i>We find no significant differences in decoding performance between convolutional neural networks and Riemannian methods for within-session, cross-session, and cross-subject analyses.<i>Significance.</i>The results show that, when using traditional Brain-Computer Interface paradigms, the choice between CNNs and Riemannian methods may not heavily impact decoding performances in many experimental settings. These findings provide researchers with flexibility in choosing decoding approaches based on factors such as ease of implementation, computational efficiency or individual preferences.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141763620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
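The Riemannian decoding methods benchmarked above represent each EEG trial by its spatial covariance matrix and compare trials under an affine-invariant metric on symmetric positive-definite (SPD) matrices; classifiers such as minimum-distance-to-mean (MDM) then assign a trial to the nearest class-mean covariance. A minimal NumPy sketch of that core distance (toy matrices, not real EEG; the generalized-eigenvalue form is one of several equivalent implementations):

```python
import numpy as np

def airm_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Affine-invariant Riemannian distance between SPD matrices A and B.

    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F
            = sqrt( sum_i log(lambda_i)^2 ),
    where lambda_i are the eigenvalues of A^{-1} B. Using the
    eigenvalue form avoids computing matrix square roots explicitly.
    """
    eigvals = np.linalg.eigvals(np.linalg.solve(A, B)).real
    return float(np.sqrt(np.sum(np.log(eigvals) ** 2)))

# Toy 2x2 SPD "covariance" matrices:
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
print(airm_distance(A, A))  # ~0: a matrix is at distance zero from itself
print(airm_distance(A, B))  # strictly positive for distinct SPD matrices
```

Deep CNN decoders such as EEGNet skip this geometric representation entirely and learn spatio-temporal filters from the raw trials, which is exactly the design difference the benchmark probes.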