Objective. With prolonged life expectancy, the incidence of memory deficits, especially in Alzheimer's disease (AD), has increased. Although multiple treatments have been evaluated, no promising treatment has been found to date. Deep brain stimulation (DBS) of the fornix area was explored as a possible treatment because the fornix is intimately connected to memory-related areas that are vulnerable in AD; however, a proper imaging biomarker for assessing the therapeutic efficacy of forniceal DBS in AD has not been established. Approach. This study assessed the efficacy and safety of DBS by estimating the optimal intersection volume between the volume of tissue activated and the fornix. Utilizing a gold-electroplating process, the microelectrode's surface area on the neural probe was increased, enhancing charge-transfer performance within the potential water window. Bilateral fornix implantation was conducted in triple-transgenic AD mice (3 × Tg-AD) and wild-type mice (strain: B6129SF1/J), with forniceal DBS administered exclusively to 3 × Tg-AD mice in the DBS-on group. Behavioral tasks, diffusion tensor imaging (DTI), and immunohistochemistry (IHC) were performed in all mice to assess the therapeutic efficacy of forniceal DBS. Main results. The results illustrated that memory deficits and increased anxiety-like behavior in 3 × Tg-AD mice were rescued by forniceal DBS.
Furthermore, forniceal DBS positively altered DTI indices, increasing fractional anisotropy (FA) and decreasing mean diffusivity (MD), together with reducing microglial cell and astrocyte counts, suggesting a potential causal relationship between the restored FA/MD and the reduced glial cell counts in the anterior cingulate cortex, hippocampus, fornix, amygdala, and entorhinal cortex of 3 × Tg-AD mice following forniceal DBS. Significance. The efficacy of forniceal DBS in AD can be indicated by alterations in DTI-based biomarkers reflecting decreased activation of glial cells, suggesting reduced neuroinflammation, as evidenced by improvements in memory and anxiety-like behavior.
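The DTI indices above have standard closed-form definitions: MD is the mean of the diffusion tensor's three eigenvalues, and FA measures their normalized dispersion. A minimal illustrative sketch (not the authors' analysis pipeline; the function name is ours):

```python
import numpy as np

def fa_md(eigenvalues):
    """Compute fractional anisotropy (FA) and mean diffusivity (MD)
    from the three eigenvalues of a diffusion tensor."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()
    # FA = sqrt(3/2) * ||lam - MD|| / ||lam||; 0 = isotropic, 1 = stick-like
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return fa, md

# Isotropic diffusion gives FA = 0; a single dominant direction gives FA = 1.
fa_iso, md_iso = fa_md([1.0, 1.0, 1.0])
fa_stick, _ = fa_md([1.0, 0.0, 0.0])
```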
{"title":"Utilizing diffusion tensor imaging as an image biomarker in exploring the therapeutic efficacy of forniceal deep brain stimulation in a mice model of Alzheimer's disease.","authors":"You-Yin Chen, Chih-Ju Chang, Yao-Wen Liang, Hsin-Yi Tseng, Ssu-Ju Li, Ching-Wen Chang, Yen-Ting Wu, Huai-Hsuan Shao, Po-Chun Chen, Ming-Liang Lai, Wen-Chun Deng, RuSiou Hsu, Yu-Chun Lo","doi":"10.1088/1741-2552/ad7322","DOIUrl":"10.1088/1741-2552/ad7322","url":null,"abstract":"<p><p><i>Objective.</i>With prolonged life expectancy, the incidence of memory deficits, especially in Alzheimer's disease (AD), has increased. Although multiple treatments have been evaluated, no promising treatment has been found to date. Deep brain stimulation (DBS) of the fornix area was explored as a possible treatment because the fornix is intimately connected to memory-related areas that are vulnerable in AD; however, a proper imaging biomarker for assessing the therapeutic efficiency of forniceal DBS in AD has not been established.<i>Approach.</i>This study assessed the efficacy and safety of DBS by estimating the optimal intersection volume between the volume of tissue activated and the fornix. Utilizing a gold-electroplating process, the microelectrode's surface area on the neural probe was increased, enhancing charge transfer performance within potential water window limits. Bilateral fornix implantation was conducted in triple-transgenic AD mice (3 × Tg-AD) and wild-type mice (strain: B6129SF1/J), with forniceal DBS administered exclusively to 3 × Tg-AD mice in the DBS-on group. Behavioral tasks, diffusion tensor imaging (DTI), and immunohistochemistry (IHC) were performed in all mice to assess the therapeutic efficacy of forniceal DBS.<i>Main results.</i>The results illustrated that memory deficits and increased anxiety-like behavior in 3 × Tg-AD mice were rescued by forniceal DBS. 
Furthermore, forniceal DBS positively altered DTI indices, such as increasing fractional anisotropy (FA) and decreasing mean diffusivity (MD), together with reducing microglial cell and astrocyte counts, suggesting a potential causal relationship between revised FA/MD and reduced cell counts in the anterior cingulate cortex, hippocampus, fornix, amygdala, and entorhinal cortex of 3 × Tg-AD mice following forniceal DBS.<i>Significance.</i>The efficacy of forniceal DBS in AD can be indicated by alterations in DTI-based biomarkers reflecting the decreased activation of glial cells, suggesting reduced neural inflammation as evidenced by improvements in memory and anxiety-like behavior.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":"21 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142127767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-04. DOI: 10.1088/1741-2552/ad775e
Sonal Santosh Baberwal, Luz Alejandra Magre, K R Sanjaya D Gunawardhana, Michael Parkinson, Tomas Ward, Shirley Coyle
Objective: Training plays a significant role in motor imagery (MI), particularly in applications such as Motor Imagery-based Brain-Computer Interface (MIBCI) systems and rehabilitation systems. Previous studies have investigated the intricate relationship between cues and MI signals. However, the medium of cue presentation remains an emerging area to be explored as a possible factor in enhancing MI signals.
Approach: We hypothesise that the medium used for cue presentation can significantly influence both performance and training outcomes in MI tasks. To test this hypothesis, we designed and executed an experiment implementing no-feedback MI. Our investigation focused on three distinct cue presentation mediums (audio, screen, and virtual reality (VR) headsets), all of which have potential implications for BCI use in activities of daily living.
Main Results: The results of our study uncovered notable variations in MI signals depending on the medium of cue presentation, where the analysis is based on 3 EEG channels. To substantiate our findings, we employed a comprehensive approach, utilizing various evaluation metrics including Event-Related Synchronisation (ERS)/Desynchronisation (ERD), feature extraction (using Recursive Feature Elimination (RFE)), machine learning methodologies (using ensemble learning), and participant questionnaires. All of these approaches indicate that MI signals are enhanced when cues are presented in VR, followed by audio, and lastly screen. Applying a machine learning approach across all subjects, the mean cross-validation accuracy (mean ± std. error) was 69.24 ± 3.12, 68.69 ± 3.30, and 66.10 ± 2.59 for the VR-based, audio-based, and screen-based instructions, respectively.
Significance: This multi-faceted exploration provides evidence to inform MI-based BCI design and advocates the incorporation of different mediums into the design of MIBCI systems, experimental setups, and user studies. The influence of the medium used for cue presentation may be applied to develop more effective and inclusive MI applications in the realm of human-computer interaction and rehabilitation.
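The ERS/ERD metric above quantifies band-power change in a task window relative to a pre-cue reference window; negative values indicate desynchronisation (ERD). A minimal illustrative sketch with assumed window and band conventions (not the study's actual processing chain):

```python
import numpy as np

def erd_ers(signal, fs, ref_window, task_window, band=(8, 12)):
    """Percent band-power change in a task window relative to a
    reference window: 100 * (A - R) / R. Negative values = ERD.
    Windows are (start, stop) sample indices into `signal`."""
    def band_power(x):
        freqs = np.fft.rfftfreq(x.size, 1 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].sum()

    r = band_power(signal[slice(*ref_window)])
    a = band_power(signal[slice(*task_window)])
    return 100.0 * (a - r) / r

# Example: a 10 Hz rhythm whose amplitude halves during the task window
# yields roughly -75% (power scales with amplitude squared).
fs = 250
t = np.arange(2 * fs) / fs
sig = np.where(t < 1.0, 2.0, 1.0) * np.sin(2 * np.pi * 10 * t)
erd = erd_ers(sig, fs, ref_window=(0, fs), task_window=(fs, 2 * fs))
```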
{"title":"Motor imagery with cues in virtual reality, audio and screen.","authors":"Sonal Santosh Baberwal, Luz Alejandra Magre, K R Sanjaya D Gunawardhana, Michael Parkinson, Tomas Ward, Shirley Coyle","doi":"10.1088/1741-2552/ad775e","DOIUrl":"https://doi.org/10.1088/1741-2552/ad775e","url":null,"abstract":"<p><strong>Objective: </strong>Training plays a significant role in motor imagery (MI), particularly in applications such as Motor Imagery-based Brain-Computer Interface (MIBCI) systems and rehabilitation systems. Previous studies have investigated the intricate relationship between cues and MI signals. However, the medium of presentation still remains an emerging area to be explored, as possible factors to enhance Motor Imagery signals..
Approach: We hypothesise that the medium used for cue presentation can significantly influence both performance and training outcomes in MI tasks. To test this hypothesis, we designed and executed an experiment implementing no- feedback MI. Our investigation focused on three distinct cue presentation mediums -audio, screen, and virtual reality(VR) headsets-all of which have potential implications for BCI use in the Activities of Daily Lives.
Main Results: The results of our study uncovered notable variations in MI signals depending on the medium of cue presentation, where the analysis is based on 3 EEG channels. To substantiate our findings, we employed a comprehensive approach, utilizing various evaluation metrics including Event- Related Synchronisation(ERS)/Desynchronisation(ERD), Feature Extraction (using Recursive Feature Elimination (RFE)), Machine Learning methodologies (using Ensemble Learning), and participant Questionnaires. All the approaches signify that Motor Imagery signals are enhanced when presented in VR, followed by audio, and lastly screen. Applying a Machine Learning approach across all subjects, the mean cross-validation accuracy (Mean ± Std. Error) was 69.24 ± 3.12, 68.69 ± 3.3 and 66.1±2.59 when for the VR, audio-based, and screen-based instructions respectively.
Significance: This multi-faceted exploration provides evidence to inform MI- based BCI design and advocates the incorporation of different mediums into the design of MIBCI systems, experimental setups, and user studies. The influence of the medium used for cue presentation may be applied to develop more effective and inclusive MI applications in the realm of human-computer interaction and rehabilitation.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142134933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-03. DOI: 10.1088/1741-2552/ad716d
M Sperduti, N L Tagliamonte, F Taffoni, E Guglielmelli, L Zollo
The somatosensory system is widely studied to understand its functioning mechanisms. Multiple tests, based on different devices and methods, have been performed not only on humans but also on animals and ex-vivo models. Depending on the nature of the sample under analysis and on the scientific aims of interest, several solutions for experimental stimulation and for investigations on sensation or pain have been adopted. In this review paper, an overview of the available devices and methods is reported, also analyzing the representative values adopted in literature experiments. Among the various physical stimulations used to study the somatosensory system, we focus only on mechanical and thermal ones. Based on the analysis of their main features and on literature studies, we point out the most suitable solutions for humans, rodents, and ex-vivo models, and for each investigation aim (sensation and pain).
{"title":"Mechanical and thermal stimulation for studying the somatosensory system: a review on devices and methods.","authors":"M Sperduti, N L Tagliamonte, F Taffoni, E Guglielmelli, L Zollo","doi":"10.1088/1741-2552/ad716d","DOIUrl":"10.1088/1741-2552/ad716d","url":null,"abstract":"<p><p>The somatosensory system is widely studied to understand its functioning mechanisms. Multiple tests, based on different devices and methods, have been performed not only on humans but also on animals and<i>ex-vivo</i>models. Depending on the nature of the sample under analysis and on the scientific aims of interest, several solutions for experimental stimulation and for investigations on sensation or pain have been adopted. In this review paper, an overview of the available devices and methods has been reported, also analyzing the representative values adopted during literature experiments. Among the various physical stimulations used to study the somatosensory system, we focused only on mechanical and thermal ones. Based on the analysis of their main features and on literature studies, we pointed out the most suitable solution for humans, rodents, and<i>ex-vivo</i>models and investigation aims (sensation and pain).</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142010142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Objective. The decline in the performance of electromyography (EMG)-based silent speech recognition is widely attributed to disparities in speech patterns, articulation habits, and individual physiology among speakers. Feature alignment by learning a discriminative network that resolves domain offsets across speakers is an effective way to address this problem. The prevailing adversarial network, with a branching discriminator specializing in domain discrimination, makes an insufficiently direct contribution to the categorical predictions of the classifier. Approach. To this end, we propose a simplified discrepancy-based adversarial network with a streamlined end-to-end structure for EMG-based cross-subject silent speech recognition. Highly aligned features across subjects are obtained by introducing a nuclear-norm Wasserstein discrepancy metric at the back end of the classification network, which can be utilized for both classification and domain discrimination. Given the low-level and implicitly noisy nature of myoelectric signals, we devise a cascaded adaptive rectification network as the front-end feature extractor, adaptively reshaping the intermediate feature map with automatically learnable channel-wise thresholds. The resulting features effectively filter out domain-specific information between subjects while retaining domain-invariant features critical for cross-subject recognition. Main results. A series of sentence-level classification experiments with 100 Chinese sentences demonstrates the efficacy of our method, which achieves an average accuracy of 89.46% when tested on 40 new subjects after training with data from 60 subjects. Notably, our method achieves a 10.07% improvement over the state-of-the-art model when tested on 10 new subjects with 20 subjects used for training, surpassing that model's result even when it is trained on three times as many subjects. Significance. Our study demonstrates the improved classification performance of the proposed adversarial architecture on cross-subject myoelectric signals, providing a promising prospect for EMG-based speech interaction applications.
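The nuclear norm invoked in the approach is the sum of a matrix's singular values, and a discrepancy built on it compares source- and target-domain prediction matrices. A toy sketch of the core computation only (the paper's full loss and network are not reproduced; both function names are illustrative):

```python
import numpy as np

def nuclear_norm(p):
    """Nuclear norm (sum of singular values) of a batch prediction
    matrix (samples x classes); larger values correspond to more
    diverse, confident predictions."""
    return np.linalg.svd(np.asarray(p, dtype=float), compute_uv=False).sum()

def nuclear_discrepancy(p_source, p_target):
    """Toy nuclear-norm discrepancy between source and target softmax
    outputs, one ingredient of a discrepancy-based adversarial loss."""
    return abs(nuclear_norm(p_source) - nuclear_norm(p_target))

# Confident one-hot predictions vs. maximally uncertain ones differ
# sharply in nuclear norm, so the discrepancy is strictly positive.
confident = np.eye(2)
uncertain = np.full((2, 2), 0.5)
gap = nuclear_discrepancy(confident, uncertain)
```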
{"title":"A simplified adversarial architecture for cross-subject silent speech recognition using electromyography.","authors":"Qiang Cui, Xingyu Zhang, Yakun Zhang, Changyan Zheng, Liang Xie, Ye Yan, Edmond Q Wu, Erwei Yin","doi":"10.1088/1741-2552/ad7321","DOIUrl":"10.1088/1741-2552/ad7321","url":null,"abstract":"<p><p><i>Objective</i>. The decline in the performance of electromyography (EMG)-based silent speech recognition is widely attributed to disparities in speech patterns, articulation habits, and individual physiology among speakers. Feature alignment by learning a discriminative network that resolves domain offsets across speakers is an effective method to address this problem. The prevailing adversarial network with a branching discriminator specializing in domain discrimination renders insufficiently direct contribution to categorical predictions of the classifier.<i>Approach</i>. To this end, we propose a simplified discrepancy-based adversarial network with a streamlined end-to-end structure for EMG-based cross-subject silent speech recognition. Highly aligned features across subjects are obtained by introducing a Nuclear-norm Wasserstein discrepancy metric on the back end of the classification network, which could be utilized for both classification and domain discrimination. Given the low-level and implicitly noisy nature of myoelectric signals, we devise a cascaded adaptive rectification network as the front-end feature extraction network, adaptively reshaping the intermediate feature map with automatically learnable channel-wise thresholds. The resulting features effectively filter out domain-specific information between subjects while retaining domain-invariant features critical for cross-subject recognition.<i>Main results</i>. 
A series of sentence-level classification experiments with 100 Chinese sentences demonstrate the efficacy of our method, achieving an average accuracy of 89.46% tested on 40 new subjects by training with data from 60 subjects. Especially, our method achieves a remarkable 10.07% improvement compared to the state-of-the-art model when tested on 10 new subjects with 20 subjects employed for training, surpassing its result even with three times training subjects.<i>Significance</i>. Our study demonstrates an improved classification performance of the proposed adversarial architecture using cross-subject myoelectric signals, providing a promising prospect for EMG-based speech interactive application.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142047658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-03. DOI: 10.1088/1741-2552/ad680b
Iris Kremer, Wissam Halimi, Andy Walshe, Moran Cerf, Pablo Mainar
Objective. We show that electroencephalography (EEG)-based cognitive load (CL) prediction using Riemannian geometry features outperforms existing models. Performance is estimated using Riemannian Procrustes Analysis (RPA) with a test set of subjects unseen during training. Approach. Performance is evaluated using the Minimum Distance to Riemannian Mean model trained for CL classification. The baseline performance is established using spatial covariance matrices of the signal as features. Various novel features are explored and analyzed in depth, including spatial covariance and correlation matrices computed on the EEG signal and on its first-order derivative. Furthermore, the effect of each RPA step on performance is investigated, and the generalization performance of RPA is compared against several other generalization methods. Main results. Performance is greatly improved by using the spatial covariance matrix of the first-order derivative of the signal as features. Furthermore, this work highlights both the importance and the efficiency of RPA for CL prediction: it achieves good generalizability with small amounts of calibration data and largely outperforms all the comparison methods. Significance. CL prediction using RPA for generalizability across subjects is an approach worth exploring further, especially for real-world applications where calibration time is limited. Furthermore, the feature exploration uncovers new, promising features that can be used and further experimented with in any Riemannian geometry setting.
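The baseline and best-performing features described above are spatial covariance matrices of the EEG signal and of its first-order temporal derivative. A minimal sketch of computing both for a single trial laid out as channels × samples (not the paper's exact preprocessing; the function name is ours):

```python
import numpy as np

def covariance_features(eeg):
    """Spatial covariance matrices of one EEG trial (channels x samples)
    and of its first-order temporal derivative. Both are symmetric
    positive (semi-)definite, as Riemannian pipelines require."""
    # Covariance of the mean-centered signal.
    x = eeg - eeg.mean(axis=1, keepdims=True)
    cov = x @ x.T / (x.shape[1] - 1)
    # Covariance of the first-order temporal derivative.
    dx = np.diff(eeg, axis=1)
    dx = dx - dx.mean(axis=1, keepdims=True)
    dcov = dx @ dx.T / (dx.shape[1] - 1)
    return cov, dcov

rng = np.random.default_rng(0)
trial = rng.standard_normal((4, 500))   # 4 channels, 500 samples
cov, dcov = covariance_features(trial)
```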
{"title":"Predicting cognitive load with EEG using Riemannian geometry-based features.","authors":"Iris Kremer, Wissam Halimi, Andy Walshe, Moran Cerf, Pablo Mainar","doi":"10.1088/1741-2552/ad680b","DOIUrl":"10.1088/1741-2552/ad680b","url":null,"abstract":"<p><p><i>Objective</i>. We show that electroencephalography (EEG)-based cognitive load (CL) prediction using Riemannian geometry features outperforms existing models. The performance is estimated using Riemannian Procrustes Analysis (RPA) with a test set of subjects unseen during training.<i>Approach</i>. Performance is evaluated by using the Minimum Distance to Riemannian Mean model trained on CL classification. The baseline performance is established using spatial covariance matrices of the signal as features. Various novel features are explored and analyzed in depth, including spatial covariance and correlation matrices computed on the EEG signal and its first-order derivative. Furthermore, each RPA step effect on the performance is investigated, and the generalization performance of RPA is compared against a few different generalization methods.<i>Main results</i>. Performances are greatly improved by using the spatial covariance matrix of the first-order derivative of the signal as features. Furthermore, this work highlights both the importance and efficiency of RPA for CL prediction: it achieves good generalizability with little amounts of calibration data and largely outperforms all the comparison methods.<i>Significance</i>. CL prediction using RPA for generalizability across subjects is an approach worth exploring further, especially for real-world applications where calibration time is limited. 
Furthermore, the feature exploration uncovers new, promising features that can be used and further experimented within any Riemannian geometry setting.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141768377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-29. DOI: 10.1088/1741-2552/ad7060
Yiling Zhang, Yuan Liao, Wei Chen, Xiruo Zhang, Liya Huang
Objective. Electroencephalogram (EEG) signals offer invaluable insights into the complexities of emotion generation within the brain. Yet the variability in EEG signals across individuals presents a formidable obstacle for empirical implementations. Our research addresses these challenges by focusing on the commonalities within distinct subjects' EEG data. Approach. We introduce a novel approach named the Contrastive Learning Graph Convolutional Network (CLGCN). This method captures the distinctive features and crucial channel nodes related to individuals' emotional states. Specifically, CLGCN merges the dual benefits of CL's synchronous multi-subject data learning and the GCN's proficiency in deciphering brain connectivity matrices. Because CLGCN generates a standardized brain-network learning matrix while learning from a dataset, it supports understanding of multifaceted brain functions and their information interchange processes. Main results. Our model underwent rigorous testing on the Database for Emotion Analysis using Physiological Signals (DEAP) and SEED datasets. In five-fold cross-validation under the subject-dependent experimental setting, it achieved an accuracy of 97.13% on the DEAP dataset and surpassed 99% on the SEED and SEED_IV datasets. In the incremental learning experiments with the SEED dataset, merely 5% of the data was sufficient to fine-tune the model, resulting in an accuracy of 92.8% for the new subject. These findings validate the model's efficacy. Significance. This work combines CL with GCN, improving the accuracy of decoding emotional states from EEG signals and offering valuable insights into the underlying mechanisms of emotional processes in the brain.
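A graph convolution over EEG channel nodes propagates features through a normalised adjacency (connectivity) matrix. A toy single-layer sketch of that propagation step only (the CLGCN architecture, including its contrastive objective, is not reproduced; the function name is ours):

```python
import numpy as np

def gcn_layer(x, adj, w):
    """One graph-convolution step over channel nodes: add self-loops,
    symmetrically normalise the adjacency, apply a linear map + ReLU.
    x: (nodes x features), adj: (nodes x nodes), w: (features x out)."""
    a = adj + np.eye(adj.shape[0])              # self-loops
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))   # D^{-1/2}
    return np.maximum(d @ a @ d @ x @ w, 0.0)   # ReLU(D^-1/2 A D^-1/2 X W)

# Two fully connected channel nodes with identity features and weights:
out = gcn_layer(np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2))
```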
{"title":"Emotion recognition of EEG signals based on contrastive learning graph convolutional model.","authors":"Yiling Zhang, Yuan Liao, Wei Chen, Xiruo Zhang, Liya Huang","doi":"10.1088/1741-2552/ad7060","DOIUrl":"10.1088/1741-2552/ad7060","url":null,"abstract":"<p><p><i>Objective.</i>Electroencephalogram (EEG) signals offer invaluable insights into the complexities of emotion generation within the brain. Yet, the variability in EEG signals across individuals presents a formidable obstacle for empirical implementations. Our research addresses these challenges innovatively, focusing on the commonalities within distinct subjects' EEG data.<i>Approach.</i>We introduce a novel approach named Contrastive Learning Graph Convolutional Network (CLGCN). This method captures the distinctive features and crucial channel nodes related to individuals' emotional states. Specifically, CLGCN merges the dual benefits of CL's synchronous multisubject data learning and the GCN's proficiency in deciphering brain connectivity matrices. Understanding multifaceted brain functions and their information interchange processes is realized as CLGCN generates a standardized brain network learning matrix during a dataset's learning process.<i>Main results.</i>Our model underwent rigorous testing on the Database for Emotion Analysis using Physiological Signals (DEAP) and SEED datasets. In the five-fold cross-validation used for dependent subject experimental setting, it achieved an accuracy of 97.13% on the DEAP dataset and surpassed 99% on the SEED and SEED_IV datasets. In the incremental learning experiments with the SEED dataset, merely 5% of the data was sufficient to fine-tune the model, resulting in an accuracy of 92.8% for the new subject. 
These findings validate the model's efficacy.<i>Significance.</i>This work combines CL with GCN, improving the accuracy of decoding emotional states from EEG signals and offering valuable insights into uncovering the underlying mechanisms of emotional processes in the brain.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141997105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-29. DOI: 10.1088/1741-2552/ad705f
Alejandro Nieto Ramos, Balu Krishnan, Andreas V Alexopoulos, William Bingaman, Imad Najm, Juan C Bulacio, Demitre Serletis
Objective. For medically refractory epilepsy patients, stereoelectroencephalography (sEEG) is a surgical method that uses intracranial electrode recordings to identify the brain networks participating in early seizure organization and propagation (i.e. the epileptogenic zone, EZ). If identified, surgical treatment of the EZ via resection, ablation, or neuromodulation can lead to seizure freedom. To date, quantification of sEEG data, including its visualization and interpretation, remains a clinical and computational challenge. Given the elusiveness of physical laws or governing equations for modelling complex brain dynamics, data science offers unique insight into identifying unknown patterns within high-dimensional sEEG data. We apply here an unsupervised data-driven algorithm, dynamic mode decomposition (DMD), to sEEG recordings from five focal epilepsy patients (three with temporal lobe epilepsy and two with cingulate epilepsy) who underwent subsequent resective or ablative surgery and became seizure free. Approach. DMD obtains a linear approximation of nonlinear data dynamics, generating coherent structures ('modes') that define important signal features, which are used to extract frequencies, growth rates, and spatial structures. DMD was adapted to produce dynamic modal maps (DMMs) across frequency sub-bands, capturing the onset and evolution of epileptiform dynamics in sEEG data. Additionally, we developed a static estimate of EZ-localized electrode contacts, termed the higher-frequency mode-based norm index (MNI). DMM and MNI maps for representative patient seizures were validated against clinical sEEG results and seizure-free outcomes following surgery. Main results. DMD was most informative at higher frequencies, i.e. the gamma (including high-gamma) and beta ranges, successfully identifying EZ contacts. Combined interpretation of DMM/MNI plots best identified the spatiotemporal evolution of mode-specific network changes, with strong concordance with sEEG results and outcomes across all five patients. The method also identified network attenuation in other contacts not implicated in the EZ. Significance. This is the first application of DMD to sEEG data analysis, supporting the integration of neuroengineering, mathematical, and machine learning methods into traditional workflows for sEEG review and epilepsy surgical decision-making.
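Exact DMD fits a best-fit linear operator between successive data snapshots and extracts its eigenvalues (mode dynamics: frequency and growth rate) and modes (spatial structures). A minimal rank-r sketch on a snapshot matrix laid out as features × time (the paper's DMM/MNI constructions are not reproduced):

```python
import numpy as np

def dmd(X, r):
    """Exact DMD of rank r: fit x_{k+1} ~= A x_k on snapshot matrix X
    (features x time); return eigenvalues and modes of the reduced A.
    Frequencies/growth rates follow from log(eigvals) / dt."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Sanity check: data generated by a known linear system x_{k+1} = A x_k
# with eigenvalues 0.9 and 0.5 should be recovered exactly.
A = np.diag([0.9, 0.5])
x, cols = np.array([1.0, 1.0]), []
for _ in range(10):
    cols.append(x.copy())
    x = A @ x
eigvals, modes = dmd(np.array(cols).T, r=2)
```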
{"title":"Epileptic network identification: insights from dynamic mode decomposition of sEEG data.","authors":"Alejandro Nieto Ramos, Balu Krishnan, Andreas V Alexopoulos, William Bingaman, Imad Najm, Juan C Bulacio, Demitre Serletis","doi":"10.1088/1741-2552/ad705f","DOIUrl":"10.1088/1741-2552/ad705f","url":null,"abstract":"<p><p><i>Objective.</i>For medically-refractory epilepsy patients, stereoelectroencephalography (sEEG) is a surgical method using intracranial electrode recordings to identify brain networks participating in early seizure organization and propagation (i.e. the epileptogenic zone, EZ). If identified, surgical EZ treatment via resection, ablation or neuromodulation can lead to seizure-freedom. To date, quantification of sEEG data, including its visualization and interpretation, remains a clinical and computational challenge. Given elusiveness of physical laws or governing equations modelling complex brain dynamics, data science offers unique insight into identifying unknown patterns within high-dimensional sEEG data. We apply here an unsupervised data-driven algorithm, dynamic mode decomposition (DMD), to sEEG recordings from five focal epilepsy patients (three with temporal lobe, and two with cingulate epilepsy), who underwent subsequent resective or ablative surgery and became seizure free.<i>Approach.</i>DMD obtains a linear approximation of nonlinear data dynamics, generating coherent structures ('modes') defining important signal features, used to extract frequencies, growth rates and spatial structures. DMD was adapted to produce dynamic modal maps (DMMs) across frequency sub-bands, capturing onset and evolution of epileptiform dynamics in sEEG data. Additionally, we developed a static estimate of EZ-localized electrode contacts, termed the higher-frequency mode-based norm index (MNI). 
DMM and MNI maps for representative patient seizures were validated against clinical sEEG results and seizure-free outcomes following surgery.<i>Main results.</i>DMD was most informative at higher frequencies, i.e. gamma (including high-gamma) and beta range, successfully identifying EZ contacts. Combined interpretation of DMM/MNI plots best identified spatiotemporal evolution of mode-specific network changes, with strong concordance to sEEG results and outcomes across all five patients. The method identified network attenuation in other contacts not implicated in the EZ.<i>Significance.</i>This is the first application of DMD to sEEG data analysis, supporting integration of neuroengineering, mathematical and machine learning methods into traditional workflows for sEEG review and epilepsy surgical decision-making.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141997106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-28 | DOI: 10.1088/1741-2552/ad6e19
L Bashford, I A Rosenthal, S Kellis, D Bjånes, K Pejsa, B W Brunton, R A Andersen
Objective. A crucial goal in brain-machine interfacing is the long-term stability of neural decoding performance, ideally without regular retraining. Long-term stability has previously been demonstrated only in non-human primate experiments, and only in primary sensorimotor cortices. Here we extend previous methods to determine long-term stability in humans by identifying and aligning low-dimensional structures in neural data. Approach. Over periods of 1106 and 871 d, respectively, two participants completed an imagined center-out reaching task. Longitudinal accuracy between all day pairs was assessed by latent subspace alignment, using principal components analysis and canonical correlations analysis of multi-unit intracortical recordings from different brain regions (Brodmann Area 5, the Anterior Intraparietal Area, and the junction of the postcentral and intraparietal sulcus). Main results. We show long-term stable representation of neural activity in subspaces of intracortical recordings from higher-order association areas in humans. Significance. These results can be practically applied to significantly extend the longevity and generalizability of brain-computer interfaces. Clinical trials: NCT01849822, NCT01958086, NCT01964261.
{"title":"Neural subspaces of imagined movements in parietal cortex remain stable over several years in humans.","authors":"L Bashford, I A Rosenthal, S Kellis, D Bjånes, K Pejsa, B W Brunton, R A Andersen","doi":"10.1088/1741-2552/ad6e19","DOIUrl":"10.1088/1741-2552/ad6e19","url":null,"abstract":"<p><p><i>Objective.</i>A crucial goal in brain-machine interfacing is the long-term stability of neural decoding performance, ideally without regular retraining. Long-term stability has only been previously demonstrated in non-human primate experiments and only in primary sensorimotor cortices. Here we extend previous methods to determine long-term stability in humans by identifying and aligning low-dimensional structures in neural data.<i>Approach.</i>Over a period of 1106 and 871 d respectively, two participants completed an imagined center-out reaching task. The longitudinal accuracy between all day pairs was assessed by latent subspace alignment using principal components analysis and canonical correlations analysis of multi-unit intracortical recordings in different brain regions (Brodmann Area 5, Anterior Intraparietal Area and the junction of the postcentral and intraparietal sulcus).<i>Main results.</i>We show the long-term stable representation of neural activity in subspaces of intracortical recordings from higher-order association areas in humans.<i>Significance.</i>These results can be practically applied to significantly expand the longevity and generalizability of brain-computer interfaces.Clinical TrialsNCT01849822, NCT01958086, NCT01964261.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11350602/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141972494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
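The latent subspace alignment described in this Approach (per-session PCA into a low-dimensional space, then canonical correlations analysis to align sessions) can be illustrated with a minimal NumPy sketch. This is a generic PCA + CCA pipeline for intuition only, not the authors' implementation; the matrix shapes and the assumption of matched samples across the two sessions are illustrative.

```python
import numpy as np

def pca_project(X, k):
    """Project a (samples x units) firing-rate matrix onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    U, s, Vh = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vh[:k].T                     # latent trajectories, samples x k

def cca_align(Za, Zb):
    """Align two latent spaces with canonical correlations analysis.

    Za, Zb : (n_samples, k) latents from two sessions with matched samples.
    Returns the two aligned latent sets and the canonical correlations.
    """
    Za = Za - Za.mean(axis=0)
    Zb = Zb - Zb.mean(axis=0)
    Qa, Ra = np.linalg.qr(Za)
    Qb, Rb = np.linalg.qr(Zb)
    U, corrs, Vt = np.linalg.svd(Qa.T @ Qb)  # singular values = canonical corrs
    A = np.linalg.solve(Ra, U)               # projection for session A
    B = np.linalg.solve(Rb, Vt.T)            # projection for session B
    return Za @ A, Zb @ B, corrs
```

With a noiseless shared latent driving both sessions, all canonical correlations come out at 1 (up to numerical precision); on real across-day recordings the correlations between aligned subspaces would quantify the kind of long-term representational stability the paper reports.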
Objective. Brain-computer interfaces (BCIs) have the potential to preserve or restore speech in patients with neurological disorders that weaken the muscles involved in speech production. However, successful training of low-latency speech synthesis and recognition models requires alignment of neural activity with intended phonetic or acoustic output with high temporal precision. This is particularly challenging in patients who cannot produce audible speech, as ground truth with which to pinpoint neural activity synchronized with speech is not available.Approach. In this study, we present a new iterative algorithm for neural voice activity detection (nVAD) called iterative alignment discovery dynamic time warping (IAD-DTW) that integrates DTW into the loss function of a deep neural network (DNN). The algorithm is designed to discover the alignment between a patient's electrocorticographic (ECoG) neural responses and their attempts to speak during collection of data for training BCI decoders for speech synthesis and recognition.Main results. To demonstrate the effectiveness of the algorithm, we tested its accuracy in predicting the onset and duration of acoustic signals produced by able-bodied patients with intact speech undergoing short-term diagnostic ECoG recordings for epilepsy surgery. We simulated a lack of ground truth by randomly perturbing the temporal correspondence between neural activity and an initial single estimate for all speech onsets and durations. We examined the model's ability to overcome these perturbations to estimate ground truth. IAD-DTW showed no notable degradation (<1% absolute decrease in accuracy) in performance in these simulations, even in the case of maximal misalignments between speech and silence.Significance. IAD-DTW is computationally inexpensive and can be easily integrated into existing DNN-based nVAD approaches, as it pertains only to the final loss computation. 
This approach makes it possible to train speech BCI algorithms using ECoG data from patients who are unable to produce audible speech, including those with Locked-In Syndrome.
{"title":"Iterative alignment discovery of speech-associated neural activity.","authors":"Qinwan Rabbani, Samyak Shah, Griffin Milsap, Matthew Fifer, Hynek Hermansky, Nathan Crone","doi":"10.1088/1741-2552/ad663c","DOIUrl":"10.1088/1741-2552/ad663c","url":null,"abstract":"<p><p><i>Objective</i>. Brain-computer interfaces (BCIs) have the potential to preserve or restore speech in patients with neurological disorders that weaken the muscles involved in speech production. However, successful training of low-latency speech synthesis and recognition models requires alignment of neural activity with intended phonetic or acoustic output with high temporal precision. This is particularly challenging in patients who cannot produce audible speech, as ground truth with which to pinpoint neural activity synchronized with speech is not available.<i>Approach</i>. In this study, we present a new iterative algorithm for neural voice activity detection (nVAD) called iterative alignment discovery dynamic time warping (IAD-DTW) that integrates DTW into the loss function of a deep neural network (DNN). The algorithm is designed to discover the alignment between a patient's electrocorticographic (ECoG) neural responses and their attempts to speak during collection of data for training BCI decoders for speech synthesis and recognition.<i>Main results</i>. To demonstrate the effectiveness of the algorithm, we tested its accuracy in predicting the onset and duration of acoustic signals produced by able-bodied patients with intact speech undergoing short-term diagnostic ECoG recordings for epilepsy surgery. We simulated a lack of ground truth by randomly perturbing the temporal correspondence between neural activity and an initial single estimate for all speech onsets and durations. We examined the model's ability to overcome these perturbations to estimate ground truth. 
IAD-DTW showed no notable degradation (<1% absolute decrease in accuracy) in performance in these simulations, even in the case of maximal misalignments between speech and silence.<i>Significance</i>. IAD-DTW is computationally inexpensive and can be easily integrated into existing DNN-based nVAD approaches, as it pertains only to the final loss computation. This approach makes it possible to train speech BCI algorithms using ECoG data from patients who are unable to produce audible speech, including those with Locked-In Syndrome.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":"21 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11351572/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142083006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
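IAD-DTW integrates dynamic time warping into a DNN loss; the classical hard-DTW alignment cost it builds on can be sketched with the textbook dynamic-programming recurrence below. This is illustration only, not the IAD-DTW loss itself (which the paper embeds in network training and iterates to refine onset/duration estimates).

```python
import numpy as np

def dtw_cost(a, b):
    """Dynamic-time-warping alignment cost between two 1-D sequences.

    D[i, j] holds the minimal cumulative cost of aligning a[:i] with b[:j];
    each step extends the warping path down, right, or diagonally.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])     # local mismatch cost
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Identical sequences align at zero cost, and a sequence aligns at zero cost to a time-warped copy of itself (e.g. `[0, 0, 1, 1]` vs `[0, 1]`), which is exactly the invariance that lets an alignment-based loss absorb unknown speech-onset misalignments.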
Pub Date: 2024-08-22 | DOI: 10.1088/1741-2552/ad6a8d
Ashley N Dalrymple, Lee E Fisher, Douglas J Weber
Objective. Phantom limb pain (PLP) is debilitating and affects over 70% of people with lower-limb amputation. Other neuropathic pain conditions correspond with increased spinal excitability, which can be measured using reflexes and F-waves. Spinal cord neuromodulation can reduce neuropathic pain in a variety of conditions and may affect spinal excitability, but has not been extensively used for treating PLP. Here, we propose using a non-invasive neuromodulation method, transcutaneous spinal cord stimulation (tSCS), to reduce PLP and modulate spinal excitability after transtibial amputation. Approach. We recruited three participants for this 5 d study: two males (5 and 9 years post-amputation; traumatic and alcohol-induced neuropathy) and one female (3 months post-amputation; diabetic neuropathy). We measured pain using the McGill Pain Questionnaire (MPQ), visual analog scale (VAS), and pain pressure threshold (PPT) test. We measured spinal reflex and motoneuron excitability using posterior root-muscle (PRM) reflexes and F-waves, respectively. We delivered tSCS for 30 min per day for 5 d. Main results. After 5 d of tSCS, MPQ scores decreased by clinically meaningful amounts for all participants, from 34.0 ± 7.0 to 18.3 ± 6.8; however, there were no clinically significant decreases in VAS scores. Two participants had increased PPTs across the residual limb (Day 1: 5.4 ± 1.6 lbf; Day 5: 11.4 ± 1.0 lbf). F-waves had normal latencies but small amplitudes. PRM reflexes had high thresholds (59.5 ± 6.1 μC) and low amplitudes, suggesting that in PLP the spinal cord is hypoexcitable. After 5 d of tSCS, reflex thresholds decreased significantly (to 38.6 ± 12.2 μC; p < 0.001). Significance. These preliminary results from this non-placebo-controlled study suggest that limb amputation and PLP may be associated with reduced spinal excitability, and that tSCS can increase spinal excitability and reduce PLP.
{"title":"A preliminary study exploring the effects of transcutaneous spinal cord stimulation on spinal excitability and phantom limb pain in people with a transtibial amputation.","authors":"Ashley N Dalrymple, Lee E Fisher, Douglas J Weber","doi":"10.1088/1741-2552/ad6a8d","DOIUrl":"10.1088/1741-2552/ad6a8d","url":null,"abstract":"<p><p><i>Objective</i>. Phantom limb pain (PLP) is debilitating and affects over 70% of people with lower-limb amputation. Other neuropathic pain conditions correspond with increased spinal excitability, which can be measured using reflexes and<i>F</i>-waves. Spinal cord neuromodulation can be used to reduce neuropathic pain in a variety of conditions and may affect spinal excitability, but has not been extensively used for treating PLP. Here, we propose using a non-invasive neuromodulation method, transcutaneous spinal cord stimulation (tSCS), to reduce PLP and modulate spinal excitability after transtibial amputation.<i>Approach</i>. We recruited three participants, two males (5- and 9-years post-amputation, traumatic and alcohol-induced neuropathy) and one female (3 months post-amputation, diabetic neuropathy) for this 5 d study. We measured pain using the McGill Pain Questionnaire (MPQ), visual analog scale (VAS), and pain pressure threshold (PPT) test. We measured spinal reflex and motoneuron excitability using posterior root-muscle (PRM) reflexes and<i>F</i>-waves, respectively. We delivered tSCS for 30 min d<sup>-1</sup>for 5 d.<i>Main Results</i>. After 5 d of tSCS, MPQ scores decreased by clinically-meaningful amounts for all participants from 34.0 ± 7.0-18.3 ± 6.8; however, there were no clinically-significant decreases in VAS scores. Two participants had increased PPTs across the residual limb (Day 1: 5.4 ± 1.6 lbf; Day 5: 11.4 ± 1.0 lbf).<i>F</i>-waves had normal latencies but small amplitudes. PRM reflexes had high thresholds (59.5 ± 6.1<i>μ</i>C) and low amplitudes, suggesting that in PLP, the spinal cord is hypoexcitable. 
After 5 d of tSCS, reflex thresholds decreased significantly (38.6 ± 12.2<i>μ</i>C;<i>p</i>< 0.001).<i>Significance</i>. These preliminary results in this non-placebo-controlled study suggest that, overall, limb amputation and PLP may be associated with reduced spinal excitability and tSCS can increase spinal excitability and reduce PLP.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11391861/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141880034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}