Influence of feedback transparency on motor imagery neurofeedback performance: the contribution of agency.
Pub Date : 2024-10-09 | DOI: 10.1088/1741-2552/ad7f88
Claire Dussard, Léa Pillette, Cassandra Dumas, Emeline Pierrieau, Laurent Hugueville, Brian Lau, Camille Jeunet-Kelway, Nathalie George
Objective. Neurofeedback (NF) is a cognitive training procedure based on real-time feedback (FB) of a participant's brain activity that they must learn to self-regulate. A classical visual FB delivered in a NF task is a filling gauge reflecting a measure of brain activity. This abstract visual FB is not transparently linked, from the subject's perspective, to the task performed (e.g., motor imagery (MI)). This may decrease the sense of agency, that is, the participants' reported control over FB. Here, we assessed the influence of FB transparency on NF performance and the role of agency in this relationship. Approach. Participants performed a NF task using MI to regulate brain activity measured using electroencephalography. In separate blocks, participants experienced three different conditions designed to vary transparency: FB was presented as either (1) a swinging pendulum, (2) a clenching virtual hand, or (3) a clenching virtual hand combined with a motor illusion induced by tendon vibration. We measured self-reported agency and user experience after each NF block. Main results. We found that FB transparency influences NF performance. Transparent visual FB provided by the virtual hand resulted in significantly better NF performance than the abstract FB of the pendulum. Surprisingly, adding a motor illusion to the virtual hand significantly decreased performance relative to the virtual hand alone. When introduced in incremental linear mixed effect models, self-reported agency was significantly associated with NF performance, and it captured the variance related to the effect of FB transparency on NF performance. Significance. Our results highlight the relevance of transparent FB in relation to the sense of agency. This is likely an important consideration in designing FB to improve NF performance and learning outcomes.
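As a rough illustration of the incremental mixed-model analysis described above, the sketch below fits NF performance first on FB condition and then on condition plus self-reported agency, using synthetic data and statsmodels. The column names (subject, condition, agency, nf_perf) and effect sizes are assumptions for illustration, not the authors' data or analysis code.

```python
# Minimal sketch, assuming hypothetical column names and synthetic effect sizes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for s in range(20):                                   # 20 simulated participants
    subj_offset = rng.normal(0, 0.5)                  # random intercept per subject
    for cond in ["pendulum", "virtual_hand", "hand_plus_vibration"]:
        agency = rng.normal(3 + 0.5 * (cond == "virtual_hand"), 1.0)
        perf = (0.5 + subj_offset + 0.2 * (cond == "virtual_hand")
                + 0.1 * agency + rng.normal(0, 0.3))
        rows.append({"subject": s, "condition": cond,
                     "agency": agency, "nf_perf": perf})
df = pd.DataFrame(rows)

# Model 1: condition only; Model 2: add agency to see whether it absorbs the condition effect.
m1 = smf.mixedlm("nf_perf ~ condition", df, groups=df["subject"]).fit()
m2 = smf.mixedlm("nf_perf ~ condition + agency", df, groups=df["subject"]).fit()
print(m1.summary())
print(m2.summary())
```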
{"title":"Influence of feedback transparency on motor imagery neurofeedback performance: the contribution of agency.","authors":"Claire Dussard, Léa Pillette, Cassandra Dumas, Emeline Pierrieau, Laurent Hugueville, Brian Lau, Camille Jeunet-Kelway, Nathalie George","doi":"10.1088/1741-2552/ad7f88","DOIUrl":"10.1088/1741-2552/ad7f88","url":null,"abstract":"<p><p><i>Objective.</i>Neurofeedback (NF) is a cognitive training procedure based on real-time feedback (FB) of a participant's brain activity that they must learn to self-regulate. A classical visual FB delivered in a NF task is a filling gauge reflecting a measure of brain activity. This abstract visual FB is not transparently linked-from the subject's perspective-to the task performed (e.g., motor imagery (MI)). This may decrease the sense of agency, that is, the participants' reported control over FB. Here, we assessed the influence of FB transparency on NF performance and the role of agency in this relationship.<i>Approach.</i>Participants performed a NF task using MI to regulate brain activity measured using electroencephalography. In separate blocks, participants experienced three different conditions designed to vary transparency: FB was presented as either (1) a swinging pendulum, (2) a clenching virtual hand, (3) a clenching virtual hand combined with a motor illusion induced by tendon vibration. We measured self-reported agency and user experience after each NF block.<i>Main results</i>. We found that FB transparency influences NF performance. Transparent visual FB provided by the virtual hand resulted in significantly better NF performance than the abstract FB of the pendulum. Surprisingly, adding a motor illusion to the virtual hand significantly decreased performance relative to the virtual hand alone. When introduced in incremental linear mixed effect models, self-reported agency was significantly associated with NF performance and it captured the variance related to the effect of FB transparency on NF performance.<i>Significance</i>. Our results highlight the relevance of transparent FB in relation to the sense of agency. This is likely an important consideration in designing FB to improve NF performance and learning outcomes.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simultaneous modulation of pulse charge and burst period elicits two differentiable referred sensations.
Pub Date : 2024-10-08 | DOI: 10.1088/1741-2552/ad7f8c
T R Benigni, A E Pena, S S Kuntaegowdanahalli, J J Abbas, R Jung
Objective. To investigate the feasibility of delivering multidimensional feedback using a single channel of peripheral nerve stimulation by complementing intensity percepts with flutter frequency percepts controlled by burst period modulation. Approach. Two dimensions of a distally referred sensation were provided simultaneously: intensity was conveyed by modulation of the pulse charge rate inside short discrete periods of stimulation referred to as bursts, and frequency was conveyed by modulation of the period between bursts. For this approach to be feasible, intensity percepts must be perceived independently of frequency percepts. Two experiments investigated these interactions. A series of two-alternative forced choice (2AFC) tasks was used to investigate the role of burst period modulation in intensity discernibility. Magnitude estimation tasks were used to determine any interactions in the gradation between the frequency and intensity percepts. Main results. The 2AFC tasks revealed that burst periods can be individually differentiated as a gradable frequency percept in peripheral nerve stimulation. Participants could correctly rate a perceptual scale of intensity and frequency regardless of the value of the other dimension, but the dependence of frequency differentiability on charge rate indicates that frequency was harder to detect with weaker intensity percepts. The same was not observed for intensity differentiability, as the length of burst periods did not significantly alter intensity differentiation. These results suggest multidimensional encoding is a promising approach for increasing information throughput in sensory feedback systems if intensity ranges are selected properly. Significance. This study offers valuable insights into haptic feedback through the peripheral nervous system and demonstrates an encoding approach for neural stimulation that may offer enhanced information transfer in virtual reality applications and sensory-enabled prosthetic systems. This multidimensional encoding strategy for sensory feedback may open new avenues for enriched control capabilities.
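To make the encoding scheme concrete, here is a minimal NumPy sketch of a burst-modulated pulse train in which the within-burst pulse rate stands in for the charge-rate/intensity dimension and the inter-burst period sets the flutter-frequency dimension. All parameter values are illustrative assumptions, not the stimulation settings used in the study.

```python
# Minimal sketch of the two-dimensional encoding idea; parameters are illustrative.
import numpy as np

def burst_train(duration_s, burst_period_s, burst_len_s, pulse_rate_hz):
    """Return pulse times (s) for a burst-modulated stimulation train."""
    pulse_times = []
    t = 0.0
    while t < duration_s:
        # pulses inside one burst, spaced by 1 / pulse_rate (intensity dimension)
        n_pulses = int(burst_len_s * pulse_rate_hz)
        pulse_times.extend(t + np.arange(n_pulses) / pulse_rate_hz)
        t += burst_period_s  # inter-burst period sets the flutter-frequency percept
    return np.array(pulse_times)

# Same "intensity" (within-burst pulse rate) delivered at two different burst periods.
slow_flutter = burst_train(1.0, burst_period_s=0.10, burst_len_s=0.05, pulse_rate_hz=200)
fast_flutter = burst_train(1.0, burst_period_s=0.05, burst_len_s=0.025, pulse_rate_hz=200)
print(len(slow_flutter), len(fast_flutter))
```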
{"title":"Simultaneous modulation of pulse charge and burst period elicits two differentiable referred sensations.","authors":"T R Benigni, A E Pena, S S Kuntaegowdanahalli, J J Abbas, R Jung","doi":"10.1088/1741-2552/ad7f8c","DOIUrl":"10.1088/1741-2552/ad7f8c","url":null,"abstract":"<p><p><i>Objective.</i>To investigate the feasibility of delivering multidimensional feedback using a single channel of peripheral nerve stimulation by complementing intensity percepts with flutter frequency percepts controlled by burst period modulation.<i>Approach.</i>Two dimensions of a distally referred sensation were provided simultaneously: intensity was conveyed by the modulation of the pulse charge rate inside short discrete periods of stimulation referred to as bursts and frequency was conveyed by the modulation of the period between bursts. For this approach to be feasible, intensity percepts must be perceived independently of frequency percepts. Two experiments investigated these interactions. A series of two alternative forced choice tasks (2AFC) were used to investigate burst period modulation's role in intensity discernibility. Magnitude estimation tasks were used to determine any interactions in the gradation between the frequency and intensity percepts.<i>Main results.</i>The 2AFC revealed that burst periods can be individually differentiated as a gradable frequency percept in peripheral nerve stimulation. Participants could correctly rate a perceptual scale of intensity and frequency regardless of the value of the second, but the dependence of frequency differentiability on charge rate indicates that frequency was harder to detect with weaker intensity percepts. The same was not observed in intensity differentiability as the length of burst periods did not significantly alter intensity differentiation. These results suggest multidimensional encoding is a promising approach for increasing information throughput in sensory feedback systems if intensity ranges are selected properly.<i>Significance.</i>This study offers valuable insights into haptic feedback through the peripheral nervous system and demonstrates an encoding approach for neural stimulation that may offer enhanced information transfer in virtual reality applications and sensory-enabled prosthetic systems. This multidimensional encoding strategy for sensory feedback may open new avenues for enriched control capabilities.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensorimotor brain-computer interface performance depends on signal-to-noise ratio but not connectivity of the mu rhythm in a multiverse analysis of longitudinal data.
Pub Date : 2024-10-08 | DOI: 10.1088/1741-2552/ad7a24
Nikolai Kapralov, Mina Jamshidi Idaji, Tilman Stephani, Alina Studenova, Carmen Vidaurre, Tomas Ros, Arno Villringer, Vadim Nikulin
Objective. Serving as a channel for communication with locked-in patients or control of prostheses, sensorimotor brain-computer interfaces (BCIs) decode imaginary movements from the recorded activity of the user's brain. However, many individuals remain unable to control the BCI, and the underlying mechanisms are unclear. The user's BCI performance was previously shown to correlate with the resting-state signal-to-noise ratio (SNR) of the mu rhythm and the phase synchronization (PS) of the mu rhythm between sensorimotor areas. Yet, these predictors of performance were primarily evaluated in a single BCI session, while the longitudinal aspect remains rather uninvestigated. In addition, different analysis pipelines were used to estimate PS in source space, potentially hindering the reproducibility of the results. Approach. To systematically address these issues, we performed an extensive validation of the relationship between pre-stimulus SNR, PS, and session-wise BCI performance using a publicly available dataset of 62 human participants performing up to 11 sessions of BCI training. We performed the analysis in sensor space using the surface Laplacian and in source space by combining 24 processing pipelines in a multiverse analysis. This way, we could investigate how robust the observed effects were to the selection of the pipeline. Main results. Our results show that SNR had both between- and within-subject effects on BCI performance for the majority of the pipelines. In contrast, the effect of PS on BCI performance was less robust to the selection of the pipeline and became non-significant after controlling for SNR. Significance. Taken together, our results demonstrate that changes in neuronal connectivity within the sensorimotor system are not critical for learning to control a BCI, and interventions that increase the SNR of the mu rhythm might lead to improvements in the user's BCI performance.
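For readers unfamiliar with the SNR predictor, the following sketch shows one simple way to operationalize resting-state mu-rhythm SNR from a power spectrum (band power at 8-13 Hz relative to flanking bands), computed on synthetic data. It illustrates the concept only; it is not the estimator or the 24 source-space pipelines evaluated in the paper.

```python
# Minimal sketch: mu-band SNR as band power over flanking-band power, synthetic signal.
import numpy as np
from scipy.signal import welch

fs = 250.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
# synthetic "sensorimotor" channel: 10 Hz mu rhythm + drifting broadband noise
eeg = (2.0 * np.sin(2 * np.pi * 10 * t)
       + np.cumsum(rng.normal(0, 0.3, t.size)) * 0.01
       + rng.normal(0, 1.0, t.size))

f, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
mu = (f >= 8) & (f <= 13)
flank = ((f >= 5) & (f < 8)) | ((f > 13) & (f <= 16))   # crude noise proxy
snr_db = 10 * np.log10(psd[mu].mean() / psd[flank].mean())
print(f"mu-band SNR ~ {snr_db:.1f} dB")
```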
{"title":"Sensorimotor brain-computer interface performance depends on signal-to-noise ratio but not connectivity of the mu rhythm in a multiverse analysis of longitudinal data.","authors":"Nikolai Kapralov, Mina Jamshidi Idaji, Tilman Stephani, Alina Studenova, Carmen Vidaurre, Tomas Ros, Arno Villringer, Vadim Nikulin","doi":"10.1088/1741-2552/ad7a24","DOIUrl":"10.1088/1741-2552/ad7a24","url":null,"abstract":"<p><p><i>Objective.</i>Serving as a channel for communication with locked-in patients or control of prostheses, sensorimotor brain-computer interfaces (BCIs) decode imaginary movements from the recorded activity of the user's brain. However, many individuals remain unable to control the BCI, and the underlying mechanisms are unclear. The user's BCI performance was previously shown to correlate with the resting-state signal-to-noise ratio (SNR) of the mu rhythm and the phase synchronization (PS) of the mu rhythm between sensorimotor areas. Yet, these predictors of performance were primarily evaluated in a single BCI session, while the longitudinal aspect remains rather uninvestigated. In addition, different analysis pipelines were used to estimate PS in source space, potentially hindering the reproducibility of the results.<i>Approach.</i>To systematically address these issues, we performed an extensive validation of the relationship between pre-stimulus SNR, PS, and session-wise BCI performance using a publicly available dataset of 62 human participants performing up to 11 sessions of BCI training. We performed the analysis in sensor space using the surface Laplacian and in source space by combining 24 processing pipelines in a multiverse analysis. This way, we could investigate how robust the observed effects were to the selection of the pipeline.<i>Main results.</i>Our results show that SNR had both between- and within-subject effects on BCI performance for the majority of the pipelines. In contrast, the effect of PS on BCI performance was less robust to the selection of the pipeline and became non-significant after controlling for SNR.<i>Significance.</i>Taken together, our results demonstrate that changes in neuronal connectivity within the sensorimotor system are not critical for learning to control a BCI, and interventions that increase the SNR of the mu rhythm might lead to improvements in the user's BCI performance.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142304993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E-SAT: An extreme learning machine based self-attention approach for decoding motor imagery EEG in subject-specific tasks.
Pub Date : 2024-10-07 | DOI: 10.1088/1741-2552/ad83f4
Muhammad Ahmed Ahmed Abbasi, Hafza Faiza Abbasi, Xiaojun Yu, Muhammad Zulkifal Aziz, Nicole Tye June Yih Yih, Zeming Fan
Advancements in brain-computer interfaces (BCIs) have substantially changed people's lives by enabling direct communication between the human brain and external peripheral devices. In recent years, the integration of machine learning (ML) and deep learning (DL) models has considerably improved the performance of BCIs in decoding motor imagery (MI) tasks. However, several limitations remain with existing models, e.g., extensive training time and high sensitivity to noise or outliers, which largely hinder the rapid development of BCIs. To address these issues, this paper proposes a novel extreme learning machine (ELM) based self-attention (E-SAT) mechanism to enhance subject-specific classification performance. Specifically, for E-SAT, ELM is employed both to improve the generalization ability of the self-attention module for feature extraction and to optimize the model's parameter initialization process. Meanwhile, the extracted features are also classified using ELM, and this end-to-end ELM-based setup is used to evaluate E-SAT performance on different MI EEG signals. Extensive experiments on different datasets, such as BCI Competition III Datasets IV-a and IV-b and BCI Competition IV Datasets 1, 2a, 2b, and 3, are conducted to verify the effectiveness of the proposed E-SAT strategy. Results show that E-SAT outperforms several state-of-the-art (SOTA) methods in subject-specific classification on all the datasets, with average classification accuracies of 99.8%, 99.1%, 98.9%, 75.8%, 90.8%, and 95.4% achieved on the respective datasets. The experimental results not only show the outstanding performance of E-SAT in feature extraction, but also demonstrate that it achieves the best results compared with nine other robust methods. In addition, the results of this study demonstrate that E-SAT achieves exceptional performance in both binary and multi-class classification tasks, as well as on noisy and non-noisy datasets.
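For context, the sketch below implements the classic extreme learning machine building block that E-SAT builds on: a fixed random hidden layer followed by a closed-form least-squares readout. The self-attention module and the paper's full architecture are not reproduced, and the toy features standing in for MI-EEG representations are assumptions.

```python
# Minimal ELM sketch (random hidden layer + pseudoinverse readout); toy data only.
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                      # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)              # fixed random nonlinear projection
        self.beta = np.linalg.pinv(H) @ T             # closed-form readout weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# toy usage on random features standing in for MI-EEG feature vectors
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 32))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = SimpleELM().fit(X, y)
print("training accuracy:", (clf.predict(X) == y).mean())
```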
{"title":"E-SAT: An extreme learning machine based self attention approach for decoding motor imagery EEG in subject-specific tasks.","authors":"Muhammad Ahmed Ahmed Abbasi, Hafza Faiza Abbasi, Xiaojun Yu, Muhammad Zulkifal Aziz, Nicole Tye June Yih Yih, Zeming Fan","doi":"10.1088/1741-2552/ad83f4","DOIUrl":"https://doi.org/10.1088/1741-2552/ad83f4","url":null,"abstract":"<p><p>The advancements in Brain-Computer Interface (BCI) have substantially evolved people's lives by enabling direct communication between the human brain and external peripheral devices. In recent years, the integration of machine larning (ML) and deep learning (DL) models have considerably imrpoved the performances of BCIs for decoding the motor imagery (MI) tasks. However, there still exist several limitations, e.g., extensive training time and high sensitivity to noises or outliers with those existing models, which largely hinder the rapid developments of BCIs. To address such issues, this paper proposes a novel extreme learning machine (ELM) based self-attention (E-SAT) mechanism to enhance subject-specific classification performances. Specifically, for E-SAT, ELM is employed both to imrpove self-attention module generalization ability for feature extraction and to optimize the model's parameter initialization process. Meanwhile, the extracted features are also classified using ELM, and the end-to-end ELM based setup is used to evaluate E-SAT performances on different MI EEG signals. Extensive experiments with different datasets, such as BCI Competition III Dataset IV-a, IV-b and BCI Competition IV Datasets 1,2a,2b,3, are conducted to verify the effectiveness of proposed E-SAT strategy. Results show that E-SAT outperforms several state-of-the-art (SOTA) existing methods in subject-specific classification on all the datasets, with an average classification accuracy of 99.8%,99.1%,98.9%,75.8%, 90.8%, and 95.4%, being achieved for each datasets, respectively. The experimental results not only show outstanding performance of E-SAT in feature extractions, but also demonstrate that it helps achieves the best results among nine other robust ones. In addition, results in this study also demonstrate that E-SAT achieves exceptional performance in both binary and multi-class classification tasks, as well as for noisy and non-noisy datatsets.
.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142396391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a stereo-EEG based seizure matching system for clinical decision making in epilepsy surgery.
Pub Date : 2024-10-04 | DOI: 10.1088/1741-2552/ad7323
John Thomas, Chifaou Abdallah, Kassem Jaber, Mays Khweileh, Olivier Aron, Irena Doležalová, Vadym Gnatkovsky, Daniel Mansilla, Päivi Nevalainen, Raluca Pana, Stephan Schuele, Jaysingh Singh, Ana Suller-Marti, Alexandra Urban, Jeffery Hall, François Dubeau, Louis Maillard, Philippe Kahane, Jean Gotman, Birgit Frauscher
Objective. The proportion of patients becoming seizure-free after epilepsy surgery has stagnated. Large multi-center stereo-electroencephalography (SEEG) datasets can allow comparing new patients to past similar cases and making clinical decisions with the knowledge of how cases were treated in the past. However, the complexity of these evaluations makes the manual search for similar patients impractical. We aim to develop an automated system that electrographically and anatomically matches seizures to those in a database. Additionally, since features that define seizure similarity are unknown, we evaluate the agreement and features among experts in classifying similarity. Approach. We utilized 320 SEEG seizures from 95 consecutive patients who underwent epilepsy surgery. Eight international experts evaluated seizure-pair similarity using a four-level similarity score. As our primary outcome, we developed and validated an automated seizure matching system by employing patient data marked by independent experts. Secondary outcomes included the inter-rater agreement (IRA) and features for classifying seizure similarity. Main results. The seizure matching system achieved a median area-under-the-curve of 0.76 (interquartile range, 0.1), indicating its feasibility. Six distinct seizure similarity features were identified and proved effective: onset region, onset pattern, propagation region, duration, extent of spread, and propagation speed. Among these features, the onset region showed the strongest correlation with expert scores (Spearman's rho = 0.75, p < 0.001). Additionally, the moderate IRA confirmed the practicality of our approach, with an agreement of 73.9% (7%) and Gwet's kappa of 0.45 (0.16). Further, the interoperability of the system was validated on seizures from five centers. Significance. We demonstrated the feasibility and validity of a SEEG seizure matching system across patients, effectively mirroring the expertise of epileptologists. This novel system can identify patients with seizures similar to that of a patient being evaluated, thus optimizing the treatment plan by considering the results of treating similar patients in the past, potentially improving surgery outcome.
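As an illustration of the matching idea, the sketch below scores seizure-pair similarity as a weighted agreement over a few of the features listed above and compares the automated scores with simulated expert ratings via Spearman correlation. The feature set, weights, and similarity rule are illustrative assumptions, not the authors' validated system.

```python
# Minimal sketch of feature-based seizure-pair similarity scoring; all values synthetic.
import numpy as np
from scipy.stats import spearmanr

def pair_similarity(a, b):
    score = 0.0
    score += 0.4 * (a["onset_region"] == b["onset_region"])
    score += 0.2 * (a["onset_pattern"] == b["onset_pattern"])
    score += 0.2 * (a["propagation_region"] == b["propagation_region"])
    score += 0.2 * np.exp(-abs(a["duration_s"] - b["duration_s"]) / 30.0)
    return score

rng = np.random.default_rng(3)
seizures = [{"onset_region": rng.integers(0, 4), "onset_pattern": rng.integers(0, 3),
             "propagation_region": rng.integers(0, 4), "duration_s": rng.uniform(10, 120)}
            for _ in range(20)]
pairs = [(i, j) for i in range(20) for j in range(i + 1, 20)]
auto = np.array([pair_similarity(seizures[i], seizures[j]) for i, j in pairs])
expert = auto + rng.normal(0, 0.15, auto.size)   # stand-in for 4-level expert ratings
rho, p = spearmanr(auto, expert)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```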
{"title":"Development of a stereo-EEG based seizure matching system for clinical decision making in epilepsy surgery.","authors":"John Thomas, Chifaou Abdallah, Kassem Jaber, Mays Khweileh, Olivier Aron, Irena Doležalová, Vadym Gnatkovsky, Daniel Mansilla, Päivi Nevalainen, Raluca Pana, Stephan Schuele, Jaysingh Singh, Ana Suller-Marti, Alexandra Urban, Jeffery Hall, François Dubeau, Louis Maillard, Philippe Kahane, Jean Gotman, Birgit Frauscher","doi":"10.1088/1741-2552/ad7323","DOIUrl":"10.1088/1741-2552/ad7323","url":null,"abstract":"<p><p><i>Objective.</i>The proportion of patients becoming seizure-free after epilepsy surgery has stagnated. Large multi-center stereo-electroencephalography (SEEG) datasets can allow comparing new patients to past similar cases and making clinical decisions with the knowledge of how cases were treated in the past. However, the complexity of these evaluations makes the manual search for similar patients impractical. We aim to develop an automated system that electrographically and anatomically matches seizures to those in a database. Additionally, since features that define seizure similarity are unknown, we evaluate the agreement and features among experts in classifying similarity.<i>Approach.</i>We utilized 320 SEEG seizures from 95 consecutive patients who underwent epilepsy surgery. Eight international experts evaluated seizure-pair similarity using a four-level similarity score. As our primary outcome, we developed and validated an automated seizure matching system by employing patient data marked by independent experts. Secondary outcomes included the inter-rater agreement (IRA) and features for classifying seizure similarity.<i>Main results.</i>The seizure matching system achieved a median area-under-the-curve of 0.76 (interquartile range, 0.1), indicating its feasibility. Six distinct seizure similarity features were identified and proved effective: onset region, onset pattern, propagation region, duration, extent of spread, and propagation speed. Among these features, the onset region showed the strongest correlation with expert scores (Spearman's rho = 0.75,<i>p</i>< 0.001). Additionally, the moderate IRA confirmed the practicality of our approach with an agreement of 73.9% (7%), and Gwet's kappa of 0.45 (0.16). Further, the interoperability of the system was validated on seizures from five centers.<i>Significance.</i>We demonstrated the feasibility and validity of a SEEG seizure matching system across patients, effectively mirroring the expertise of epileptologists. This novel system can identify patients with seizures similar to that of a patient being evaluated, thus optimizing the treatment plan by considering the results of treating similar patients in the past, potentially improving surgery outcome.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142047661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Filter banks guided correlational convolutional neural network for SSVEPs based BCI classification.
Pub Date : 2024-10-04 | DOI: 10.1088/1741-2552/ad7f89
Xin Wen, Shuting Jia, Dan Han, Yanqing Dong, Chengxin Gao, Ruochen Cao, Yanrong Hao, Yuxiang Guo, Rui Cao
Objective. In the field of steady-state visual evoked potential brain-computer interface (SSVEP-BCI) research, convolutional neural networks (CNNs) have gradually proved to be an effective method. However, most works apply frequency-domain characteristics in long time windows to train the network, which leads to insufficient performance in short time windows. Furthermore, using only frequency-domain information for classification neglects other task-related information. Approach. To address these issues, we propose a time-frequency domain generalized filter-bank convolutional neural network (FBCNN-G) to improve SSVEP-BCI classification performance. The network integrates multiple frequency information of the electroencephalogram (EEG) with template and predefined sine-cosine prior signals to perform feature extraction, which contains correlation analyses in both the template and signal aspects. The classification is then performed at the end of the network. In addition, the method uses filter banks divided into specific frequency bands as pre-filters in the network to fully consider the fundamental and harmonic frequency characteristics of the signal. Main results. The proposed FBCNN-G model is compared with other methods on the public Benchmark dataset. The results show that this model achieves higher character recognition accuracy and information transfer rates in several time windows. In particular, in the 0.2 s time window, the mean accuracy of the proposed method reaches 62.02% ± 5.12%, indicating its superior performance. Significance. The proposed FBCNN-G model is valuable for the development of SSVEP-BCI character recognition models.
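To illustrate the filter-bank front end, here is a minimal SciPy sketch that decomposes an EEG epoch into sub-bands intended to cover the SSVEP fundamental and harmonic frequencies before any correlation or CNN stage. The band edges and filter order are illustrative choices, not the settings used for FBCNN-G.

```python
# Minimal filter-bank sketch for SSVEP pre-filtering; band edges are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(epoch, fs, bands=((6, 16), (14, 24), (22, 32), (30, 40))):
    """epoch: (n_channels, n_samples) -> (n_bands, n_channels, n_samples)"""
    out = []
    for low, high in bands:
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
        out.append(filtfilt(b, a, epoch, axis=-1))   # zero-phase band-pass per sub-band
    return np.stack(out)

fs = 250
epoch = np.random.default_rng(4).normal(size=(9, int(0.2 * fs)))  # a 0.2 s, 9-channel epoch
sub_bands = filter_bank(epoch, fs)
print(sub_bands.shape)  # (4, 9, 50)
```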
{"title":"Filter banks guided correlational convolutional neural network for SSVEPs based BCI classification.","authors":"Xin Wen, Shuting Jia, Dan Han, Yanqing Dong, Chengxin Gao, Ruochen Cao, Yanrong Hao, Yuxiang Guo, Rui Cao","doi":"10.1088/1741-2552/ad7f89","DOIUrl":"10.1088/1741-2552/ad7f89","url":null,"abstract":"<p><p><i>Objective.</i>In the field of steady-state visual evoked potential brain computer interfaces (SSVEP-BCIs) research, convolutional neural networks (CNNs) have gradually been proved to be an effective method. Whereas, majority works apply the frequency domain characteristics in long time window to train the network, thus lead to insufficient performance of those networks in short time window. Furthermore, only the frequency domain information for classification lacks of other task-related information.<i>Approach.</i>To address these issues, we propose a time-frequency domain generalized filter-bank convolutional neural network (FBCNN-G) to improve the SSVEP-BCIs classification performance. The network integrates multiple frequency information of electroencephalogram (EEG) with template and predefined prior of sine-cosine signals to perform feature extraction, which contains correlation analyses in both template and signal aspects. Then the classification is performed at the end of the network. In addition, the method proposes the use of filter banks divided into specific frequency bands as pre-filters in the network to fully consider the fundamental and harmonic frequency characteristics of the signal.<i>Main results.</i>The proposed FBCNN-G model is compared with other methods on the public dataset Benchmark. The results manifest that this model has higher accuracy of character recognition accuracy and information transfer rates in several time windows. Particularly, in the 0.2 s time window, the mean accuracy of the proposed method reaches62.02%±5.12%, indicating its superior performance.<i>Significance.</i>The proposed FBCNN-G model is critical for the exploitation of SSVEP-BCIs character recognition models.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EEG electrodes and where to find them: automated localization from 3D scans.
Pub Date : 2024-09-30 | DOI: 10.1088/1741-2552/ad7c7e
Mats Tveter, Thomas Tveitstøl, Tønnes Nygaard, Ana S Pérez T, Shrikanth Kulashekhar, Ricardo Bruña, Hugo L Hammer, Christoffer Hatlestad-Hall, Ira R J Hebold Haraldsen
Objective. The accurate localization of electroencephalography (EEG) electrode positions is crucial for precise source localization. Recent advancements have proposed alternatives to labor-intensive, manual methods for spatial localization of the electrodes, employing technologies such as 3D scanning and laser scanning. These novel approaches often integrate magnetic resonance imaging (MRI) as part of the electrode localization pipeline. However, the limited global availability of MRI data restricts its use as a standard modality in several clinical scenarios, which in turn restricts the use of these advanced methods. Approach. In this paper, we present a novel, versatile approach that utilizes 3D scans to localize EEG electrode positions with high accuracy. Importantly, while our method can be integrated with MRI data if available, it is specifically designed to be highly effective even in the absence of MRI, thus expanding the potential for advanced EEG analysis in various resource-limited settings. Our solution implements a two-tiered approach involving landmark/fiducial localization and electrode localization, creating an end-to-end framework. Main results. The efficacy and robustness of our approach have been validated on an extensive dataset containing over 400 3D scans from 278 subjects. The framework identifies pre-auricular points and achieves correct electrode positioning accuracy in the range of 85.7% to 91.0%. Additionally, our framework includes a validation tool that permits manual adjustments and visual validation if required. Significance. This study represents, to the best of the authors' knowledge, the first validation of such a method on a substantial dataset, thus ensuring the robustness and generalizability of our innovative approach. Our findings focus on developing a solution that facilitates source localization without the need for MRI, contributing to the critical discussion on balancing cost effectiveness with methodological accuracy to promote wider adoption in both research and clinical settings.
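One step that such a pipeline typically needs is rigid alignment of a labelled template to fiducials found on the scan; the sketch below shows the standard SVD-based (Kabsch) solution on synthetic points. This is not the authors' framework, which also includes landmark detection on the scan, per-electrode localization, and a validation tool.

```python
# Minimal Kabsch alignment sketch: map template fiducials onto scan fiducials.
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# template fiducials (arbitrary head-frame coordinates) vs. the same points on a scan
template = np.array([[0.0, 0.10, 0.0], [-0.08, 0.0, 0.0], [0.08, 0.0, 0.0]])  # nasion, LPA, RPA
rng = np.random.default_rng(5)
theta = 0.2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
scan = template @ Rz.T + np.array([0.01, -0.02, 0.03]) + rng.normal(0, 1e-4, template.shape)
R, t = rigid_align(template, scan)
print("residual (mm):", 1e3 * np.linalg.norm(template @ R.T + t - scan, axis=1))
```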
{"title":"EEG electrodes and where to find them: automated localization from 3D scans.","authors":"Mats Tveter, Thomas Tveitstøl, Tønnes Nygaard, Ana S Pérez T, Shrikanth Kulashekhar, Ricardo Bruña, Hugo L Hammer, Christoffer Hatlestad-Hall, Ira R J Hebold Haraldsen","doi":"10.1088/1741-2552/ad7c7e","DOIUrl":"10.1088/1741-2552/ad7c7e","url":null,"abstract":"<p><p><i>Objective.</i>The accurate localization of electroencephalography (EEG) electrode positions is crucial for accurate source localization. Recent advancements have proposed alternatives to labor-intensive, manual methods for spatial localization of the electrodes, employing technologies such as 3D scanning and laser scanning. These novel approaches often integrate magnetic resonance imaging (MRI) as part of the pipeline in localizing the electrodes. The limited global availability of MRI data restricts its use as a standard modality in several clinical scenarios. This limitation restricts the use of these advanced methods.<i>Approach.</i>In this paper, we present a novel, versatile approach that utilizes 3D scans to localize EEG electrode positions with high accuracy. Importantly, while our method can be integrated with MRI data if available, it is specifically designed to be highly effective even in the absence of MRI, thus expanding the potential for advanced EEG analysis in various resource-limited settings. Our solution implements a two-tiered approach involving landmark/fiducials localization and electrode localization, creating an end-to-end framework.<i>Main results.</i>The efficacy and robustness of our approach have been validated on an extensive dataset containing over 400 3D scans from 278 subjects. The framework identifies pre-auricular points and achieves correct electrode positioning accuracy in the range of 85.7% to 91.0%. Additionally, our framework includes a validation tool that permits manual adjustments and visual validation if required.<i>Significance.</i>This study represents, to the best of the authors' knowledge, the first validation of such a method on a substantial dataset, thus ensuring the robustness and generalizability of our innovative approach. Our findings focus on developing a solution that facilitates source localization, without the need for MRI, contributing to the critical discussion on balancing cost effectiveness with methodological accuracy to promote wider adoption in both research and clinical settings.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142304991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A shared robot control system combining augmented reality and motor imagery brain-computer interfaces with eye tracking.
Pub Date : 2024-09-25 | DOI: 10.1088/1741-2552/ad7f8d
Arnau Dillen, Mohsen Omidi, Fakhreddine Ghaffari, Bram Vanderborght, Bart Roelands, Olivier Romain, Ann Nowé, Kevin De Pauw
Objective: Brain-computer interface (BCI) control systems monitor neural activity to detect the user's intentions, enabling device control through mental imagery. Despite their potential, decoding neural activity in real-world conditions poses significant challenges, making BCIs currently impractical compared to traditional interaction methods. This study introduces a novel motor imagery (MI) BCI control strategy for operating a physically assistive robotic arm, addressing the difficulties of MI decoding from electroencephalogram (EEG) signals, which are inherently non-stationary and vary across individuals.
Approach: A proof-of-concept BCI control system was developed using commercially available hardware, integrating MI with eye tracking in an augmented reality (AR) user interface to facilitate a shared control approach. This system proposes actions based on the user's gaze, enabling selection through imagined movements. A user study was conducted to evaluate the system's usability, focusing on its effectiveness and efficiency.
Main results: Participants performed tasks that simulated everyday activities with the robotic arm, demonstrating the shared control system's feasibility and practicality in real-world scenarios. Despite low online decoding performance (mean accuracy: 0.529, F1: 0.29, Cohen's kappa: 0.12), participants achieved a mean success rate of 0.83 in the final phase of the user study when given 15 minutes to complete the evaluation tasks. The success rate dropped below 0.5 when a 5-minute cutoff time was selected.
Significance: These results indicate that integrating AR and eye tracking can significantly enhance the usability of BCI systems, despite the complexities of MI-EEG decoding. While efficiency is still low, the effectiveness of our approach was verified. This suggests that BCI systems have the potential to become a viable interaction modality for everyday applications in the future.
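To make the shared-control logic concrete, the sketch below shows the proposal/confirmation pattern described above: the gazed-at object proposes a candidate robot action and an MI decoder output confirms or rejects it. The decoder and the object-to-action mapping are stubs with hypothetical names and thresholds, not the study's implementation.

```python
# Minimal shared-control sketch: gaze proposes, motor imagery confirms. All stubs.
import random

ACTIONS_BY_TARGET = {"cup": "grasp_cup", "door": "open_door", "table": "place_object"}

def decode_motor_imagery() -> float:
    """Stub for an MI decoder; returns P(imagined movement) for the current EEG window."""
    return random.random()

def shared_control_step(gazed_object: str, confirm_threshold: float = 0.6):
    proposal = ACTIONS_BY_TARGET.get(gazed_object)
    if proposal is None:
        return None                      # nothing actionable under gaze
    p_mi = decode_motor_imagery()
    return proposal if p_mi >= confirm_threshold else None

random.seed(0)
for gaze in ["cup", "wall", "door"]:
    print(gaze, "->", shared_control_step(gaze))
```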
{"title":"A shared robot control system combining augmented reality and motor imagery brain-computer interfaces with eye tracking.","authors":"Arnau Dillen, Mohsen Omidi, Fakhreddine Ghaffari, Bram Vanderborght, Bart Roelands, Olivier Romain, Ann Nowé, Kevin De Pauw","doi":"10.1088/1741-2552/ad7f8d","DOIUrl":"https://doi.org/10.1088/1741-2552/ad7f8d","url":null,"abstract":"<p><p><b>Objective</b>: Brain-computer interface (BCI) control systems monitor neural activity to detect the user's intentions, enabling device control through mental imagery. Despite their potential, decoding neural activity in real-world conditions poses significant challenges, making BCIs currently impractical compared to traditional interaction methods. This study introduces a novel motor imagery (MI) BCI control strategy for operating a physically assistive robotic arm, addressing the difficulties of MI decoding from electroencephalogram (EEG) signals, which are inherently non-stationary and vary across individuals.
<b>Approach</b>: A proof-of-concept BCI control system was developed using commercially available hardware, integrating MI with eye tracking in an augmented reality (AR) user interface to facilitate a shared control approach. This system proposes actions based on the user's gaze, enabling selection through imagined movements. A user study was conducted to evaluate the system's usability, focusing on its effectiveness and efficiency.
<b>Main results:</b>Participants performed tasks that simulated everyday activities with the robotic arm, demonstrating the shared control system's feasibility and practicality in real-world scenarios. Despite low online decoding performance (mean accuracy: 0.52 9, F1: 0.29, Cohen's Kappa: 0.12), participants achieved a mean success rate of 0.83 in the final phase of the user study when given 15 minutes to complete the evaluation tasks. The success rate dropped below 0.5 when a 5-minute cutoff time was selected.
<b>Significance</b>: These results indicate that integrating AR and eye tracking can significantly enhance the usability of BCI systems, despite the complexities of MI-EEG decoding. While efficiency is still low, the effectiveness of our approach was verified. This suggests that BCI systems have the potential to become a viable interaction modality for everyday applications
in the future.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Epidural Spinal Cord Recordings (ESRs): sources of neural-appearing artifact in stimulation evoked compound action potentials.
Pub Date : 2024-09-25 | DOI: 10.1088/1741-2552/ad7f8b
Ashlesha Deshmukh, Megan L Settell, Kevin Cheng, Bruce E Knudsen, James K Trevathan, Maria LaLuzerne, Stephan L Blanz, Aaron Skubal, Nishant Verma, Ben Benjamin Romanauski, Meagan K Brucker-Hahn, Danny Lam, Igor Lavrov, Aaron J Suminski, Douglas J Weber, Lee E Fisher, Scott F Lempka, Andrew J Shoffstall, Hyunjoo Park, Erika Ross, Mingming Zhang, Kip A Ludwig
Evoked compound action potentials (ECAPs) measured during epidural spinal cord stimulation (SCS) can help elucidate fundamental mechanisms for the treatment of pain and inform closed-loop control of SCS. Previous studies have used ECAPs to characterize neural responses to various neuromodulation therapies and have demonstrated that ECAPs are highly prone to multiple sources of artifact, including post-stimulus pulse capacitive artifact, electromyography (EMG) bleed-through, and motion artifact. However, a thorough characterization has yet to be performed of how these sources of artifact may contaminate recordings within the temporal window commonly used to determine activation of A-beta fibers in a large animal model.
We characterized sources of artifacts that can contaminate the recording of ECAPs in an epidural SCS swine model using the Abbott Octrode™ lead. Spinal ECAP recordings can be contaminated by capacitive artifact, short-latency EMG from nearby muscles of the back, and motion artifact. The capacitive artifact can appear nearly identical in duration and waveshape to evoked A-beta responses. EMG bleed-through can have phase shifts across the electrode array, similar to the phase shift anticipated by propagation of an evoked A-beta fiber response. The short-latency EMG is often evident at currents similar to those needed to activate A-beta fibers associated with the treatment of pain. Changes in cerebrospinal fluid (CSF) between the cord and dura, and motion induced during breathing, created a cyclic oscillation in all evoked components of recorded ECAPs.
Controls must be implemented to separate neural signal from sources of artifact in SCS ECAPs. We suggest experimental procedures and reporting requirements necessary to disambiguate the underlying neural response from these confounds. These data are important to better understand the framework for recorded ESRs, with components such as ECAPs, EMG, and artifacts, and have important implications for closed-loop control algorithms to account for transient motion such as postural changes and cough.
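As background for the artifact discussion, the sketch below shows a common way evoked responses are formed in practice: epoching the recording around each stimulation pulse, blanking the first post-stimulus samples where capacitive artifact dominates, and averaging across pulses. The window lengths are illustrative, and blanking alone does not address the EMG bleed-through or motion artifacts characterized here.

```python
# Minimal stimulation-triggered averaging sketch with post-stimulus blanking; synthetic data.
import numpy as np

def ecap_average(signal, stim_idx, fs, win_ms=5.0, blank_ms=0.5):
    win = int(win_ms * 1e-3 * fs)
    blank = int(blank_ms * 1e-3 * fs)
    epochs = np.stack([signal[i:i + win] for i in stim_idx if i + win <= signal.size])
    avg = epochs.mean(axis=0)
    avg[:blank] = np.nan                 # mark the blanked artifact region as excluded
    return avg

fs = 30000
rng = np.random.default_rng(6)
sig = rng.normal(0, 1.0, fs)             # 1 s of synthetic recording
stim_idx = np.arange(1000, fs - 200, 600)
print(ecap_average(sig, stim_idx, fs).shape)
```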
{"title":"Epidural Spinal Cord Recordings (ESRs): sources of neural-appearing artifact in stimulation evoked compound action potentials.","authors":"Ashlesha Deshmukh, Megan L Settell, Kevin Cheng, Bruce E Knudsen, James K Trevathan, Maria LaLuzerne, Stephan L Blanz, Aaron Skubal, Nishant Verma, Ben Benjamin Romanauski, Meagan K Brucker-Hahn, Danny Lam, Igor Lavrov, Aaron J Suminski, Douglas J Weber, Lee E Fisher, Scott F Lempka, Andrew J Shoffstall, Hyunjoo Park, Erika Ross, Mingming Zhang, Kip A Ludwig","doi":"10.1088/1741-2552/ad7f8b","DOIUrl":"https://doi.org/10.1088/1741-2552/ad7f8b","url":null,"abstract":"<p><p>Evoked compound action potentials (ECAPs) measured during epidural spinal cord stimulation (SCS) can help elucidate fundamental mechanisms for the treatment of pain and inform closed-loop control of SCS. Previous studies have used ECAPs to characterize neural responses to various neuromodulation therapies and have demonstrated that ECAPs are highly prone to multiple sources of artifact, including post-stimulus pulse capacitive artifact, electromyography (EMG) bleed-through, and motion artifact. However, a thorough characterization has yet to be performed for how these sources of artifact may contaminate recordings within the temporal window commonly used to determine activation of A-beta fibers in a large animal model.
We characterized sources of artifacts that can contaminate the recording of ECAPs in an epidural SCS swine model using the Abbott Octrode™ lead. Spinal ECAP recordings can be contaminated by capacitive artifact, short latency EMG from nearby muscles of the back, and motion artifact. The capacitive artifact can appear nearly identical in duration and waveshape to evoked A-beta responses. EMG bleed-through can have phase shifts across the electrode array, similar to the phase shift anticipated by propagation of an evoked A-beta fiber response. The short latency EMG is often evident at currents similar to those needed to activate A-beta fibers associated with the treatment of pain. Changes in CSF between the cord and dura, and motion induced during breathing created a cyclic oscillation in all evoked components of recorded ECAPs. 
Controls must be implemented to separate neural signal from sources of artifact in SCS ECAPs. We suggest experimental procedures and reporting requirements necessary to disambiguate underlying neural response from these confounds. These data are important to better understand the framework for recorded ESRs, with components such as ECAPs, EMG, and artifacts, and have important implications for closed-loop control algorithms to account for transient motion such as postural changes and cough.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonlinear model predictive control of a conductance-based neuron model via data-driven forecasting.
Pub Date : 2024-09-17 | DOI: 10.1088/1741-2552/ad731f
Christof Fehrman, C Daniel Meliza
Objective. Precise control of neural systems is essential to experimental investigations of how the brain controls behavior and holds the potential for therapeutic manipulations to correct aberrant network states. Model predictive control, which employs a dynamical model of the system to find optimal control inputs, has promise for dealing with the nonlinear dynamics, high levels of exogenous noise, and limited information about unmeasured states and parameters that are common in a wide range of neural systems. However, the challenge still remains of selecting the right model, constraining its parameters, and synchronizing to the neural system. Approach. As a proof of principle, we used recent advances in data-driven forecasting to construct a nonlinear machine-learning model of a Hodgkin-Huxley type neuron when only the membrane voltage is observable and there are an unknown number of intrinsic currents. Main results. We show that this approach is able to learn the dynamics of different neuron types and can be used with model predictive control (MPC) to force the neuron to engage in arbitrary, researcher-defined spiking behaviors. Significance. To the best of our knowledge, this is the first application of nonlinear MPC to a conductance-based model where there is only realistically limited information about unobservable states and parameters.
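To show the structure of such a control loop, the sketch below rolls a placeholder one-step voltage forecaster out over random candidate input sequences and applies the first input of the best sequence, then repeats (receding horizon). The linear forecaster and quadratic cost are stand-ins, not the data-driven model or the MPC formulation used in the paper.

```python
# Minimal receding-horizon MPC sketch with a placeholder learned forecaster.
import numpy as np

rng = np.random.default_rng(7)

def forecaster(v, u):
    """Placeholder one-step voltage predictor v[t+1] = f(v[t], u[t])."""
    return 0.9 * v + 2.0 * u

def mpc_step(v0, v_ref, horizon=10, n_candidates=200, u_max=1.0):
    best_cost, best_u0 = np.inf, 0.0
    for _ in range(n_candidates):
        u_seq = rng.uniform(-u_max, u_max, horizon)   # random-shooting candidate inputs
        v, cost = v0, 0.0
        for k in range(horizon):
            v = forecaster(v, u_seq[k])
            cost += (v - v_ref[k]) ** 2               # track the reference trajectory
        if cost < best_cost:
            best_cost, best_u0 = cost, u_seq[0]
    return best_u0                                     # apply only the first input

# drive the (placeholder) neuron model toward a target voltage level
v, v_target = 0.0, np.full(10, 5.0)
for t in range(20):
    u = mpc_step(v, v_target)
    v = forecaster(v, u)
print("final voltage ~", round(v, 2))
```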
{"title":"Nonlinear model predictive control of a conductance-based neuron model via data-driven forecasting.","authors":"Christof Fehrman, C Daniel Meliza","doi":"10.1088/1741-2552/ad731f","DOIUrl":"10.1088/1741-2552/ad731f","url":null,"abstract":"<p><p><i>Objective</i>. Precise control of neural systems is essential to experimental investigations of how the brain controls behavior and holds the potential for therapeutic manipulations to correct aberrant network states. Model predictive control, which employs a dynamical model of the system to find optimal control inputs, has promise for dealing with the nonlinear dynamics, high levels of exogenous noise, and limited information about unmeasured states and parameters that are common in a wide range of neural systems. However, the challenge still remains of selecting the right model, constraining its parameters, and synchronizing to the neural system.<i>Approach</i>. As a proof of principle, we used recent advances in data-driven forecasting to construct a nonlinear machine-learning model of a Hodgkin-Huxley type neuron when only the membrane voltage is observable and there are an unknown number of intrinsic currents.<i>Main Results</i>. We show that this approach is able to learn the dynamics of different neuron types and can be used with model predictive control (MPC) to force the neuron to engage in arbitrary, researcher-defined spiking behaviors.<i>Significance.</i>To the best of our knowledge, this is the first application of nonlinear MPC of a conductance-based model where there is only realistically limited information about unobservable states and parameters.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11483466/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142047664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}