
Latest publications in Journal of neural engineering

A multi-view neural framework with attention for epileptic seizure classification.
IF 3.8 Pub Date : 2026-02-03 DOI: 10.1088/1741-2552/ae33f8
Lufeng Feng, Baomin Xu, Li Duan, Wei Ni, Quan Z Sheng

Objective. Epilepsy is a chronic brain disorder characterized by recurrent seizures due to abnormal neuronal firing. Electroencephalogram (EEG)-based seizure classification has become an important auxiliary tool in clinical practice. This study aims to reduce reliance on expert experience in diagnosis and to improve the automated classification of epileptic seizures using EEG signals. Approach. We propose a novel filter-bank multi-view and attention-based neural network model for seizure classification. The model employs a learnable filter bank to decompose the raw EEG into multiple frequency sub-bands, forming multi-view representations. A multi-branch group convolution network is designed to capture multi-scale frequency-spatial features, while temporal dependencies are extracted through a bidirectional long short-term memory with an attention mechanism. A shared attention module adaptively emphasizes the most informative sub-bands and time windows for classification. Main results. The proposed model achieves an overall F1 score of 0.7105, a weighted F1 (WF1) score of 0.8314, and a Cohen's kappa coefficient of 0.6345 on the TUSZ v1.5.2 dataset. Compared with the baseline method FBCNet, the proposed model outperforms it by 3.22% in overall F1 score (p < 0.05), 1.42% in WF1 score (p < 0.05), and 2.87% in Cohen's kappa coefficient (p < 0.05). The best results are also obtained on the CHB-MIT dataset. Significance. These results demonstrate the effectiveness of combining multi-view feature extraction with attention-enhanced temporal modeling.
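The core of the approach is turning one raw EEG trial into several frequency-band "views". A minimal sketch of that decomposition step is given below, assuming a fixed Butterworth filter bank as a stand-in for the paper's learnable filter bank; the band edges, filter order, sampling rate, and channel count are illustrative choices, not the authors' settings.

```python
# Hypothetical sketch: decompose raw EEG into frequency sub-band "views" with a
# fixed Butterworth filter bank (the paper uses a *learnable* filter bank; the
# band edges below are illustrative only).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_bank_views(eeg, fs=250, bands=((1, 4), (4, 8), (8, 13), (13, 30), (30, 45))):
    """eeg: (channels, samples) array -> (views, channels, samples) array."""
    views = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        views.append(sosfiltfilt(sos, eeg, axis=-1))
    return np.stack(views)  # one "view" per sub-band

demo = np.random.randn(19, 250 * 10)   # 19 channels, 10 s at 250 Hz (placeholder data)
print(filter_bank_views(demo).shape)   # (5, 19, 2500)
```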

Citations: 0
Word classification across speech modes from low-density electrocorticography signals.
IF 3.8 Pub Date : 2026-02-02 DOI: 10.1088/1741-2552/ae3a1b
Aurélie de Borman, Bob Van Dyck, Kato Van Rooy, Evelien Carrette, Alfred Meurs, Dirk Van Roost, Marc M Van Hulle

Objective. Speech brain-computer interfaces (BCIs) aim to provide an alternative means of communication for individuals who are not able to speak. Remarkable progress has been achieved in decoding attempted speech in individuals with severe anarthria. In contrast, imagined speech remains challenging to decode. The underlying neural mechanisms and relations to other speech modes are still elusive. Approach. In this study, we collected low-density electrocorticography signals from ten participants during a word repetition task. Electrodes were implanted for presurgical epilepsy evaluation in participants with preserved speech abilities. Models were developed using linear discriminant analysis to classify five words in response to different speech modes. We compared models trained during speaking, listening, imagining speaking, mouthing and reading. The relations between speech modes were investigated by transferring and augmenting models across speech modes. Main results. As expected, performed speech achieved the highest word classification accuracy, followed by listening, mouthing, imagining and reading. While the accuracies obtained were not high enough for practical application, model transfer and augmentation could be investigated across speech modes. Transferring or augmenting models from one speech mode to another could significantly improve model performance. In particular, patterns learned from performed and perceived speech could generalize to imagined speech, leading to significantly improved imagined speech performance in seven participants. For four participants, imagined speech could be decoded above chance exclusively when models were transferred or augmented with performed or perceived speech. Significance. Imagined speech is often preferred by speech BCI users over attempted speech, as it requires less effort and can be produced more quickly. Transferring models across speech modes has the potential to facilitate and boost the development of imagined speech decoders.
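The transfer and augmentation experiments boil down to fitting a linear discriminant analysis word classifier on one speech mode and testing or refitting it with data from another. The sketch below shows only that mechanic, with random placeholder features standing in for the actual ECoG feature pipeline; all names and dimensions are assumptions for illustration.

```python
# Hedged sketch of the cross-mode idea: fit an LDA word classifier on features from
# one speech mode (e.g. performed speech), evaluate it on another (e.g. imagined
# speech), and "augment" by pooling both modes. Placeholder data, not the authors'
# pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_trials, n_features, n_words = 100, 64, 5
X_performed = rng.normal(size=(n_trials, n_features))   # placeholder ECoG features
y_performed = rng.integers(0, n_words, n_trials)
X_imagined = rng.normal(size=(n_trials, n_features))
y_imagined = rng.integers(0, n_words, n_trials)

lda = LinearDiscriminantAnalysis().fit(X_performed, y_performed)   # train on one mode
print("transferred accuracy:", accuracy_score(y_imagined, lda.predict(X_imagined)))

# Augmentation in this spirit pools trials from both modes before fitting:
lda_aug = LinearDiscriminantAnalysis().fit(
    np.vstack([X_performed, X_imagined]),
    np.concatenate([y_performed, y_imagined]))
```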

Citations: 0
Systematic evaluation of surgical insertion of flexible neural probe arrays into deeper brain targets using length modulation methods.
IF 3.8 Pub Date : 2026-02-02 DOI: 10.1088/1741-2552/ae385c
Yingyi Gao, Zhouxiao Lu, Xuechun Wang, Zihan Jin, Alberto Esteban-Linares, Jeffery Guo, Huijing Xu, Kee Scholten, Dong Song, Ellis Meng

Objective. Penetrating polymer-based microelectrode arrays (pMEAs) offer the potential for long-term, high-quality electrophysiological recordings of dynamic neural activity. Compared to rigid metal wire and silicon MEAs, improved device-tissue interface stability has been reported. However, accurate surgical placement of long, thin shanks in deeper brain regions is challenging, as flexibility is achieved at the expense of axial stiffness. This study systematically evaluates and compares two pMEA placement strategies (dissolvable dip coating and a molded brace, both with bare, exposed pMEA tips) to address the need for consistent, reliable, and accurate surgical targeting. These methods were selected based on the criteria of ease of fabrication, surgical feasibility, and mechanical performance. Approach. Sham (mechanical model with no electrodes) and fully functional pMEAs with shanks up to 5.5 mm long were fabricated and then modified using biodegradable polyethylene glycol (PEG) to support implantation. PEG was applied to shanks by motorized dip coating or a mechanical mold. Dissolution time and insertion into agarose gel brain models and rat cortex were evaluated, followed by targeting of dip-coated pMEAs to the rat hippocampus. Main results. Dip coating at high withdrawal speeds achieved uniform coating on shanks. Both strategies yielded similar critical buckling forces and insertion forces for single-shank and arrayed pMEAs. Dip-coated pMEAs were successfully placed in hippocampal regions without severe tissue damage, as confirmed by histology and the recordings obtained. Significance. Dip coating is a simpler method than bracing to prepare pMEAs for surgical targeting of deep brain regions, as it does not require both a specialized mold and an application process. This work provides researchers using single- or multi-shank pMEAs with an accessible insertion strategy for implantation into deep brain regions in rodents and other small animal models.
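For readers unfamiliar with the "critical buckling force" quantity reported above, the standard Euler-column estimate often used for slender probe shanks is shown below; this is a generic textbook relation included only for context, and the boundary-condition factor and cross-section assumptions need not match the authors' experimental characterisation.

```latex
P_{\mathrm{cr}} = \frac{\pi^{2} E I}{(K L)^{2}},
\qquad
I = \frac{w\,t^{3}}{12} \quad \text{for a rectangular shank of width } w \text{ and thickness } t,
```

where E is the elastic modulus of the shank material, L its unsupported length, and K the effective-length factor set by the boundary conditions.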

Citations: 0
Decoding of speech acoustics from EEG: going beyond the amplitude envelope.
IF 3.8 Pub Date : 2026-01-30 DOI: 10.1088/1741-2552/ae3ae1
Alexis D MacIntyre, Clément Gaultier, Tobias Goehring

Objective. During speech perception, properties of the acoustic stimulus can be reconstructed from the listener's brain using methods such as electroencephalography (EEG). Most studies employ the amplitude envelope as a target for decoding; however, speech acoustics can be characterised on multiple dimensions, including as spectral descriptors. The current study assesses how robustly an extended acoustic feature set can be decoded from EEG under varying levels of intelligibility and acoustic clarity. Approach. Analysis was conducted using EEG from 38 young adults who heard intelligible and non-intelligible speech that was either unprocessed or spectrally degraded using vocoding. We extracted a set of acoustic features which, alongside the envelope, characterised instantaneous properties of the speech spectrum (e.g. spectral slope) or spectral change over time (e.g. spectral flux). We establish the robustness of feature decoding by employing multiple model architectures and, in the case of linear decoders, by standardising decoding accuracy (Pearson's r) using randomly permuted surrogate data. Main results. Linear models yielded the highest r relative to non-linear models. However, the separate decoder architectures produced a similar pattern of results across features and experimental conditions. After converting r values to Z-scores scaled by random data, we observed substantive differences in the noise floor between features. Decoding accuracy significantly varies by spectral degradation and speech intelligibility for some features, but such differences are reduced in the most robustly decoded features. This suggests acoustic feature reconstruction is primarily driven by generalised auditory processing. Significance. Our results demonstrate that linear decoders perform comparably to non-linear decoders in capturing the EEG response to speech acoustic properties beyond the amplitude envelope, with the reconstructive accuracy of some features also associated with understanding and spectral clarity. This sheds light on how sound properties are differentially represented by the brain and shows potential for clinical applications moving forward.
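The surrogate-based standardisation described above amounts to z-scoring the observed Pearson's r against a null distribution built from randomly permuted data. A minimal numpy sketch of that step is given below, assuming a simple full-signal permutation scheme and toy data; the permutation count and scheme in the actual study may differ.

```python
# Standardise decoding accuracy against a surrogate noise floor: the observed
# Pearson's r is z-scored by the mean and s.d. of r values from permuted data.
import numpy as np

def surrogate_z(decoded, target, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    r_true = np.corrcoef(decoded, target)[0, 1]
    r_null = np.empty(n_perm)
    for i in range(n_perm):
        r_null[i] = np.corrcoef(rng.permutation(decoded), target)[0, 1]
    return (r_true - r_null.mean()) / r_null.std()

t = np.linspace(0, 10, 1000)
envelope = np.abs(np.sin(2 * np.pi * 0.7 * t))        # toy acoustic feature
reconstruction = envelope + 0.8 * np.random.default_rng(1).normal(size=t.size)
print("Z relative to surrogate noise floor:", surrogate_z(reconstruction, envelope))
```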

Citations: 0
Temporal interference stimulator realized with silicon chip for non-invasive neuromodulation.
IF 3.8 Pub Date : 2026-01-30 DOI: 10.1088/1741-2552/ae3a1c
Yun-Yu Li, Nan-Hui Huang, Ming-Dou Ker

Objective. Temporal interference stimulation (TIS) has emerged as an innovative and promising approach for non-invasive stimulation. While previous studies have demonstrated the efficacy and performance of TIS using benchtop instruments, a dedicated system-on-chip for TIS applications has not yet been reported. This work addresses this gap by presenting a design for a TIS chip that enhances portability, thereby facilitating wearable applications of TIS. Approach. A miniaturized dual-channel temporal interference stimulator for non-invasive neuromodulation is proposed and fabricated in a 0.18 µm CMOS BCD process. The TIS chip occupies a silicon area of only 2.66 mm². It generates output signals with a maximum amplitude of ±5 V and reliable frequency, with programmable input parameters to accommodate diverse biomedical applications. The carrier frequencies of the generated signals include 1 kHz, 2 kHz, and 3 kHz, combined with beat frequencies of 5 Hz, 10 Hz, and 20 Hz. This results in a total of nine available operation modes, enabling effective TIS. Main results. The proposed chip has effectively generated temporally interfering signals with reliable frequency and amplitude. To validate the efficacy of the TIS chip, in vivo animal experiments have been conducted, demonstrating its ability to produce effective electrical stimulation signals that successfully elicit neural responses in the deep brain of a pig. Significance. This work has replaced the bulky external stimulator with a fully integrated silicon chip, significantly enhancing portability and supporting future wearable clinical applications.
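The principle being exploited is that two kilohertz carriers offset by a small Δf sum to a waveform whose envelope oscillates at the beat frequency Δf. The snippet below verifies that arithmetic numerically for one of the chip's listed carrier/beat combinations (2 kHz carrier, 10 Hz beat); it is a signal-level illustration only, not a model of the chip or of tissue.

```python
# Two sinusoidal carriers offset by 10 Hz: their sum has an envelope modulated at
# the 10 Hz beat frequency. Amplitudes and frequencies are illustrative.
import numpy as np
from scipy.signal import hilbert

fs = 20_000                        # 20 kHz sampling, 1 s of signal
t = np.arange(0, 1, 1 / fs)
f1, beat = 2_000, 10               # one of the chip's carrier/beat combinations
total = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * (f1 + beat) * t)

envelope = np.abs(hilbert(total))  # analytic envelope of the summed carriers
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("dominant envelope frequency:", freqs[spectrum.argmax()], "Hz")  # ~10 Hz
```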

Citations: 0
Scalability of random forest in myoelectric control.
IF 3.8 Pub Date : 2026-01-22 DOI: 10.1088/1741-2552/ae2802
Xinyu Jiang, Chenfei Ma, Kianoush Nazarpour

Objective. Myoelectric control systems translate electromyographic (EMG) signals into control commands, enabling immersive human-robot interactions in the real world and the Metaverse. The variability of EMG due to various confounding factors leads to significant performance degradation. Such variability can be mitigated by training a highly generalizable but massively parameterized deep neural network, which can be effectively scaled using a vast dataset. We aim to find an alternative simple, explainable, efficient and parallelizable model, which can flexibly scale up with a larger dataset and scale down to reduce model size, and thereby significantly facilitate the practical implementation of myoelectric control. Approach. In this work, we discuss the scalability of a random forest (RF) for myoelectric control. We show how to scale an RF up and down during the process of pre-training, fine-tuning, and automatic self-calibration. The effects of diverse factors such as bootstrapping, decision tree editing (pre-training, pruning, grafting, appending), and the size of training data are systematically studied using EMG data from 106 participants, including both low- and high-density electrodes. Main results. We examined several factors that affect the size and accuracy of the model. The best solution could reduce the size of RF models by ≈500×, with the accuracy reduced by only 1.5%. Importantly, for the first time we report the merit of RF that, with more EMG electrodes (higher input dimension), the RF model size is reduced. Significance. All of these findings contribute to the real-time deployment of RF models in real-world myoelectric control applications.
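Two of the simplest size/accuracy knobs implied above are the number of trees and their depth. The sketch below, using synthetic stand-in features rather than the study's EMG data, shows how one might measure that trade-off with scikit-learn; the paper's decision-tree editing operations (pruning, grafting, appending) go well beyond these two hyperparameters.

```python
# Measure random-forest size vs. accuracy for a few tree-count/depth settings on
# synthetic stand-in features (not the study's EMG data or editing operations).
import pickle
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=64, n_informative=20,
                           n_classes=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for n_trees, depth in [(200, None), (50, 12), (20, 8)]:
    rf = RandomForestClassifier(n_estimators=n_trees, max_depth=depth,
                                random_state=0).fit(X_tr, y_tr)
    size_kb = len(pickle.dumps(rf)) / 1024   # serialized model size as a proxy
    print(f"{n_trees:>3} trees, depth={depth}: "
          f"acc={rf.score(X_te, y_te):.3f}, size≈{size_kb:.0f} kB")
```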

Citations: 0
Breaking the performance barrier in deep learning-based SSVEP-BCIs: a joint frequency-phase training strategy.
IF 3.8 Pub Date : 2026-01-22 DOI: 10.1088/1741-2552/ae36f6
Wenlong Ding, Aiping Liu, Xun Chen

Objective. Deep learning (DL) exhibits considerable potential for steady-state visual evoked potential (SSVEP) classification in electroencephalography-based brain-computer interfaces (BCIs). SSVEP signals contain both frequency and phase characteristics that correspond to the visual stimuli. However, existing DL training strategies typically focus on either frequency or phase information alone, thus failing to fully exploit these dual inherent properties and substantially limiting classification accuracy. Approach. To tackle this limitation, this study proposes a joint frequency-phase training strategy (JFPTS), which comprises two complementary stages with distinct time-window sampling schemes. The first stage adopts a frequency prior-driven sampling scheme to improve frequency component utilization, whereas the second stage employs a phase-locked sampling scheme to enhance intra-category phase consistency. This design enables JFPTS to effectively leverage both frequency and phase properties of SSVEP signals. Main results. Comprehensive experiments on two well-established public datasets validate the effectiveness of JFPTS. The results demonstrate that the JFPTS-enhanced model achieves a marked superiority over the current state-of-the-art classification approaches, notably surpassing the long-standing performance benchmark set by task-discriminative component analysis (TDCA). Significance. Overall, JFPTS establishes a new training paradigm that advances DL approaches for SSVEP classification and promotes the broader adoption of SSVEP-BCIs.
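The "phase-locked sampling" idea can be made concrete as follows: if candidate window onsets are spaced by whole stimulus periods, every sampled window starts at the same phase of the target SSVEP frequency. The snippet below illustrates only that onset arithmetic with assumed values (250 Hz EEG, 10 Hz target, 140 ms visual latency); it is not the authors' JFPTS implementation.

```python
# Phase-locked window onsets: spacing onsets by whole stimulus periods keeps every
# sampled window aligned to the same phase of the target frequency. Values assumed.
import numpy as np

def phase_locked_onsets(fs, f_stim, n_windows, t0=0.14):
    """Onsets (in samples) spaced by whole stimulus periods after a visual latency t0."""
    period = fs / f_stim                          # samples per stimulus cycle
    return np.round(t0 * fs + np.arange(n_windows) * period).astype(int)

fs, f_stim = 250, 10.0                            # 250 Hz EEG, 10 Hz SSVEP target
onsets = phase_locked_onsets(fs, f_stim, n_windows=5)
print(onsets)                                     # [ 35  60  85 110 135]

eeg = np.random.randn(9, 4 * fs)                  # (channels, samples) placeholder trial
windows = np.stack([eeg[:, s:s + fs] for s in onsets])   # 1 s windows, same phase
print(windows.shape)                              # (5, 9, 250)
```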

Citations: 0
MMoGCN: a multi-gate mixture of graph convolutional network model for EEG emotion and mood disorder recognition.
IF 3.8 Pub Date : 2026-01-22 DOI: 10.1088/1741-2552/ae37dc
Daxing Zhang, Yaru Guo, Xinni Kong, Yu Ouyang, Zhongzheng Li, Hong Zeng

Objective. Emotional states and mood disorders are closely interconnected, and their joint recognition serves as a critical pathway to uncovering their intrinsic relationship. Currently, deep learning (DL) models based on electroencephalogram (EEG) have achieved significant progress in single tasks such as emotion recognition or mood disorder (MD) recognition. However, most existing models are limited to handling only one of these tasks independently and fail to effectively leverage the shared features in EEG data related to both emotions and mood disorders. This limitation hinders the in-depth exploration of the complex interplay between emotions and mood disorders. Therefore, this study aims to develop an EEG-based DL framework for the joint recognition of emotions and mood disorders, thereby providing a foundation for further investigation into their interaction. Approach. We design a multi-gate mixture-of-experts graph convolutional network model (MMoGCN) for joint emotion and MD recognition. MMoGCN comprises three key modules: (1) a feature extraction module based on differential entropy to robustly represent EEG signals; (2) a multi-gated shared experts module, which integrates two experts and combines them through a gating mechanism to extract shared representations across tasks; and (3) adaptive task-specific towers, which consist of individual classification towers for each task and incorporate an adaptive weighting loss function to dynamically adjust task contributions. MMoGCN is evaluated on a self-collected dataset and further validated on the public DEAP dataset. Main results. MMoGCN achieves superior performance compared with state-of-the-art single-task and multi-task baselines in both emotion and MD recognition. Validation experiments on DEAP further demonstrate the scalability and generalization of MMoGCN. Significance. An effective multi-task learning model is proposed for joint emotion and MD recognition based on EEG. Additionally, cognitive differences in emotional responses between healthy controls and subjects with mood disorders are analyzed, providing methodological insights and potential assistance for cognitive rehabilitation from both cognitive and emotional perspectives.
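The differential-entropy features mentioned in module (1) are conventionally computed under a Gaussian assumption as 0.5·ln(2πe·σ²) of a band-passed segment. A small sketch of that feature extraction is given below; the frequency bands, filter, and window length are illustrative assumptions rather than the paper's configuration.

```python
# Differential-entropy (DE) features per channel and band, under a Gaussian
# assumption: DE = 0.5 * ln(2*pi*e*var). Bands and window length are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def de_feature(eeg, fs, band):
    """eeg: (channels, samples); returns one DE value per channel for the given band."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, eeg, axis=-1)
    return 0.5 * np.log(2 * np.pi * np.e * filtered.var(axis=-1))

eeg = np.random.randn(32, 4 * 200)            # 32 channels, 4 s at 200 Hz (placeholder)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
features = np.stack([de_feature(eeg, 200, b) for b in bands.values()], axis=-1)
print(features.shape)                         # (32, 4): channels x bands
```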

Citations: 0
Differences in stimulus-evoked electroencephalographic entropy reduction distinguish cognitively normal Parkinson's disease participants from healthy age-matched controls.
IF 3.8 Pub Date : 2026-01-21 DOI: 10.1088/1741-2552/ae33f7
David T J Liley

Objective. Parkinson's disease (PD) is a common neurodegenerative disease best known for its defining motor symptoms. However, it is also associated with significant cognitive impairment at all stages of the disease, with many patients eventually progressing to dementia. Therefore, there exists a significant need to identify objective functional biomarkers that better predict and monitor cognitive decline. While methods that analyse either spontaneous or evoked electroencephalogram (EEG), due to increasing practical usability and ostensible objectivity, have been investigated, current approaches are limited in that the associated measures are, in the absence of a theoretical basis, purely correlative. Approach. To address this shortcoming, we propose calculating changes in evoked EEG amplitude variability, quantified using information theoretic differential entropy (DE), during a three-level passive auditory oddball task, as it is argued this will directly index functional changes in cognition. We therefore estimate changes in stimulus-evoked DE in cognitively normal PD participants (N = 25), both on and off their medication, and in healthy age-matched controls (N = 25), and find substantial stimulus (standard, target, novel) and group differences. Main results. Notably, we find the time-course of the return of post-stimulus reductions in DE (i.e. information processing) to pre-stimulus levels delayed in PD compared to healthy controls, thus mirroring the assumed bradyphrenia. The observed changes in DE, together with the corollary increases in resting alpha (8-13 Hz) band activity seen in PD, are explained in the context of a well-known macroscopic theory of mammalian electrocortical activity, in terms of reduced tonic thalamo-cortical drive. Significance. This method of task-evoked DE EEG amplitude variability is expected to generalise to any situation where the objective determination of cognitive function is sought.
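To make the central quantity concrete: under a Gaussian assumption the differential entropy of evoked amplitude in a short window is 0.5·ln(2πe·σ²), so a post-stimulus reduction in trial-to-trial amplitude variability appears directly as a DE reduction relative to the pre-stimulus baseline. The toy sketch below illustrates only that bookkeeping with synthetic epochs and assumed window sizes, not the study's pipeline.

```python
# Toy DE time course around stimulus onset: variance across trials is lower after
# t = 0, so the windowed differential entropy dips below the pre-stimulus level.
# 50 ms windows and synthetic data are assumptions for illustration.
import numpy as np

fs, n_trials = 500, 200
t = np.arange(-0.2, 0.8, 1 / fs)              # 1 s epochs, stimulus at t = 0
rng = np.random.default_rng(0)
noise_sd = np.where(t > 0, 0.6, 1.0)          # toy variability reduction after stimulus
epochs = rng.normal(scale=noise_sd, size=(n_trials, t.size))

win = int(0.05 * fs)                          # 50 ms windows
de = [0.5 * np.log(2 * np.pi * np.e * epochs[:, i:i + win].var())
      for i in range(0, t.size - win, win)]
print(np.round(de, 2))                        # DE dips after stimulus onset
```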

Citations: 0
CTSSP: A temporal-spectral-spatial joint optimization algorithm for motor imagery EEG decoding.
IF 3.8 Pub Date : 2026-01-20 DOI: 10.1088/1741-2552/ae34ea
Lincong Pan, Kun Wang, Weibo Yi, Yang Zhang, Minpeng Xu, Dong Ming

Objective. Motor imagery brain-computer interfaces hold significant promise for neurorehabilitation, yet their performance is often compromised by electroencephalography (EEG) non-stationarity, low signal-to-noise ratios, and severe cross-session variability. Current decoding methods typically suffer from fragmented optimization, treating temporal, spectral, and spatial features in isolation. Approach. We propose common temporal-spectral-spatial patterns (CTSSP), a unified framework that jointly optimizes filters across all three domains. The algorithm integrates: (1) multi-scale temporal segmentation to capture dynamic neural evolution, (2) channel-adaptive finite impulse response filters to enhance task-relevant rhythms, and (3) low-rank regularization to improve generalization. Main results. Evaluated across five public datasets, CTSSP achieves state-of-the-art performance. It yielded mean accuracies of 76.9% (within-subject), 68.8% (cross-session), and 69.8% (cross-subject). In within-subject and cross-session scenarios, CTSSP significantly outperformed competing baselines by margins of 2.6%-14.6% (p < 0.001) and 2.3%-13.8% (p < 0.05), respectively. In cross-subject tasks, it achieved the highest average accuracy, proving competitive against deep learning models. Neurophysiological visualization confirms that the learned filters align closely with motor cortex activation mechanisms. Significance. CTSSP effectively overcomes the limitations of decoupled feature extraction by extracting robust, interpretable, and coupled temporal-spectral-spatial patterns. It offers a powerful, data-efficient solution for decoding MI EEG in noisy, non-stationary environments. The code is available at https://github.com/PLC-TJU/CTSSP.
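Of the three ingredients listed, the multi-scale temporal segmentation is the easiest to picture: each trial is cut into windows of several lengths so that both brief transients and the slower evolution of the motor-imagery response are represented. The sketch below shows only that segmentation step with placeholder window lengths and overlap; the actual CTSSP settings are in the repository linked above.

```python
# Multi-scale temporal segmentation of a single EEG trial: windows of several
# lengths with fixed overlap. Window lengths and step are illustrative placeholders.
import numpy as np

def multiscale_segments(trial, fs, scales=(1.0, 2.0, 3.0), step=0.5):
    """trial: (channels, samples) -> dict mapping window length (s) to stacked segments."""
    out = {}
    for w in scales:
        win, hop = int(w * fs), int(step * fs)
        starts = range(0, trial.shape[-1] - win + 1, hop)
        out[w] = np.stack([trial[:, s:s + win] for s in starts])
    return out

trial = np.random.randn(22, 4 * 250)          # 22-channel, 4 s motor-imagery trial
segs = multiscale_segments(trial, fs=250)
print({w: v.shape for w, v in segs.items()})  # {1.0: (7, 22, 250), 2.0: (5, 22, 500), 3.0: (3, 22, 750)}
```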

Citations: 0