Pub Date: 2026-01-30 | DOI: 10.1088/1741-2552/ae3ae1
Alexis D MacIntyre, Clément Gaultier, Tobias Goehring
Objective. During speech perception, properties of the acoustic stimulus can be reconstructed from the listener's brain using methods such as electroencephalography (EEG). Most studies employ the amplitude envelope as a target for decoding; however, speech acoustics can be characterised on multiple dimensions, including as spectral descriptors. The current study assesses how robustly an extended acoustic feature set can be decoded from EEG under varying levels of intelligibility and acoustic clarity. Approach. Analysis was conducted using EEG from 38 young adults who heard intelligible and non-intelligible speech that was either unprocessed or spectrally degraded using vocoding. We extracted a set of acoustic features which, alongside the envelope, characterised instantaneous properties of the speech spectrum (e.g. spectral slope) or spectral change over time (e.g. spectral flux). We establish the robustness of feature decoding by employing multiple model architectures and, in the case of linear decoders, by standardising decoding accuracy (Pearson's r) using randomly permuted surrogate data. Main results. Linear models yielded the highest r relative to non-linear models. However, the separate decoder architectures produced a similar pattern of results across features and experimental conditions. After converting r values to Z-scores scaled by random data, we observed substantive differences in the noise floor between features. Decoding accuracy significantly varies by spectral degradation and speech intelligibility for some features, but such differences are reduced in the most robustly decoded features. This suggests acoustic feature reconstruction is primarily driven by generalised auditory processing. Significance. Our results demonstrate that linear decoders perform comparably to non-linear decoders in capturing the EEG response to speech acoustic properties beyond the amplitude envelope, with the reconstructive accuracy of some features also associated with understanding and spectral clarity. This sheds light on how sound properties are differentially represented by the brain and shows potential for clinical applications moving forward.
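The standardisation step described above can be made concrete with a short sketch: compute Pearson's r between the decoded and actual feature, build a null distribution from surrogate targets, and express the true r as a Z-score against that noise floor. The circular-shift surrogate scheme below is an illustrative assumption, not necessarily the authors' exact permutation procedure.

```python
import numpy as np

def z_score_decoding(decoded, target, n_perm=1000, seed=None):
    """Convert Pearson's r into a Z-score scaled by surrogate (null) data."""
    rng = np.random.default_rng(seed)
    r_true = np.corrcoef(decoded, target)[0, 1]
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Circularly shift the target to break stimulus-EEG alignment
        # while preserving its autocorrelation structure.
        shift = rng.integers(1, len(target))
        null[i] = np.corrcoef(decoded, np.roll(target, shift))[0, 1]
    return (r_true - null.mean()) / null.std()
```

Because different acoustic features have different autocorrelation structures, their null distributions (and hence noise floors) differ, which is why raw r values are not directly comparable across features.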
{"title":"Decoding of speech acoustics from EEG: going beyond the amplitude envelope.","authors":"Alexis D MacIntyre, Clément Gaultier, Tobias Goehring","doi":"10.1088/1741-2552/ae3ae1","DOIUrl":"10.1088/1741-2552/ae3ae1","url":null,"abstract":"<p><p><i>Objective.</i>During speech perception, properties of the acoustic stimulus can be reconstructed from the listener's brain using methods such as electroencephalography (EEG). Most studies employ the amplitude envelope as a target for decoding; however, speech acoustics can be characterised on multiple dimensions, including as spectral descriptors. The current study assesses how robustly an extended acoustic feature set can be decoded from EEG under varying levels of intelligibility and acoustic clarity.<i>Approach.</i>Analysis was conducted using EEG from 38 young adults who heard intelligible and non-intelligible speech that was either unprocessed or spectrally degraded using vocoding. We extracted a set of acoustic features which, alongside the envelope, characterised instantaneous properties of the speech spectrum (e.g. spectral slope) or spectral change over time (e.g. spectral flux). We establish the robustness of feature decoding by employing multiple model architectures and, in the case of linear decoders, by standardising decoding accuracy (Pearson's<i>r</i>) using randomly permuted surrogate data.<i>Main results</i>. Linear models yielded the highest<i>r</i>relative to non-linear models. However, the separate decoder architectures produced a similar pattern of results across features and experimental conditions. After converting<i>r</i>values to<i>Z</i>-scores scaled by random data, we observed substantive differences in the noise floor between features. Decoding accuracy significantly varies by spectral degradation and speech intelligibility for some features, but such differences are reduced in the most robustly decoded features. This suggests acoustic feature reconstruction is primarily driven by generalised auditory processing.<i>Significance</i>. Our results demonstrate that linear decoders perform comparably to non-linear decoders in capturing the EEG response to speech acoustic properties beyond the amplitude envelope, with the reconstructive accuracy of some features also associated with understanding and spectral clarity. This sheds light on how sound properties are differentially represented by the brain and shows potential for clinical applications moving forward.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146013968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-30 | DOI: 10.1088/1741-2552/ae3a1c
Yun-Yu Li, Nan-Hui Huang, Ming-Dou Ker
Objective. Temporal interference stimulation (TIS) has emerged as an innovative and promising approach for non-invasive stimulation. While previous studies have demonstrated the efficacy and performance of TIS using benchtop instruments, a dedicated system-on-chip for TIS applications has not yet been reported. This work addresses this gap by presenting a design for a TIS chip that enhances portability, thereby facilitating wearable applications of TIS. Approach. A miniaturized dual-channel temporal interference stimulator for non-invasive neuromodulation is proposed and fabricated in a 0.18 µm CMOS BCD process. The TIS chip occupies a silicon area of only 2.66 mm². It generates output signals with a maximum amplitude of ±5 V and reliable frequency, with programmable input parameters to accommodate diverse biomedical applications. The carrier frequencies of the generated signals include 1 kHz, 2 kHz, and 3 kHz, combined with beat frequencies of 5 Hz, 10 Hz, and 20 Hz. This results in a total of nine available operation modes, enabling effective TIS. Main results. The proposed chip has effectively generated temporally interfering signals with reliable frequency and amplitude. To validate the efficacy of the TIS chip, in-vivo animal experiments have been conducted, demonstrating its ability to produce effective electrical stimulation signals that successfully elicit neural responses in the deep brain of a pig. Significance. This work has replaced the bulky external stimulator with a fully integrated silicon chip, significantly enhancing portability and supporting future wearable clinical applications.
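The underlying principle is simple algebra: two carriers at f1 and f2 superimpose into a signal whose envelope is modulated at the beat frequency |f1 - f2|. The snippet below illustrates one of the chip's nine modes (2 kHz carrier pair, 10 Hz beat); the sampling rate and duration are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100_000                    # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)   # 0.5 s of signal
f1, f2 = 2_000, 2_010           # carrier pair chosen for a 10 Hz beat
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
envelope = np.abs(hilbert(x))   # analytic-signal envelope, modulated at ~10 Hz
print(f"beat frequency = {abs(f1 - f2)} Hz")
```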
{"title":"Temporal interference stimulator realized with silicon chip for non-invasive neuromodulation.","authors":"Yun-Yu Li, Nan-Hui Huang, Ming-Dou Ker","doi":"10.1088/1741-2552/ae3a1c","DOIUrl":"10.1088/1741-2552/ae3a1c","url":null,"abstract":"<p><p><i>Objective.</i>Temporal interference stimulation (TIS) has emerged as an innovative and promising approach for non-invasive stimulation. While previous studies have demonstrated the efficacy and performance of TIS using benchtop instruments, a dedicated system-on-chip for TIS applications has not yet been reported. This work addresses this gap by presenting a design for a TIS chip that enhances portability, thereby facilitating wearable applications of TIS.<i>Approach.</i>A miniaturized dual-channel temporal interference stimulator for non-invasive neuro-modulation is proposed and fabricated in a 0.18<i>µ</i>m CMOS BCD process. The TIS chip occupies the silicon area of only 2.66 mm<sup>2</sup>. It generates output signals with a maximum amplitude of ±5 V and reliable frequency, with programmable input parameters to accommodate diverse biomedical applications. The carrier frequencies of the generated signals include 1 kHz, 2 kHz, and 3 kHz, combined with beat frequencies of 5 Hz, 10 Hz, and 20 Hz. This results in a total of nine available operation modes, enabling effective TIS.<i>Main results.</i>The proposed chip has effectively generated temporally interfering signals with reliable frequency and amplitude. To validate the efficacy of the TIS chip,<i>in-vivo</i>animal experiments have been conducted, demonstrating its ability to produce effective electrical stimulation signals that successfully elicit neural responses in the deep brain of a pig.<i>Significance.</i>This work has replaced the bulky external stimulator with a fully integrated silicon chip, significantly enhancing portability and supporting future wearable clinical applications.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146004523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-27 | DOI: 10.1088/1741-2552/ae3e16
Anderson Roy Phillips, Yash Shashank Vakilna, Dorsa E P Moghaddam, Anton R Banta, John Mosher, Behnaam Aazhang
Electroencephalography (EEG) provides robust, cost-effective, and portable measurements of brain electrical activity. However, its spatial resolution is limited, constraining the localization and estimation of deep sources. Although methods exist to infer neural activity from scalp recordings, major challenges remain due to high dimensionality, temporal overlap among neural sources, and anatomical variability in head geometry. This topical review synthesizes inverse modeling approaches, with emphasis on nonlinear methods, multimodal integration, and high-density EEG systems that address these limitations. We also review the forward model and related background theory, summarize clinical applications, outline research directions, and identify available software tools and relevant publicly available datasets. Our goal is to help researchers understand traditional source estimation techniques and integrate advanced methods that may better capture the complexity of neurophysiological sources.
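Among the traditional linear inverse solutions such a review typically covers, the Tikhonov-regularised minimum-norm estimate is the canonical starting point: given a lead-field matrix L from the forward model, it solves x̂ = Lᵀ(LLᵀ + λI)⁻¹y. The sketch below is a generic illustration with made-up dimensions, not code from the review.

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=1e-2):
    """Estimate source amplitudes x from sensor data y given lead field L."""
    n_sensors = L.shape[0]
    G = L @ L.T + lam * np.eye(n_sensors)  # regularised sensor-space Gram matrix
    return L.T @ np.linalg.solve(G, y)     # solve, rather than explicitly invert

# Toy usage: 64 sensors, 5000 candidate sources, one time sample.
rng = np.random.default_rng(0)
L = rng.standard_normal((64, 5000))
y = rng.standard_normal(64)
x_hat = minimum_norm_estimate(L, y)
```

The regularisation parameter λ trades off data fit against solution norm; the nonlinear and multimodal methods emphasised in the review aim to overcome the depth bias and spatial blur inherent in such linear estimates.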
{"title":"Inferring neural sources from electroencephalography: Foundations and frontiers.","authors":"Anderson Roy Phillips, Yash Shashank Vakilna, Dorsa E P Moghaddam, Anton R Banta, John Mosher, Behnaam Aazhang","doi":"10.1088/1741-2552/ae3e16","DOIUrl":"https://doi.org/10.1088/1741-2552/ae3e16","url":null,"abstract":"<p><p>Electroencephalography (EEG) provides robust, cost-effective, and portable measurements of brain electrical activity. However, its spatial resolution is limited, constraining the localization and estimation of deep sources. Although methods exist to infer neural activity from scalp recordings, major challenges remain due to high dimensionality, temporal overlap among neural sources, and anatomical variability in head geometry. This topical review synthesizes inverse modeling approaches, with emphasis on nonlinear methods, multimodal integration, and high-density EEG systems that address these limitations. We also review the forward model and related background theory, summarize clinical applications, outline research directions, and identify available software tools and relevant publicly available datasets. Our goal is to help researchers understand traditional source estimation techniques and integrate advanced methods that may better capture the complexity of neurophysiological sources.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146069479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-22 | DOI: 10.1088/1741-2552/ae2802
Xinyu Jiang, Chenfei Ma, Kianoush Nazarpour
Objective. Myoelectric control systems translate electromyographic (EMG) signals into control commands, enabling immersive human-robot interactions in the real world and the Metaverse. The variability of EMG due to various confounding factors leads to significant performance degradation. Such variability can be mitigated by training a highly generalizable but massively parameterized deep neural network, which can be effectively scaled using a vast dataset. We aim to find an alternative simple, explainable, efficient and parallelizable model, which can flexibly scale up with a larger dataset and scale down to reduce model size, and which will thereby significantly facilitate the practical implementation of myoelectric control. Approach. In this work, we discuss the scalability of a random forest (RF) for myoelectric control. We show how to scale an RF up and down during the process of pre-training, fine-tuning, and automatic self-calibration. The effects of diverse factors such as bootstrapping, decision tree editing (pre-training, pruning, grafting, appending), and the size of training data are systematically studied using EMG data from 106 participants, including both low- and high-density electrodes. Main results. We examined several factors that affect the size and accuracy of the model. The best solution reduced the size of RF models by ≈500×, with accuracy reduced by only 1.5%. Importantly, we report for the first time that with more EMG electrodes (i.e. a higher input dimension), the RF model size is reduced. Significance. All of these findings contribute to the real-time deployment of RF models in real-world myoelectric control applications.
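The size/accuracy trade-off of scaling a forest down can be reproduced in a few lines with off-the-shelf tooling. The sketch below uses scikit-learn and synthetic stand-in data in place of the authors' custom tree-editing operations (pruning, grafting, appending); tree counts and depths are illustrative.

```python
import pickle
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 64))          # stand-in for EMG feature vectors
y = (X[:, :8].sum(axis=1) > 0).astype(int)   # synthetic two-class gesture labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for n_trees, depth in [(500, None), (50, None), (10, 6)]:
    rf = RandomForestClassifier(n_estimators=n_trees, max_depth=depth,
                                random_state=0).fit(X_tr, y_tr)
    size_kb = len(pickle.dumps(rf)) / 1024   # serialized model size as a proxy
    print(f"{n_trees:4d} trees, depth={depth}: "
          f"acc={rf.score(X_te, y_te):.3f}, size={size_kb:.0f} kB")
```

Limiting tree count and depth shrinks the serialized model dramatically while accuracy degrades only gradually, which is the qualitative behaviour the paper quantifies at a much larger scale.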
{"title":"Scalability of random forest in myoelectric control.","authors":"Xinyu Jiang, Chenfei Ma, Kianoush Nazarpour","doi":"10.1088/1741-2552/ae2802","DOIUrl":"10.1088/1741-2552/ae2802","url":null,"abstract":"<p><p><i>Objective.</i>Myoelectric control systems translate electromyographic (EMG) signals into control commands, enabling immersive human-robot interactions in the real world and the Metaverse. The variability of EMG due to various confounding factors leads to significant performance degradation. Such variability can be mitigated by training a highly generalizable but massively parameterized deep neural network, which can be effectively scaled using a vast dataset. We aim to find an alternative simple, explainable, efficient and parallelizable model, which can flexibly scale up with a larger dataset and scale down to reduce model size, and thereby will significantly facilitate the practical implementation of myoelectric control.<i>Approach.</i>In this work, we discuss the scalability of a random forest (RF) for myoelectric control. We show how to scale an RF up and down during the process of pre-training, fine-tuning, and automatic self-calibration. The effects of diverse factors such as bootstrapping, decision tree editing (pre-training, pruning, grafting, appending), and the size of training data are systematically studied using EMG data from 106 participants including both low- and high-density electrodes.<i>Main results.</i>We examined several factors that affect the size and accuracy of the model. The best solution could reduce the size of RF models by≈500×, with the accuracy reduced by only 1.5%. Importantly, for the first time we report the merit of RF that with more EMG electrodes (higher input dimension), the RF model size would be reduced.<i>Significance.</i>All of these findings contribute to the real time deployment RF models in real world myoelectric control applications.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145679401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-22 | DOI: 10.1088/1741-2552/ae36f6
Wenlong Ding, Aiping Liu, Xun Chen
Objective. Deep learning (DL) exhibits considerable potential for steady-state visual evoked potential (SSVEP) classification in electroencephalography-based brain-computer interfaces (BCIs). SSVEP signals contain both frequency and phase characteristics that correspond to the visual stimuli. However, existing DL training strategies typically focus on either frequency or phase information alone, thus failing to fully exploit these dual inherent properties and substantially limiting classification accuracy. Approach. To tackle this limitation, this study proposes a joint frequency-phase training strategy (JFPTS), which comprises two complementary stages with distinct time-window sampling schemes. The first stage adopts a frequency prior-driven sampling scheme to improve frequency component utilization, whereas the second stage employs a phase-locked sampling scheme to enhance intra-category phase consistency. This design enables JFPTS to effectively leverage both frequency and phase properties of SSVEP signals. Main results. Comprehensive experiments on two well-established public datasets validate the effectiveness of JFPTS. The results demonstrate that the JFPTS-enhanced model achieves a marked superiority over the current state-of-the-art classification approaches, notably surpassing the long-standing performance benchmark set by task-discriminative component analysis (TDCA). Significance. Overall, JFPTS establishes a new training paradigm that advances DL approaches for SSVEP classification and promotes the broader adoption of SSVEP-BCIs.
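To make the contrast between sampling schemes concrete, the sketch below assumes that "phase-locked" means window onsets are snapped to integer multiples of the stimulus period, so every training window starts at the same stimulus phase, whereas unconstrained sampling draws onsets freely. The paper's exact schemes may differ; sampling rate and stimulus frequency are assumed values.

```python
import numpy as np

fs = 250                            # EEG sampling rate (Hz), assumed
f_stim = 10.0                       # SSVEP stimulus frequency (Hz), assumed
period = int(round(fs / f_stim))    # samples per stimulus cycle
win = 2 * fs                        # 2 s decoding window
n_samples = 10 * fs                 # one 10 s trial

rng = np.random.default_rng(0)
# Unconstrained onsets: any start index within the trial is allowed.
free_onsets = rng.integers(0, n_samples - win, size=8)
# Phase-locked onsets: start indices restricted to stimulus-cycle boundaries,
# so all windows of one class share the same initial phase.
cycle_starts = np.arange(0, n_samples - win, period)
locked_onsets = rng.choice(cycle_starts, size=8, replace=False)
```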
{"title":"Breaking the performance barrier in deep learning-based SSVEP-BCIs: a joint frequency-phase training strategy.","authors":"Wenlong Ding, Aiping Liu, Xun Chen","doi":"10.1088/1741-2552/ae36f6","DOIUrl":"10.1088/1741-2552/ae36f6","url":null,"abstract":"<p><p><i>Objective.</i>Deep learning (DL) exhibits considerable potential for steady-state visual evoked potential (SSVEP) classification in electroencephalography-based brain-computer interfaces (BCIs). SSVEP signals contain both frequency and phase characteristics that correspond to the visual stimuli. However, existing DL training strategies typically focus on either frequency or phase information alone, thus failing to fully exploit these dual inherent properties and substantially limiting classification accuracy.<i>Approach.</i>To tackle this limitation, this study proposes a joint frequency-phase training strategy (JFPTS), which comprises two complementary stages with distinct time-window sampling schemes. The first stage adopts a frequency prior-driven sampling scheme to improve frequency component utilization, whereas the second stage employs a phase-locked sampling scheme to enhance intra-category phase consistency. This design enables JFPTS to effectively leverage both frequency and phase properties of SSVEP signals.<i>Main results.</i>Comprehensive experiments on two well-established public datasets validate the effectiveness of JFPTS. The results demonstrate that the JFPTS-enhanced model achieves a marked superiority over the current state-of-the-art classification approaches, notably surpassing the long-standing performance benchmark set by task-discriminative component analysis (TDCA).<i>Significance.</i>Overall, JFPTS establishes a new training paradigm that advances DL approaches for SSVEP classification and promotes the broader adoption of SSVEP-BCIs.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145960848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-22 | DOI: 10.1088/1741-2552/ae37dc
Daxing Zhang, Yaru Guo, Xinni Kong, Yu Ouyang, Zhongzheng Li, Hong Zeng
Objective. Emotional states and mood disorders are closely interconnected, and their joint recognition serves as a critical pathway to uncovering their intrinsic relationship. Currently, deep learning (DL) models based on electroencephalogram (EEG) data have achieved significant progress in single tasks such as emotion recognition or mood disorder (MD) recognition. However, most existing models are limited to handling only one of these tasks independently and fail to effectively leverage the shared features in EEG data related to both emotions and mood disorders. This limitation hinders the in-depth exploration of the complex interplay between emotions and mood disorders. Therefore, this study aims to develop an EEG-based DL framework for the joint recognition of emotions and mood disorders, thereby providing a foundation for further investigation into their interaction. Approach. We design a multi-gate mixture-of-experts graph convolutional network model (MMoGCN) for joint emotion and MD recognition. MMoGCN comprises three key modules: (1) a feature extraction module based on differential entropy to robustly represent EEG signals; (2) a multi-gate shared-experts module, which integrates two experts and combines them through a gating mechanism to extract shared representations across tasks; and (3) adaptive task-specific towers, which consist of individual classification towers for each task and incorporate an adaptive weighting loss function to dynamically adjust task contributions. MMoGCN is evaluated on a self-collected dataset and further validated on the public DEAP dataset. Main results. MMoGCN achieves superior performance compared with state-of-the-art single-task and multi-task baselines in both emotion and MD recognition. Validation experiments on DEAP further demonstrate the scalability and generalization of MMoGCN. Significance. An effective multi-task learning model is proposed for joint emotion and MD recognition based on EEG. Additionally, cognitive differences in emotional responses between healthy controls and subjects with mood disorders are analyzed, providing methodological insights and potential assistance for cognitive rehabilitation from both cognitive and emotional perspectives.
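The multi-gate mixture-of-experts idea in module (2) reduces to a small amount of code: shared experts produce candidate representations, and a per-task softmax gate mixes them, so each task tower receives its own weighted combination. The sketch below simplifies the experts to linear layers and omits the GCN feature extractor and adaptive loss; layer sizes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiGateMoE(nn.Module):
    def __init__(self, in_dim, hid_dim, n_experts=2, n_tasks=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
             for _ in range(n_experts)])
        # One gating network per task, producing expert mixture weights.
        self.gates = nn.ModuleList(
            [nn.Linear(in_dim, n_experts) for _ in range(n_tasks)])

    def forward(self, x):
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)
        task_inputs = []
        for gate in self.gates:
            w = torch.softmax(gate(x), dim=-1)          # (batch, n_experts)
            task_inputs.append((w.unsqueeze(-1) * expert_out).sum(dim=1))
        return task_inputs  # one shared representation per task tower

# Toy usage: e.g. 62 channels x 5 differential-entropy bands = 310 features.
moe = MultiGateMoE(in_dim=310, hid_dim=64)
emotion_repr, disorder_repr = moe(torch.randn(16, 310))
```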
{"title":"MMoGCN: a multi-gate mixture of graph convolutional network model for EEG emotion and mood disorder recognition.","authors":"Daxing Zhang, Yaru Guo, Xinni Kong, Yu Ouyang, Zhongzheng Li, Hong Zeng","doi":"10.1088/1741-2552/ae37dc","DOIUrl":"10.1088/1741-2552/ae37dc","url":null,"abstract":"<p><p><i>Objective.</i>Emotional states and mood disorders are closely interconnected, and their joint recognition serves as a critical pathway to uncovering their intrinsic relationship. Currently, deep learning (DL) models based on electroencephalogram (EEG) have achieved significant progress in single tasks such as emotion recognition or mood disorder (MD) recognition. However, most existing models are limited to handling only one of these tasks independently and fail to effectively leverage the shared features in EEG data related to both emotions and mood disorders. This limitation hinders the in-depth exploration of the complex interplay between emotions and mood disorders. Therefore, this study aims to develop an EEG-based DL framework for the joint recognition of emotions and mood disorders, thereby providing a foundation for further investigation into their interaction.<i>Approach.</i>We design a multi-gate mixture-of-experts graph convolutional network model(MMoGCN) for joint emotion and MD recognition. MMoGCN comprises three key modules: (1) a feature extraction module based on differential entropy to robustly represent EEG signals; (2) a Multi-gated shared experts module, which integrates two experts, and combines them through a gating mechanism to extract shared representations across tasks; and (3) adaptive task-specific towers, which consist of individual classification towers for each task and incorporate an adaptive weighting loss function to dynamically adjust task contributions. MMoGCN is evaluated on a self-collected dataset and further validated on the public DEAP dataset.<i>Main results.</i>MMoGCN achieves superior performance compared with state-of-the-art single-task and multi-task baselines in both emotion and MD recognition. Validation experiments on DEAP further demonstrate the scalability and generalization of MMoGCN.<i>Significance.</i>An effective multi-task learning model is proposed for joint emotion and MD recognition based on EEG. Additionally, the cognitive differences are also analyzed in emotional responses between healthy controls and subjects with mood disorders, providing methodological insights and potential assistance for cognitive rehabilitation from both cognitive and emotional perspectives.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145968190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-21 | DOI: 10.1088/1741-2552/ae33f7
David T J Liley
Objective. Parkinson's disease (PD) is a common neurodegenerative disease best known for its defining motor symptoms. However, it is also associated with significant cognitive impairment at all stages of the disease, with many patients eventually progressing to dementia. Therefore, there exists a significant need to identify objective functional biomarkers that better predict and monitor cognitive decline. While methods that analyse either the spontaneous or evoked electroencephalogram (EEG) have been investigated, owing to their increasing practical usability and ostensible objectivity, current approaches are limited in that the associated measures are, in the absence of a theoretical basis, purely correlative. Approach. To address this shortcoming, we propose calculating changes in evoked EEG amplitude variability, quantified using information-theoretic differential entropy (DE), during a three-level passive auditory oddball task, as it is argued this will directly index functional changes in cognition. We therefore estimate changes in stimulus-evoked DE in cognitively normal PD participants (N = 25), both on and off their medication, and in healthy age-matched controls (N = 25), and find substantial stimulus (standard, target, novel) and group differences. Main results. Notably, we find that the return of post-stimulus reductions in DE (i.e. information processing) to pre-stimulus levels is delayed in PD compared to healthy controls, mirroring the assumed bradyphrenia. The observed changes in DE, together with the corollary increases in resting alpha (8-13 Hz) band activity seen in PD, are explained in the context of a well-known macroscopic theory of mammalian electrocortical activity, in terms of reduced tonic thalamo-cortical drive. Significance. This method of task-evoked DE EEG amplitude variability is expected to generalise to any situation where the objective determination of cognitive function is sought.
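The quantity at the heart of this study has a closed form under a Gaussian assumption: DE = 0.5 ln(2πeσ²), so reduced post-stimulus amplitude variability registers directly as an entropy drop. The sliding-window estimator below is a generic illustration of how such a DE time course could be computed, not the authors' pipeline; window and step sizes are arbitrary.

```python
import numpy as np

def gaussian_de(x):
    """Differential entropy of x under a Gaussian assumption (in nats)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def de_timecourse(epoch, win=50, step=10):
    """DE in sliding windows across a single-channel evoked epoch."""
    onsets = range(0, len(epoch) - win, step)
    return np.array([gaussian_de(epoch[i:i + win]) for i in onsets])
```

Tracking how long this time course takes to return to its pre-stimulus baseline gives the latency measure that distinguishes the PD group from controls.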
{"title":"Differences in stimulus evoked electroencephalographic entropy reduction distinguishes cognitively normal Parkinson's disease participants from healthy aged-matched controls.","authors":"David T J Liley","doi":"10.1088/1741-2552/ae33f7","DOIUrl":"10.1088/1741-2552/ae33f7","url":null,"abstract":"<p><p><i>Objective.</i>Parkinson's disease (PD) is a common neurodegenerative disease best known for its defining motor symptoms. However, it is also associated with significant cognitive impairment at all stages of the disease, with many patients eventually progressing to dementia. Therefore, there exists a significant need to identify objective functional biomarkers that better predict and monitor cognitive decline. While methods that analyse either spontaneous or evoked electroencephalogram (EEG), due to increasing practical usability and ostensible objectivity, have been investigated, current approaches are limited in that the associated measures are, in the absence of a theoretical basis, purely correlative.<i>Approach.</i>To address this shortcoming, we propose calculating changes in evoked EEG amplitude variability, quantified using information theoretic differential entropy (DE), during a three-level passive auditory oddball task, as it is argued this will directly index functional changes in cognition. We therefore estimate changes in stimulus-evoked DE in cognitively normal PD participants (<i>N</i>= 25), both on and off their medication, and in healthy age-matched controls (<i>N</i>= 25), and find substantial stimulus (standard, target, novel) and group differences.<i>Main results.</i>Notably, we find the time-course of the return of post-stimulus reductions in DE (i.e. information processing) to pre-stimulus levels delayed in PD compared to healthy controls, thus mirroring the assumed bradyphrenia. The observed changes in DE, together with the corollary increases in resting alpha (8-13 Hz) band activity seen in PD, are explained in the context of a well-known macroscopic theory of mammalian electrocortical activity, in terms of reduced tonic thalamo-cortical drive.<i>Significance.</i>This method of task-evoked DE EEG amplitude variability is expected to generalise to any situation where the objective determination of cognitive function is sought.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145914384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-20 | DOI: 10.1088/1741-2552/ae34ea
Lincong Pan, Kun Wang, Weibo Yi, Yang Zhang, Minpeng Xu, Dong Ming
Objective. Motor imagery brain-computer interfaces hold significant promise for neurorehabilitation, yet their performance is often compromised by electroencephalography (EEG) non-stationarity, low signal-to-noise ratios, and severe cross-session variability. Current decoding methods typically suffer from fragmented optimization, treating temporal, spectral, and spatial features in isolation. Approach. We propose common temporal-spectral-spatial patterns (CTSSP), a unified framework that jointly optimizes filters across all three domains. The algorithm integrates: (1) multi-scale temporal segmentation to capture dynamic neural evolution, (2) channel-adaptive finite impulse response filters to enhance task-relevant rhythms, and (3) low-rank regularization to improve generalization. Main results. Evaluated across five public datasets, CTSSP achieves state-of-the-art performance. It yielded mean accuracies of 76.9% (within-subject), 68.8% (cross-session), and 69.8% (cross-subject). In within-subject and cross-session scenarios, CTSSP significantly outperformed competing baselines by margins of 2.6%-14.6% (p < 0.001) and 2.3%-13.8% (p < 0.05), respectively. In cross-subject tasks, it achieved the highest average accuracy, proving competitive against deep learning models. Neurophysiological visualization confirms that the learned filters align closely with motor cortex activation mechanisms. Significance. CTSSP effectively overcomes the limitations of decoupled feature extraction by extracting robust, interpretable, and coupled temporal-spectral-spatial patterns. It offers a powerful, data-efficient solution for decoding MI EEG in noisy, non-stationary environments. The code is available at https://github.com/PLC-TJU/CTSSP.
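For orientation, the building blocks CTSSP couples can be shown in their classic decoupled form: an FIR band-pass stage followed by spatial filtering via a generalised eigen-decomposition (the common spatial patterns criterion). CTSSP's contribution is optimising these jointly with low-rank regularization; the two-step version below is only an illustration, with assumed band and filter settings, and the full method is at the repository linked above.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import firwin, lfilter

def bandpass(trials, fs=250, band=(8, 30), numtaps=65):
    """FIR band-pass each channel of trials shaped (n_trials, n_channels, n_samples)."""
    taps = firwin(numtaps, band, pass_zero=False, fs=fs)
    return lfilter(taps, 1.0, trials, axis=-1)

def mean_cov(trials):
    """Average trace-normalised spatial covariance over trials."""
    return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)

def csp_filters(trials_a, trials_b, n_filters=4):
    """Spatial filters maximising class-A variance relative to both classes."""
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)            # generalised eigenproblem
    order = np.argsort(vals)                  # extreme eigenvalues discriminate best
    picks = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
    return vecs[:, picks].T
```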
{"title":"CTSSP: A temporal-spectral-spatial joint optimization algorithm for motor imagery EEG decoding.","authors":"Lincong Pan, Kun Wang, Weibo Yi, Yang Zhang, Minpeng Xu, Dong Ming","doi":"10.1088/1741-2552/ae34ea","DOIUrl":"10.1088/1741-2552/ae34ea","url":null,"abstract":"<p><p><i>Objective.</i>Motor imagery brain-computer interfaces hold significant promise for neurorehabilitation, yet their performance is often compromised by electroencephalography (EEG) non-stationarity, low signal-to-noise ratios, and severe cross-session variability. Current decoding methods typically suffer from fragmented optimization, treating temporal, spectral, and spatial features in isolation.<i>Approach.</i>We propose common temporal-spectral-spatial patterns (CTSSP), a unified framework that jointly optimizes filters across all three domains. The algorithm integrates: (1) multi-scale temporal segmentation to capture dynamic neural evolution, (2) channel-adaptive finite impulse response filters to enhance task-relevant rhythms, and (3) low-rank regularization to improve generalization.<i>Main results.</i>Evaluated across five public datasets, CTSSP achieves state-of-the-art performance. It yielded mean accuracies of 76.9% (within-subject), 68.8% (cross-session), and 69.8% (cross-subject). In within-subject and cross-session scenarios, CTSSP significantly outperformed competing baselines by margins of 2.6%-14.6% (<i>p</i>< 0.001) and 2.3%-13.8% (<i>p</i>< 0.05), respectively. In cross-subject tasks, it achieved the highest average accuracy, proving competitive against deep learning models. Neurophysiological visualization confirms that the learned filters align closely with motor cortex activation mechanisms.<i>Significance.</i>CTSSP effectively overcomes the limitations of decoupled feature extraction by extracting robust, interpretable, and coupled temporal-spectral-spatial patterns. It offers a powerful, data-efficient solution for decoding MI EEG in noisy, non-stationary environments. The code is available athttps://github.com/PLC-TJU/CTSSP.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145919404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-20 | DOI: 10.1088/1741-2552/ae33f6
Jichu Zhang, Maryse Lapierre-Landry, Havisha Kalpatthi, Michael W Jenkins, David L Wilson, Nicole A Pelot, Andrew J Shoffstall
Objective. Precise segmentation and quantification of nerve morphology from imaging data are critical for designing effective and selective peripheral nerve stimulation (PNS) therapies. However, prior studies on nerve morphology segmentation suffer from important limitations in both accuracy and efficiency. This study introduces a deep learning approach for robust and automated three-dimensional (3D) segmentation of human vagus nerve fascicles and epineurium from high-resolution micro-computed tomography (microCT) images. Methods. We developed a multi-class 3D U-Net to segment fascicles and epineurium that incorporates a novel anatomy-aware loss function to ensure that predictions respect nerve topology. We trained and tested the network using subject-level five-fold cross-validation with 100 microCT volumes (11.4 μm isotropic resolution) from cervical and thoracic vagus nerves stained with phosphotungstic acid from five subjects. We benchmarked the 3D U-Net's performance against a two-dimensional (2D) U-Net using both standard and anatomy-specific segmentation metrics. Results. Our 3D U-Net generated high-quality segmentations (average Dice similarity coefficient: 0.93). Compared to a 2D U-Net, our 3D U-Net yielded significantly better volumetric overlap, boundary delineation, and fascicle instance detection. The 3D approach reduced anatomical errors (topological and morphological implausibility) by 2.5-fold, provided more consistent inter-slice boundaries, and improved detection of fascicle splits/merges by nearly 6-fold. Significance. Our automated 3D segmentation pipeline provides anatomically accurate 3D maps of peripheral neural morphology from microCT data. The automation allows for high throughput, and the substantial improvement in segmentation quality and anatomical fidelity enhances the reliability of morphological analysis, vagal pathway mapping, and the implementation of realistic computational models. These advancements provide a foundation for understanding the functional organization of the vagus and other peripheral nerves and optimizing PNS therapies.
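For reference, the headline metric above is the Dice similarity coefficient, 2|A∩B| / (|A| + |B|), between predicted and ground-truth masks. The generic utility below works unchanged on 3D volumes; the smoothing term guards against division by zero on empty masks. This is a standard formula, not the authors' code.

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice similarity coefficient of two binary masks (0 = disjoint, 1 = identical)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```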
{"title":"Automated 3D segmentation of human vagus nerve fascicles and epineurium from micro-computed tomography images using anatomy-aware neural networks.","authors":"Jichu Zhang, Maryse Lapierre-Landry, Havisha Kalpatthi, Michael W Jenkins, David L Wilson, Nicole A Pelot, Andrew J Shoffstall","doi":"10.1088/1741-2552/ae33f6","DOIUrl":"10.1088/1741-2552/ae33f6","url":null,"abstract":"<p><p><i>Objective.</i>Precise segmentation and quantification of nerve morphology from imaging data are critical for designing effective and selective peripheral nerve stimulation (PNS) therapies. However, prior studies on nerve morphology segmentation suffer from important limitations in both accuracy and efficiency. This study introduces a deep learning approach for robust and automated three-dimensional (3D) segmentation of human vagus nerve fascicles and epineurium from high-resolution micro-computed tomography (microCT) images.<i>Methods.</i>We developed a multi-class 3D U-Net to segment fascicles and epineurium that incorporates a novel anatomy-aware loss function to ensure that predictions respect nerve topology. We trained and tested the network using subject-level five-fold cross-validation with 100 microCT volumes (11.4<i>μ</i>m isotropic resolution) from cervical and thoracic vagus nerves stained with phosphotungstic acid from five subjects. We benchmarked the 3D U-Net's performance against a two-dimensional (2D) U-Net using both standard and anatomy-specific segmentation metrics.<i>Results.</i>Our 3D U-Net generated high-quality segmentations (average Dice similarity coefficient: 0.93). Compared to a 2D U-Net, our 3D U-Net yielded significantly better volumetric overlap, boundary delineation, and fascicle instance detection. The 3D approach reduced anatomical errors (topological and morphological implausibility) by 2.5-fold, provided more consistent inter-slice boundaries, and improved detection of fascicle splits/merges by nearly 6-fold.<i>Significance.</i>Our automated 3D segmentation pipeline provides anatomically accurate 3D maps of peripheral neural morphology from microCT data. The automation allows for high throughput, and the substantial improvement in segmentation quality and anatomical fidelity enhances the reliability of morphological analysis, vagal pathway mapping, and the implementation of realistic computational models. These advancements provide a foundation for understanding the functional organization of the vagus and other peripheral nerves and optimizing PNS therapies.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145914356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-19 | DOI: 10.1088/1741-2552/ae3a1a
Jonas Althoff, Waldo Nogueira
Objective: Electroencephalography (EEG) data can be used to decode an attended sound source in normal-hearing (NH) listeners, even for music stimuli. This information could steer the sound processing strategy for cochlear implant (CI) users, potentially improving their music listening experience. The aim of this study was to investigate whether selective auditory attention decoding (SAAD) could be performed in CI users for music stimuli.
Approach: High-density EEG was recorded from 8 NH listeners and 8 CI users. Duets containing a clarinet and a cello were presented dichotically. A linear decoder was trained to reconstruct audio features of the attended instrument from EEG data. The estimated attended instrument was selected based on which of the two instruments had the higher correlation with the reconstruction. EEG recordings are challenging in CI users, as these devices introduce strong electrical artifacts. We therefore also propose a new artifact rejection technique, termed ASICA, that employs independent component analysis (ICA) and automates the selection of components for removal.
Main results: We showed that it was possible to perform SAAD for music in CI users. With the proposed algorithm, the decoding accuracies were 59.4% for NH listeners and 60% for CI users. Using the proposed algorithm, the correlation coefficients between the reconstructed audio feature and the attended audio feature improved in conditions where the artifact was dominant.
Significance: Results indicate that selective auditory attention to musical instruments can be effectively decoded, and that this decoding is enhanced by the new artifact reduction algorithm, particularly in scenarios where the cochlear implant's electrical artifact has greater influence. Moreover, these results could be relevant as an objective measure of music perception or for a brain-computer interface that improves music enjoyment. Additionally, we showed that the stimulation artifact can be suppressed.
The ethics committee of the MHH approved this study (8874_BO_K_2020).
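The decoding decision described in the Approach section reduces to a correlation comparison: reconstruct an audio feature from EEG with a pre-trained linear decoder, then label the trial by whichever instrument's feature correlates better with the reconstruction. In the sketch below, the single weight vector and envelope features are simplifying stand-ins for the paper's exact decoder and feature set.

```python
import numpy as np

def decide_attended(eeg, decoder, env_clarinet, env_cello):
    """eeg: (n_samples, n_lagged_channels); decoder: trained weight vector."""
    reconstruction = eeg @ decoder
    # Pearson correlation of the reconstruction with each instrument's envelope.
    r = [np.corrcoef(reconstruction, env)[0, 1]
         for env in (env_clarinet, env_cello)]
    return ("clarinet", "cello")[int(np.argmax(r))]
```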
{"title":"Selective auditory attention decoding in bilateral cochlear implant users to music instruments.","authors":"Jonas Althoff, Waldo Nogueira","doi":"10.1088/1741-2552/ae3a1a","DOIUrl":"https://doi.org/10.1088/1741-2552/ae3a1a","url":null,"abstract":"<p><strong>Objective: </strong>Electroencephalography (EEG) data can be used to decode an attended sound source in normal-hearing (NH) listeners, even for music stimuli. This information could steer the sound processing strategy for cochlear implants (CIs) users, potentially improving their music listening experience. The aim of this study was to investigate whether selective auditory attention decoding (SAAD) could be performed in CI users for music stimuli.
Approach: High-density EEG was recorded from 8 NH and 8 CI users. Duets containing a clarinet and cello were dichotically presented. A linear decoder was trained to reconstruct audio features of the attended instrument from EEG data. The estimated attended instrument was selected based on which of the two instruments had a higher correlation to the reconstructed instrument. EEG recordings are challenging in CI users, as these devices introduce strong electrical artifacts. We also propose a new artifact rejection technique that employs ICA calculating ICs and automating their selection for removal, which we termed ASICA.
Main results: 
We showed that it was possible to perform SAAD for music in CI users. The decoding accuracies were 59.4 % for NH listeners and 60 % for CI users with the proposed algorithm. 
Using the proposed algorithm, the correlation coefficients between the reconstructed audio feature and the attended audio feature were improved in conditions where artifact was dominating. 
Significance: 
Results indicate that selective auditory attention to musical instruments can be effectively decoded, and that this decoding is enhanced by the new artifact reduction algorithm, particularly in scenarios where the cochlear implant's electrical artifact has greater influence.
Moreover, these results could be relevant as an objective measure of music perception or for a brain computer interface that improves music enjoyment. Additionally we showed that the stimulation artifact can be suppressed. 
The ethic's committee of the MHH approved this study (8874_BO_K_2020).</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146004935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}