Pub Date: 2026-02-05 | DOI: 10.1088/1741-2552/ae3d67
Breanne Christie, Nicolas Norena Acosta, Roksana Sadeghi, Arathy Kartha, Chigozie Ewulum, Avi Caspi, Francesco V Tenore, Gislin Dagnelie, Roberta L Klatzky, Seth D Billings
Objective. Visual impairments create significant challenges for navigation. This work explored the potential for an autonomous navigation aid with multisensory feedback to improve navigational performance for users of visual neuroprostheses. Approach. An autonomous navigation system was developed that maps the environment in real time and provides guidance using combinations of prosthetic vision, haptic, and auditory cues. Navigational performance was evaluated in 20 sighted participants using simulated prosthetic vision and in a single-subject case study of an Argus II visual neuroprosthesis user. Participants completed three tasks: navigation to a destination, obstacle field traversal, and relative distance judgment. Multiple sensory feedback configurations incorporating visual, haptic, and auditory cues were compared. Performance metrics included collision rate, distance traveled, task completion time, navigation success rate, and accuracy of relative distance judgments. Main results. Performance differences across sensory configurations were most pronounced in navigation success and collision rates. Haptic plus audio feedback was highly effective for navigation tasks, enabling successful navigation in nearly all trials involving haptic guidance. Argus vision (AV) alone was inadequate for navigation. Depth vision (DV) provided modest improvements over AV but did not enhance performance beyond haptic and audio guidance when combined. Wide field-of-view DV yielded additional benefits, particularly for obstacle field traversal, where its performance exceeded that of other modes. Adding AV to haptic and audio likewise provided no benefit and, in some cases, degraded performance. Performance trends for the Argus user were generally comparable to those of sighted participants across sensory modes, with the exception of the relative distance judgment task, in which the Argus user performed better. Among sighted participants, increased field of view and resolution independently improved relative distance judgment accuracy. Significance. These findings demonstrate the potential of multimodal feedback systems to improve navigation for prosthetic vision users. (ClinicalTrials.gov NCT04359108).
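As an illustration only, the per-trial metrics listed above (collision rate, distance, completion time, success) might be aggregated per feedback configuration as sketched below. All mode names and numbers are invented for the example, not data from the study.

```python
from collections import defaultdict

# Hypothetical trial records: (feedback_mode, collisions, distance_m, time_s, success)
trials = [
    ("haptic+audio", 0, 12.4, 38.0, True),
    ("haptic+audio", 1, 13.1, 41.5, True),
    ("argus_vision", 3, 18.9, 92.0, False),
    ("argus_vision", 2, 16.2, 75.5, True),
    ("depth_vision", 1, 14.0, 55.0, True),
]

def summarize(trials):
    """Aggregate per-mode collision rate, mean distance/time, and success rate."""
    groups = defaultdict(list)
    for mode, *rest in trials:
        groups[mode].append(rest)
    summary = {}
    for mode, rows in groups.items():
        n = len(rows)
        summary[mode] = {
            "collisions_per_trial": sum(r[0] for r in rows) / n,
            "mean_distance_m": sum(r[1] for r in rows) / n,
            "mean_time_s": sum(r[2] for r in rows) / n,
            "success_rate": sum(r[3] for r in rows) / n,
        }
    return summary

print(summarize(trials)["haptic+audio"]["success_rate"])  # 1.0
```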
Title: Autonomous multisensory enhancement of a visual neuroprosthesis for navigation: technical proof-of-concept with simulated prosthetic vision and single-subject case study of a visual prosthesis user.
Pub Date: 2026-02-04 | DOI: 10.1088/1741-2552/ae3c41
Jingyao Sun, Ruimou Xie, Jingyang Yu, Linhong Ji, Tianyu Jia, Yu Pan, Chong Li
Objective. Hybrid brain-computer interface (BCI) systems incorporate electroencephalography (EEG) and electromyography (EMG) signals to extract corticomuscular coherence (CMC) features, enabling self-modulation of neural communication. While promising for stroke rehabilitation, the neurophysiological mechanism underlying hybrid BCI therapy remains poorly understood. To address this gap, we characterized post-stroke CMC dynamics during ankle dorsiflexion and established their relationship with functional motor recovery. Approach. We acquired synchronous EEG and high-density EMG recordings from 13 subacute stroke patients (affected limb), before and after three weeks of rehabilitation, and from 9 age-matched healthy controls (dominant limb) during isometric ankle dorsiflexion. Using multivariate coupling analysis, we computed EEG and EMG projection vectors to identify optimal coupling patterns. Subsequently, we derived CMC spectra and topographies through coherence analysis to characterize corticomuscular interactions at spatial and spectral scales. Main results. Compared to healthy controls, stroke patients demonstrated reduced beta-band CMC patterns, particularly within the sensorimotor areas involved in foot movement. No significant differences in CMC patterns were observed between stroke patients before and after rehabilitation training. Further analysis revealed a significant correlation between beta-band CMC changes and clinical improvements measured by the Berg balance scale. Significance. Beta-band CMC is a potential neurophysiological biomarker of motor recovery following stroke. These findings provide novel insights into the disrupted corticomuscular communication underlying post-stroke motor dysfunction, while offering mechanistic evidence to guide the design and implementation of hybrid BCI systems that target these specific biomarkers for therapeutic intervention.
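The coherence step of such an analysis can be sketched as below. This is a simplified single-channel version using `scipy.signal.coherence` on synthetic signals, whereas the study first optimizes multivariate projection vectors over channel sets; the beta-band limits and sampling rate here are assumptions for the example.

```python
import numpy as np
from scipy.signal import coherence

def beta_band_cmc(eeg, emg, fs=1000, band=(13.0, 30.0)):
    """Magnitude-squared coherence between one EEG and one EMG channel,
    averaged over the beta band (a single-channel stand-in for the paper's
    multivariate coupling analysis)."""
    f, cxy = coherence(eeg, emg, fs=fs, nperseg=fs)  # 1 s windows -> 1 Hz bins
    mask = (f >= band[0]) & (f <= band[1])
    return float(np.mean(cxy[mask]))

# Toy signals: a shared broadband "cortical drive" plus independent noise.
rng = np.random.default_rng(0)
common = rng.standard_normal(10_000)
eeg = common + 0.5 * rng.standard_normal(10_000)
emg = common + 0.5 * rng.standard_normal(10_000)
print(round(beta_band_cmc(eeg, emg), 2))
```

With correlated inputs the band-averaged coherence lands well above the bias floor obtained for independent signals.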
Title: Dynamic modulation of corticomuscular coherence during ankle dorsiflexion after stroke: towards hybrid BCI for lower-limb rehabilitation.
Pub Date: 2026-02-03 | DOI: 10.1088/1741-2552/ae33f8
Lufeng Feng, Baomin Xu, Li Duan, Wei Ni, Quan Z Sheng
Objective. Epilepsy is a chronic brain disorder characterized by recurrent seizures due to abnormal neuronal firing. Electroencephalogram (EEG)-based seizure classification has become an important auxiliary tool in clinical practice. This study aims to reduce reliance on expert experience in diagnosis and to improve the automated classification of epileptic seizures using EEG signals. Approach. We propose a novel filter-bank, multi-view, attention-based neural network model for seizure classification. The model employs a learnable filter bank to decompose the raw EEG into multiple frequency sub-bands, forming multi-view representations. A multi-branch group convolution network is designed to capture multi-scale frequency-spatial features, while temporal dependencies are extracted through a bidirectional long short-term memory with an attention mechanism. A shared attention module adaptively emphasizes the most informative sub-bands and time windows for classification. Main results. The proposed model achieves an overall F1 score of 0.7105, a weighted F1 (WF1) score of 0.8314, and a Cohen's kappa coefficient of 0.6345 on the TUSZ v1.5.2 dataset. Compared with the baseline method FBCNet, the proposed model improves the overall F1 score by 3.22% (p < 0.05), the WF1 score by 1.42% (p < 0.05), and Cohen's kappa coefficient by 2.87% (p < 0.05). The best results are also obtained on the CHB-MIT dataset. Significance. These results demonstrate the effectiveness of combining multi-view feature extraction with attention-enhanced temporal modeling.
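The three reported metrics can be computed with scikit-learn as sketched below. The labels are invented, and treating "overall F1" as the macro average is an assumption on our part.

```python
from sklearn.metrics import f1_score, cohen_kappa_score

# Hypothetical per-window predictions over 4 seizure classes (0 = background).
y_true = [0, 0, 0, 1, 1, 2, 2, 3, 0, 1]
y_pred = [0, 0, 1, 1, 1, 2, 0, 3, 0, 1]

overall_f1  = f1_score(y_true, y_pred, average="macro")     # unweighted mean over classes
weighted_f1 = f1_score(y_true, y_pred, average="weighted")  # weighted by class support
kappa       = cohen_kappa_score(y_true, y_pred)             # chance-corrected agreement
print(round(overall_f1, 3), round(weighted_f1, 3), round(kappa, 3))
```

Note that macro and weighted F1 can rank models differently when class support is imbalanced, which is why the paper reports both.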
Title: A multi-view neural framework with attention for epileptic seizure classification.
Pub Date: 2026-02-02 | DOI: 10.1088/1741-2552/ae409c
Francesc Varkevisser, Wouter A Serdijn, Tiago Costa
Objective: Neuroprosthetic devices require multichannel stimulator systems with an increasing number of channels. However, there are inherent power losses in typical multichannel stimulation circuits caused by mismatches between the power supply voltage and the voltage required at each electrode to successfully stimulate tissue. This imposes a bottleneck towards high-channel-count devices, which is particularly severe in wirelessly-powered devices. Hence, advances in the power efficiency of stimulation systems are critical. To support these advances, this paper presents a methodology to identify and quantify power losses associated with different power supply scaling strategies in multichannel stimulation systems.
Approach: The methodology uses distributions of stimulation amplitudes and electrode impedances to calculate power losses in multichannel systems. Experimental data from prior studies spanning various stimulation applications were analyzed to evaluate the performance of fixed, global, and stepped supply scaling methods, focusing on their impact on power dissipation and efficiency.
Main results: Variability in output conditions results in low power efficiency in multichannel stimulation systems across all applications. Stepped voltage scaling demonstrates substantial improvements, achieving efficiency increases of 43% to 100%, particularly in high-channel-count applications with significant variability in tissue impedance. In contrast, global scaling proved effective only in systems with fewer channels and minimal inter-channel variation.
Significance: The findings highlight the importance of tailoring power management strategies to specific applications to optimize efficiency while minimizing system complexity. The proposed methodology provides a framework for evaluating trade-offs between efficiency and system complexity, facilitating the design of more scalable and power-efficient neurostimulation systems.
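The core loss mechanism described above can be sketched numerically: a fixed supply must be sized for the worst-case channel, so every other channel burns the voltage headroom in its driver, whereas a stepped supply lets each channel draw from the lowest rail that still covers its required voltage. All per-channel values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-channel requirements: stimulation current and electrode voltage.
currents = rng.uniform(0.1e-3, 1.0e-3, size=64)   # A
voltages = rng.uniform(2.0, 9.0, size=64)         # V required at each electrode

def efficiency_fixed(v, i):
    """One supply sized for the worst-case channel; headroom dissipated in drivers."""
    v_dd = v.max()
    return float(np.sum(v * i) / np.sum(v_dd * i))

def efficiency_stepped(v, i, steps):
    """Each channel powered from the lowest supply step >= its required voltage."""
    rails = np.linspace(v.max() / steps, v.max(), steps)
    v_dd = rails[np.searchsorted(rails, v)]       # smallest covering rail
    return float(np.sum(v * i) / np.sum(v_dd * i))

print(f"fixed: {efficiency_fixed(voltages, currents):.2f}, "
      f"stepped(4): {efficiency_stepped(voltages, currents, 4):.2f}")
```

The wider the spread of required electrode voltages, the larger the gap between the two strategies, which mirrors the paper's finding that stepped scaling pays off most in high-channel-count systems with variable tissue impedance.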
Title: Analysis of power losses and the efficacy of power minimization strategies in multichannel electrical stimulation systems.
Pub Date: 2026-02-02 | DOI: 10.1088/1741-2552/ae3a1b
Aurélie de Borman, Bob Van Dyck, Kato Van Rooy, Evelien Carrette, Alfred Meurs, Dirk Van Roost, Marc M Van Hulle
Objective. Speech brain-computer interfaces (BCIs) aim to provide an alternative means of communication for individuals who are not able to speak. Remarkable progress has been achieved in decoding attempted speech in individuals with severe anarthria. In contrast, imagined speech remains challenging to decode, and the underlying neural mechanisms and relations to other speech modes are still elusive. Approach. In this study, we collected low-density electrocorticography signals from ten participants during a word repetition task. Electrodes were implanted for presurgical epilepsy evaluation in participants with preserved speech abilities. Models were developed using linear discriminant analysis to classify five words across different speech modes. We compared models trained during speaking, listening, imagining speaking, mouthing, and reading. The relations between speech modes were investigated by transferring and augmenting models across speech modes. Main results. As expected, performed speech achieved the highest word classification accuracy, followed by listening, mouthing, imagining, and reading. While the accuracies obtained were not high enough for practical application, model transfer and augmentation could be investigated across speech modes. Transferring or augmenting models from one speech mode to another could significantly improve model performance. In particular, patterns learned from performed and perceived speech could generalize to imagined speech, leading to significantly improved imagined speech performance in seven participants. For four participants, imagined speech could be decoded above chance exclusively when models were transferred or augmented with performed or perceived speech. Significance. Imagined speech is often preferred by speech BCI users over attempted speech, as it requires less effort and can be produced more quickly. Transferring models across speech modes has the potential to facilitate and boost the development of imagined speech decoders.
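The augmentation idea can be sketched with scikit-learn's linear discriminant analysis on synthetic features. The data generator and its scale/noise parameters are invented stand-ins for ECoG features; the only property carried over from the paper is that modes share class structure while imagined speech is weaker and has fewer trials.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

rng = np.random.default_rng(2)
n_classes, n_feat = 5, 20
means = rng.standard_normal((n_classes, n_feat))  # shared per-word patterns

def make_mode(n_per_class, scale, noise):
    """Synthetic trials for one speech mode: shared class structure,
    mode-specific attenuation and noise."""
    X = np.vstack([scale * means[c] + noise * rng.standard_normal((n_per_class, n_feat))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

X_perf, y_perf = make_mode(40, scale=1.0, noise=1.0)  # performed speech: strong, plentiful
X_imag, y_imag = make_mode(10, scale=0.4, noise=1.0)  # imagined speech: weak, scarce
X_test, y_test = make_mode(30, scale=0.4, noise=1.0)  # held-out imagined trials

imag_only = LDA().fit(X_imag, y_imag).score(X_test, y_test)
augmented = LDA().fit(np.vstack([X_perf, X_imag]),
                      np.hstack([y_perf, y_imag])).score(X_test, y_test)
print(round(imag_only, 2), round(augmented, 2))
```

Because the class patterns point in the same directions across modes, the decoder trained on the augmented set can transfer its decision boundaries to the weaker imagined-speech features.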
Title: Word classification across speech modes from low-density electrocorticography signals.
Objective. Penetrating polymer-based microelectrode arrays (pMEAs) offer the potential for long-term, high-quality electrophysiological recordings of dynamic neural activity. Compared to rigid metal wire and silicon MEAs, improved device-tissue interface stability has been reported. However, accurate surgical placement of long, thin shanks in deeper brain regions is challenging, as flexibility is achieved at the expense of axial stiffness. This study systematically evaluates and compares two pMEA placement strategies-dissolvable dip coating and molded brace, both with bare, exposed pMEA tips-to address the need for consistent, reliable, and accurate surgical targeting. These methods were selected based on the criteria of ease of fabrication, surgical feasibility, and mechanical performance. Approach. Sham (mechanical model with no electrodes) and fully functional pMEAs with shanks up to 5.5 mm long were fabricated and then modified using biodegradable polyethylene glycol (PEG) to support implantation. PEG was applied to shanks by motorized dip coating or a mechanical mold. Dissolution time and insertion in agarose gel brain models and rat cortex were evaluated, followed by targeting of dip-coated pMEAs to the rat hippocampus. Main results. Dip coating at high withdrawal speeds achieved uniform coating on shanks. Both strategies yielded similar critical buckling forces and insertion forces for single-shank and arrayed pMEAs. Dip-coated pMEAs were successfully placed in hippocampal regions without severe tissue damage, as confirmed by histology and the recordings obtained. Significance. Dip coating is a simpler method to prepare pMEAs for surgical targeting of deep brain regions compared to the bracing technique, as it does not require both a specialized mold and application process. This work provides researchers using single- or multi-shank pMEAs with an accessible insertion strategy for implanting into deep brain regions in rodents and other small animal models.
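The critical buckling force mentioned above is commonly estimated with Euler's column formula; the sketch below uses assumed material and geometry values (Parylene-C modulus, shank dimensions, end conditions), none of which are taken from the paper.

```python
import math

def critical_buckling_force(E, width, thickness, length, K=1.0):
    """Euler critical load P_cr = pi^2 * E * I / (K * L)^2 for a rectangular
    cross-section shank, with I = w * t^3 / 12 (bending about the thin axis).
    K is the effective-length factor set by the end conditions."""
    I = width * thickness**3 / 12.0
    return math.pi**2 * E * I / (K * length) ** 2

# Illustrative values (assumed): Parylene-C shank, E ~ 2.8 GPa,
# 300 um wide, 20 um thick, 5.5 mm long, pinned-pinned ends (K = 1).
p_cr = critical_buckling_force(E=2.8e9, width=300e-6, thickness=20e-6, length=5.5e-3)
print(f"{p_cr * 1e3:.3f} mN")
```

Sub-millinewton critical loads of this order are why bare flexible shanks buckle before penetrating tissue, motivating stiffening aids such as the PEG coating or brace evaluated in the study.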
Title: Systematic evaluation of surgical insertion of flexible neural probe arrays into deeper brain targets using length modulation methods. (Pub Date: 2026-02-02 | DOI: 10.1088/1741-2552/ae385c; authors: Yingyi Gao, Zhouxiao Lu, Xuechun Wang, Zihan Jin, Alberto Esteban-Linares, Jeffery Guo, Huijing Xu, Kee Scholten, Dong Song, Ellis Meng)
Pub Date: 2026-01-30 | DOI: 10.1088/1741-2552/ae3ae1
Alexis D MacIntyre, Clément Gaultier, Tobias Goehring
Objective. During speech perception, properties of the acoustic stimulus can be reconstructed from the listener's brain using methods such as electroencephalography (EEG). Most studies employ the amplitude envelope as a target for decoding; however, speech acoustics can be characterised on multiple dimensions, including as spectral descriptors. The current study assesses how robustly an extended acoustic feature set can be decoded from EEG under varying levels of intelligibility and acoustic clarity. Approach. Analysis was conducted using EEG from 38 young adults who heard intelligible and non-intelligible speech that was either unprocessed or spectrally degraded using vocoding. We extracted a set of acoustic features which, alongside the envelope, characterised instantaneous properties of the speech spectrum (e.g. spectral slope) or spectral change over time (e.g. spectral flux). We establish the robustness of feature decoding by employing multiple model architectures and, in the case of linear decoders, by standardising decoding accuracy (Pearson's r) using randomly permuted surrogate data. Main results. Linear models yielded the highest r relative to non-linear models. However, the separate decoder architectures produced a similar pattern of results across features and experimental conditions. After converting r values to Z-scores scaled by random data, we observed substantive differences in the noise floor between features. Decoding accuracy varied significantly with spectral degradation and speech intelligibility for some features, but such differences were reduced in the most robustly decoded features. This suggests acoustic feature reconstruction is primarily driven by generalised auditory processing. Significance. Our results demonstrate that linear decoders perform comparably to non-linear decoders in capturing the EEG response to speech acoustic properties beyond the amplitude envelope, with the reconstructive accuracy of some features also associated with understanding and spectral clarity. This sheds light on how sound properties are differentially represented by the brain and shows potential for clinical applications moving forward.
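The surrogate-data standardisation can be sketched as a simple permutation test: the observed Pearson's r is expressed in standard deviations of a null distribution obtained by shuffling the target feature. This is a simplified stand-in for the paper's procedure (all signals below are synthetic; for autocorrelated time series, circular time shifts are usually preferred over full permutation).

```python
import numpy as np

def decoding_z_score(r_obs, decoded, target, n_perm=1000, seed=0):
    """Standardize an observed decoding accuracy (Pearson's r) against a
    surrogate distribution built by randomly permuting the target feature."""
    rng = np.random.default_rng(seed)
    null = np.empty(n_perm)
    for k in range(n_perm):
        null[k] = np.corrcoef(decoded, rng.permutation(target))[0, 1]
    return (r_obs - null.mean()) / null.std()

rng = np.random.default_rng(3)
target  = rng.standard_normal(500)                 # e.g. spectral flux over time
decoded = 0.3 * target + rng.standard_normal(500)  # imperfect reconstruction
r_obs = float(np.corrcoef(decoded, target)[0, 1])
z = float(decoding_z_score(r_obs, decoded, target))
print(round(r_obs, 2), round(z, 1))
```

Scaling by the surrogate noise floor is what makes features with different null distributions comparable, which is the point of the Z-score conversion described in the abstract.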
{"title":"Decoding of speech acoustics from EEG: going beyond the amplitude envelope.","authors":"Alexis D MacIntyre, Clément Gaultier, Tobias Goehring","doi":"10.1088/1741-2552/ae3ae1","DOIUrl":"10.1088/1741-2552/ae3ae1","url":null,"abstract":"<p><p><i>Objective.</i>During speech perception, properties of the acoustic stimulus can be reconstructed from the listener's brain using methods such as electroencephalography (EEG). Most studies employ the amplitude envelope as a target for decoding; however, speech acoustics can be characterised on multiple dimensions, including as spectral descriptors. The current study assesses how robustly an extended acoustic feature set can be decoded from EEG under varying levels of intelligibility and acoustic clarity.<i>Approach.</i>Analysis was conducted using EEG from 38 young adults who heard intelligible and non-intelligible speech that was either unprocessed or spectrally degraded using vocoding. We extracted a set of acoustic features which, alongside the envelope, characterised instantaneous properties of the speech spectrum (e.g. spectral slope) or spectral change over time (e.g. spectral flux). We establish the robustness of feature decoding by employing multiple model architectures and, in the case of linear decoders, by standardising decoding accuracy (Pearson's<i>r</i>) using randomly permuted surrogate data.<i>Main results</i>. Linear models yielded the highest<i>r</i>relative to non-linear models. However, the separate decoder architectures produced a similar pattern of results across features and experimental conditions. After converting<i>r</i>values to<i>Z</i>-scores scaled by random data, we observed substantive differences in the noise floor between features. Decoding accuracy significantly varies by spectral degradation and speech intelligibility for some features, but such differences are reduced in the most robustly decoded features. 
This suggests acoustic feature reconstruction is primarily driven by generalised auditory processing.<i>Significance</i>. Our results demonstrate that linear decoders perform comparably to non-linear decoders in capturing the EEG response to speech acoustic properties beyond the amplitude envelope, with the reconstructive accuracy of some features also associated with understanding and spectral clarity. This sheds light on how sound properties are differentially represented by the brain and shows potential for clinical applications moving forward.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146013968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-30 DOI: 10.1088/1741-2552/ae3a1c
Yun-Yu Li, Nan-Hui Huang, Ming-Dou Ker
Objective. Temporal interference stimulation (TIS) has emerged as an innovative and promising approach for non-invasive stimulation. While previous studies have demonstrated the efficacy and performance of TIS using benchtop instruments, a dedicated system-on-chip for TIS applications has not yet been reported. This work addresses that gap by presenting a TIS chip design that enhances portability, thereby facilitating wearable applications of TIS. Approach. A miniaturized dual-channel temporal interference stimulator for non-invasive neuromodulation is proposed and fabricated in a 0.18 µm CMOS BCD process. The TIS chip occupies a silicon area of only 2.66 mm². It generates output signals with a maximum amplitude of ±5 V and reliable frequency, with programmable input parameters to accommodate diverse biomedical applications. The carrier frequencies of the generated signals include 1 kHz, 2 kHz, and 3 kHz, combined with beat frequencies of 5 Hz, 10 Hz, and 20 Hz, yielding a total of nine operation modes for effective TIS. Main results. The proposed chip effectively generated temporally interfering signals with reliable frequency and amplitude. To validate its efficacy, in-vivo animal experiments were conducted, demonstrating the chip's ability to produce electrical stimulation signals that successfully elicit neural responses in the deep brain of a pig. Significance. This work replaces the bulky external stimulator with a fully integrated silicon chip, significantly enhancing portability and supporting future wearable clinical applications.
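The beat-frequency principle behind these carrier/beat combinations can be illustrated numerically: summing two carriers whose frequencies differ by the beat frequency produces a low-frequency envelope at that difference, since sin(a) + sin(b) = 2 sin((a+b)/2) cos((a−b)/2). A minimal sketch for one of the nine modes; the sample rate and unit amplitudes are arbitrary assumptions, not chip parameters.

```python
import numpy as np

fs = 100_000                       # sample rate in Hz (simulation only)
t = np.arange(0, 1, 1 / fs)        # one second of signal
f_carrier, f_beat = 2000, 10       # 2 kHz carrier, 10 Hz beat (one chip mode)

# the two channels drive carriers offset by the beat frequency
s1 = np.sin(2 * np.pi * f_carrier * t)
s2 = np.sin(2 * np.pi * (f_carrier + f_beat) * t)
mix = s1 + s2                      # superposition at the target site

# the sum equals a fast carrier modulated by a slow envelope,
# and the envelope magnitude oscillates at the beat frequency
envelope = np.abs(2 * np.cos(np.pi * f_beat * t))
```

Neural tissue is thought to follow the slow envelope rather than the kilohertz carriers, which is what allows the interference pattern to stimulate deep targets without strong superficial activation.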
{"title":"Temporal interference stimulator realized with silicon chip for non-invasive neuromodulation.","authors":"Yun-Yu Li, Nan-Hui Huang, Ming-Dou Ker","doi":"10.1088/1741-2552/ae3a1c","DOIUrl":"10.1088/1741-2552/ae3a1c","url":null,"abstract":"<p><p><i>Objective.</i>Temporal interference stimulation (TIS) has emerged as an innovative and promising approach for non-invasive stimulation. While previous studies have demonstrated the efficacy and performance of TIS using benchtop instruments, a dedicated system-on-chip for TIS applications has not yet been reported. This work addresses this gap by presenting a design for a TIS chip that enhances portability, thereby facilitating wearable applications of TIS.<i>Approach.</i>A miniaturized dual-channel temporal interference stimulator for non-invasive neuro-modulation is proposed and fabricated in a 0.18<i>µ</i>m CMOS BCD process. The TIS chip occupies the silicon area of only 2.66 mm<sup>2</sup>. It generates output signals with a maximum amplitude of ±5 V and reliable frequency, with programmable input parameters to accommodate diverse biomedical applications. The carrier frequencies of the generated signals include 1 kHz, 2 kHz, and 3 kHz, combined with beat frequencies of 5 Hz, 10 Hz, and 20 Hz. This results in a total of nine available operation modes, enabling effective TIS.<i>Main results.</i>The proposed chip has effectively generated temporally interfering signals with reliable frequency and amplitude. 
To validate the efficacy of the TIS chip,<i>in-vivo</i>animal experiments have been conducted, demonstrating its ability to produce effective electrical stimulation signals that successfully elicit neural responses in the deep brain of a pig.<i>Significance.</i>This work has replaced the bulky external stimulator with a fully integrated silicon chip, significantly enhancing portability and supporting future wearable clinical applications.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146004523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-29 DOI: 10.1088/1741-2552/ae3f58
Rubén Eguinoa, Ricardo San Martín, Pilar Luna, Maria Herrojo-Ruiz, Carmen Vidaurre
Objective. Recent developments in computational neuroscience have shed light on the neural processes underlying altered decision-making under uncertainty in anxiety. These disruptions are partly attributed to impaired encoding of precision-weighted prediction errors (pwPEs), which guide belief updating during learning and decision-making, as described by hierarchical Bayesian models. In this paper, we introduce a gamified paradigm for collecting decision-making data, together with a framework for extracting EEG features linked to computationally relevant variables, drawing on principles from neurofeedback and brain-computer interface research. This approach aims to develop tools that target functionally meaningful brain networks involved in decision-making, with the potential to inform future neurofeedback interactions. Approach. Forty healthy participants performed a volatile decision-making task in a game-based, immersive environment. EEG data were analysed to identify spatial filters whose theta- and alpha-band power correlated with pwPEs and state anxiety scores. Both intra-subject (trial-wise pwPEs) and inter-subject (state anxiety) analyses were conducted to uncover distinct neural signatures. Main results. The intra-subject analysis revealed that pwPEs were significantly and positively correlated with theta power, and significantly and negatively correlated with alpha power, supporting the hypothesis that these oscillatory patterns underlie belief updating. In contrast, the inter-subject analysis showed that higher state anxiety was associated with reduced theta and increased alpha power, consistent with attenuated learning and impaired adaptation in anxious individuals.
These findings align with theoretical models of hierarchical Bayesian inference and prior evidence of anxiety-related disruptions in uncertainty processing. Significance. The findings validate the proposed EEG framework for identifying neural markers related to belief updating and anxiety-related learning impairments. This approach lays the foundation for personalized neurofeedback procedures that target maladaptive decision-making in anxiety, with the added benefit of using immersive task paradigms for better engagement and translational potential for real-world applications.
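The core of the intra-subject correlation analysis can be mimicked on synthetic data: if theta-band amplitude scales with a trial-wise regressor (a stand-in for pwPEs), per-trial band power recovers a positive correlation. Everything below is a toy simulation under stated assumptions, not the paper's spatial-filter pipeline; a plain FFT band sum substitutes for the learned filters.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_trials, n_samp = 250, 200, 500     # assumed sampling rate and trial count
t = np.arange(n_samp) / fs
pwpe = rng.random(n_trials)              # hypothetical trial-wise regressor

# toy EEG: 6 Hz (theta) amplitude scales with the regressor, plus noise
trials = np.array([
    (0.5 + p) * np.sin(2 * np.pi * 6 * t + rng.uniform(0, 2 * np.pi))
    + 0.3 * rng.standard_normal(n_samp)
    for p in pwpe
])

# per-trial theta power via FFT, summing bins in the 4-8 Hz band
freqs = np.fft.rfftfreq(n_samp, 1 / fs)
band = (freqs >= 4) & (freqs <= 8)
power = (np.abs(np.fft.rfft(trials, axis=1)[:, band]) ** 2).sum(axis=1)

r = np.corrcoef(pwpe, np.log(power))[0, 1]   # positive theta-pwPE correlation
```

Log-transforming power before correlating is a common choice because band power is right-skewed across trials.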
{"title":"An EEG correlation framework to study state anxiety and learning under uncertainty.","authors":"Rubén Eguinoa, Ricardo San Martín, Pilar Luna, Maria Herrojo-Ruiz, Carmen Vidaurre","doi":"10.1088/1741-2552/ae3f58","DOIUrl":"https://doi.org/10.1088/1741-2552/ae3f58","url":null,"abstract":"<p><p><i>Objective.</i>Recent developments in computational neuroscience have shed light on the neural processes underlying altered decision-making under uncertainty in anxiety. These disruptions are partly attributed to impaired encoding of precision-weighted prediction errors (pwPEs), which guide belief updating during learning and decision-making, as described by hierarchical Bayesian models. In this paper, we introduce a gamified paradigm for collecting decision-making data, together with a framework for extracting EEG features linked to computationally relevant variables, drawing on principles from neurofeedback and brain-computer interface research. This approach aims to develop tools that target functionally meaningful brain networks involved in decision-making, with the potential to inform future neurofeedback interactions.<i>Approach.</i>Forty healthy participants performed a volatile decision-making task in a game-based, immersive environment. EEG data were analysed to identify spatial filters whose theta- and alpha-band power correlated with pwPEs and state anxiety scores. Both intra-subject (trial-wise pwPEs) and intersubject (state anxiety) analyses were conducted to uncover distinct neural signatures.<i>Main results.</i>The intra-subject analysis revealed that pwPEs were significantly and positively correlated with theta power, and significantly and negatively correlated with alpha power - supporting the hypothesis that these oscillatory patterns underlie belief updating. 
In contrast, the inter-subject analysis showed that higher state anxiety was associated with reduced theta and increased alpha power, consistent with attenuated learning and impaired adaptation in anxious individuals. These findings align with theoretical models of hierarchical Bayesian inference and prior evidence of anxiety-related disruptions in uncertainty processing.<i>Significance.</i>The findings validate the proposed EEG framework for identifying neural markers related to belief updating and anxiety-related learning impairments. This approach lays the foundation for personalized neurofeedback procedures that target maladaptive decision-making in anxiety, with the added benefit of using immersive task paradigms for better engagement and translational potential for real-world applications.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146088439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-27 DOI: 10.1088/1741-2552/ae3e16
Anderson Roy Phillips, Yash Shashank Vakilna, Dorsa E P Moghaddam, Anton R Banta, John Mosher, Behnaam Aazhang
Electroencephalography (EEG) provides robust, cost-effective, and portable measurements of brain electrical activity. However, its spatial resolution is limited, constraining the localization and estimation of deep sources. Although methods exist to infer neural activity from scalp recordings, major challenges remain due to high dimensionality, temporal overlap among neural sources, and anatomical variability in head geometry. This topical review synthesizes inverse modeling approaches, with emphasis on nonlinear methods, multimodal integration, and high-density EEG systems that address these limitations. We also review the forward model and related background theory, summarize clinical applications, outline research directions, and identify available software tools and relevant publicly available datasets. Our goal is to help researchers understand traditional source estimation techniques and integrate advanced methods that may better capture the complexity of neurophysiological sources.
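Many of the linear inverse methods such a review covers reduce to a regularised minimum-norm estimate: given a leadfield L (sensors × sources) from the forward model and scalp data y, the source estimate is x̂ = Lᵀ(LLᵀ + λI)⁻¹y. A toy sketch with a random, hypothetical leadfield (a real one would come from a head model); dimensions, λ, and source locations are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_sources = 32, 500
L = rng.standard_normal((n_sensors, n_sources))   # toy leadfield matrix

# sparse ground-truth source activity and its (noisy) scalp projection
x_true = np.zeros(n_sources)
x_true[[40, 310]] = [1.0, -1.5]
y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)

# minimum-norm estimate with Tikhonov regularisation:
# x_hat = L^T (L L^T + lam * I)^(-1) y
lam = 0.1
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
```

Because the problem is heavily underdetermined (500 unknowns, 32 measurements), the estimate is a smeared projection of the true sources, which illustrates the limited spatial resolution and depth bias that motivate the nonlinear and multimodal approaches surveyed.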
{"title":"Inferring neural sources from electroencephalography: Foundations and frontiers.","authors":"Anderson Roy Phillips, Yash Shashank Vakilna, Dorsa E P Moghaddam, Anton R Banta, John Mosher, Behnaam Aazhang","doi":"10.1088/1741-2552/ae3e16","DOIUrl":"https://doi.org/10.1088/1741-2552/ae3e16","url":null,"abstract":"<p><p>Electroencephalography (EEG) provides robust, cost-effective, and portable measurements of brain electrical activity. However, its spatial resolution is limited, constraining the localization and estimation of deep sources. Although methods exist to infer neural activity from scalp recordings, major challenges remain due to high dimensionality, temporal overlap among neural sources, and anatomical variability in head geometry. This topical review synthesizes inverse modeling approaches, with emphasis on nonlinear methods, multimodal integration, and high-density EEG systems that address these limitations. We also review the forward model and related background theory, summarize clinical applications, outline research directions, and identify available software tools and relevant publicly available datasets. Our goal is to help researchers understand traditional source estimation techniques and integrate advanced methods that may better capture the complexity of neurophysiological sources.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2026-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146069479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}