Successful transfer of myoelectric skill from virtual interface to prosthesis control.
Pub Date: 2025-12-23 | DOI: 10.1088/1741-2552/ae2803
Sigrid Dupan, Simon Stuttaford, Matthew Dyson
Objective. Prosthesis control can be seen as a new skill to be learned. To enhance learning, both internal and augmented feedback are exploited. The latter represents external feedback sources that can be designed to enhance learning, e.g. biofeedback. Previous research has shown that augmented feedback protocols can be designed to induce retention by adhering to the guidance hypothesis, but it is not yet clear whether this also results in transfer of those skills to prosthesis control. In this study, we test whether a training paradigm optimised for retention allows for the transfer of myoelectric skill to prosthesis control. Approach. Twelve limb-intact participants learned a novel myoelectric skill during five one-hour training sessions. To induce retention of the novel myoelectric skill, we used a delayed feedback paradigm. Prosthesis transfer was tested through pre- and post-tests with a prosthesis. Prosthesis control tests included a grasp matching task, the modified box and blocks test, and an object manipulation task, requiring five grasps in total ('power', 'tripod', 'pointer', 'lateral grip', and 'hand open'). Main results. We found that prosthesis control improved significantly following five days of training. Importantly, the prosthesis control metrics were significantly related to the retention metric during training, but not to the prosthesis performance during the pre-test. Significance. This study shows that transfer of novel, abstract myoelectric control from a computer interface to prosthetic control is possible if the training paradigm is designed to induce retention. These results highlight the importance of approaching myoelectric and prosthetic skills from a skill acquisition standpoint, and open up new avenues for the design of prosthetic training protocols.
{"title":"Successful transfer of myoelectric skill from virtual interface to prosthesis control.","authors":"Sigrid Dupan, Simon Stuttaford, Matthew Dyson","doi":"10.1088/1741-2552/ae2803","DOIUrl":"10.1088/1741-2552/ae2803","url":null,"abstract":"<p><p><i>Objective.</i>Prosthesis control can be seen as a new skill to be learned. To enhance learning, both internal and augmented feedback are exploited. The latter represents external feedback sources that can be designed to enhance learning, e.g. biofeedback. Previous research has shown that augmented feedback protocols can be designed to induce retention by adhering to the guidance hypothesis, but it is not clear yet if that also results in transfer of those skills to prosthesis control. In this study, we test if a training paradigm optimised for retention allows for the transfer of myoelectric skill to prosthesis control.<i>Approach.</i>Twelve limb-intact participants learned a novel myoelectric skill during five one-hour training sessions. To induce retention of the novel myoelectric skill, we used a delayed feedback paradigm. Prosthesis transfer was tested through pre-and post-tests with a prosthesis. Prosthesis control tests included a grasp matching task, the modified box and blocks test, and an object manipulation task, requiring five grasps in total ('power', 'tripod', 'pointer', 'lateral grip', and 'hand open').<i>Main results.</i>We found that prosthesis control improved significantly following five days of training. Importantly, the prosthesis control metrics were significantly related to the retention metric during training, but not to the prosthesis performance during the pre-test.<i>Significance.</i>This study shows that transfer of novel, abstract myoelectric control from a computer interface to prosthetic control is possible if the training paradigm is designed to induce retention. These results highlight the importance of approaching myoelectric and prosthetic skills from a skill acquisition standpoint, and open up new avenues for the design of prosthetic training protocols.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145679375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An augmented preference-based Bayesian approach for optimizing neuromodulation stimulation parameters using meta learning.
Pub Date: 2025-12-23 | DOI: 10.1088/1741-2552/ae2ba7
Hafsa Farooqi, Zixi Zhao, David Darrow, Andrew Lamperski, Théoden I Netoff
Background. Electrical neuromodulation is increasingly used in the treatment of neurological disorders; however, the selection of stimulation parameters that provide optimal therapeutic benefits remains a major challenge. Moreover, identifying pathological biomarkers that link the effect of stimulation parameters to symptom relief, and that are hence required for optimizing stimulation parameters, may not always be possible. Objective. We present an augmented, preference-based Bayesian optimization algorithm to optimize stimulation parameters for participants undergoing neuromodulation. This algorithm incorporates two key features: (I) it prioritizes the participant's preferences for stimulation parameters, making it independent of the need for pathological biomarkers; (II) it leverages meta learning, using historical participant data to guide the initial optimization for new participants and overcome initial data sparsity. This approach improves both prediction accuracy and convergence speed. Approach. We consider preference training data collected from a set of historical participants who share the same neurological disorder as a new (target) participant. Within that population, there may be different response phenotypes. The goal is to identify historical participants whose stimulation-response phenotype is most similar to the target participant, and to leverage their data to accelerate and improve parameter optimization for the target participant. To achieve this, the algorithm iteratively performs a two-step process: (I) a novel, iterative weighting procedure that identifies historical participants with stimulation preferences closest to the target participant, and (II) meta learning that combines the training data of the identified participants with the limited training data of the target participant to train novel, augmented preference learning models. These models are then used to predict the stimulation parameters expected to maximize the target participant's preference. Main results. The proposed algorithm has been validated using synthetically generated data sets that simulate participant preference behavior during neuromodulation. Significance. This approach holds promise for improving personalized neuromodulation therapies and advancing treatment outcomes for neurological disorders without the need for a tedious data collection process and disease-specific pathological biomarkers.
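The abstract outlines a two-step loop: weight historical participants by how closely their stimulation preferences match the target participant, then pool the weighted data to train an augmented preference model and propose the next setting. The sketch below illustrates that loop with a simple Bradley-Terry-style logistic preference model rather than the paper's Bayesian machinery; the function names, the accuracy-based weighting, and the candidate-ranking step are assumptions for illustration only.

```python
# Hypothetical sketch of the two-step procedure described in the abstract:
# (I) weight historical participants by how well their own preference model
#     predicts the target participant's few observed comparisons, and
# (II) fit an augmented preference model on the pooled, weighted data, then
#     propose the stimulation setting with the highest predicted preference.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_preference_model(X_pairs, y_pref, weights=None):
    """Bradley-Terry-style model: X_pairs holds feature differences
    (params_A - params_B); y_pref is 1 if A was preferred over B."""
    model = LogisticRegression()
    model.fit(X_pairs, y_pref, sample_weight=weights)
    return model

def participant_weights(historical, target_pairs, target_pref):
    """Weight each historical participant by the accuracy of their own
    preference model on the target participant's observed comparisons."""
    w = []
    for X_h, y_h in historical:
        m = fit_preference_model(X_h, y_h)
        w.append(m.score(target_pairs, target_pref))  # accuracy in [0, 1]
    w = np.asarray(w)
    return w / w.sum()

def propose_next(historical, target_pairs, target_pref, candidates, reference):
    """Pool weighted historical data with the target's data, then rank
    candidate stimulation settings against a reference setting."""
    w = participant_weights(historical, target_pairs, target_pref)
    X_all, y_all, s_all = [target_pairs], [target_pref], [np.ones(len(target_pref))]
    for (X_h, y_h), w_h in zip(historical, w):
        X_all.append(X_h); y_all.append(y_h); s_all.append(np.full(len(y_h), w_h))
    model = fit_preference_model(np.vstack(X_all), np.concatenate(y_all),
                                 np.concatenate(s_all))
    # Score each candidate by P(candidate preferred over the reference setting).
    diffs = candidates - reference
    return candidates[np.argmax(model.predict_proba(diffs)[:, 1])]
```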
{"title":"An augmented preference-based Bayesian approach for optimizing neuromodulation stimulation parameters using meta learning.","authors":"Hafsa Farooqi, Zixi Zhao, David Darrow, Andrew Lamperski, Théoden I Netoff","doi":"10.1088/1741-2552/ae2ba7","DOIUrl":"10.1088/1741-2552/ae2ba7","url":null,"abstract":"<p><p><i>Background.</i>Electrical neuromodulation is increasingly used in the treatment of neurological disorders; however, the selection of stimulation parameters that provide optimal therapeutic benefits remains a major challenge. Moreover, identifying pathological biomarkers linking the effect of stimulation parameters to alleviating symptoms, and hence required for optimizing stimulation parameters, might not always be possible.<i>Objective.</i>We present an augmented, preference-based Bayesian optimization algorithm to optimize stimulation parameters for participants undergoing neuromodulation. This algorithm incorporates two key features: I) It prioritizes the participant's preferences for stimulation parameters, making it independent of the need for pathological biomarkers. II) It leverages meta learning, using historical participant data to guide the initial optimization for new participants and overcome initial data sparsity. This approach improves both prediction accuracy and convergence speed.<i>Approach.</i>Consider preference training data collected from a set of historical participants who share the same neurological disorder as a new (target) participant. Within that population, there may be different response phenotypes. The goal is to identify historical participants whose stimulation-response phenotype is most similar to the target participant, and leverage their data to accelerate and improve parameter optimization for the target participant. To achieve this, the algorithm iteratively performs a two-step process:(I) a novel, iterative weighting procedure that identifies historical participants with stimulation preferences closest to the target participant, and (II) meta learning that combines the training data of the identified participants with the limited training data of the target participant to train novel, augmented preference learning models. These models are then used to predict the stimulation parameters expected to maximize the target participant's preference.<i>Main</i><i>results.</i>The proposed algorithm has been validated using synthetically generated data sets that simulate participant preference behavior during neuromodulation.<i>Significance.</i>This approach holds promise for improving personalized neuromodulation therapies and advancing treatment outcomes for neurological disorders without the need for a tedious data collection process and disease-specific pathological biomarkers.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145746398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
STeCANet: spatio-temporal cross attention network for brain computer interface systems using EEG-fNIRS signals.
Pub Date: 2025-12-23 | DOI: 10.1088/1741-2552/ae2954
Mohd Faisal, Sudarsan Sahoo, Jupitara Hazarika
Objective. Multimodal neuroimaging fusion has shown promise in enhancing brain-computer interface (BCI) performance by capturing complementary neural dynamics. However, most existing fusion frameworks inadequately model the temporal asynchrony and adaptive fusion between electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), thereby limiting their ability to generalize across sessions and subjects. This work aims to develop an adaptive fusion framework that effectively aligns and integrates EEG and fNIRS representations to improve cross-session and cross-subject generalization in BCI applications. Approach. To address this, we propose STeCANet, a novel Spatiotemporal Cross-Attention Network that integrates EEG and fNIRS signals through hierarchical attention-based alignment. The model leverages fNIRS-guided spatial attention, EEG-fNIRS temporal alignment, adaptive fusion, and adversarial training to ensure robust cross-modal interaction and spatiotemporal consistency. Main results. Evaluations across three cognitive paradigms, namely motor imagery, mental arithmetic, and word generation, demonstrate that STeCANet significantly outperforms unimodal and recent multimodal baselines under both session-independent and subject-independent settings. Ablation studies confirm the contribution of each sub-module and loss function, including the domain adaptation component, in boosting classification accuracy and robustness. Significance. These results suggest that STeCANet offers a robust and interpretable solution for next-generation BCI applications.
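As a rough illustration of cross-attention fusion between EEG and fNIRS streams, the PyTorch sketch below lets fNIRS tokens attend over EEG time steps and fuses the attended representation with the fNIRS stream before classification. The layer sizes, channel counts, and module layout are assumptions; this is not the published STeCANet architecture.

```python
# Minimal sketch of EEG-fNIRS cross-attention fusion: fNIRS features act as
# queries attending over EEG time steps (a temporal-alignment step), and the
# result is fused with the fNIRS stream before a classification head.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_classes=2):
        super().__init__()
        self.eeg_proj = nn.Linear(32, d_model)     # e.g. 32 EEG channels per step (assumed)
        self.fnirs_proj = nn.Linear(40, d_model)   # e.g. 40 fNIRS channels per step (assumed)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.head = nn.Linear(2 * d_model, n_classes)

    def forward(self, eeg, fnirs):
        # eeg: (batch, T_eeg, 32), fnirs: (batch, T_fnirs, 40)
        e = self.eeg_proj(eeg)
        f = self.fnirs_proj(fnirs)
        # fNIRS queries attend over EEG keys/values.
        aligned, _ = self.cross_attn(query=f, key=e, value=e)
        fused = self.norm(aligned + f)                         # residual fusion
        pooled = torch.cat([fused.mean(dim=1), e.mean(dim=1)], dim=-1)
        return self.head(pooled)

# Example forward pass with random tensors shaped like short epochs.
model = CrossModalFusion()
logits = model(torch.randn(8, 200, 32), torch.randn(8, 20, 40))
print(logits.shape)  # torch.Size([8, 2])
```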
{"title":"STeCANet: spatio-temporal cross attention network for brain computer interface systems using EEG-fNIRS signals.","authors":"Mohd Faisal, Sudarsan Sahoo, Jupitara Hazarika","doi":"10.1088/1741-2552/ae2954","DOIUrl":"10.1088/1741-2552/ae2954","url":null,"abstract":"<p><p><i>Objective.</i>Multimodal neuroimaging fusion has shown promise in enhancing brain-computer interface (BCI) performance by capturing complementary neural dynamics. However, most existing fusion frameworks inadequately model the temporal asynchrony and adaptive fusion between electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), thereby limiting their ability to generalize across sessions and subjects. This work aims to develop an adaptive fusion framework that effectively aligns and integrates EEG and fNIRS representations to improve cross-session and cross-subject generalization in BCI applications.<i>Approach</i>. To address this, we propose STeCANet, a novel Spatiotemporal Cross-Attention Network that integrates EEG and fNIRS signals through hierarchical attention-based alignment. The model leverages fNIRS-guided spatial attention, EEG-fNIRS temporal alignment, adaptive fusion, and adversarial training to ensure robust cross-modal interaction and spatiotemporal consistency.<i>Main results</i>. Evaluations across three cognitive paradigms, namely motor imagery, mental arithmetic, and word generation, demonstrate that STeCANet significantly outperforms unimodal and recent multimodal baselines under both session-independent and subject-independent settings. Ablation studies confirm the contribution of each sub-module and loss function, including the domain adaptation component, in boosting classification accuracy and robustness.<i>Significance</i>. These results suggest that STeCANet offers a robust and interpretable solution for next-generation BCI applications.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145710415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EEG-based meditation decoding: tackling subject variability with spatial and temporal alignment.
Pub Date: 2025-12-22 | DOI: 10.1088/1741-2552/ae2b0f
Angeliki-Ilektra Karaiskou, Carolina Varon, Cem Ates Musluoglu, Kaat Alaerts, Maarten De Vos
Objective. Meditation and mindfulness are increasingly recognized as important in improving mental well-being. However, electroencephalography (EEG)-based neurofeedback systems supporting these practices typically fail to generalize to unseen subjects. This study investigates the application of both spatial and spectral alignment to EEG to improve the classification of meditation and rest states for new subjects without any model retraining. Approach. Two unsupervised domain adaptation techniques are employed to reduce differences between subjects in their EEG recordings. The first, Riemannian Space Data Alignment (RSDA), adjusts and brings together patterns of brain activity across electrodes (spatial domain). The second, Convolutional Monge Mapping Normalization (CMMN), aligns the distribution of brain rhythms across frequencies (spectral domain). Each method is evaluated separately, in combination, and in interaction with z-score normalization. Classification between meditation and rest is performed on the aligned time series using EEGNet, a compact convolutional neural network architecture, with leave-one-subject-out (LOSO) cross-validation to assess generalization across subjects. All experiments are based on a publicly available dataset of meditation EEG recordings from 53 subjects, including both novice and expert meditators. Main results. The combined RSDA+CMMN approach significantly improved LOSO classification accuracy (66.6%) compared to non-aligned (55.7%) and z-score normalized (59.6%) baselines, even though it did not improve overall harmonization. Spectral analysis identified consistent classification contributions from the Theta (4-8 Hz), Alpha (8-14 Hz), and Beta (14-30 Hz) bands, while spatial analysis highlighted Frontopolar and Temporal regions as critical for distinguishing the mental states of meditation and rest. Significance. This work is the first to explore both spatial and spectral alignment in subject-independent meditation decoding for improved cross-subject generalization. Aligning EEG time series without retraining provides a practical solution for real-time neurofeedback, thereby reducing subject variability and paving the way toward calibration-free neurotechnology that supports mental well-being.
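A minimal sketch of the spatial-alignment idea, assuming a simple recentering scheme: each subject's trials are whitened by the inverse square root of that subject's mean trial covariance, so aligned covariances cluster around the identity across subjects. This is a generic alignment recipe for illustration, not the exact RSDA or CMMN procedures evaluated in the paper.

```python
# Subject-wise spatial recentering: whiten each trial with the inverse square
# root of that subject's mean trial covariance, reducing between-subject shifts.
import numpy as np
from scipy.linalg import fractional_matrix_power

def align_subject(trials, eps=1e-10):
    """trials: array (n_trials, n_channels, n_samples) for one subject."""
    covs = np.stack([t @ t.T / t.shape[1] for t in trials])
    ref = covs.mean(axis=0) + eps * np.eye(covs.shape[1])  # subject reference covariance
    inv_sqrt = fractional_matrix_power(ref, -0.5).real
    return np.stack([inv_sqrt @ t for t in trials])        # spatially aligned trials

# Example: two "subjects" with very different signal scales become comparable.
rng = np.random.default_rng(0)
subj_a = rng.standard_normal((30, 8, 256)) * 2.0
subj_b = rng.standard_normal((30, 8, 256)) * 0.5
aligned_a, aligned_b = align_subject(subj_a), align_subject(subj_b)
print(np.cov(aligned_a[0]).trace(), np.cov(aligned_b[0]).trace())  # similar scales
```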
{"title":"EEG-based meditation decoding: tackling subject variability with spatial and temporal alignment.","authors":"Angeliki-Ilektra Karaiskou, Carolina Varon, Cem Ates Musluoglu, Kaat Alaerts, Maarten De Vos","doi":"10.1088/1741-2552/ae2b0f","DOIUrl":"10.1088/1741-2552/ae2b0f","url":null,"abstract":"<p><p><i>Objective</i>. Meditation and mindfulness are increasingly recognized as important in improving mental well-being. However, electroencephalography (EEG)-based neurofeedback systems supporting these practices typically fail to generalize to unseen subjects. This study investigates the application of both spatial and spectral alignment to EEG to improve the classification of meditation and rest states for new subjects without any model retraining.<i>Approach</i>. Two unsupervised domain adaptation techniques are employed to reduce differences between subjects in their EEG recordings. The first, Riemannian Space Data Alignment (RSDA), adjusts and brings together patterns of brain activity across electrodes (spatial domain). The second, Convolutional Monge Mapping Normalization (CMMN), aligns the distribution of brain rhythms across frequencies (spectral domain). Each method is evaluated separately, in combination, and in interaction with<i>z</i>-score normalization. Classification between meditation and rest is performed on the aligned time series using EEGNet, a compact convolutional neural network architecture, with leave-one-subject-out (LOSO) cross-validation to assess generalization across subjects. All experiments are based on a publicly available dataset of meditation EEG recordings from 53 subjects, including both novice and expert meditators.<i>Main results</i>. The combined RSDA+CMMN approach significantly improved LOSO classification accuracy (66.6%) compared to non-aligned (55.7%) and<i>z</i>-score normalized (59.6%) baselines, even though it did not improve overall harmonization. Spectral analysis identified consistent classification contributions from the Theta (4-8 Hz), Alpha (8-14 Hz), and Beta (14-30 Hz) bands, while spatial analysis highlighted Frontopolar and Temporal regions as critical for distinguishing the mental states of meditation and rest.<i>Significance</i>. This work is the first to explore both spatial and spectral alignment in subject-independent meditation decoding for improved cross-subject generalization. Aligning EEG time series without retraining provides a practical solution for real-time neurofeedback, thereby reducing subject variability and paving the way toward calibration-free neurotechnology that supports mental well-being.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145728043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting spiking activity from scalp EEG.
Pub Date: 2025-12-19 | DOI: 10.1088/1741-2552/ae2541
Dixit Sharma, Bart Krekelberg
Objective. Despite decades of electroencephalography (EEG) research, the relationship between EEG and underlying spiking dynamics remains unclear. This limits our ability to infer neural dynamics reflected in intracranial signals from EEG, a critical step to bridge electrophysiological findings across species and to develop non-invasive brain-machine interfaces (BMIs). In this study, we aimed to estimate spiking activity in the visual cortex using non-invasive scalp EEG. Approach. We recorded spiking activity from a 32-channel floating microarray permanently implanted in parafoveal V1 and scalp EEG in a male macaque monkey. While the animal fixated, the screen flickered at different temporal frequencies to induce steady-state visual evoked potentials. We analyzed the relationship between the V1 multi-unit spiking activity envelope (MUAe) and EEG frequency bands to predict MUAe at each time point from EEG. We extracted instantaneous spectrotemporal features of the EEG signal, including phase, amplitude, and phase-amplitude coupling of its frequency bands. Main results. Although the relationship between these spectrotemporal features and the V1 MUAe was complex and frequency-dependent, they were reliably predictive of the MUAe. Specifically, in a linear regression predicting MUAe from EEG, each EEG feature (phase, amplitude, coupling) contributed to model predictions. In addition, we found that MUAe predictions were better in shallow than in deep cortical layers, and that the phase of the stimulus frequency further improved MUAe predictions. Significance. Our study shows that a comprehensive account of spectrotemporal features of non-invasive EEG provides information on underlying spiking activity beyond what is available when only the amplitude or phase of the EEG signal is considered. This demonstrates the richness of the EEG signal and its complex relationship with neural spiking activity and suggests that using more comprehensive spectrotemporal signatures could improve BMI applications.
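The pipeline described here, band-limited instantaneous amplitude and phase feeding a linear regression onto the spiking envelope, can be sketched as follows. The sampling rate, band edges, and synthetic target are assumptions chosen only to make the example runnable; phase-amplitude coupling features are omitted for brevity.

```python
# Extract per-band instantaneous amplitude and phase with the Hilbert
# transform, encode phase as sine/cosine, and fit a linear regression that
# predicts a spiking-activity envelope at each time point.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LinearRegression

FS = 1000  # Hz, assumed sampling rate
BANDS = {"theta": (4, 8), "alpha": (8, 14), "beta": (14, 30), "gamma": (30, 80)}

def spectrotemporal_features(eeg):
    """eeg: 1-D array. Returns an (n_samples, 3 * n_bands) feature matrix."""
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        analytic = hilbert(filtfilt(b, a, eeg))
        amp, phase = np.abs(analytic), np.angle(analytic)
        feats += [amp, np.sin(phase), np.cos(phase)]
    return np.column_stack(feats)

# Toy example: synthetic EEG and a synthetic MUAe envelope that is driven by
# alpha-band amplitude, just to exercise the pipeline end to end.
rng = np.random.default_rng(1)
eeg = rng.standard_normal(FS * 20)
X = spectrotemporal_features(eeg)
muae = 0.5 * X[:, 3] + 0.1 * rng.standard_normal(len(eeg))  # column 3 = alpha amplitude
model = LinearRegression().fit(X, muae)
print("R^2 on training data:", round(model.score(X, muae), 3))
```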
{"title":"Predicting spiking activity from scalp EEG.","authors":"Dixit Sharma, Bart Krekelberg","doi":"10.1088/1741-2552/ae2541","DOIUrl":"10.1088/1741-2552/ae2541","url":null,"abstract":"<p><p><i>Objective.</i>Despite decades of electroencephalography (EEG) research, the relationship between EEG and underlying spiking dynamics remains unclear. This limits our ability to infer neural dynamics reflected in intracranial signals from EEG, a critical step to bridge electrophysiological findings across species and to develop non-invasive brain-machine interfaces (BMIs). In this study, we aimed to estimate spiking activity in the visual cortex using non-invasive scalp EEG.<i>Approach</i>. We recorded spiking activity from a 32-channel floating microarray permanently implanted in parafoveal V1 and scalp-EEG in a male macaque monkey. While the animal fixated, the screen flickered at different temporal frequencies to induce steady-state visual evoked potentials. We analyzed the relationship between the V1 multi-unit spiking activity envelope (MUAe) and EEG frequency bands to predict MUAe at each time point from EEG. We extracted instantaneous spectrotemporal features of the EEG signal, including phase, amplitude, and phase-amplitude coupling of its frequency bands.<i>Main results</i>. Although the relationship between these spectrotemporal features and the V1 MUAe was complex and frequency-dependent, they were reliably predictive of the MUAe. Specifically, in a linear regression predicting MUAe from EEG, each EEG feature (phase, amplitude, coupling) contributed to model predictions. In addition, we found that MUAe predictions were better in shallow than deep cortical layers, and that the phase of stimulus frequency further improved MUAe predictions.<i>Significance.</i>Our study shows that a comprehensive account of spectrotemporal features of non-invasive EEG provides information on underlying spiking activity beyond what is available when only the amplitude or phase of the EEG signal is considered. This demonstrates the richness of the EEG signal and its complex relationship with neural spiking activity and suggests that using more comprehensive spectrotemporal signatures could improve BMI applications.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2025-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12715843/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145644066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive neuromodulation dialogues: navigating current challenges and emerging innovations in neuromodulation system development.
Pub Date: 2025-12-19 | DOI: 10.1088/1741-2552/ae2359
Frederik Lampert, Matthew R Baker, Michael A Jensen, Amir H Ayyoubi, Christian Bentler, Jessica L Bowersock, Rosana Esteller, Jeffrey A Herron, Graham W Johnson, Daryl R Kipke, Christopher K Kovach, Vaclav Kremen, Filip Mivalt, Joseph S Neimat, Theoden I Netoff, Enrico Opri, Alexander Rockhill, Joshua M Rosenow, Kristin K Sellers, Nathan P Staff, Chandra Prakash Swamy, Ashwin Viswanathan, Gerwin Schalk, Timothy Denison, Dora Hermes, Nuri F Ince, Peter Brunner, Gregory A Worrell, Kai J Miller
Adaptive neuromodulation systems and implantable brain-computer interfaces have made notable strides in recent years, translating experimental prototypes into clinical applications and garnering substantial attention from the public. This surge in interest is accompanied by increased scrutiny related to the safety, efficacy, and ethical implications of these systems, all of which must be directly addressed as we introduce new neurotechnologies. In response, we have synthesized the insights resulting from discussions between groups of experts in the field and summarized them into five key domains essential to therapeutic device development: (1) analyzing the current landscape of neuromodulation devices and translational platforms, (2) identifying clinical need, (3) understanding neural mechanisms, (4) designing viable technologies, and (5) addressing ethical concerns. The role of translational research platforms that allow rapid, iterative testing of hypotheses in both preclinical and clinical settings is emphasized. These platforms must balance experimental flexibility with patient safety and clear clinical benefit. Furthermore, requirements for interoperability, modularity, and wireless communication protocols are explored to support long-term usability and scalability. The current regulatory processes and funding models are examined alongside the ethical responsibilities of researchers and device manufacturers. Special attention is given to the role of patients as active contributors to research and to the long-term obligations we have to them as the primary burden-bearers of the implanted neurotechnologies. This article represents a synthesis of scientific, engineering, and clinical viewpoints to inform key stakeholders in the neuromodulation and brain-computer interface spaces.
{"title":"Adaptive neuromodulation dialogues: navigating current challenges and emerging innovations in neuromodulation system development.","authors":"Frederik Lampert, Matthew R Baker, Michael A Jensen, Amir H Ayyoubi, Christian Bentler, Jessica L Bowersock, Rosana Esteller, Jeffrey A Herron, Graham W Johnson, Daryl R Kipke, Christopher K Kovach, Vaclav Kremen, Filip Mivalt, Joseph S Neimat, Theoden I Netoff, Enrico Opri, Alexander Rockhill, Joshua M Rosenow, Kristin K Sellers, Nathan P Staff, Chandra Prakash Swamy, Ashwin Viswanathan, Gerwin Schalk, Timothy Denison, Dora Hermes, Nuri F Ince, Peter Brunner, Gregory A Worrell, Kai J Miller","doi":"10.1088/1741-2552/ae2359","DOIUrl":"10.1088/1741-2552/ae2359","url":null,"abstract":"<p><p>Adaptive neuromodulation systems and implantable brain-computer interfaces have made notable strides in recent years, translating experimental prototypes into clinical applications and garnering substantial attention from the public. This surge in interest is accompanied by increased scrutiny related to the safety, efficacy, and ethical implications of these systems, all of which must be directly addressed as we introduce new neurotechnologies. In response, we have synthesized the insights resulting from discussions between groups of experts in the field and summarized them into five key domains essential to therapeutic device development: (1) analyzing current landscape of neuromodulation devices and translational platforms (2) identifying clinical need, (3) understanding neural mechanisms, (4) designing viable technologies, and (5) addressing ethical concerns. The role of translational research platforms that allow rapid, iterative testing of hypotheses in both preclinical and clinical settings is emphasized. These platforms must balance experimental flexibility with patient safety and clear clinical benefit. Furthermore, requirements for interoperability, modularity, and wireless communication protocols are explored to support long-term usability and scalability. The current regulatory processes and funding models are examined alongside the ethical responsibilities of researchers and device manufacturers. Special attention is given to the role of patients as active contributors to research and to the long-term obligations we have to them as the primary burden-bearers of the implanted neurotechnologies. This article represents a synthesis of scientific, engineering, and clinical viewpoints to inform key stakeholders in the neuromodulation and brain-computer interface spaces.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2025-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12715846/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145598488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regenerative potential of biogenic zinc oxide nanoparticles prepared with Vitis vinifera-derived extract on sciatic nerve injury in rats.
Pub Date: 2025-12-18 | DOI: 10.1088/1741-2552/ae23ff
Paria Piran, Abolfazl Bayrami, Shima Rahim Pouran, Fatemeh Asghari, Saeideh Aran, Pouya Bayrami
Objective. Damage to the peripheral nerves frequently leads to significant impairments in their functional capacity, highlighting the need for effective treatments that can facilitate nerve repair. This study explores the potential of grape skin extract (Ex), alone and in combination with zinc oxide nanoparticles (ZnO NPs), to enhance regeneration following sciatic nerve injury (SNI) in rats. Approach. ZnO NPs were synthesized using both a conventional chemical route and a green synthesis method in which Ex served as a natural reducing and capping agent. The synthesized nanoparticles were characterized by Fourier-transform infrared spectroscopy, scanning electron microscopy, x-ray diffraction, thermogravimetric analysis, energy-dispersive x-ray spectroscopy, zeta potential, and gas chromatography-mass spectrometry analyses to confirm the role of Ex in shaping nanoparticle morphology and surface properties. Functional recovery and histological outcomes were then assessed in the rat SNI model. Main results. Treatment with Ex and ZnO/Ex significantly reduced collagen accumulation, fibrosis, and tissue vacuolization compared to untreated controls. Both interventions also improved myelination and enhanced the sciatic function index, indicating improved neural repair. Significance. These findings demonstrate that Ex and ZnO/Ex promote nerve regeneration and highlight their potential as promising candidates for the development of biogenic nanotherapeutics targeting peripheral nerve injuries.
{"title":"Regenerative potential of biogenic zinc oxide nanoparticles prepared with Vitis vinifera-derived extract on sciatic nerve injury in rats.","authors":"Paria Piran, Abolfazl Bayrami, Shima Rahim Pouran, Fatemeh Asghari, Saeideh Aran, Pouya Bayrami","doi":"10.1088/1741-2552/ae23ff","DOIUrl":"10.1088/1741-2552/ae23ff","url":null,"abstract":"<p><p><i>Objective.</i>Damage to the peripheral nerves frequently leads to significant impairments in their functional capacity, highlighting the need for effective treatments that can facilitate nerve repair. This study explores the potential of grape skin extract (Ex), alone and in combination with zinc oxide nanoparticles (ZnO NPs), to enhance regeneration following sciatic nerve injury (SNI) in rats.<i>Approach.</i>ZnO NPs were synthesized using both a conventional chemical route and a green synthesis method in which Ex served as a natural reducing and capping agent. The synthesized nanoparticles were characterized by Fourier-transform infrared spectroscopy, scanning electron microscopy, x-ray diffraction, Thermogravimetric analysis, Energy-dispersive x-ray spectroscopy, zeta potential, and Gas chromatography-mass spectrometry analyses to confirm the role of Ex in shaping nanoparticle morphology and surface properties. Functional recovery and histological outcomes were then assessed in a murine SNI model.<i>Main results.</i>Treatment with Ex and ZnO/Ex significantly reduced collagen accumulation, fibrosis, and tissue vacuolization compared to untreated controls. Both interventions also improved myelination and enhanced the sciatic function index, indicating improved neural repair.<i>Significance.</i>These findings demonstrate that Ex and ZnO/Ex promote nerve regeneration and highlight their potential as promising candidates for the development of biogenic nanotherapeutics targeting peripheral nerve injuries.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145608179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thermal lensing during infrared neural stimulation enables spatially resolved photothermal dosimetry.
Pub Date: 2025-12-18 | DOI: 10.1088/1741-2552/ae2953
Jacob Hardenburger, Bryan Millis, Joel Bixler, Christopher Valdez, E Duco Jansen, Anita Mahadevan-Jansen
Photothermal laser-tissue interactions are challenging to study at the subcellular level due to the complexity of accurately characterizing spatial energy distributions. Infrared (IR) neural stimulation (INS), a label-free photothermal neuromodulation technique using pulsed IR light, has demonstrated promise but lacks standardized, high-resolution dosimetry methods. Objective. In this study, we present an automated, imaging-based workflow to perform spatially resolved photothermal dosimetry. This method uses thermal lensing to mark the location of IR exposure within the imaging field of view, enabling precise assessment of the radiant exposure dosage and correlated neuronal responses. Approach. Neuronal Ca2+ responses to single IR pulses of varying duration (350 µs, 2 ms, and 8 ms) were measured using widefield fluorescence microscopy. The thermal lensing artifact (TLA) observed during stimulation was used to model the spatial energy distribution of the laser beam profile. Neuronal Ca2+ responses were analyzed relative to the local radiant exposure, H0(x,y), and the average radiant exposure dosage, Havg, calculated as the laser pulse energy divided by the laser spot area. Main results. The TLA provided a reliable fiducial for tracking the IR stimulus within the imaging field. Neuronal responses to INS were spatially dependent and exhibited three phenotypes: unreactive, low-amplitude, and high-amplitude. The Gaussian laser beam profile led to cells near the beam center receiving higher radiant exposure dosages, exceeding activation thresholds. We find that shorter pulse durations required lower radiant exposure dosages to elicit neuronal responses. The Havg consistently underestimates the radiant exposure required for stimulation. The H0(x,y) required for stimulation did not produce measurable cellular damage. Significance. Local radiant exposure dosage dictates neuronal activation during INS. Our method provides a standardized, high-throughput approach for performing spatially resolved photothermal dosimetry at the microscopic level.
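The two dosimetry quantities named in the abstract can be made concrete with a short worked example: Havg is the pulse energy divided by the nominal spot area, while the local radiant exposure H0(x,y) of a Gaussian beam peaks at twice Havg at the beam centre, which is why the average value underestimates the exposure that cells near the centre actually receive. The pulse energy, beam radius, and grid below are placeholder values for illustration.

```python
# Average radiant exposure H_avg and local radiant exposure H0(x, y) under an
# assumed Gaussian beam profile with 1/e^2 radius w.
import numpy as np

def h_avg(pulse_energy_j, spot_radius_cm):
    """Average radiant exposure in J/cm^2 over the nominal spot area."""
    return pulse_energy_j / (np.pi * spot_radius_cm**2)

def h_local(pulse_energy_j, w_cm, x_cm, y_cm):
    """Gaussian beam: H0(x, y) = (2 E / (pi w^2)) * exp(-2 (x^2 + y^2) / w^2)."""
    peak = 2 * pulse_energy_j / (np.pi * w_cm**2)
    return peak * np.exp(-2 * (x_cm**2 + y_cm**2) / w_cm**2)

E, w = 1e-3, 0.01                      # 1 mJ pulse, 100 µm beam radius (assumed)
xs = np.linspace(-0.02, 0.02, 201)
X, Y = np.meshgrid(xs, xs)
H0 = h_local(E, w, X, Y)
print("H_avg  :", round(h_avg(E, w), 2), "J/cm^2")
print("H0 peak:", round(H0.max(), 2), "J/cm^2")   # twice H_avg at the beam centre
```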
{"title":"Thermal lensing during infrared neural stimulation enables spatially resolved photothermal dosimetry.","authors":"Jacob Hardenburger, Bryan Millis, Joel Bixler, Christopher Valdez, E Duco Jansen, Anita Mahadevan-Jansen","doi":"10.1088/1741-2552/ae2953","DOIUrl":"10.1088/1741-2552/ae2953","url":null,"abstract":"<p><p>Photothermal laser tissue interactions are challenging to study at the subcellular level due to the complexity of accurately characterizing spatial energy distributions. Infrared (IR) neural stimulation, a label-free photothermal neuromodulation technique using pulsed IR light, has demonstrated promise but lacks standardized, high-resolution dosimetry methods.<i>Objective</i>. In this study, we present an automated, imaging-based workflow to perform spatially resolved photothermal dosimetry. This method uses thermal lensing to mark the location of IR exposure within the imaging field of view, enabling precise assessment of the radiant exposure dosage and correlated neuronal responses.<i>Approach</i>. Neuronal Ca<sup>2+</sup>responses to single IR pulses of varying duration (350<i>µ</i>s, 2 ms, and 8 ms) were measured using widefield fluorescence microscopy. The thermal lensing artifact (TLA) observed during stimulation was used to model the spatial energy distribution of the laser beam profile. Neuronal Ca<sup>2+</sup>responses were analyzed relative to the local radiant exposure,<i>H</i><sub>0</sub>(<i>x,y</i>), and the average radiant exposure, dosage, H<sub>avg</sub>, calculated using the laser pulse energy divided by the laser spot area.<i>Main results</i>. The TLA provided a reliable fiducial for tracking the IR stimulus within the imaging field. Neuronal responses to INS were spatially dependent and exhibited three phenotypes: unreactive, low-amplitude, and high-amplitude. The Gaussian laser beam profile led to cells near the beam center receiving higher radiant exposure dosages, exceeding activation thresholds. We find that shorter pulse durations required lower radiant exposure dosages to elicit neuronal responses. The<i>H</i><sub>avg</sub>consistently underestimates the radiant exposure required for stimulation. The<i>H</i><sub>0</sub>(<i>x,y</i>) required for stimulation did not produce measurable cellular damage.<i>Significance</i>. Local radiant exposure dosage dictates neuronal activation during INS. Our method provides a standardized, high-throughput approach for performing spatially resolved photothermal dosimetry at microscopic level.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145710487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incorporating multi-modal prompt learning into foundation models enhances predictability of visual fMRI responses to dynamic natural stimuli.
Pub Date: 2025-12-18 | DOI: 10.1088/1741-2552/ae1bd9
Panpan Chen, Chi Zhang, Bao Li, Li Tong, Shuxiao Ma, Linyuan Wang, Long Cao, Ziya Yu, Bin Yan
Objective. Modeling neural encoding of visual stimuli often uses deep neural networks (DNNs) to predict human brain responses to external stimuli. However, each DNN depends on networks tailored for computer vision tasks, resulting in suboptimal brain correspondence. On the other hand, when the encoding process is optimized end-to-end for specific brain regions, challenges such as training difficulties arise. Additionally, these models mostly focus on visual information processing, while the human brain integrates multi-modal information such as language to achieve a comprehensive understanding. Approach. To address these limitations, this paper proposes a multi-modal prompt learning (PL) model for neural encoding of dynamic natural stimuli. Specifically, we leverage the powerful representation ability of pre-trained foundation models and fine-tune them using our multi-modal prompts. These prompts, which include textual and visual prompts tailored to each specific region of interest, can adapt foundation models to neural encoding tasks with fewer trainable parameters. We use the CLIP For video Clip retrieval (CLIP4clip) and Video Masked Autoencoder V2 (videoMAEv2) models for feature extraction with the backbone frozen, refine the representations via PL, and map the fused multi-modal features to predict voxel-wise brain responses. Main results. Extensive experiments on two functional magnetic resonance imaging video datasets demonstrate that our method outperforms existing fine-tuning methods and public models. Significance. This work highlights the potential of prompt-based fine-tuning strategies in bridging the gap between foundation models and neural encoding tasks.
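A minimal PyTorch sketch of the prompt-tuning recipe: freeze a pretrained backbone, prepend a few learnable prompt tokens to its input sequence, and train only the prompts plus a linear head that maps the resulting representation to voxel-wise responses. The backbone here is a small stand-in transformer rather than CLIP4clip or videoMAEv2, and all dimensions are assumed for illustration.

```python
# Prompt tuning with a frozen backbone: only the prompt tokens and the
# voxel-wise linear head carry gradients.
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    def __init__(self, d_model=128, n_prompts=8, n_voxels=1000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.backbone.parameters():       # backbone stays frozen
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)
        self.head = nn.Linear(d_model, n_voxels)   # voxel-wise prediction head

    def forward(self, tokens):
        # tokens: (batch, seq_len, d_model) features from a frozen extractor
        prompts = self.prompts.expand(tokens.size(0), -1, -1)
        z = self.backbone(torch.cat([prompts, tokens], dim=1))
        return self.head(z[:, : self.prompts.size(0)].mean(dim=1))

model = PromptTunedEncoder()
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)                                    # only 'prompts' and head weights/bias
print(model(torch.randn(4, 32, 128)).shape)         # torch.Size([4, 1000])
```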
{"title":"Incorporating multi-modal prompt learning into foundation models enhances predictability of visual fMRI responses to dynamic natural stimuli.","authors":"Panpan Chen, Chi Zhang, Bao Li, Li Tong, Shuxiao Ma, Linyuan Wang, Long Cao, Ziya Yu, Bin Yan","doi":"10.1088/1741-2552/ae1bd9","DOIUrl":"10.1088/1741-2552/ae1bd9","url":null,"abstract":"<p><p><i>Objective</i>. Modeling neural encoding of visual stimuli often uses deep neural networks (DNNs) to predict human brain response to external stimuli. However, each DNN depends on networks tailored for computer vision tasks, resulting in suboptimal brain correspondence. On the other hand, when end-to-end optimizing the encoding process for specific brain regions, challenges like training difficulties arise. Additionally, these models mostly focus on visual information processing, while the human brain integrates multi-modal information such as language to achieve a comprehensive understanding.<i>Approach</i>. To address these limitations, this paper proposes a multi-modal prompt learning (PL) model for neural encoding of dynamic natural stimuli. Specifically, we leverage the powerful representation ability of pre-trained foundation models and fine-tune them using our multi-modal prompts. These prompts, which include textual and visual prompts tailored to each specific regions of interest, can adapt foundation models to neural encoding tasks with fewer trainable parameters. We use the CLIP For video Clip retrieval (CLIP4clip) and Video Masked Autoencoder V2 (videoMAEv2) for feature extraction with backbone freezing, refine the representations via PL, and map the fused multi-modal features to predict voxel-wise brain responses.<i>Main results</i>. Extensive experiments on two functional magnetic resonance imaging video datasets demonstrate that our method outperforms existing fine-tuning methods and public models.<i>Significance</i>. This work highlights the potential of prompt-based fine-tuning strategies in bridging the gap between foundation models and neural encoding tasks.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145454384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated estimation of frequency and spatial extent of periodic and rhythmic epileptiform activity from continuous electroencephalography data.
Pub Date: 2025-12-16 | DOI: 10.1088/1741-2552/ae2716
Alexandra-Maria Tăuțan, Jin Jing, Lara Basovic, Peter N Hadar, Shadi Sartipi, Marta P Fernandes, Jennifer Kim, Aaron F Struck, M Brandon Westover, Sahar F Zafar
Objective. Rhythmic and periodic patterns (RPPs) are harmful brain activity observed on electroencephalography (EEG) recordings of critically ill patients. This work describes automatic methods for detection of the frequency and spatial extent of specific RPPs: lateralized and generalized rhythmic delta activity (LRDA, GRDA) and lateralized and generalized periodic discharges (LPD, GPD). Approach. The frequency and spatial extent of RPPs are estimated using signal processing and rule-based logic. Three algorithm variants based on the fast Fourier transform (FFT) and Hilbert-Huang transform (HHT) were developed for rhythmic delta activity, and three using derivative- and time-based peak detection for periodic discharges. Annotations from three expert neurophysiologists served as the gold standard, and inter-rater reliability (IRR) and mean absolute error (MAE) were used to assess performance. Main results. We evaluated the algorithms on segments with 100% agreement on event classification (n = 389) and on the full cohort of 1087 segments (including disagreements). For the first subset, top algorithms matched or exceeded expert agreement for RPP frequency and spatial extent. RDA1b-FFT, the best algorithm for rhythmic delta activity, showed good to excellent expert-algorithm IRR, with intra-class correlation coefficients (ICCs) of 91% and 96% (MAE 0.13 Hz and 0.26 Hz) for frequency, and ICCs of 85% and 66% (MAE 0.19 and 0.09) for spatial extent, for LRDA and GRDA respectively. For periodic discharges, the best algorithm, PD2a, showed expert-algorithm IRR ICCs of 80% and 61% (MAE 0.41 Hz and 0.15 Hz) for frequency, and ICCs of 77% and 13% (MAE 0.17 and 0.40) for spatial extent, for LPD and GPD respectively. For the full cohort, IRR declined, but expert-algorithm IRR remained comparable or superior to that between experts. Significance. The presence of RPPs at high frequencies and spatial extents is associated with a higher probability of poor outcomes. The proposed algorithms for estimating the frequency and spatial extent of RPPs match expert performance and are a viable tool for large-scale EEG analysis.
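A simplified sketch of the FFT-based, rule-based idea for rhythmic delta activity: estimate the dominant frequency in the 0.5-4 Hz band per channel and report spatial extent as the fraction of channels whose delta-band power dominates. The power-ratio threshold, band edges, and sampling rate are assumptions; the published FFT, HHT, and peak-detection variants differ in detail.

```python
# Estimate the dominant delta-band frequency per channel from the FFT and
# report spatial extent as the fraction of "involved" channels.
import numpy as np

FS = 200  # Hz, assumed EEG sampling rate

def delta_frequency_and_extent(segment, band=(0.5, 4.0), power_ratio=0.5):
    """segment: array (n_channels, n_samples). Returns (median delta peak
    frequency across involved channels, fraction of channels involved)."""
    freqs = np.fft.rfftfreq(segment.shape[1], d=1 / FS)
    spec = np.abs(np.fft.rfft(segment, axis=1)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak_freqs, involved = [], []
    for ch_spec in spec:
        band_power, total_power = ch_spec[in_band].sum(), ch_spec[1:].sum()
        if band_power / total_power >= power_ratio:          # channel "involved"
            peak_freqs.append(freqs[in_band][np.argmax(ch_spec[in_band])])
            involved.append(True)
        else:
            involved.append(False)
    extent = float(np.mean(involved))
    return (float(np.median(peak_freqs)) if peak_freqs else np.nan), extent

# Toy example: a 2 Hz rhythm on half of 18 channels, noise on the rest.
rng = np.random.default_rng(2)
t = np.arange(FS * 10) / FS
seg = rng.standard_normal((18, t.size))
seg[:9] += 5 * np.sin(2 * np.pi * 2.0 * t)
print(delta_frequency_and_extent(seg))  # approximately (2.0, 0.5)
```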
{"title":"Automated estimation of frequency and spatial extent of periodic and rhythmic epileptiform activity from continuous electroencephalography data.","authors":"Alexandra-Maria Tăuțan, Jin Jing, Lara Basovic, Peter N Hadar, Shadi Sartipi, Marta P Fernandes, Jennifer Kim, Aaron F Struck, M Brandon Westover, Sahar F Zafar","doi":"10.1088/1741-2552/ae2716","DOIUrl":"10.1088/1741-2552/ae2716","url":null,"abstract":"<p><p><i>Objective.</i>Rhythmic and periodic patterns (RPPs) are harmful brain activity observed on electroencephalography (EEG) recordings of critically ill patients. This work describes automatic methods for detection of the frequency and spatial extent of specific RPPs: lateralized and generalized rhythmic delta activity (LRDA, GRDA) and lateralized and generalized periodic discharges (LPD, GPD).<i>Approach.</i>The frequency and spatial extent of RPPs is estimated using signal processing and rule-based logic. Three algorithm variants based on fast Fourier transform (FFT) and Hilbert-Huang transforms (HHT) were developed for rhythmic delta activity, and three using derivative and time-based peak detection for periodic discharges. Annotations from three expert neurophysiologists served as the gold standard, and inter-rater reliability (IRR) and mean absolute error (MAE) were used to assess performance.<i>Main results.</i>We evaluated the algorithms on segments with 100% agreement on event classification (<i>n</i>= 389) and on the full cohort of 1087 segments (including disagreements). For the first subset, top algorithms matched or exceeded expert agreement for RPP frequency/spatial extent. RDA1b-FFT, the best algorithm for rhythmic delta activity, showed an expert-algorithm IRR of good to excellent with an intra-class correlation coefficient (ICC) of 91% and 96% (MAE 0.13 Hz and 0.26 Hz) frequency, and ICCs of 85% and 66% (MAE 0.19 and 0.09) for spatial extent for LRDA and GRDA. For periodic discharges, PD2a, showed and expert-algorithm IRR ICC of 80% and 61% (MAE 0.41 Hz and 0.15 Hz) for frequency, and ICC 77% and 13% (MAE 0.17 and 0.40) for spatial extent of LPD and GPD. For the full cohort, IRR declined, but expert-algorithm IRR remained comparable or superior to experts.<i>Significance.</i>The presence of RPPs at high frequencies and spatial extent are associated with a higher probability of poor outcomes. The proposed algorithms for estimating frequency and spatial extent of RPPs match expert performance and are a viable tool for large-scale EEG analysis.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145663052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}