EMG-based wake gestures eliminate false activations during out-of-set activities of daily living: an online myoelectric control study
Pub Date : 2025-01-17. DOI: 10.1088/1741-2552/ada4df
Ethan Eddy, Evan Campbell, Scott Bateman, Erik Scheme
Objective. While myoelectric control has been commercialized in prosthetics for decades, its adoption for more general human-machine interaction has been slow. Although high accuracies can be achieved across many gestures, current control approaches are prone to false activations in real-world conditions. This is because the same electromyogram (EMG) signals generated during the elicitation of gestures are also naturally activated when performing activities of daily living (ADLs), such as when driving to work or while typing on a keyboard. This can lead the myoelectric control system, which is trained on a closed set of gestures and thus unaware of the muscle activity associated with these ADLs, to be falsely activated, leading to erroneous inputs and user frustration. Approach. To overcome this problem, the concept of wake gestures, whereby users could switch between a dedicated control mode and a sleep mode by snapping their fingers, was explored. Using a simple dynamic time warping model, the real-world user-in-the-loop efficacy of wake gestures as a toggle for myoelectric interfaces was demonstrated through two online ubiquitous control tasks with varying levels of difficulty: (1) dismissing an alarm and (2) controlling a robot. Main results. During these online evaluations, the designed system ignored almost all (>99.9%) non-target EMG activity generated during a set of ADLs (i.e. walking, typing, writing, phone use, and driving), ignored all control gestures (i.e. wrist flexion, wrist extension, hand open, and hand close), and enabled reliable mode switching during intentional wake gesture elicitation. Additionally, questionnaires revealed that participants responded well to the use of wake gestures and generally preferred false negatives over false positives, providing valuable insights into the future design of these systems. Significance. These results highlight the real-world viability of wake gestures for enabling the intermittent use of myoelectric control, opening up new interaction possibilities for EMG-based inputs.
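The approach described above rests on a simple template-matching idea: compare a window of EMG against recorded wake-gesture templates with dynamic time warping (DTW) and only toggle the control mode when the best match is close enough, so ADL activity and in-set control gestures are ignored. The sketch below illustrates that idea only; the mean-absolute-value features, window length, distance threshold, and function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping between two
    feature sequences of shape (time, channels)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def mav_features(emg, win=32):
    """Mean-absolute-value features over non-overlapping windows,
    (samples, channels) -> (windows, channels)."""
    n_win = emg.shape[0] // win
    return np.abs(emg[: n_win * win]).reshape(n_win, win, -1).mean(axis=1)

def is_wake_gesture(emg_window, templates, threshold=5.0):
    """Toggle the control mode only when the closest wake-gesture template
    (a list of pre-computed feature sequences) is within the threshold;
    ADL activity and in-set control gestures fall outside it and are ignored."""
    feats = mav_features(emg_window)
    return min(dtw_distance(feats, t) for t in templates) < threshold
```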
{"title":"EMG-based wake gestures eliminate false activations during out-of-set activities of daily living: an online myoelectric control study.","authors":"Ethan Eddy, Evan Campbell, Scott Bateman, Erik Scheme","doi":"10.1088/1741-2552/ada4df","DOIUrl":"10.1088/1741-2552/ada4df","url":null,"abstract":"<p><p><i>Objective.</i>While myoelectric control has been commercialized in prosthetics for decades, its adoption for more general human-machine interaction has been slow. Although high accuracies can be achieved across many gestures, current control approaches are prone to false activations in real-world conditions. This is because the same electromyogram (EMG) signals generated during the elicitation of gestures are also naturally activated when performing activities of daily living (ADLs), such as when driving to work or while typing on a keyboard. This can lead the myoelectric control system, which is trained on a closed set of gestures and thus unaware of the muscle activity associated with these ADLs, to be falsely activated, leading to erroneous inputs and user frustration.<i>Approach.</i>To overcome this problem, the concept of wake gestures, whereby users could switch between a dedicated control mode and a sleep mode by snapping their fingers, was explored. Using a simple dynamic time warping model, the real-world user-in-the-loop efficacy of wake gestures as a toggle for myoelectric interfaces was demonstrated through two online ubiquitous control tasks with varying levels of difficulty: (1) dismissing an alarm and (2) controlling a robot.<i>Main results.</i>During these online evaluations, the designed system ignored almost all (>99.9%) non-target EMG activity generated during a set of ADLs (i.e. walking, typing, writing, phone use, and driving), ignored all control gestures (i.e. wrist flexion, wrist extension, hand open, and hand close), and enabled reliable mode switching during intentional wake gesture elicitation. Additionally, questionnaires revealed that participants responded well to the use of wake gestures and generally preferred false negatives over false positives, providing valuable insights into the future design of these systems.<i>Significance.</i>These results highlight the real-world viability of wake gestures for enabling the intermittent use of myoelectric control, opening up new interaction possibilities for EMG-based inputs.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142924225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust interpolation of EEG/MEG sensor time-series via electromagnetic source imaging
Pub Date : 2025-01-15. DOI: 10.1088/1741-2552/ada309
Chang Cai, Xinbao Qi, Yuanshun Long, Zheyuan Zhang, Jing Yan, Huicong Kang, Wei Wu, Srikantan S Nagarajan
Objective. Electroencephalography (EEG) and magnetoencephalography (MEG) are widely used non-invasive techniques in clinical and cognitive neuroscience. However, low spatial resolution, partial brain coverage by some sensor arrays, and noisy sensors can distort sensor topographies, resulting in inaccurate reconstructions of the underlying brain dynamics. Solving these problems has been a challenging task. This paper proposes a robust framework based on electromagnetic source imaging for interpolating unknown or poor-quality EEG/MEG measurements. Approach. The framework consists of two steps: (1) estimating brain source activity using a robust inverse algorithm along with the leadfield matrix of the available good sensors, and (2) interpolating the unknown or poor-quality measurements by projecting the reconstructed brain sources through the leadfield matrices of those sensors. We evaluate the proposed framework through simulations and several real datasets, comparing its performance to two popular benchmarks: neighborhood interpolation and spherical spline interpolation. Results. In both simulations and real EEG/MEG measurements, we demonstrate several advantages over the benchmarks: the framework is robust to highly correlated brain activity and low signal-to-noise-ratio data, and it accurately estimates cortical dynamics. Significance. These results demonstrate a rigorous platform to enhance the spatial resolution of EEG and MEG, to overcome the limitations of partial sensor-array coverage (particularly relevant to low-channel-count optically pumped magnetometer arrays), and to estimate activity at poor or noisy sensors, to a certain extent, from the available measurements of the remaining good sensors. Implementation of this framework will enhance the quality of EEG and MEG, thereby expanding the potential applications of these modalities.
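The two-step procedure lends itself to a compact linear-algebra sketch: invert from the good sensors to a source estimate, then forward-project that estimate through the leadfields of the missing or noisy sensors. In the sketch below, a Tikhonov-regularized minimum-norm estimate stands in for the paper's robust inverse algorithm (which the abstract does not specify), and the regularization constant and array shapes are assumptions.

```python
import numpy as np

def interpolate_bad_channels(y_good, L_good, L_bad, lam=1e-2):
    """Two-step sensor interpolation via source imaging.

    y_good : (n_good, n_times) measurements from the good sensors
    L_good : (n_good, n_sources) leadfield of the good sensors
    L_bad  : (n_bad, n_sources) leadfield of the missing/noisy sensors
    """
    G = L_good @ L_good.T
    # Step 1: regularized minimum-norm source estimate from the good sensors.
    gram = G + lam * (np.trace(G) / G.shape[0]) * np.eye(G.shape[0])
    sources = L_good.T @ np.linalg.solve(gram, y_good)
    # Step 2: forward-project the sources through the bad sensors' leadfield.
    return L_bad @ sources

# Toy usage with a random leadfield (illustrative only).
rng = np.random.default_rng(0)
L = rng.standard_normal((64, 500))          # 64 sensors, 500 sources
good, bad = np.arange(60), np.arange(60, 64)
y = L @ rng.standard_normal((500, 1000))    # simulated sensor data
y_bad_hat = interpolate_bad_channels(y[good], L[good], L[bad])
```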
{"title":"Robust interpolation of EEG/MEG sensor time-series via electromagnetic source imaging.","authors":"Chang Cai, Xinbao Qi, Yuanshun Long, Zheyuan Zhang, Jing Yan, Huicong Kang, Wei Wu, Srikantan S Nagarajan","doi":"10.1088/1741-2552/ada309","DOIUrl":"10.1088/1741-2552/ada309","url":null,"abstract":"<p><p><i>Objective.</i>electroencephalography (EEG) and magnetoencephalography (MEG) are widely used non-invasive techniques in clinical and cognitive neuroscience. However, low spatial resolution measurements, partial brain coverage by some sensor arrays, as well as noisy sensors could result in distorted sensor topographies resulting in inaccurate reconstructions of underlying brain dynamics. Solving these problems has been a challenging task. This paper proposes a robust framework based on electromagnetic source imaging for interpolation of unknown or poor quality EEG/MEG measurements.<i>Approach.</i>This framework consists of two steps: (1) estimating brain source activity using a robust inverse algorithm along with the leadfield matrix of available good sensors, and (2) interpolating unknown or poor quality EEG/MEG measurements using the reconstructed brain sources using the leadfield matrices of unknown or poor quality sensors. We evaluate the proposed framework through simulations and several real datasets, comparing its performance to two popular benchmarks-neighborhood interpolation and spherical spline interpolation algorithms.<i>Results.</i>In both simulations and real EEG/MEG measurements, we demonstrate several advantages compared to benchmarks, which are robust to highly correlated brain activity, low signal-to-noise ratio data and accurately estimates cortical dynamics.<i>Significance.</i>These results demonstrate a rigorous platform to enhance the spatial resolution of EEG and MEG, to overcome limitations of partial coverage of EEG/MEG sensor arrays that is particularly relevant to low-channel count optically pumped magnetometer arrays, and to estimate activity in poor/noisy sensors to a certain extent based on the available measurements from other good sensors. Implementation of this framework will enhance the quality of EEG and MEG, thereby expanding the potential applications of these modalities.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142886558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AECuration: Automated event curation for spike sorting
Pub Date : 2025-01-14. DOI: 10.1088/1741-2552/adaa1c
Xiang Li, Jay W Reddy, Vishal Jain, Mats Forssell, Zabir Ahmed, Maysamreza Chamanzar
Spike sorting is a commonly used analysis method for identifying single units and multi-units from extracellular recordings. Extracellular recordings contain a mixture of signal components, such as neural and non-neural events, possibly due to motion and breathing artifacts or electrical interference. Identifying single- and multi-unit spikes using a simple threshold-crossing method may lead to uncertainty in differentiating actual neural spikes from non-neural spikes. The traditional method for classifying neural and non-neural units from spike sorting results is manual curation by a trained person. This subjective method suffers from human error and variability and is further complicated by the absence of ground truth in experimental extracellular recordings. Moreover, the manual curation process is time-consuming and is becoming intractable due to the growing size and complexity of extracellular datasets. To address these challenges, we present, for the first time, an automatic curation method based on an autoencoder model, which is trained on features of simulated extracellular spike waveforms. The model is then applied to experimental electrophysiology datasets, where the reconstruction error is used as the metric for classifying neural and non-neural spikes. As an alternative to traditional frequency-domain and statistical techniques, our proposed method offers a time-domain evaluation model that automates the analysis of extracellular recordings based on learned time-domain features. The model exhibits excellent performance and throughput when applied to real-world extracellular datasets without any retraining, highlighting its generalizability. This method can be integrated into spike sorting pipelines as a pre-processing filtering step or a post-processing curation method.
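The core mechanism, training an autoencoder only on simulated spike waveforms and then flagging experimental units whose reconstruction error is high, can be sketched in a few lines. The architecture, layer sizes, training settings, and threshold below are illustrative assumptions and not the AECuration implementation.

```python
import torch
import torch.nn as nn

class SpikeAutoencoder(nn.Module):
    """Small dense autoencoder over fixed-length spike waveforms."""
    def __init__(self, n_samples=48, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_samples, 32), nn.ReLU(), nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, n_samples))
    def forward(self, x):
        return self.decoder(self.encoder(x))

def fit(model, simulated_spikes, epochs=200, lr=1e-3):
    """Train only on simulated (known-neural) waveforms, shape (n_spikes, n_samples)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(simulated_spikes), simulated_spikes)
        loss.backward()
        opt.step()
    return model

def curate(model, waveforms, threshold):
    """Keep units whose reconstruction error stays below the threshold (True = neural)."""
    with torch.no_grad():
        err = ((model(waveforms) - waveforms) ** 2).mean(dim=1)
    return err <= threshold

# Illustrative use: train on simulated waveforms, then screen experimental ones.
model = fit(SpikeAutoencoder(), torch.randn(200, 48))
keep = curate(model, torch.randn(50, 48), threshold=1.0)
```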
{"title":"AECuration: Automated event curation for spike sorting.","authors":"Xiang Li, Jay W Reddy, Vishal Jain, Mats Forssell, Zabir Ahmed, Maysamreza Chamanzar","doi":"10.1088/1741-2552/adaa1c","DOIUrl":"https://doi.org/10.1088/1741-2552/adaa1c","url":null,"abstract":"<p><p>Spike sorting is a commonly used analysis method for identifying single-units and multi-units from extracellular recordings. The extracellular recordings contain a mixture of signal components, such as neural and non-neural events, possibly due to motion and breathing artifacts or electrical interference. Identifying single and multi-unit spikes using a simple threshold-crossing method may lead to uncertainty in differentiating the actual neural spikes from non-neural spikes. The traditional method for classifying neural and non-neural units from spike sorting results is manual curation by a trained person. This subjective method suffers from human error and variability and is further complicated by the absence of ground truth in experimental extracellular recordings. Moreover, the manual curation process is time consuming and is becoming intractable due to the growing size and complexity of extracellular datasets. To address these challenges, we, for the first time, present a novel automatic curation method based on an autoencoder model, which is trained on features of simulated extracellular spike waveforms. The model is then applied to experimental electrophysiology datasets, where the reconstruction error is used as the metric for classifying neural and non-neural spikes. As an alternative to the traditional frequency domain and statistical techniques, our proposed method offers a time-domain evaluation model to automate the analysis of extracellular recordings based on learned time-domain features. The model exhibits excellent performance and throughput when applied to real-world extracellular datasets without any retraining, highlighting its generalizability. This method can be integrated into spike sorting pipelines as a pre-processing filtering step or a post-processing curation method.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142985849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personalized μ-transcranial alternating current stimulation improves online brain-computer interface control
Pub Date : 2025-01-13. DOI: 10.1088/1741-2552/ada9c0
Deland Hu Liu, Satyam Kumar, Hussein Alawieh, Frigyes Samuel Racz, Jose Del R Millan
Objective: A motor imagery (MI)-based brain-computer interface (BCI) enables users to engage with external environments by capturing and decoding electroencephalography (EEG) signals associated with the imagined movement of specific limbs. Despite significant advancements in BCI technologies over the past 40 years, a notable challenge remains: many users lack BCI proficiency, in that they are unable to produce sufficiently distinct and reliable MI brain patterns, which leads to low classification rates in their BCIs. The objective of this study is to enhance the online performance of MI-BCIs through a personalized, biomarker-driven approach using transcranial alternating current stimulation (tACS).
Approach: Previous studies have shown that the peak power spectral density (PSD) value in sensorimotor idling rhythms is a neural correlate of participants' upper-limb MI-BCI performance. In this active-controlled, single-blind study, we applied 20 minutes of tACS at the participant-specific peak µ frequency of resting-state sensorimotor rhythms (SMRs), with the goal of enhancing resting-state µ SMRs.
Main results: After tACS, we observed significant improvements in the event-related desynchronization (ERD) of µ SMRs and in the performance of an online MI-BCI that decodes left- versus right-hand commands in healthy participants (N = 10), but not in an active-control stimulation group (N = 10). Lastly, we showed a significant correlation between resting-state µ SMRs and µ ERD, offering a mechanistic interpretation of the observed changes in online BCI performance.
Significance: Our research lays the groundwork for future non-invasive interventions designed to enhance BCI performance, thereby improving the independence and interactions of individuals who rely on these systems.
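Personalizing the stimulation requires estimating each participant's peak µ frequency from resting-state sensorimotor EEG. A minimal sketch of one way to do this with a Welch power spectrum follows; the channel selection, 8-13 Hz band limits, and window length are assumptions, not the study's exact pipeline.

```python
import numpy as np
from scipy.signal import welch

def peak_mu_frequency(eeg, fs, band=(8.0, 13.0)):
    """Frequency of maximum resting-state power in the mu band.

    eeg : (n_channels, n_samples) resting recording over sensorimotor
          channels (e.g. C3/C4); fs : sampling rate in Hz.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2 s windows, 0.5 Hz resolution
    mean_psd = psd.mean(axis=0)                          # average across channels
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(mean_psd[mask])]

# Illustrative call: 60 s of random 2-channel "EEG" at 250 Hz.
rng = np.random.default_rng(1)
f_mu = peak_mu_frequency(rng.standard_normal((2, 250 * 60)), fs=250)
```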
{"title":"Personalized μ-transcranial alternating current stimulation improves online brain-computer interface control.","authors":"Deland Hu Liu, Satyam Kumar, Hussein Alawieh, Frigyes Samuel Racz, Jose Del R Millan","doi":"10.1088/1741-2552/ada9c0","DOIUrl":"https://doi.org/10.1088/1741-2552/ada9c0","url":null,"abstract":"<p><strong>Objective: </strong>A motor imagery (MI)-based brain-computer interface (BCI) enables users to engage with external environments by capturing and decoding electroencephalography (EEG) signals associated with the imagined movement of specific limbs. Despite significant advancements in BCI technologies over the past 40 years, a notable challenge remains: many users lack BCI proficiency, unable to produce sufficiently distinct and reliable MI brain patterns, hence leading to low classification rates in their BCIs. The objective of this study is to enhance the online performance of MI-BCIs in a personalized, biomarker-driven approach using transcranial alternating current stimulation (tACS).</p><p><strong>Approach: </strong>Previous studies have identified that the peak power spectral density (PSD) value in sensorimotor idling rhythms is a neural correlate of participants' upper limb MI-BCI performances. In this active-controlled, single-blind study, we applied 20 minutes of tACS at the participant-specific, peak µ frequency in resting-state sensorimotor rhythms (SMRs), with the goal of enhancing resting-state µ SMRs.</p><p><strong>Main results: </strong>After tACS, we observed significant improvements in event-related desynchronizations (ERDs) of µ sensorimotor rhythms (SMRs), and in the performance of an online MI-BCI that decodes left versus right hand commands in healthy participants (N=10) -but not in an active control-stimulation control group (N=10). Lastly, we showed a significant correlation between the resting-state µ SMRs and µ ERD, offering a mechanistic interpretation behind the observed changes in online BCI performances.</p><p><strong>Significance: </strong>Our research lays the groundwork for future non-invasive interventions designed to enhance BCI performances, thereby improving the independence and interactions of individuals who rely on these systems.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142980388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust assessment of the cortical encoding of word-level expectations using the temporal response function
Pub Date : 2025-01-13. DOI: 10.1088/1741-2552/ada30a
Amirhossein Chalehchaleh, Martin M Winchester, Giovanni M Di Liberto
Objective. Speech comprehension involves detecting words and interpreting their meaning according to the preceding semantic context. This process is thought to be underpinned by a predictive neural system that uses that context to anticipate upcoming words. However, previous studies relied on evaluation metrics designed for continuous univariate sound features, overlooking the discrete and sparse nature of word-level features. This mismatch has limited effect sizes and hampered progress in understanding lexical prediction mechanisms in ecologically valid experiments. Approach. We investigate these limitations by analyzing both simulated and actual electroencephalography (EEG) signals recorded during a speech comprehension task. We then introduce two novel assessment metrics tailored to capture the neural encoding of lexical surprise, improving upon traditional evaluation approaches. Main results. The proposed metrics demonstrated effect sizes over 140% larger than those achieved with the conventional temporal response function (TRF) evaluation. These improvements were consistent across both simulated and real EEG datasets. Significance. Our findings substantially advance methods for evaluating lexical prediction in neural data, enabling more precise measurements and deeper insights into how the brain builds predictive representations during speech comprehension. These contributions open new avenues for research into predictive coding mechanisms in naturalistic language processing.
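To make the sparsity issue concrete, the sketch below fits a ridge-regularized TRF to a word-level regressor and then evaluates the prediction only in windows around word onsets rather than over the full recording. This windowed correlation is merely an illustration of why conventional whole-signal metrics dilute word-level effects; it is not necessarily one of the two metrics proposed in the paper, and the lag range, regularization, and window size are assumptions.

```python
import numpy as np

def lagged_design(stimulus, lags):
    """Time-lagged design matrix (samples, lags) for TRF estimation."""
    X = np.zeros((len(stimulus), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[: len(stimulus) - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    return X

def fit_trf(stimulus, eeg, lags, alpha=1.0):
    """Ridge-regularized TRF mapping a sparse word-level regressor to one EEG channel."""
    X = lagged_design(stimulus, lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)

def corr_at_word_onsets(eeg, prediction, onsets, half_win=50):
    """Correlate prediction and EEG only around word onsets rather than over
    the whole (mostly word-free) recording."""
    idx = np.unique(np.concatenate([np.arange(max(o - half_win, 0), o + half_win) for o in onsets]))
    idx = idx[idx < len(eeg)]
    return np.corrcoef(eeg[idx], prediction[idx])[0, 1]

# Illustrative evaluation on held-out data:
# prediction = lagged_design(stimulus, lags) @ fit_trf(stimulus, eeg, lags)
# score = corr_at_word_onsets(eeg, prediction, onsets)
```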
{"title":"Robust assessment of the cortical encoding of word-level expectations using the temporal response function.","authors":"Amirhossein Chalehchaleh, Martin M Winchester, Giovanni M Di Liberto","doi":"10.1088/1741-2552/ada30a","DOIUrl":"10.1088/1741-2552/ada30a","url":null,"abstract":"<p><p><i>Objective</i>. Speech comprehension involves detecting words and interpreting their meaning according to the preceding semantic context. This process is thought to be underpinned by a predictive neural system that uses that context to anticipate upcoming words. However, previous studies relied on evaluation metrics designed for continuous univariate sound features, overlooking the discrete and sparse nature of word-level features. This mismatch has limited effect sizes and hampered progress in understanding lexical prediction mechanisms in ecologically-valid experiments.<i>Approach</i>. We investigate these limitations by analyzing both simulated and actual electroencephalography (EEG) signals recorded during a speech comprehension task. We then introduce two novel assessment metrics tailored to capture the neural encoding of lexical surprise, improving upon traditional evaluation approaches.<i>Main results</i>. The proposed metrics demonstrated effect-sizes over 140% larger than those achieved with the conventional temporal response function (TRF) evaluation. These improvements were consistent across both simulated and real EEG datasets.<i>Significance</i>. Our findings substantially advance methods for evaluating lexical prediction in neural data, enabling more precise measurements and deeper insights into how the brain builds predictive representations during speech comprehension. These contributions open new avenues for research into predictive coding mechanisms in naturalistic language processing.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142886554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of objective methods for analyzing ipsilateral motor evoked potentials in stroke survivors with chronic upper extremity motor impairment
Pub Date : 2025-01-09. DOI: 10.1088/1741-2552/ada827
Akhil Mohan, Xin Li, Bei Zhang, Jayme S Knutson, Morgan Widina, Xiaofeng Wang, Ken Uchino, Ela B Plow, David A Cunningham
Objective: Ipsilateral motor evoked potentials (iMEPs) are believed to represent cortically evoked excitability of uncrossed, brainstem-mediated pathways. In the event of extensive injury to (crossed) corticospinal pathways, which can occur following a stroke, uncrossed ipsilateral pathways may serve as an alternate resource to support recovery of the paretic limb. However, iMEPs, even in neurally intact people, can be small, infrequent, and noisy, so discerning them in stroke survivors is very challenging. This study investigated the inter-rater reliability of iMEP features (presence/absence, amplitude, area, onset, and offset) and evaluated the reliability of existing methods for objectively analyzing iMEPs in stroke survivors with chronic upper extremity motor impairment.
Approach: Two investigators subjectively measured iMEP features from 32 stroke participants with chronic upper extremity motor impairment. Six objective methods based on the standard deviation (SD) and mean consecutive differences (MCD) were used to measure the iMEP features from the same 32 participants. iMEP analysis used both trial-by-trial (individual signal) and average-signal approaches. Inter-rater reliability of iMEP features and agreement between the subjective and objective methods were analyzed using percent agreement (PA) and the intraclass correlation coefficient (ICC).
Main results: Inter-rater reliability was excellent for iMEP detection (PA > 85%), amplitude, and area (ICC > 0.9). Of the six objective methods we tested, the 1SD method was most appropriate for identifying and analyzing iMEP amplitude and area (ICC > 0.9) in both the trial-by-trial and average-signal analysis approaches. None of the objective methods were reliable for analyzing iMEP onset and offset. The results also support using the average-signal approach over the trial-by-trial approach, as it offers excellent reliability for iMEP analysis in stroke survivors with chronic upper extremity motor impairment.
Significance: Findings from our study have relevance for understanding the role of ipsilateral pathways that typically survive severe unilateral white matter injury in people with stroke.
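The 1SD method referred to above is, in general terms, a background-plus-one-standard-deviation threshold applied to the post-stimulus EMG. The sketch below shows one common formulation of such a detector on an averaged trace; the pre-stimulus window, post-stimulus search window, minimum supra-threshold duration, and the amplitude/area definitions are assumptions rather than the study's exact criteria.

```python
import numpy as np

def detect_imep_1sd(avg_emg, fs, stim_idx, pre_ms=100, search_ms=(10, 60), min_ms=5):
    """SD-based detection on an averaged, rectified EMG trace.

    An iMEP is counted as present if the post-stimulus signal stays above the
    pre-stimulus mean + 1 SD for at least `min_ms` within the search window.
    Returns (present, amplitude, area), with amplitude and area taken from the
    supra-threshold samples.
    """
    pre = np.abs(avg_emg[stim_idx - int(pre_ms * fs / 1000): stim_idx])
    threshold = pre.mean() + pre.std()
    lo = stim_idx + int(search_ms[0] * fs / 1000)
    hi = stim_idx + int(search_ms[1] * fs / 1000)
    post = np.abs(avg_emg[lo:hi])
    above = post > threshold
    min_samples = max(1, int(min_ms * fs / 1000))
    present, run = False, 0
    for flag in above:
        run = run + 1 if flag else 0
        if run >= min_samples:
            present = True
            break
    amplitude = float(post.max() - pre.mean()) if present else 0.0
    area = float(np.sum(post[above] - threshold) / fs) if present else 0.0
    return present, amplitude, area
```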
{"title":"Evaluation of objective methods for analyzing ipsilateral motor evoked potentials in stroke survivors with chronic upper extremity motor impairment.","authors":"Akhil Mohan, Xin Li, Bei Zhang, Jayme S Knutson, Morgan Widina, Xiaofeng Wang, Ken Uchino, Ela B Plow, David A Cunningham","doi":"10.1088/1741-2552/ada827","DOIUrl":"https://doi.org/10.1088/1741-2552/ada827","url":null,"abstract":"<p><p><b>Objective:</b>Ipsilateral motor evoked potentials (iMEPs) are believed to represent cortically evoked excitability of uncrossed brainstem-mediated pathways. In the event of extensive injury to (crossed) corticospinal pathways, which can occur following a stroke, uncrossed ipsilateral pathways may serve as an alternate resource to support the recovery of the paretic limb. However, iMEPs, even in neurally intact people, can be small, infrequent, and noisy, so discerning them in stroke survivors is very challenging. This study aimed to investigate the inter-rater reliability of iMEP features (presence/absence, amplitude, area, onset, and offset) to evaluate the reliability of existing methods for objectively analyzing iMEPs in stroke survivors with chronic upper extremity motor impairment.
<b>Approach:</b>Two investigators subjectively measured iMEP features from thirty-two stroke participants with chronic upper extremity motor impairment. Six objective methods based on standard deviation (SD) and mean consecutive differences (MCD) were used to measure the iMEP features from the same 32 participants. IMEP analysis used both trial-by-trial (individual signal) and average-signal analysis approaches. Inter-rater reliability of iMEP features and agreement between the subjective and objective methods were analyzed (percent agreement-PA and intraclass correlation coefficient-ICC).
<b>Main results:</b>Inter-rater reliability was excellent for iMEP detection (PA> 85%), amplitude, and area (ICC> 0.9). Of the six objective methods we tested, the 1SD method was most appropriate for identifying and analyzing iMEP amplitude and area (ICC> 0.9) in both trial-by-trial and average signal analysis approaches. None of the objective methods were reliable for analyzing iMEP onset and offset. Results also support using the average-signal analysis approach over the trial-by-trial analysis approach, as it offers excellent reliability for iMEP analysis in stroke survivors with chronic upper extremity motor impairment.
<b>Significance:</b>Findings from our study have relevance for understanding the role of ipsilateral pathways that typically survive unilateral severe white matter injury in people with stroke.
.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142960779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multimodal neuroimaging study of cerebrovascular regulation: protocols and insights of combining electroencephalography, functional near-infrared spectroscopy, transcranial Doppler ultrasound, and physiological parameters
Pub Date : 2025-01-09. DOI: 10.1088/1741-2552/ada4de
Joel S Burma, Nathan E Johnson, Ibukunoluwa K Oni, Andrew P Lapointe, Chantel T Debert, Kathryn J Schneider, Jeff F Dunn, Jonathan D Smirl
Objective. The current paper describes the creation of a simultaneous trimodal neuroimaging protocol. The authors detail their methodological design for a subsequent large-scale study, demonstrate the ability to obtain the expected physiologically induced responses across cerebrovascular domains, and describe the pitfalls experienced when developing this approach. Approach. Electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), and transcranial Doppler ultrasound (TCD) were combined to provide an assessment of neuronal activity, microvascular oxygenation, and upstream artery velocity, respectively. Real-time blood pressure, capnography, and heart rate were quantified to control for the known confounding influence of cardiorespiratory variables. The EEG-fNIRS-TCD setup was fitted to a 21-year-old male who completed neurovascular coupling/functional hyperemia (finger tapping and 'Where's Waldo/Wally?'), dynamic cerebral autoregulation (squat-stand maneuvers), and cerebrovascular reactivity tasks (end-tidal clamping during hypocapnia/hypercapnia). Main results. In this pilot participant, the Waldo task produced robust hemodynamic responses within the occipital microvasculature and the posterior cerebral artery. Between the eyes-closed and eyes-open conditions, alpha band power decreased by ∼90% in the occipital cortical region, compared with an ∼80% reduction in the frontal, central, and parietal regions. A modest increase in motor oxygenated hemoglobin was seen during the finger tapping task, with a harmonious alpha decrease of ∼15% across all cortical regions. No change in the middle or posterior cerebral arteries was noted during finger tapping. During cerebral autoregulatory challenges, sinusoidal oscillations in hemodynamics were produced at 0.05 and 0.10 Hz, while decreases and increases in TCD and fNIRS metrics were elicited during the hypocapnia and hypercapnia protocols, respectively. Significance. All neuroimaging modalities have their inherent limitations; however, these can be minimized by employing multimodal neuroimaging approaches. This EEG-fNIRS-TCD protocol enables a comprehensive assessment of cerebrovascular regulation, spanning the association between electrical activity and cerebral hemodynamics, during tasks with a mild degree of body and/or head movement.
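One of the sanity checks reported above, the drop in alpha band power from eyes-closed to eyes-open, is straightforward to compute from the EEG stream. The sketch below shows a generic Welch-based version; the 8-12 Hz band limits and window length are assumptions and the code is not tied to the authors' processing pipeline.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(eeg_channel, fs, band=(8.0, 12.0)):
    """Mean alpha-band power of one EEG channel from a Welch spectrum."""
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def percent_alpha_reduction(eyes_closed, eyes_open, fs):
    """Percent drop in alpha power from the eyes-closed to the eyes-open condition."""
    pc, po = alpha_power(eyes_closed, fs), alpha_power(eyes_open, fs)
    return 100.0 * (pc - po) / pc
```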
{"title":"A multimodal neuroimaging study of cerebrovascular regulation: protocols and insights of combining electroencephalography, functional near-infrared spectroscopy, transcranial Doppler ultrasound, and physiological parameters.","authors":"Joel S Burma, Nathan E Johnson, Ibukunoluwa K Oni, Andrew P Lapointe, Chantel T Debert, Kathryn J Schneider, Jeff F Dunn, Jonathan D Smirl","doi":"10.1088/1741-2552/ada4de","DOIUrl":"10.1088/1741-2552/ada4de","url":null,"abstract":"<p><p><i>Objective</i>. The current paper describes the creation of a simultaneous trimodal neuroimaging protocol. The authors detail their methodological design for a subsequent large-scale study, demonstrate the ability to obtain the expected physiologically induced responses across cerebrovascular domains, and describe the pitfalls experienced when developing this approach.<i>Approach</i>. Electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), and transcranial Doppler ultrasound (TCD) were combined to provide an assessment of neuronal activity, microvascular oxygenation, and upstream artery velocity, respectively. Real-time blood pressure, capnography, and heart rate were quantified to control for the known confounding influence of cardiorespiratory variables. The EEG-fNIRS-TCD protocol was attached to a 21 year-old male who completed neurovascular coupling/functional hyperemia (finger tapping and '<i>Where's Waldo/Wally?</i>'), dynamic cerebral autoregulation (squat-stand maneuvers), and cerebrovascular reactivity tasks (end-tidal clamping during hypocapnia/hypercapnia).<i>Main results</i>. In a pilot participant, the Waldo task produced robust hemodynamic responses within the occipital microvasculature and the posterior cerebral artery. A ∼90% decrease in alpha band power was seen in the occipital cortical region compared between the eyes closed and eyes opened protocol, compared to the frontal, central, and parietal regions (∼80% reduction). A modest increase in motor oxygenated hemoglobin was seen during the finger tapping task, with a harmonious alpha decrease of ∼15% across all cortical regions. No change in the middle or posterior cerebral arteries were noted during finger tapping. During cerebral autoregulatory challenges, sinusoidal oscillations were produced in hemodynamics at 0.05 and 0.10 Hz, while a decrease and increase in TCD and fNIRS metrics were elicited during hypocapnia and hypercapnia protocols, respectively.<i>Significance</i>. All neuroimaging modalities have their inherent limitations; however, these can be minimized by employing multimodal neuroimaging approaches. This EEG-fNIRS-TCD protocol enables a comprehensive assessment of cerebrovascular regulation across the association between electrical activity and cerebral hemodynamics during tasks with a mild degree of body and/or head movement.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142924184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novel AIRTrode-based wearable electrode supports long-term, online brain-computer interface operations
Pub Date : 2025-01-07. DOI: 10.1088/1741-2552/ad9edf
Deland H Liu, Ju-Chun Hsieh, Hussein Alawieh, Satyam Kumar, Fumiaki Iwane, Ilya Pyatnitskiy, Zoya J Ahmad, Huiliang Wang, José Del R Millán
Objective. Non-invasive electroencephalogram (EEG)-based brain-computer interfaces (BCIs) play a crucial role in a diverse range of applications, including motor rehabilitation, assistive and communication technologies, holding promise to benefit users across various clinical spectrums. Effective integration of these applications into daily life requires systems that provide stable and reliable BCI control for extended periods. Our prior research introduced the AIRTrode, a self-adhesive (A), injectable (I), and room-temperature (RT) spontaneously-crosslinked hydrogel electrode. The AIRTrode has shown lower skin-contact impedance and greater stability than dry electrodes and, unlike wet gel electrodes, does not dry out after just a few hours, enhancing its suitability for long-term application. This study aims to demonstrate the efficacy of AIRTrodes in facilitating reliable, stable, and long-term online EEG-based BCI operation. Approach. In this study, four healthy participants utilized AIRTrodes in two BCI control tasks, continuous and discrete, across two sessions separated by six hours. Throughout this duration, the AIRTrodes remained attached to the participants' heads. In the continuous task, participants controlled the BCI through decoding of upper-limb motor imagery (MI). In the discrete task, control was based on decoding of error-related potentials (ErrPs). Main results. Using AIRTrodes, participants demonstrated consistently reliable online BCI performance across both sessions and tasks. The physiological signals captured during the MI and ErrP tasks were valid and remained stable over sessions. Lastly, both the BCI performance and the captured physiological signals were comparable with those from freshly applied, research-grade wet gel electrodes, the latter requiring inconvenient re-application at the start of the second session. Significance. AIRTrodes show great promise for integrating non-invasive BCIs into everyday settings due to their ability to support consistent BCI performance over extended periods. This technology could significantly enhance the usability of BCIs in real-world applications, facilitating continuous, all-day functionality that was previously challenging with existing electrode technologies.
{"title":"Novel AIRTrode-based wearable electrode supports long-term, online brain-computer interface operations.","authors":"Deland H Liu, Ju-Chun Hsieh, Hussein Alawieh, Satyam Kumar, Fumiaki Iwane, Ilya Pyatnitskiy, Zoya J Ahmad, Huiliang Wang, José Del R Millán","doi":"10.1088/1741-2552/ad9edf","DOIUrl":"10.1088/1741-2552/ad9edf","url":null,"abstract":"<p><p><i>Objective.</i>Non-invasive electroencephalograms (EEG)-based brain-computer interfaces (BCIs) play a crucial role in a diverse range of applications, including motor rehabilitation, assistive and communication technologies, holding potential promise to benefit users across various clinical spectrums. Effective integration of these applications into daily life requires systems that provide stable and reliable BCI control for extended periods. Our prior research introduced the AIRTrode, a self-adhesive (A), injectable (I), and room-temperature (RT) spontaneously-crosslinked hydrogel electrode (AIRTrode). The AIRTrode has shown lower skin-contact impedance and greater stability than dry electrodes and, unlike wet gel electrodes, does not dry out after just a few hours, enhancing its suitability for long-term application. This study aims to demonstrate the efficacy of AIRTrodes in facilitating reliable, stable and long-term online EEG-based BCI operations.<i>Approach.</i>In this study, four healthy participants utilized AIRTrodes in two BCI control tasks-continuous and discrete-across two sessions separated by six hours. Throughout this duration, the AIRTrodes remained attached to the participants' heads. In the continuous task, participants controlled the BCI through decoding of upper-limb motor imagery (MI). In the discrete task, the control was based on decoding of error-related potentials (ErrPs).<i>Main Results.</i>Using AIRTrodes, participants demonstrated consistently reliable online BCI performance across both sessions and tasks. The physiological signals captured during MI and ErrPs tasks were valid and remained stable over sessions. Lastly, both the BCI performances and physiological signals captured were comparable with those from freshly applied, research-grade wet gel electrodes, the latter requiring inconvenient re-application at the start of the second session.<i>Significance.</i>AIRTrodes show great potential promise for integrating non-invasive BCIs into everyday settings due to their ability to support consistent BCI performances over extended periods. This technology could significantly enhance the usability of BCIs in real-world applications, facilitating continuous, all-day functionality that was previously challenging with existing electrode technologies.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142823032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid network using transformer with modified locally linear embedding and sliding window convolution for EEG decoding
Pub Date : 2025-01-06. DOI: 10.1088/1741-2552/ada30b
Ketong Li, Peng Chen, Qian Chen, Xiangyun Li
Objective. Brain-computer interfaces (BCIs) leverage artificial intelligence for EEG signal decoding, which makes them a promising new means of human-machine interaction. However, the performance of current EEG decoding methods is still insufficient for clinical applications because of inadequate EEG information extraction and the limited computational resources available in hospitals. This paper introduces a hybrid network that employs a transformer with modified locally linear embedding and sliding-window convolution for EEG decoding. Approach. The network separately extracts channel and temporal features from EEG signals and subsequently fuses these features using a cross-attention mechanism. Simultaneously, manifold learning is employed to lower the computational burden of the model by mapping the high-dimensional EEG data to a low-dimensional space. Main results. The proposed model achieves accuracy rates of 84.44%, 94.96%, and 82.79% on the BCI Competition IV dataset 2a, the high gamma dataset, and a self-constructed motor imagery (MI) dataset of left- and right-hand fist clenching, respectively. The results indicate that our model outperforms the baseline models through its combination of an EEG-channel transformer operating on dimension-reduced EEG data and window attention with sliding-window convolution. Additionally, to enhance the interpretability of the model, the features preceding the temporal feature extraction network were visualized. This visualization helps explain how the model prioritizes task-related channels. Significance. The transformer-based method makes MI-EEG decoding more practical for further clinical applications.
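The fusion step, where channel features attend to temporal features through cross-attention, can be sketched with a standard multi-head attention layer. The sketch below is a generic PyTorch illustration under assumed dimensions (22 channels, 30 sliding windows, 64-dimensional features, 4 classes); it is not the authors' network and omits the locally linear embedding and convolutional front ends.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse a channel-feature stream (queries) with a temporal-feature stream
    (keys/values) via multi-head cross-attention, then classify."""
    def __init__(self, d_model=64, n_heads=4, n_classes=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, channel_feats, temporal_feats):
        # channel_feats: (batch, n_channels, d_model); temporal_feats: (batch, n_windows, d_model)
        fused, _ = self.cross_attn(channel_feats, temporal_feats, temporal_feats)
        fused = self.norm(fused + channel_feats)   # residual connection over the query stream
        return self.head(fused.mean(dim=1))        # pool over channels -> class logits

# Illustrative forward pass: batch of 8 trials, 22 EEG channels, 30 sliding windows.
logits = CrossAttentionFusion()(torch.randn(8, 22, 64), torch.randn(8, 30, 64))
```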
{"title":"A hybrid network using transformer with modified locally linear embedding and sliding window convolution for EEG decoding.","authors":"Ketong Li, Peng Chen, Qian Chen, Xiangyun Li","doi":"10.1088/1741-2552/ada30b","DOIUrl":"10.1088/1741-2552/ada30b","url":null,"abstract":"<p><p><i>Objective</i>. Brain-computer interface(BCI) is leveraged by artificial intelligence in EEG signal decoding, which makes it possible to become a new means of human-machine interaction. However, the performance of current EEG decoding methods is still insufficient for clinical applications because of inadequate EEG information extraction and limited computational resources in hospitals. This paper introduces a hybrid network that employs a transformer with modified locally linear embedding and sliding window convolution for EEG decoding.<i>Approach</i>. This network separately extracts channel and temporal features from EEG signals, subsequently fusing these features using a cross-attention mechanism. Simultaneously, manifold learning is employed to lower the computational burden of the model by mapping the high-dimensional EEG data to a low-dimensional space by its dimension reduction function.<i>Main results</i>. The proposed model achieves accuracy rates of 84.44%, 94.96%, and 82.79% on the BCI Competition IV dataset 2a, high gamma dataset, and a self-constructed motor imagery (MI) dataset from the left and right hand fist-clenching tests respectively. The results indicate our model outperforms the baseline models by EEG-channel transformer with dimension-reduced EEG data and window attention with sliding window convolution. Additionally, to enhance the interpretability of the model, features preceding the temporal feature extraction network were visualized. This visualization promotes the understanding of how the model prefers task-related channels.<i>Significance</i>. The transformer-based method makes the MI-EEG decoding more practical for further clinical applications.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142886584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tactile sensitivity to softness in virtual reality can increase when visual expectation and tactile feedback contradict each other
Pub Date : 2024-12-27. DOI: 10.1088/1741-2552/ada0e8
Gabriele Frediani, Federico Carpi
Objective. The perception of softness plays a key role in interactions with various objects, both in the real world and in virtual/augmented reality (VR/AR) systems. The latter can be enriched with haptic feedback on virtual objects' softness to improve immersivity and realism. In such systems, visual expectation can influence tactile sensitivity to softness, as multisensory integration attempts to create a coherent perceptual experience. Nevertheless, expectation is sometimes reported to attenuate, and other times to enhance, perception. Elucidating how the perception of softness is affected by visual expectation in VR/AR is relevant not only to the neuropsychology and neuroscience of perception, but also to practical applications, such as VR/AR-based training or rehabilitation. Approach. Here, by using novel wearable tactile displays of softness previously described by us, we investigated how the sensitivity to softness in a visuo-tactile VR platform can be influenced by expectation. Twelve subjects were engaged in comparing the softness of pairs of virtual objects, familiar or not, with tactile feedback of softness and visual expectation either conflicting or not. The objects' Young's moduli were initially randomly selected from a large set, spanning two orders of magnitude (0.5, 2, 20, 50 and 100 MPa), and then their difference was iteratively reduced, to reach the just noticeable difference in softness. Main results. For the intermediate modulus, a conflict between tactile feedback and visual expectation caused a statistically significant increase in sensitivity. Significance. This finding supports the theory that there can be conditions in which contradictory stimuli strengthen attention (to resolve conflicting sensory information), which in turn can reverse the sensory silencing effect that expectation may otherwise have on perception.
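The procedure of iteratively reducing the modulus difference until the just noticeable difference is reached is, in essence, an adaptive staircase. The sketch below shows a simple 1-down/1-up variant with a multiplicative step and a reversal-based stopping rule; these particular rules, the step size, and the simulated observer are assumptions, since the abstract does not specify the exact adaptive procedure.

```python
import random

def jnd_staircase(reference_mpa, respond, step=0.8, reversals_needed=6):
    """Simple 1-down/1-up staircase on the Young's modulus difference.

    `respond(e_ref, e_cmp)` returns True when the softer object is correctly
    identified; the difference shrinks after correct answers, grows after
    errors, and the mean difference over the last reversals approximates
    the just noticeable difference.
    """
    delta = 0.5 * reference_mpa
    last_correct, reversal_deltas = None, []
    while len(reversal_deltas) < reversals_needed:
        correct = respond(reference_mpa, reference_mpa + delta)
        if last_correct is not None and correct != last_correct:
            reversal_deltas.append(delta)
        delta = delta * step if correct else delta / step
        last_correct = correct
    return sum(reversal_deltas) / len(reversal_deltas)

# Simulated observer that reliably notices differences above 10% of the reference.
jnd = jnd_staircase(20.0, lambda ref, cmp: abs(cmp - ref) > 0.1 * ref or random.random() < 0.2)
```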
{"title":"Tactile sensitivity to softness in virtual reality can increase when visual expectation and tactile feedback contradict each other.","authors":"Gabriele Frediani, Federico Carpi","doi":"10.1088/1741-2552/ada0e8","DOIUrl":"10.1088/1741-2552/ada0e8","url":null,"abstract":"<p><p><i>Objective</i>. The perception of softness plays a key role in interactions with various objects, both in the real world and in virtual/augmented reality (VR/AR) systems. The latter can be enriched with haptic feedback on virtual objects' softness to improve immersivity and realism. In such systems, visual expectation can influence tactile sensitivity to softness, as multisensory integration attempts to create a coherent perceptual experience. Nevertheless, expectation is sometimes reported to attenuate, and other times to enhance, perception. Elucidating how the perception of softness is affected by visual expectation in VR/AR is relevant not only to the neuropsychology and neuroscience of perception, but also to practical applications, such as VR/AR-based training or rehabilitation.<i>Approach.</i>Here, by using novel wearable tactile displays of softness previously described by us, we investigated how the sensitivity to softness in a visuo-tactile VR platform can be influenced by expectation. Twelve subjects were engaged in comparing the softness of pairs of virtual objects, familiar or not, with tactile feedback of softness and visual expectation either conflicting or not. The objects' Young's moduli were initially randomly selected from a large set, spanning two orders of magnitude (0.5, 2, 20, 50 and 100 MPa), and then their difference was iteratively reduced, to reach the just noticeable difference in softness.<i>Main results.</i>For the intermediate modulus, a conflict between tactile feedback and visual expectation caused a statistically significant increase in sensitivity.<i>Significance.</i>This finding supports the theory that there can be conditions in which contradictory stimuli strengthen attention (to resolve conflicting sensory information), which in turn can reverse the sensory silencing effect that expectation may otherwise have on perception.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142857373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}