Pub Date: 2023-08-31 | DOI: 10.1088/1741-2552/acf242
Ke Liu, Zhaolin Yao, Li Zheng, Qingguo Wei, Weihua Pei, Xiaorong Gao, Yijun Wang
Objective. Steady-state visual evoked potential (SSVEP) based brain-computer interfaces (BCIs) often struggle to balance user experience and system performance. To address this challenge, this study employed stimuli in the 55-62.8 Hz frequency range to implement a 40-target BCI speller that offered both high performance and user-friendliness. Approach. This study proposed a method that presents stable multi-target stimuli on a monitor with a 360 Hz refresh rate. Real-time generation of the stimulus matrix and real-time stimulus rendering were used to ensure stable presentation while reducing the computational load. The 40 targets were encoded using the joint frequency and phase modulation method, and offline and online BCI experiments were conducted on 16 subjects using the task discriminant component analysis algorithm for feature extraction and classification. Main results. The online BCI system achieved an average accuracy of 88.87% ± 3.05% and an information transfer rate of 51.83 ± 2.77 bits min-1 under the low flickering perception condition. Significance. These findings suggest the feasibility and significant practical value of the proposed high-frequency SSVEP BCI system in advancing visual BCI technology.
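The information transfer rate reported above is conventionally computed with the Wolpaw formula for an N-target speller. A minimal sketch of that standard formula (the 40-target count comes from the abstract; the per-selection time T is a free parameter here, not a value taken from the paper):

```python
import math

def itr_bits_per_min(n_targets: int, accuracy: float, seconds_per_selection: float) -> float:
    """Wolpaw ITR: bits carried per selection, scaled to bits/min.

    n_targets: number of possible targets (40 in the speller above)
    accuracy: classification accuracy P, with 1/n_targets < P <= 1
    seconds_per_selection: total time per selection, including gaze shifting
    """
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if p < 1.0:  # at p == 1 the penalty terms vanish (0*log2(0) -> 0)
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / seconds_per_selection)

# Perfect accuracy over 40 targets transfers log2(40) ≈ 5.32 bits per selection;
# at 4 s per selection that is 15 selections/min.
print(itr_bits_per_min(40, 1.0, 4.0))
```

Lower accuracy at the same pace always yields a lower rate, which is why the accuracy/speed trade-off dominates speller design.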
Title: A high-frequency SSVEP-BCI system based on a 360 Hz refresh rate. Journal of Neural Engineering, 20(4).
Objective. Human beings perceive stereoscopic image quality through the cerebral visual cortex, in a complex brain activity. The quality of stereoscopic images can therefore be evaluated more accurately by replicating, in a machine, the human perception of image quality captured in electroencephalogram (EEG) signals. This differs from previous stereoscopic image quality assessment methods, which focused only on the extraction of image features. Approach. The proposed method is based on a novel image-to-brain (I2B) cross-modality model comprising a spatial-temporal EEG encoder (STEE) and an I2B deep convolutional generative adversarial network (I2B-DCGAN). Specifically, EEG representations are first learned by STEE and serve as real samples for I2B-DCGAN, which extracts both quality and semantic features from the stereoscopic images with a semantic-guided image encoder and uses a generator to conditionally create the corresponding EEG features for images. Finally, the generated EEG features are classified to predict the image's perceptual quality level. Main results. Extensive experiments on the collected brain-visual multimodal stereoscopic image quality ranking database demonstrate that the proposed I2B cross-modality model better emulates the visual perception mechanism of the human brain and outperforms the other methods, achieving an average accuracy of 95.95%. Significance. The proposed method can convert the learned stereoscopic image features into brain representations without EEG signals during testing. Further experiments verify that the proposed method has good generalization ability on new datasets and the potential for practical applications.
Title: Image2Brain: a cross-modality model for blind stereoscopic image quality ranking. Authors: Lili Shen, Xintong Li, Zhaoqing Pan, Xichun Sun, Yixuan Zhang, Jianpu Zheng. Pub Date: 2023-08-31 | DOI: 10.1088/1741-2552/acf2c9. Journal of Neural Engineering, 20(4).
Pub Date: 2023-08-30 | DOI: 10.1088/1741-2552/acf1ce
Corentin Puffay, Jonas Vanthornhout, Marlies Gillis, Bernd Accou, Hugo Van Hamme, Tom Francart
Objective. When listening to continuous speech, populations of neurons in the brain track different features of the signal. Neural tracking can be measured by relating the electroencephalography (EEG) to the speech signal. Recent studies using linear models have shown a significant contribution of linguistic features over and above acoustic neural tracking. However, linear models cannot capture the nonlinear dynamics of the brain. To overcome this, we use a convolutional neural network (CNN) that relates EEG to linguistic features, uses phoneme or word onsets as a control, and has the capacity to model nonlinear relations. Approach. We integrate phoneme- and word-based linguistic features (phoneme surprisal, cohort entropy (CE), word surprisal (WS) and word frequency (WF)) into our nonlinear CNN model and investigate whether they carry additional information on top of lexical features (phoneme and word onsets). We then compare the performance of our nonlinear CNN with that of a linear encoder and a linearized CNN. Main results. For the nonlinear CNN, we found a significant contribution of CE over phoneme onsets and of WS and WF over word onsets. Moreover, the nonlinear CNN outperformed the linear baselines. Significance. Measuring the coding of linguistic features in the brain is important for auditory neuroscience research and for applications that objectively measure speech understanding. With linear models this is measurable, but the effects are very small. The proposed nonlinear CNN model yields larger differences between linguistic and lexical models and could therefore reveal effects that would otherwise be unmeasurable, potentially leading to improved within-subject measures and shorter recordings.
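Surprisal and word-frequency features of the kind listed above are conventionally defined as negative log probabilities: surprisal from a context-conditional model, frequency from corpus unigram counts. A minimal sketch of those standard definitions (the probabilities are illustrative inputs, not values from the paper):

```python
import math

def surprisal_bits(p_word_given_context: float) -> float:
    """Surprisal of a word given its context: -log2 P(word | context).
    Predictable words carry few bits; surprising words carry many."""
    return -math.log2(p_word_given_context)

def frequency_feature(word_count: int, corpus_size: int) -> float:
    """Word-frequency feature: -log2 of the word's unigram probability,
    so rarer words get larger values, on the same bit scale as surprisal."""
    return -math.log2(word_count / corpus_size)

# A fully predicted word carries 0 bits; a 1-in-1024 word carries 10 bits.
print(surprisal_bits(1.0), surprisal_bits(1 / 1024))
```

In tracking analyses such per-word values are typically placed as impulses at word onsets, which is what lets the model separate the linguistic contribution from the onset (lexical) control.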
Title: Robust neural tracking of linguistic speech representations using a convolutional neural network. Journal of Neural Engineering, 20(4).
Pub Date: 2023-08-29 | DOI: 10.1088/1741-2552/acf1cd
Enrique Germany, Igor Teixeira, Venethia Danthine, Roberto Santalucia, Inci Cakiroglu, Andres Torres, Michele Verleysen, Jean Delbeke, Antoine Nonclercq, Riëm El Tahry
Objective. In one-third of patients, anti-seizure medications may be insufficient, and resective surgery may be offered whenever the seizure onset is localized and situated in a non-eloquent brain region. When surgery is not feasible or fails, vagus nerve stimulation (VNS) therapy can be used as an add-on treatment to reduce seizure frequency and/or severity. However, no screening tools or methods exist for predicting patient response to VNS and avoiding unnecessary implantation, and reliable biomarkers of clinical efficacy remain unclear. Approach. To predict patient response to VNS, functional brain connectivity measures combined with graph measures have primarily been applied to imaging modalities such as functional magnetic resonance imaging, whereas connectivity graph-based analysis of electrophysiological signals such as the electroencephalogram (EEG) has barely been explored. Although the study of the influence of VNS on functional connectivity is not new, this work is distinguished by using pre-implantation low-density EEG data to identify measures that discriminate between responder and non-responder patients using functional connectivity and graph theory metrics. Main results. By calculating five functional brain connectivity indexes per frequency band from partial directed coherence and directed transfer function connectivity matrices in a population of 37 refractory epilepsy patients, we found significant differences (p < 0.05) in the global efficiency, average clustering coefficient, and modularity of responders and non-responders, using the Mann-Whitney U test with the Benjamini-Hochberg correction procedure at a false discovery rate of 5%. Significance. Our results indicate that these measures may potentially serve as biomarkers to predict responsiveness to VNS therapy.
Title: Functional brain connectivity indexes derived from low-density EEG of pre-implanted patients as VNS outcome predictors. Journal of Neural Engineering, 20(4).
Pub Date: 2023-08-25 | DOI: 10.1088/1741-2552/acef95
Dylan M Wallace, Miri Benyamini, Sam Nason-Tomaszewski, Joseph T Costello, Luis H Cubillos, Matthew J Mender, Hisham Temmar, Matthew S Willsey, Parag G Patil, Cynthia A Chestek, Miriam Zacksenhouse
Objective. While brain-machine interfaces (BMIs) are promising technologies that could provide direct pathways for controlling the external world and thus regaining motor capabilities, their effectiveness is hampered by decoding errors. Previous research has demonstrated the detection and correction of BMI outcome errors, which occur at the end of trials. Here we focus on continuous detection and correction of BMI execution errors, which occur during real-time movements. Approach. Two adult male rhesus macaques were implanted with Utah arrays in the motor cortex. The monkeys performed single- or two-finger-group BMI tasks in which a Kalman filter decoded binned spiking-band power into intended finger kinematics. Neural activity was analyzed to determine how it depends not only on the kinematics of the fingers but also on the distance of each finger group to its target. We developed a method to detect erroneous movements, i.e. consistent movements away from the target, from the same neural activity used by the Kalman filter. Detected errors were corrected by a simple stopping strategy, and the effect on performance was evaluated. Main results. First, we show that including distance to target explains significantly more variance of the recorded neural activity. Then, for the first time, we demonstrate that neural activity in motor cortex can be used to detect execution errors during BMI-controlled movements. Keeping the false positive rate below 5%, it was possible to achieve a mean true positive rate of 28.1% online. Despite requiring 200 ms to detect and react to suspected errors, we achieved a significant improvement in task performance via reduced orbiting time of one finger group. Significance. Neural activity recorded in motor cortex for BMI control can be used to detect and correct BMI errors and thus to improve performance. Further improvements may be obtained by enhancing classification and correction strategies.
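A Kalman filter decoder of the kind described alternates a predict step (propagate the kinematic state forward) with an update step (correct it using the neural observation). A deliberately minimal scalar sketch of that cycle, not the study's two-finger decoder (the state here is one finger position; all coefficients and noise variances are illustrative assumptions):

```python
def kalman_step(x, P, z, a=1.0, h=1.0, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : prior state estimate and its variance
    z    : new observation (e.g. binned spiking-band power mapped to position)
    a, h : state-transition and observation coefficients
    q, r : process and observation noise variances
    """
    # Predict: propagate the state and inflate its uncertainty.
    x_pred = a * x
    P_pred = a * P * a + q
    # Update: blend prediction and observation via the Kalman gain.
    K = P_pred * h / (h * P_pred * h + r)
    x_new = x_pred + K * (z - h * x_pred)
    P_new = (1 - K * h) * P_pred
    return x_new, P_new

# Feed a short stream of noisy observations; the estimate tracks them
# while the posterior variance shrinks.
x, P = 0.0, 1.0
for z in [0.5, 0.55, 0.6]:
    x, P = kalman_step(x, P, z)
print(x, P)
```

Because the error detector in the study reads the same binned neural activity that feeds this filter, it can run in parallel with decoding rather than waiting for trial outcomes.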
Title: Error detection and correction in intracortical brain-machine interfaces controlling two finger groups. Journal of Neural Engineering, 20(4). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10594236/pdf/
Pub Date: 2023-08-25 | DOI: 10.1088/1741-2552/aceaac
Piergiuseppe Liuzzi, Bahia Hakiki, Francesca Draghi, Anna Maria Romoli, Rachele Burali, Maenia Scarpino, Francesca Cecchi, Antonello Grippo, Andrea Mannini
Objective. Brain-injured patients may enter a state of minimal or inconsistent awareness termed the minimally conscious state (MCS). Such patients may (MCS+) or may not (MCS-) exhibit high-level behavioral responses, and the two groups face inherently different rehabilitative paths and expected outcomes. We hypothesized that brain complexity may be treated as a proxy for high-level cognition and thus could be used as a neural correlate of consciousness. Approach. In this prospective observational study, 68 MCS patients (MCS-: 30; women: 31) were included (median [IQR] age 69 [20]; time post-onset 83 [28]). At admission to intensive rehabilitation, 30 min resting-state closed-eyes recordings were performed together with consciousness diagnosis following international guidelines. The width of the multifractal singularity spectrum (MSS) was computed for each channel time series and entered into nested cross-validated interpretable machine learning models targeting the differential diagnosis of MCS±. Main results. Frontal MSS widths (p < 0.05), as well as those derived from the left centro-temporal network (C3: p = 0.018, T3: p = 0.017; T5: p = 0.003), were found to be significantly higher in the MCS+ cohort. The best-performing solution was the K-nearest neighbor model, with an aggregated test accuracy of 75.5% (median [IQR] AuROC over 100 executions 0.88 [0.02]). Coherently, the electrodes with the highest Shapley values were Fz and Cz, with four of the five top-ranked features belonging to the fronto-central network. Significance. MCS+ is a frequent condition associated with a notably better prognosis than MCS-. High fractality in the left centro-temporal network is coherent with the neurological networks involved in language function, characteristic of MCS+ patients. Using EEG-based interpretable algorithms to complement the differential diagnosis of consciousness may improve rehabilitation pathways and communication with caregivers.
Title: EEG fractal dimensions predict high-level behavioral responses in minimally conscious patients. Journal of Neural Engineering, 20(4).
Pub Date: 2023-08-24 | DOI: 10.1088/1741-2552/acef92
Li Zheng, Pan Liao, Xiuwen Wu, Miao Cao, Wei Cui, Lingxi Lu, Hui Xu, Linlin Zhu, Bingjiang Lyu, Xiongfei Wang, Pengfei Teng, Jing Wang, Simon Vogrin, Chris Plummer, Guoming Luan, Jia-Hong Gao
Objective. Magnetoencephalography (MEG) is a powerful non-invasive diagnostic modality for presurgical epilepsy evaluation. However, the clinical utility of MEG mapping for localising epileptic foci is limited by its low efficiency, high labour requirements, and considerable interoperator variability. To address these obstacles, we proposed a novel artificial intelligence-based automated magnetic source imaging (AMSI) pipeline for automated detection and localisation of epileptic sources from MEG data. Approach. To expedite the analysis of clinical MEG data from patients with epilepsy and reduce human bias, we developed an autolabelling method, a deep-learning model based on convolutional neural networks, and a hierarchical clustering method based on a perceptual hash algorithm, enabling the coregistration of MEG and magnetic resonance imaging, the detection and clustering of epileptic activity, and the localisation of epileptic sources in a highly automated manner. We tested the capability of the AMSI pipeline on MEG data from 48 epilepsy patients. Main results. The AMSI pipeline rapidly detected interictal epileptiform discharges with 93.31% ± 3.87% precision on a 35-patient dataset (with sevenfold patientwise cross-validation) and robustly rendered accurate localisation of epileptic activity, with a lobar concordance of 87.18% against interictal and ictal stereo-electroencephalography findings in a 13-patient dataset. We also showed that the AMSI pipeline accomplishes the necessary processes and delivers objective results within a much shorter time frame (∼12 min) than traditional manual processes (∼4 h). Significance. The AMSI pipeline promises to facilitate increased utilisation of MEG data in the clinical analysis of patients with epilepsy.
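The hierarchical clustering step above relies on a perceptual hash, which maps visually similar inputs to bit strings with a small Hamming distance. A minimal average-hash (aHash) sketch on a grayscale pixel matrix, with Hamming distance as the comparison metric (the tiny 4x4 "images" are illustrative, not the pipeline's actual activity maps):

```python
def average_hash(pixels):
    """aHash: threshold each pixel against the image mean, yielding a bit string.
    Visually similar images produce hashes differing in few bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(c1 != c2 for c1, c2 in zip(h1, h2))

# A dark-left / bright-right pattern, and a noisy copy of it.
img = [[10, 10, 200, 200],
       [10, 10, 200, 200],
       [10, 10, 200, 200],
       [10, 10, 200, 200]]
noisy = [[12, 9, 198, 205],
         [11, 10, 201, 199],
         [10, 12, 197, 202],
         [9, 11, 203, 200]]
print(hamming(average_hash(img), average_hash(noisy)))  # 0: same structure
```

Clustering then only needs a distance threshold on the hashes, which is far cheaper than comparing raw source maps pairwise.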
Title: An artificial intelligence-based pipeline for automated detection and localisation of epileptic sources from magnetoencephalography. Journal of Neural Engineering, 20(4).
Pub Date : 2023-08-23DOI: 10.1088/1741-2552/acee20
Stephen M Gordon, Jonathan R McDaniel, Kevin W King, Vernon J Lawhern, Jonathan Touryan
Objective.Currently, there exist very few ways to isolate cognitive processes, historically defined via highly controlled laboratory studies, in more ecologically valid contexts. Specifically, it remains unclear to what extent patterns of neural activity observed under such constraints actually manifest outside the laboratory in a manner that can be used to make accurate inferences about latent states, associated cognitive processes, or proximal behavior. Improving our understanding of when and how specific patterns of neural activity manifest in ecologically valid scenarios would provide validation for laboratory-based approaches that study similar neural phenomena in isolation and meaningful insight into the latent states that occur during complex tasks.Approach.Domain generalization methods, borrowed from the work of the brain-computer interface community, have the potential to capture high-dimensional patterns of neural activity in a way that can be reliably applied across experimental datasets to address this specific challenge. We previously used such an approach to decode phasic neural responses associated with visual target discrimination. Here, we extend that work to more tonic phenomena such as internal latent states. We use data from two highly controlled laboratory paradigms to train two separate domain-generalized models. We apply the trained models to an ecologically valid paradigm in which participants performed multiple, concurrent driving-related tasks while perched atop a six-degrees-of-freedom ride-motion simulator.Main Results.Using the pretrained models, we estimate latent state and the associated patterns of neural activity. 
As the patterns of neural activity become more similar to those patterns observed in the training data, we find changes in behavior and task performance that are consistent with the observations from the original, laboratory-based paradigms.Significance.These results lend ecological validity to the original, highly controlled, experimental designs and provide a methodology for understanding the relationship between neural activity and behavior during complex tasks.
{"title":"Decoding neural activity to assess individual latent state in ecologically valid contexts.","authors":"Stephen M Gordon, Jonathan R McDaniel, Kevin W King, Vernon J Lawhern, Jonathan Touryan","doi":"10.1088/1741-2552/acee20","DOIUrl":"https://doi.org/10.1088/1741-2552/acee20","url":null,"abstract":"<p><p><i>Objective.</i>Currently, there exists very few ways to isolate cognitive processes, historically defined via highly controlled laboratory studies, in more ecologically valid contexts. Specifically, it remains unclear as to what extent patterns of neural activity observed under such constraints actually manifest outside the laboratory in a manner that can be used to make accurate inferences about latent states, associated cognitive processes, or proximal behavior. Improving our understanding of when and how specific patterns of neural activity manifest in ecologically valid scenarios would provide validation for laboratory-based approaches that study similar neural phenomena in isolation and meaningful insight into the latent states that occur during complex tasks.<i>Approach.</i>Domain generalization methods, borrowed from the work of the brain-computer interface community, have the potential to capture high-dimensional patterns of neural activity in a way that can be reliably applied across experimental datasets in order to address this specific challenge. We previously used such an approach to decode phasic neural responses associated with visual target discrimination. Here, we extend that work to more tonic phenomena such as internal latent states. We use data from two highly controlled laboratory paradigms to train two separate domain-generalized models. 
We apply the trained models to an ecologically valid paradigm in which participants performed multiple, concurrent driving-related tasks while perched atop a six-degrees-of-freedom ride-motion simulator.<i>Main Results.</i>Using the pretrained models, we estimate latent state and the associated patterns of neural activity. As the patterns of neural activity become more similar to those patterns observed in the training data, we find changes in behavior and task performance that are consistent with the observations from the original, laboratory-based paradigms.<i>Significance.</i>These results lend ecological validity to the original, highly controlled, experimental designs and provide a methodology for understanding the relationship between neural activity and behavior during complex tasks.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":4.0,"publicationDate":"2023-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10064814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
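As a rough illustration of the domain-generalization idea described above (a model fit on pooled laboratory datasets, then applied unchanged to data from a new context), the sketch below uses per-domain standardisation and a shared least-squares linear readout. Both choices are simplifying assumptions for illustration, not the authors' decoder:

```python
import numpy as np

def zscore_per_domain(X):
    """Per-domain standardisation: removes domain-specific offset and scale
    so a single linear readout can transfer across recording sessions."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

def fit_linear_readout(domains):
    """Pool standardised features from several source domains (each a
    (features, labels) pair) and fit one shared least-squares readout."""
    Xs = np.vstack([zscore_per_domain(X) for X, _ in domains])
    ys = np.concatenate([y for _, y in domains])
    Xb = np.hstack([Xs, np.ones((len(Xs), 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Xb, ys, rcond=None)
    return w

def predict(w, X_new):
    """Apply the frozen readout to an unseen domain, using the same
    per-domain normalisation at test time."""
    Xb = np.hstack([zscore_per_domain(X_new), np.ones((len(X_new), 1))])
    return (Xb @ w > 0.5).astype(int)
```

The point of the sketch: the readout is never refit on the target domain; only the cheap, label-free normalisation adapts, which is what makes cross-dataset application possible.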
Pub Date : 2023-08-23DOI: 10.1088/1741-2552/ace933
Shuqing Yin, Yang Li, Ruoyu Lu, Lihua Guo, Yansheng Wang, Chong Liu, Jingmin Li
Objective. Three-dimensional micro-nano electrodes (MNEs) with vertical nanopillar arrays distributed on the surface play an increasingly important role in neural science research. The geometric parameters of the nanopillar array and the cell adhesion state on the nanopillar array are factors that may affect MNE recording. However, the quantitative relationship between these parameters and the signal-to-noise ratio (SNR) remains unclear. This paper establishes a cell-MNE interface SNR model and derives the mathematical relationship between the above parameters and SNR.Approach. An equivalent electrical circuit and numerical simulation are used to study the sensing performance of the cell-electrode interface. The adhesion state of cells on the MNE is quantified as an engulfment percentage, and an equivalent cleft width is proposed to describe the signal loss caused by clefts between the cell membrane and the electrode surface.Main results. Whether or not the planar substrate is insulated, the SNR of an MNE exceeds that of a planar microelectrode only when the engulfment percentage exceeds a certain value. At the maximum engulfment percentage, the spacing and height of the nanopillars should be minimized, and their radius maximized, for better signal quality.Significance. The model clarifies the mechanism by which nanopillar arrays improve SNR and provides a theoretical basis for the design of such nanopillar neural electrodes.
{"title":"A cell-electrode interface signal-to-noise ratio model for 3D micro-nano electrode.","authors":"Shuqing Yin, Yang Li, Ruoyu Lu, Lihua Guo, Yansheng Wang, Chong Liu, Jingmin Li","doi":"10.1088/1741-2552/ace933","DOIUrl":"https://doi.org/10.1088/1741-2552/ace933","url":null,"abstract":"<p><p><i>Objective</i>. Three-dimensional micro-nano electrodes (MNEs) with the vertical nanopillar array distributed on the surface play an increasingly important role in neural science research. The geometric parameters of the nanopillar array and the cell adhesion state on the nanopillar array are the factors that may affect the MNE recording. However, the quantified relationship between these parameters and the signal-to-noise ratio (SNR) is still unclear. This paper establishes a cell-MNE interface SNR model and obtains the mathematical relationship between the above parameters and SNR.<i>Approach</i>. The equivalent electrical circuit and numerical simulation are used to study the sensing performance of the cell-electrode interface. The adhesion state of cells on MNE is quantified as engulfment percentage, and an equivalent cleft width is proposed to describe the signal loss caused by clefts between the cell membrane and the electrode surface.<i>Main results</i>. Whether the planar substrate is insulated or not, the SNR of MNE is greater than planar microelectrode only when the engulfment percentage is greater than a certain value. Under the premise of maximum engulfment percentage, the spacing and height of nanopillars should be minimized, and the radius of the nanopillar should be maximized for better signal quality.<i>Significance</i>. 
The model clarifies the mechanism by which nanopillar arrays improve SNR and provides a theoretical basis for the design of such nanopillar neural electrodes.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":4.0,"publicationDate":"2023-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10061904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
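A toy point-contact version of such a cell-electrode SNR model can be written down directly: the source signal develops across the seal resistance, is divided by the capacitive electrode impedance, and is compared with the seal's thermal noise. The seal-resistance scaling, double-layer capacitance value, and voltage-divider form below are illustrative assumptions, not the paper's equivalent circuit:

```python
import numpy as np

K_B, T_BODY = 1.380649e-23, 310.0  # Boltzmann constant (J/K), body temperature (K)

def electrode_area(radius, height, n_pillars, base_area):
    """Active electrode area: planar base plus nanopillar sidewalls
    (pillar caps replace the base footprint they stand on)."""
    return base_area + n_pillars * 2.0 * np.pi * radius * height

def snr_db(v_source, cleft_width, area, c_dl=0.2, rho=0.7, f=1e3, bw=1e4):
    """Toy SNR in dB. c_dl: double-layer capacitance per area (F/m^2);
    rho: medium resistivity (ohm*m); f: signal frequency (Hz);
    bw: noise bandwidth (Hz). All parameter values are illustrative."""
    r_seal = rho / (2.0 * np.pi * cleft_width)   # narrower cleft -> higher seal R
    z_e = 1.0 / (2.0 * np.pi * f * c_dl * area)  # larger area -> lower |Z_e|
    v_rec = v_source * r_seal / (r_seal + z_e)   # divider: recorded fraction
    v_noise = np.sqrt(4.0 * K_B * T_BODY * r_seal * bw)  # seal thermal noise
    return 20.0 * np.log10(v_rec / v_noise)
```

This sketch captures only the area and equivalent-cleft-width effects; the paper's full model accounts for geometry dependencies (e.g. of pillar height and spacing) that this divider omits.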
Pub Date : 2023-08-23DOI: 10.1088/1741-2552/acef94
Jamie A O'Reilly, Judy D Zhu, Paul Sowman
Objective. To use a recurrent neural network (RNN) to reconstruct neural activity responsible for generating noninvasively measured electromagnetic signals.Approach. Output weights of an RNN were fixed as the lead field matrix from volumetric source space computed using the boundary element method with co-registered structural magnetic resonance images and magnetoencephalography (MEG). Initially, the network was trained to minimise mean-squared-error loss between its outputs and MEG signals, causing activations in the penultimate layer to converge towards putative neural source activations. Subsequently, L1 regularisation was applied to the final hidden layer, and the model was fine-tuned, causing it to favour more focused activations. Estimated source signals were then obtained from the outputs of the last hidden layer. We developed and validated this approach with simulations before applying it to real MEG data, comparing performance with beamformers, minimum-norm estimate, and mixed-norm estimate source reconstruction methods.Main results. The proposed RNN method had higher output signal-to-noise ratios and comparable correlation and error between estimated and simulated sources. Reconstructed MEG signals were also equal or superior to the other methods regarding their similarity to ground-truth. When applied to MEG data recorded during an auditory roving oddball experiment, source signals estimated with the RNN were generally biophysically plausible and consistent with expectations from the literature.Significance. This work builds on recent developments of RNNs for modelling event-related neural responses by incorporating biophysical constraints from the forward model, thus taking a significant step towards greater biological realism and introducing the possibility of exploring how input manipulations may influence localised neural activity.
{"title":"Localized estimation of electromagnetic sources underlying event-related fields using recurrent neural networks.","authors":"Jamie A O'Reilly, Judy D Zhu, Paul Sowman","doi":"10.1088/1741-2552/acef94","DOIUrl":"https://doi.org/10.1088/1741-2552/acef94","url":null,"abstract":"<p><p><i>Objective</i>. To use a recurrent neural network (RNN) to reconstruct neural activity responsible for generating noninvasively measured electromagnetic signals.<i>Approach</i>. Output weights of an RNN were fixed as the lead field matrix from volumetric source space computed using the boundary element method with co-registered structural magnetic resonance images and magnetoencephalography (MEG). Initially, the network was trained to minimise mean-squared-error loss between its outputs and MEG signals, causing activations in the penultimate layer to converge towards putative neural source activations. Subsequently, L1 regularisation was applied to the final hidden layer, and the model was fine-tuned, causing it to favour more focused activations. Estimated source signals were then obtained from the outputs of the last hidden layer. We developed and validated this approach with simulations before applying it to real MEG data, comparing performance with beamformers, minimum-norm estimate, and mixed-norm estimate source reconstruction methods.<i>Main results</i>. The proposed RNN method had higher output signal-to-noise ratios and comparable correlation and error between estimated and simulated sources. Reconstructed MEG signals were also equal or superior to the other methods regarding their similarity to ground-truth. When applied to MEG data recorded during an auditory roving oddball experiment, source signals estimated with the RNN were generally biophysically plausible and consistent with expectations from the literature.<i>Significance</i>. 
This work builds on recent developments of RNNs for modelling event-related neural responses by incorporating biophysical constraints from the forward model, thus taking a significant step towards greater biological realism and introducing the possibility of exploring how input manipulations may influence localised neural activity.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":4.0,"publicationDate":"2023-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10420682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
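The core trick above (freezing the output weights to the lead field so that the penultimate activations converge toward source activations) has a linear skeleton that can be illustrated directly: with the output layer fixed to L, fit the hidden activations by gradient descent on the MSE, with an optional L1 term playing the role of the sparsity-favouring fine-tuning stage. The step size, iteration count, and penalty weight are assumptions; the paper's recurrent layers sit upstream of this step:

```python
import numpy as np

def estimate_sources(meg, L, n_iter=2000, l1=0.0, lr=None):
    """Fit source activations S so that L @ S reproduces the MEG data,
    with the 'output weights' L (n_sensors x n_sources) held fixed.
    Plain gradient descent on MSE; l1 > 0 adds a sparsity-favouring
    subgradient that pushes estimates toward more focal activity."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(L, 2) ** 2  # stable step for the MSE term
    S = np.zeros((L.shape[1], meg.shape[1]))
    for _ in range(n_iter):
        resid = L @ S - meg                    # sensor-space reconstruction error
        S -= lr * (L.T @ resid + l1 * np.sign(S))
    return S
```

Starting from zero with only the MSE term, gradient descent converges to the minimum-norm solution; the L1 term then trades a little sensor-space fit for more focal source estimates.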