Neuromorphic applications in medicine.
Pub Date: 2023-08-22. DOI: 10.1088/1741-2552/aceca3
Khaled Aboumerhi, Amparo Güemes, Hongtao Liu, Francesco Tenore, Ralph Etienne-Cummings
In recent years, there has been a growing demand for miniaturization, low power consumption, quick treatments, and non-invasive clinical strategies in the healthcare industry. To meet these demands, healthcare professionals are seeking new technological paradigms that can improve diagnostic accuracy while ensuring patient compliance. Neuromorphic engineering, which uses neural models in hardware and software to replicate brain-like behaviors, can help usher in a new era of medicine by delivering low power, low latency, small footprint, and high bandwidth solutions. This paper provides an overview of recent neuromorphic advancements in medicine, including medical imaging and cancer diagnosis, processing of biosignals for diagnosis, and biomedical interfaces, such as motor, cognitive, and perception prostheses. For each section, we provide examples of how brain-inspired models can successfully compete with conventional artificial intelligence algorithms, demonstrating the potential of neuromorphic engineering to meet demands and improve patient outcomes. Lastly, we discuss current struggles in fitting neuromorphic hardware with non-neuromorphic technologies and propose potential solutions for future bottlenecks in hardware compatibility.
{"title":"Neuromorphic applications in medicine.","authors":"Khaled Aboumerhi, Amparo Güemes, Hongtao Liu, Francesco Tenore, Ralph Etienne-Cummings","doi":"10.1088/1741-2552/aceca3","DOIUrl":"https://doi.org/10.1088/1741-2552/aceca3","url":null,"abstract":"<p><p>In recent years, there has been a growing demand for miniaturization, low power consumption, quick treatments, and non-invasive clinical strategies in the healthcare industry. To meet these demands, healthcare professionals are seeking new technological paradigms that can improve diagnostic accuracy while ensuring patient compliance. Neuromorphic engineering, which uses neural models in hardware and software to replicate brain-like behaviors, can help usher in a new era of medicine by delivering low power, low latency, small footprint, and high bandwidth solutions. This paper provides an overview of recent neuromorphic advancements in medicine, including medical imaging and cancer diagnosis, processing of biosignals for diagnosis, and biomedical interfaces, such as motor, cognitive, and perception prostheses. For each section, we provide examples of how brain-inspired models can successfully compete with conventional artificial intelligence algorithms, demonstrating the potential of neuromorphic engineering to meet demands and improve patient outcomes. Lastly, we discuss current struggles in fitting neuromorphic hardware with non-neuromorphic technologies and propose potential solutions for future bottlenecks in hardware compatibility.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":4.0,"publicationDate":"2023-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10440904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modulating individual axons and axonal populations in the peripheral nerve using transverse intrafascicular multichannel electrodes.
Pub Date: 2023-08-22. DOI: 10.1088/1741-2552/aced20
Yuyang Xie, Peijun Qin, Tianruo Guo, Amr Al Abed, Nigel H Lovell, David Tsai
Objective. A transverse intrafascicular multichannel electrode (TIME) may offer advantages over more conventional cuff electrodes, including higher spatial selectivity and reduced stimulation charge requirements. However, the performance of TIMEs, especially with non-conventional stimulation waveforms, remains relatively unexplored. As part of our overarching goal of investigating the stimulation efficacy of TIMEs, we developed a computational toolkit that automates the creation and use of in silico nerve models with a TIME setup, solving nerve responses with cable equations and computing extracellular potentials with the finite element method. Approach. We began by implementing a flexible and scalable Python/MATLAB-based toolkit for automatically creating models of nerve stimulation in the hybrid NEURON/COMSOL ecosystem. We then developed a sciatic nerve model containing 14 fascicles with 1,170 myelinated (A-type, 30%) and unmyelinated (C-type, 70%) fibers to study fiber responses over a variety of TIME arrangements (monopolar and hexapolar) and stimulation waveforms (kilohertz stimulation and cathodic ramp modulation). Main results. Our toolkit obviates the conventional need to re-create the same nerve in two disparate modeling environments and automates bi-directional transfer of results. Our population-based simulations suggested that kilohertz stimuli selectively activate targeted C fibers near the stimulating electrodes but also tend to activate non-targeted A fibers further away. However, C-fiber selectivity can be enhanced by hexapolar TIME arrangements that confine the spatial extent of electrical stimuli. Improving upon prior findings, we devised a high-frequency waveform that incorporates a cathodic DC ramp to completely remove undesirable onset responses. Conclusion. Our toolkit allows agile, iterative design cycles involving the nerve and TIME, while minimizing potential operator errors during complex simulations. The nerve model created by our toolkit allowed us to study and optimize the design of next-generation intrafascicular implants for improved spatial and fiber-type selectivity.
{"title":"Modulating individual axons and axonal populations in the peripheral nerve using transverse intrafascicular multichannel electrodes.","authors":"Yuyang Xie, Peijun Qin, Tianruo Guo, Amr Al Abed, Nigel H Lovell, David Tsai","doi":"10.1088/1741-2552/aced20","DOIUrl":"https://doi.org/10.1088/1741-2552/aced20","url":null,"abstract":"<p><p><i>Objective.</i>A transverse intrafascicular multichannel electrode (TIME) may offer advantages over more conventional cuff electrodes including higher spatial selectivity and reduced stimulation charge requirements. However, the performance of TIME, especially in the context of non-conventional stimulation waveforms, remains relatively unexplored. As part of our overarching goal of investigating stimulation efficacy of TIME, we developed a computational toolkit that automates the creation and usage of<i>in silico</i>nerve models with TIME setup, which solves nerve responses using cable equations and computes extracellular potentials using finite element method.<i>Approach.</i>We began by implementing a flexible and scalable Python/MATLAB-based toolkit for automatically creating models of nerve stimulation in the hybrid NEURON/COMSOL ecosystems. We then developed a sciatic nerve model containing 14 fascicles with 1,170 myelinated (A-type, 30%) and unmyelinated (C-type, 70%) fibers to study fiber responses over a variety of TIME arrangements (monopolar and hexapolar) and stimulation waveforms (kilohertz stimulation and cathodic ramp modulation).<i>Main results.</i>Our toolkit obviates the conventional need to re-create the same nerve in two disparate modeling environments and automates bi-directional transfer of results. Our population-based simulations suggested that kilohertz stimuli provide selective activation of targeted C fibers near the stimulating electrodes but also tended to activate non-targeted A fibers further away. However, C fiber selectivity can be enhanced by hexapolar TIME arrangements that confined the spatial extent of electrical stimuli. Improved upon prior findings, we devised a high-frequency waveform that incorporates cathodic DC ramp to completely remove undesirable onset responses.<i>Conclusion.</i>Our toolkit allows agile, iterative design cycles involving the nerve and TIME, while minimizing the potential operator errors during complex simulation. The nerve model created by our toolkit allowed us to study and optimize the design of next-generation intrafascicular implants for improved spatial and fiber-type selectivity.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":4.0,"publicationDate":"2023-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10121118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EEG-CDILNet: a lightweight and accurate CNN network using circular dilated convolution for motor imagery classification.
Pub Date: 2023-08-21. DOI: 10.1088/1741-2552/acee1f
Tie Liang, Xionghui Yu, Xiaoguang Liu, Hongrui Wang, Xiuling Liu, Bin Dong
Objective. Combining motor imagery (MI) electroencephalography (EEG) signals with deep learning-based methods is an effective way to improve MI classification accuracy. However, deep learning-based methods often require many trainable parameters, so the trade-off between decoding performance and computational cost has been a persistent challenge in MI classification research. Approach. In the present study, we propose a new end-to-end convolutional neural network (CNN) model, the EEG-circular dilated convolution (CDIL) network, that accounts for both model size and classification accuracy. Specifically, depthwise-separable convolution is used to reduce the number of network parameters and to extract temporal and spatial features from the EEG signals. CDIL is then used to extract the time-varying deep features generated in the previous stage. Finally, we combine the features extracted from the two stages and apply global average pooling to further reduce the number of parameters and achieve accurate MI classification. The performance of the proposed model was verified using three publicly available datasets. Main results. The proposed model achieved average classification accuracies of 79.63% and 94.53% on the BCIIV2a and HGD four-class tasks, respectively, and 87.82% on the BCIIV2b two-class task. In particular, comparing the number of parameters, computation, and classification accuracy against other lightweight models confirmed that the proposed model achieves a better balance between decoding performance and computational cost. Furthermore, the structural feasibility of the proposed model was confirmed by ablation experiments and feature visualization. Significance. The results indicate that the proposed CNN model delivers high classification accuracy with modest computing resources and can be applied in MI classification research.
{"title":"EEG-CDILNet: a lightweight and accurate CNN network using circular dilated convolution for motor imagery classification.","authors":"Tie Liang, Xionghui Yu, Xiaoguang Liu, Hongrui Wang, Xiuling Liu, Bin Dong","doi":"10.1088/1741-2552/acee1f","DOIUrl":"https://doi.org/10.1088/1741-2552/acee1f","url":null,"abstract":"<p><p><i>Objective.</i>The combination of the motor imagery (MI) electroencephalography (EEG) signals and deep learning-based methods is an effective way to improve MI classification accuracy. However, deep learning-based methods often need too many trainable parameters. As a result, the trade-off between the network decoding performance and computational cost has always been an important challenge in the MI classification research.<i>Approach.</i>In the present study, we proposed a new end-to-end convolutional neural network (CNN) model called the EEG-circular dilated convolution (CDIL) network, which takes into account both the lightweight model and the classification accuracy. Specifically, the depth-separable convolution was used to reduce the number of network parameters and extract the temporal and spatial features from the EEG signals. CDIL was used to extract the time-varying deep features that were generated in the previous stage. Finally, we combined the features extracted from the two stages and used the global average pooling to further reduce the number of parameters, in order to achieve an accurate MI classification. The performance of the proposed model was verified using three publicly available datasets.<i>Main results.</i>The proposed model achieved an average classification accuracy of 79.63% and 94.53% for the BCIIV2a and HGD four-classification task, respectively, and 87.82% for the BCIIV2b two-classification task. In particular, by comparing the number of parameters, computation and classification accuracy with other lightweight models, it was confirmed that the proposed model achieved a better balance between the decoding performance and computational cost. Furthermore, the structural feasibility of the proposed model was confirmed by ablation experiments and feature visualization.<i>Significance.</i>The results indicated that the proposed CNN model presented high classification accuracy with less computing resources, and can be applied in the MI classification research.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":4.0,"publicationDate":"2023-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10064811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decoding articulatory and phonetic components of naturalistic continuous speech from the distributed language network.
Pub Date: 2023-08-14. DOI: 10.1088/1741-2552/ace9fb
Tessy M Thomas, Aditya Singh, Latane P Bullock, Daniel Liang, Cale W Morse, Xavier Scherschligt, John P Seymour, Nitin Tandon
Objective. Speech production relies on a widely distributed brain network. However, research and development of speech brain-computer interfaces (speech-BCIs) has typically focused on decoding speech only from superficial subregions readily accessible by subdural grid arrays, typically placed over the sensorimotor cortex. Alternatively, stereo-electroencephalography (sEEG) enables access to distributed brain regions using multiple depth electrodes with lower surgical risks, especially in patients with brain injuries resulting in aphasia and other speech disorders. Approach. To investigate the decoding potential of widespread electrode coverage across multiple cortical sites, we used a naturalistic continuous speech production task. We obtained neural recordings using sEEG from eight participants while they read sentences aloud. We trained linear classifiers to decode distinct speech components (articulatory components and phonemes) solely from broadband gamma activity and evaluated decoding performance using nested five-fold cross-validation. Main results. We achieved average classification accuracies of 18.7% across nine places of articulation (e.g. bilabials, palatals), 26.5% across five manner-of-articulation (MOA) labels (e.g. affricates, fricatives), and 4.81% across 38 phonemes. The highest classification accuracies achieved with a single large dataset were 26.3% for place of articulation, 35.7% for MOA, and 9.88% for phonemes. Electrodes that contributed high decoding power were distributed across multiple sulcal and gyral sites in both dominant and non-dominant hemispheres, including ventral sensorimotor, inferior frontal, superior temporal, and fusiform cortices. Rather than finding a distinct cortical locus for each speech component, we observed neural correlates of both articulatory and phonetic components in multiple hubs of a widespread language production network. Significance. These results reveal distributed cortical representations whose activity can enable decoding of speech components during continuous speech with this minimally invasive recording method, elucidating language neurobiology and neural targets for future speech-BCIs.
{"title":"Decoding articulatory and phonetic components of naturalistic continuous speech from the distributed language network.","authors":"Tessy M Thomas, Aditya Singh, Latane P Bullock, Daniel Liang, Cale W Morse, Xavier Scherschligt, John P Seymour, Nitin Tandon","doi":"10.1088/1741-2552/ace9fb","DOIUrl":"https://doi.org/10.1088/1741-2552/ace9fb","url":null,"abstract":"<p><p><i>Objective.</i>The speech production network relies on a widely distributed brain network. However, research and development of speech brain-computer interfaces (speech-BCIs) has typically focused on decoding speech only from superficial subregions readily accessible by subdural grid arrays-typically placed over the sensorimotor cortex. Alternatively, the technique of stereo-electroencephalography (sEEG) enables access to distributed brain regions using multiple depth electrodes with lower surgical risks, especially in patients with brain injuries resulting in aphasia and other speech disorders.<i>Approach.</i>To investigate the decoding potential of widespread electrode coverage in multiple cortical sites, we used a naturalistic continuous speech production task. We obtained neural recordings using sEEG from eight participants while they read aloud sentences. We trained linear classifiers to decode distinct speech components (articulatory components and phonemes) solely based on broadband gamma activity and evaluated the decoding performance using nested five-fold cross-validation.<i>Main Results.</i>We achieved an average classification accuracy of 18.7% across 9 places of articulation (e.g. bilabials, palatals), 26.5% across 5 manner of articulation (MOA) labels (e.g. affricates, fricatives), and 4.81% across 38 phonemes. The highest classification accuracies achieved with a single large dataset were 26.3% for place of articulation, 35.7% for MOA, and 9.88% for phonemes. Electrodes that contributed high decoding power were distributed across multiple sulcal and gyral sites in both dominant and non-dominant hemispheres, including ventral sensorimotor, inferior frontal, superior temporal, and fusiform cortices. Rather than finding a distinct cortical locus for each speech component, we observed neural correlates of both articulatory and phonetic components in multiple hubs of a widespread language production network.<i>Significance.</i>These results reveal the distributed cortical representations whose activity can enable decoding speech components during continuous speech through the use of this minimally invasive recording method, elucidating language neurobiology and neural targets for future speech-BCIs.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":4.0,"publicationDate":"2023-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10008637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AutoEER: automatic EEG-based emotion recognition with neural architecture search.
Pub Date: 2023-08-14. DOI: 10.1088/1741-2552/aced22
Yixiao Wu, Huan Liu, Dalin Zhang, Yuzhe Zhang, Tianyu Lou, Qinghua Zheng
Objective. Emotion recognition based on electroencephalography (EEG) is garnering increasing attention among researchers due to its wide-ranging applications and the rise of portable devices. Deep learning-based models have demonstrated impressive progress in EEG-based emotion recognition, thanks to their exceptional feature extraction capabilities. However, the manual design of deep networks is time-consuming and labour-intensive, and the inherent variability of EEG signals necessitates extensive customization of models, exacerbating these challenges. Neural architecture search (NAS) methods can alleviate the need for excessive manual involvement by automatically discovering the optimal network structure for EEG-based emotion recognition. Approach. We propose AutoEER (Automatic EEG-based Emotion Recognition), a framework that leverages tailored NAS to automatically discover the optimal network structure for EEG-based emotion recognition. We carefully design a customized search space specifically for EEG signals, incorporating operators that effectively capture both temporal and spatial properties of EEG. Additionally, we employ a novel parameterization strategy to derive the optimal network structure from the proposed search space. Main results. Extensive experimentation on emotion classification tasks using two benchmark datasets, DEAP and SEED, has demonstrated that AutoEER outperforms state-of-the-art manual deep and NAS models. Specifically, compared to the best competing model WangNAS on the accuracy (ACC) metric, AutoEER improves average accuracy across the two datasets by 0.93%; compared to the best competing model LiNAS on the F1 score (F1) metric, AutoEER improves average F1 score across the two datasets by 4.51%. Furthermore, the architectures generated by AutoEER exhibit superior transferability compared to alternative methods. Significance. AutoEER represents a novel approach to EEG analysis, utilizing a specialized search space to design models tailored to individual subjects. This approach significantly reduces the labour and time costs associated with manual model construction in EEG research, holding great promise for advancing the field and streamlining research practices.
{"title":"AutoEER: automatic EEG-based emotion recognition with neural architecture search.","authors":"Yixiao Wu, Huan Liu, Dalin Zhang, Yuzhe Zhang, Tianyu Lou, Qinghua Zheng","doi":"10.1088/1741-2552/aced22","DOIUrl":"https://doi.org/10.1088/1741-2552/aced22","url":null,"abstract":"<p><p><i>Objective.</i>Emotion recognition based on electroencephalography (EEG) is garnering increasing attention among researchers due to its wide-ranging applications and the rise of portable devices. Deep learning-based models have demonstrated impressive progress in EEG-based emotion recognition, thanks to their exceptional feature extraction capabilities. However, the manual design of deep networks is time-consuming and labour-intensive. Moreover, the inherent variability of EEG signals necessitates extensive customization of models, exacerbating these challenges. Neural architecture search (NAS) methods can alleviate the need for excessive manual involvement by automatically discovering the optimal network structure for EEG-based emotion recognition.<i>Approach.</i>In this regard, we propose AutoEER (<b>Auto</b>matic<b>E</b>EG-based<b>E</b>motion<b>R</b>ecognition), a framework that leverages tailored NAS to automatically discover the optimal network structure for EEG-based emotion recognition. We carefully design a customized search space specifically for EEG signals, incorporating operators that effectively capture both temporal and spatial properties of EEG. Additionally, we employ a novel parameterization strategy to derive the optimal network structure from the proposed search space.<i>Main results.</i>Extensive experimentation on emotion classification tasks using two benchmark datasets, DEAP and SEED, has demonstrated that AutoEER outperforms state-of-the-art manual deep and NAS models. Specifically, compared to the optimal model WangNAS on the accuracy (ACC) metric, AutoEER improves its average accuracy on all datasets by 0.93%. Similarly, compared to the optimal model LiNAS on the F1 Ssore (F1) metric, AutoEER improves its average F1 score on all datasets by 4.51%. Furthermore, the architectures generated by AutoEER exhibit superior transferability compared to alternative methods.<i>Significance.</i>AutoEER represents a novel approach to EEG analysis, utilizing a specialized search space to design models tailored to individual subjects. This approach significantly reduces the labour and time costs associated with manual model construction in EEG research, holding great promise for advancing the field and streamlining research practices.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":4.0,"publicationDate":"2023-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10009138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evoked compound action potentials during spinal cord stimulation: effects of posture and pulse width on signal features and neural activation within the spinal cord.
Pub Date: 2023-08-11. DOI: 10.1088/1741-2552/aceca4
Meagan K Brucker-Hahn, Hans J Zander, Andrew J Will, Jayesh C Vallabh, Jason S Wolff, David A Dinsmoor, Scott F Lempka
Objective. Evoked compound action potential (ECAP) recordings have emerged as a quantitative measure of the neural response during spinal cord stimulation (SCS) to treat pain. However, utilization of ECAP recordings to optimize stimulation efficacy requires an understanding of the factors influencing these recordings and their relationship to the underlying neural activation. Approach. We acquired a library of ECAP recordings from 56 patients over a wide assortment of postures and stimulation parameters, and then processed these signals to quantify several features of the recordings (e.g., ECAP threshold (ET), amplitude, latency, growth rate). We compared our experimental findings against a computational model that examined the effect of variable distances between the spinal cord and the SCS electrodes. Main results. Postural shifts strongly influenced the experimental ECAP recordings, with a 65.7% lower ET and 178.5% higher growth rate when supine versus seated. The computational model exhibited similar trends, with a 71.9% lower ET and 231.5% higher growth rate for a 2.0 mm cerebrospinal fluid (CSF) layer (representing a supine posture) versus a 4.4 mm CSF layer (representing a prone posture). Furthermore, the computational model demonstrated that constant ECAP amplitudes may not equate to a constant degree of neural activation. Significance. These results demonstrate large variability across all ECAP metrics and the inability of a constant ECAP amplitude to provide constant neural activation. These findings are critical for improving the delivery, efficacy, and robustness of clinical SCS technologies that use ECAP recordings to provide closed-loop stimulation.
{"title":"Evoked compound action potentials during spinal cord stimulation: effects of posture and pulse width on signal features and neural activation within the spinal cord.","authors":"Meagan K Brucker-Hahn, Hans J Zander, Andrew J Will, Jayesh C Vallabh, Jason S Wolff, David A Dinsmoor, Scott F Lempka","doi":"10.1088/1741-2552/aceca4","DOIUrl":"https://doi.org/10.1088/1741-2552/aceca4","url":null,"abstract":"<p><p><i>Objective.</i>Evoked compound action potential (ECAP) recordings have emerged as a quantitative measure of the neural response during spinal cord stimulation (SCS) to treat pain. However, utilization of ECAP recordings to optimize stimulation efficacy requires an understanding of the factors influencing these recordings and their relationship to the underlying neural activation.<i>Approach.</i>We acquired a library of ECAP recordings from 56 patients over a wide assortment of postures and stimulation parameters, and then processed these signals to quantify several aspects of these recordings (e.g., ECAP threshold (ET), amplitude, latency, growth rate). We compared our experimental findings against a computational model that examined the effect of variable distances between the spinal cord and the SCS electrodes.<i>Main results.</i>Postural shifts strongly influenced the experimental ECAP recordings, with a 65.7% lower ET and 178.5% higher growth rate when supine versus seated. The computational model exhibited similar trends, with a 71.9% lower ET and 231.5% higher growth rate for a 2.0 mm cerebrospinal fluid (CSF) layer (representing a supine posture) versus a 4.4 mm CSF layer (representing a prone posture). Furthermore, the computational model demonstrated that constant ECAP amplitudes may not equate to a constant degree of neural activation.<i>Significance.</i>These results demonstrate large variability across all ECAP metrics and the inability of a constant ECAP amplitude to provide constant neural activation. These results are critical to improve the delivery, efficacy, and robustness of clinical SCS technologies utilizing these ECAP recordings to provide closed-loop stimulation.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":4.0,"publicationDate":"2023-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10387174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The seizure severity score: a quantitative tool for comparing seizures and their response to therapy.
Pub Date: 2023-08-10. DOI: 10.1088/1741-2552/aceca1
Akash R Pattnaik, Nina J Ghosn, Ian Z Ong, Andrew Y Revell, William K S Ojemann, Brittany H Scheid, Georgia Georgostathi, John M Bernabei, Erin C Conrad, Saurabh R Sinha, Kathryn A Davis, Nishant Sinha, Brian Litt
Objective. Epilepsy is a neurological disorder characterized by recurrent seizures, which vary widely in severity, from clinically silent to prolonged convulsions. Measuring severity is crucial for guiding therapy, particularly when complete control is not possible. Seizure diaries, the current standard for guiding therapy, are insensitive to the duration of events and the propagation of seizure activity across the brain. We present a quantitative seizure severity score that incorporates electroencephalography (EEG) and clinical data, and we demonstrate how it can guide epilepsy therapies. Approach. We collected intracranial EEG and clinical semiology data from 54 epilepsy patients who had 256 seizures during invasive, in-hospital presurgical evaluation. We applied an absolute-slope algorithm to the EEG recordings to identify seizing channels. From these data, we developed a seizure severity score that combines seizure duration, spread, and semiology using non-negative matrix factorization. For validation, we assessed its correlation with independent measures of epilepsy burden: seizure types, epilepsy duration, a pharmacokinetic model of medication load, and response to epilepsy surgery. We also investigated the association between the seizure severity score and preictal network features. Main results. The seizure severity score augmented clinical classification by objectively delineating seizure duration and spread from recordings in the available electrodes. Lower preictal medication loads were associated with higher seizure severity scores (p = 0.018, 97.5% confidence interval = [-1.242, -0.116]), and lower pre-surgical severity was associated with better surgical outcome (p = 0.042). In 85% of patients with multiple seizure types, greater preictal change from baseline was associated with higher severity. Significance. We present a quantitative measure of seizure severity that includes EEG and clinical features, validated on gold-standard in-patient recordings. We provide a framework for extending the tool's utility to ambulatory EEG devices, for linking it to seizure semiology measured by wearable sensors, and for advancing data-driven epilepsy care.
{"title":"The seizure severity score: a quantitative tool for comparing seizures and their response to therapy.","authors":"Akash R Pattnaik, Nina J Ghosn, Ian Z Ong, Andrew Y Revell, William K S Ojemann, Brittany H Scheid, Georgia Georgostathi, John M Bernabei, Erin C Conrad, Saurabh R Sinha, Kathryn A Davis, Nishant Sinha, Brian Litt","doi":"10.1088/1741-2552/aceca1","DOIUrl":"10.1088/1741-2552/aceca1","url":null,"abstract":"<p><p><i>Objective.</i>Epilepsy is a neurological disorder characterized by recurrent seizures which vary widely in severity, from clinically silent to prolonged convulsions. Measuring severity is crucial for guiding therapy, particularly when complete control is not possible. Seizure diaries, the current standard for guiding therapy, are insensitive to the duration of events or the propagation of seizure activity across the brain. We present a quantitative seizure severity score that incorporates electroencephalography (EEG) and clinical data and demonstrate how it can guide epilepsy therapies.<i>Approach.</i>We collected intracranial EEG and clinical semiology data from 54 epilepsy patients who had 256 seizures during invasive, in-hospital presurgical evaluation. We applied an absolute slope algorithm to EEG recordings to identify seizing channels. From this data, we developed a seizure severity score that combines seizure duration, spread, and semiology using non-negative matrix factorization. For validation, we assessed its correlation with independent measures of epilepsy burden: seizure types, epilepsy duration, a pharmacokinetic model of medication load, and response to epilepsy surgery. We investigated the association between the seizure severity score and preictal network features.<i>Main results.</i>The seizure severity score augmented clinical classification by objectively delineating seizure duration and spread from recordings in available electrodes. Lower preictal medication loads were associated with higher seizure severity scores (<i>p</i>= 0.018, 97.5% confidence interval = [-1.242, -0.116]) and lower pre-surgical severity was associated with better surgical outcome (<i>p</i>= 0.042). In 85% of patients with multiple seizure types, greater preictal change from baseline was associated with higher severity.<i>Significance.</i>We present a quantitative measure of seizure severity that includes EEG and clinical features, validated on gold standard in-patient recordings. We provide a framework for extending our tool's utility to ambulatory EEG devices, for linking it to seizure semiology measured by wearable sensors, and as a tool to advance data-driven epilepsy care.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":3.7,"publicationDate":"2023-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11250994/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10365447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel simulation paradigm utilising MRI-derived phosphene maps for cortical prosthetic vision.
Pub Date: 2023-08-10. DOI: 10.1088/1741-2552/aceca2
Haozhe Zac Wang, Yan Tat Wong
Objective. We developed a realistic simulation paradigm for cortical prosthetic vision and investigated whether we could improve visual performance using a novel clustering algorithm. Approach. Cortical visual prostheses have been developed to restore sight by stimulating the visual cortex. To investigate the visual experience, previous studies have used uniform phosphene maps, which may not accurately capture the phosphene distributions actually generated in implant recipients. The current simulation paradigm was based on the Human Connectome Project retinotopy dataset and the placement of implants on cortices from magnetic resonance imaging scans. Five unique retinotopic maps were derived using this method. To improve performance on these retinotopic maps, we enabled head scanning, and a density-based clustering algorithm was used to relocate the centroids of visual stimuli. The impact of these improvements on visual detection performance was then tested. Using spatially evenly distributed maps as a control, we recruited ten subjects and evaluated their performance across five sessions on the Berkeley Rudimentary Visual Acuity test and an object recognition task. Main results. Performance on control maps was significantly better than on retinotopic maps in both tasks. Both head scanning and the clustering algorithm showed potential for improving visual ability across multiple sessions in the object recognition task. Significance. The current paradigm is the first to simulate the experience of cortical prosthetic vision based on brain scans and implant placement, capturing the spatial distribution of phosphenes more realistically. Utilisation of evenly distributed maps may overestimate the performance that visual prostheses can restore. This simulation paradigm could be used in clinical practice when planning where best to implant cortical visual prostheses.
{"title":"A novel simulation paradigm utilising MRI-derived phosphene maps for cortical prosthetic vision.","authors":"Haozhe Zac Wang, Yan Tat Wong","doi":"10.1088/1741-2552/aceca2","DOIUrl":"10.1088/1741-2552/aceca2","url":null,"abstract":"<p><p><i>Objective.</i>We developed a realistic simulation paradigm for cortical prosthetic vision and investigated whether we can improve visual performance using a novel clustering algorithm.<i>Approach.</i>Cortical visual prostheses have been developed to restore sight by stimulating the visual cortex. To investigate the visual experience, previous studies have used uniform phosphene maps, which may not accurately capture generated phosphene map distributions of implant recipients. The current simulation paradigm was based on the Human Connectome Project retinotopy dataset and the placement of implants on the cortices from magnetic resonance imaging scans. Five unique retinotopic maps were derived using this method. To improve performance on these retinotopic maps, we enabled head scanning and a density-based clustering algorithm was then used to relocate centroids of visual stimuli. The impact of these improvements on visual detection performance was tested. Using spatially evenly distributed maps as a control, we recruited ten subjects and evaluated their performance across five sessions on the Berkeley Rudimentary Visual Acuity test and the object recognition task.<i>Main results.</i>Performance on control maps is significantly better than on retinotopic maps in both tasks. Both head scanning and the clustering algorithm showed the potential of improving visual ability across multiple sessions in the object recognition task.<i>Significance.</i>The current paradigm is the first that simulates the experience of cortical prosthetic vision based on brain scans and implant placement, which captures the spatial distribution of phosphenes more realistically. Utilisation of evenly distributed maps may overestimate the performance that visual prosthetics can restore. This simulation paradigm could be used in clinical practice when making plans for where best to implant cortical visual prostheses.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":4.0,"publicationDate":"2023-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10594539/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10387176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated sleep classification with chronic neural implants in freely behaving canines.
Pub Date: 2023-08-10. DOI: 10.1088/1741-2552/aced21
Filip Mivalt, Vladimir Sladky, Samuel Worrell, Nicholas M Gregg, Irena Balzekas, Inyong Kim, Su-Youne Chang, Daniel R Montonye, Andrea Duque-Lopez, Martina Krakorova, Tereza Pridalova, Kamila Lepkova, Benjamin H Brinkmann, Kai J Miller, Jamie J Van Gompel, Timothy Denison, Timothy J Kaufmann, Steven A Messina, Erik K St Louis, Vaclav Kremen, Gregory A Worrell
Objective. Long-term intracranial electroencephalography (iEEG) in freely behaving animals provides valuable electrophysiological information and, when correlated with animal behavior, is useful for investigating brain function. Approach. Here we develop and validate an automated iEEG-based sleep-wake classifier for canines using expert sleep labels derived from simultaneous video, accelerometry, scalp electroencephalography (EEG), and iEEG monitoring. The video, scalp EEG, and accelerometry recordings were manually scored by a board-certified sleep expert into sleep-wake state categories: awake, rapid-eye-movement (REM) sleep, and three non-REM sleep categories (NREM1, 2, 3). The expert labels were used to train, validate, and test a fully automated iEEG sleep-wake classifier in freely behaving canines. Main results. The iEEG-based classifier achieved an overall classification accuracy of 0.878 ± 0.055 and a Cohen's kappa score of 0.786 ± 0.090. Subsequently, we used the automated iEEG-based classifier to investigate sleep over multiple weeks in freely behaving canines. The results show that the dogs spend a significant amount of the day sleeping, but daytime nap sleep differs from night-time sleep in three key characteristics: during the day there are fewer NREM sleep cycles (10.81 ± 2.34 cycles per day vs. 22.39 ± 3.88 cycles per night; p < 0.001), NREM cycle durations are shorter (13.83 ± 8.50 min per day vs. 15.09 ± 8.55 min per night; p < 0.001), and dogs spend a greater proportion of sleep time in NREM sleep and less in REM sleep than at night (NREM 0.88 ± 0.09, REM 0.12 ± 0.09 per day vs. NREM 0.80 ± 0.08, REM 0.20 ± 0.08 per night; p < 0.001). Significance. These results support the feasibility and accuracy of automated iEEG sleep-wake classifiers for canine behavior investigations.
{"title":"Automated sleep classification with chronic neural implants in freely behaving canines.","authors":"Filip Mivalt, Vladimir Sladky, Samuel Worrell, Nicholas M Gregg, Irena Balzekas, Inyong Kim, Su-Youne Chang, Daniel R Montonye, Andrea Duque-Lopez, Martina Krakorova, Tereza Pridalova, Kamila Lepkova, Benjamin H Brinkmann, Kai J Miller, Jamie J Van Gompel, Timothy Denison, Timothy J Kaufmann, Steven A Messina, Erik K St Louis, Vaclav Kremen, Gregory A Worrell","doi":"10.1088/1741-2552/aced21","DOIUrl":"10.1088/1741-2552/aced21","url":null,"abstract":"<p><p><i>Objective.</i>Long-term intracranial electroencephalography (iEEG) in freely behaving animals provides valuable electrophysiological information and when correlated with animal behavior is useful for investigating brain function.<i>Approach.</i>Here we develop and validate an automated iEEG-based sleep-wake classifier for canines using expert sleep labels derived from simultaneous video, accelerometry, scalp electroencephalography (EEG) and iEEG monitoring. The video, scalp EEG, and accelerometry recordings were manually scored by a board-certified sleep expert into sleep-wake state categories: awake, rapid-eye-movement (REM) sleep, and three non-REM sleep categories (NREM1, 2, 3). The expert labels were used to train, validate, and test a fully automated iEEG sleep-wake classifier in freely behaving canines.<i>Main results</i>. The iEEG-based classifier achieved an overall classification accuracy of 0.878 ± 0.055 and a Cohen's Kappa score of 0.786 ± 0.090. Subsequently, we used the automated iEEG-based classifier to investigate sleep over multiple weeks in freely behaving canines. The results show that the dogs spend a significant amount of the day sleeping, but the characteristics of daytime nap sleep differ from night-time sleep in three key characteristics: during the day, there are fewer NREM sleep cycles (10.81 ± 2.34 cycles per day vs. 22.39 ± 3.88 cycles per night;<i>p</i>< 0.001), shorter NREM cycle durations (13.83 ± 8.50 min per day vs. 15.09 ± 8.55 min per night;<i>p</i>< 0.001), and dogs spend a greater proportion of sleep time in NREM sleep and less time in REM sleep compared to night-time sleep (NREM 0.88 ± 0.09, REM 0.12 ± 0.09 per day vs. NREM 0.80 ± 0.08, REM 0.20 ± 0.08 per night;<i>p</i>< 0.001).<i>Significance.</i>These results support the feasibility and accuracy of automated iEEG sleep-wake classifiers for canine behavior investigations.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":3.7,"publicationDate":"2023-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10480092/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10538717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Auditory neural correlates and neuroergonomics of driving assistance in a simulated virtual environment.
Pub Date: 2023-08-03. DOI: 10.1088/1741-2552/ace79b
Halim I Baqapuri, Erik Roecher, Jana Zweerings, Stefan Wolter, Eike A Schmidt, Ruben C Gur, Klaus Mathiak
Objective. Driver assistance systems play an increasingly important role in modern vehicles. At the current level of technology, the driver must continuously supervise the driving and intervene whenever necessary when using driving assistance systems, so the driver's attentiveness plays an important role in this human-machine interaction. Our aim was to design a simple technical framework for studying neural correlates of driving situations in a functional magnetic resonance imaging (fMRI) setting, and in this work we assessed the feasibility of our proposed platform. Methods. We proposed a virtual environment (VE) simulation of driver assistance as a framework to investigate brain states related to partially automated driving. We focused on the processing of auditory signals during different driving scenarios, as these have been shown to be advantageous as warning stimuli in driving situations. This provided the groundwork to study brain auditory attentional networks under varying environmental demands in an fMRI setting. To this end, we conducted a study with 20 healthy participants to assess the feasibility of the VE simulation. Results. We demonstrated that the proposed VE can elicit driving-related brain activation patterns. Relevant driving events evoked, in particular, responses in the bilateral auditory, sensory-motor, visual, and insular cortices, which are related to perceptual and behavioral processes during driving assistance. Conceivably, attentional mechanisms increased somatosensory integration and reduced interoception, which are relevant for requesting interactions during partially automated driving. Significance. In modern vehicles, driver assistance technologies are playing an increasingly prevalent role, and it is important to study the interaction between these systems and drivers' attentional responses to aid future optimization of assistance systems. The proposed VE provides a foundational first step in this endeavor; such simulated VEs provide a safe setting for experimenting with driving behaviors in a semi-naturalistic environment.
{"title":"Auditory neural correlates and neuroergonomics of driving assistance in a simulated virtual environment.","authors":"Halim I Baqapuri, Erik Roecher, Jana Zweerings, Stefan Wolter, Eike A Schmidt, Ruben C Gur, Klaus Mathiak","doi":"10.1088/1741-2552/ace79b","DOIUrl":"https://doi.org/10.1088/1741-2552/ace79b","url":null,"abstract":"<p><p><i>Objective.</i>Driver assistance systems play an increasingly important role in modern vehicles. In the current level of technology, the driver must continuously supervise the driving and intervene whenever necessary when using driving assistance systems. The driver's attentiveness plays an important role in this human-machine interaction. Our aim was to design a simplistic technical framework for studying neural correlates of driving situations in a functional magnetic resonance imaging (fMRI) setting. In this work we assessed the feasibility of our proposed platform.<i>Methods.</i>We proposed a virtual environment (VE) simulation of driver assistance as a framework to investigate brain states related to partially automated driving. We focused on the processing of auditory signals during different driving scenarios as they have been shown to be advantageous as warning stimuli in driving situations. This provided the necessary groundwork to study brain auditory attentional networks under varying environmental demands in an fMRI setting. To this end, we conducted a study with 20 healthy participants to assess the feasibility of the VE simulation.<i>Results.</i>We demonstrated that the proposed VE can elicit driving related brain activation patterns. Relevant driving events evoked, in particular, responses in the bilateral auditory, sensory-motor, visual and insular cortices, which are related to perceptual and behavioral processes during driving assistance. Conceivably, attentional mechanisms increased somatosensory integration and reduced interoception, which are relevant for requesting interactions during partially automated driving.<i>Significance.</i>In modern vehicles, driver assistance technologies are playing an increasingly prevalent role. It is important to study the interaction between these systems and drivers' attentional responses to aid in future optimizations of the assistance systems. The proposed VE provides a foundational first step in this endeavor. Such simulated VEs provide a safe setting for experimentation with driving behaviors in a semi-naturalistic environment.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":4.0,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9939173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}