
Latest publications from the International Journal of Neural Systems

Separating Inhibitory and Excitatory Responses of Epileptic Brain to Single-Pulse Electrical Stimulation.
IF 8, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-02-01; Epub Date: 2022-12-10. DOI: 10.1142/S0129065723500089
Sepehr Shirani, Antonio Valentin, Gonzalo Alarcon, Farhana Kazi, Saeid Sanei

To enable accurate recognition of neuronal excitability in an epileptic brain, for modeling or localization of the epileptic zone, the brain response to single-pulse electrical stimulation (SPES) is here decomposed into its constituent components using adaptive singular spectrum analysis (SSA). Given the response at the neuronal level, these components are expected to be the inhibitory and excitatory components. The prime objective is to thoroughly investigate the nature of delayed responses (elicited between 100 ms and 1 s after SPES) for localization of the epileptic zone. SSA is a powerful subspace signal analysis method for separating single-channel signals into their constituent uncorrelated components. The consistency of the results for both early and delayed brain responses verifies the usability of the approach.
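The basic SSA pipeline referred to above (embedding into a trajectory matrix, SVD, and diagonal averaging back to one series per component) can be sketched as follows. This is a minimal illustrative decomposition, not the paper's adaptive variant; `window` and `n_components` are assumed parameters.

```python
import numpy as np

def ssa_decompose(x, window, n_components):
    """Minimal singular spectrum analysis: embed, SVD, diagonal-average.

    Returns an (n_components, len(x)) array of uncorrelated components
    whose sum (over the full rank) reconstructs the input signal.
    """
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged copies of the signal as columns
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(n_components):
        Xi = s[i] * np.outer(U[:, i], Vt[i])  # rank-1 elementary matrix
        # Diagonal averaging (Hankelization) maps Xi back to a 1D series:
        # average every anti-diagonal, i.e. all entries with row+col == t
        comp = np.array([np.mean(Xi[::-1].diagonal(t - window + 1))
                         for t in range(n)])
        comps.append(comp)
    return np.array(comps)
```

Grouping the leading components then separates slow (e.g. inhibitory-like) from fast (excitatory-like) constituents of the response; the grouping rule is method-specific.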

Citations: 1
Attention-Based Convolutional Recurrent Deep Neural Networks for the Prediction of Response to Repetitive Transcranial Magnetic Stimulation for Major Depressive Disorder.
IF 8, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-02-01. DOI: 10.1142/S0129065723500077
Mohsen Sadat Shahabi, Ahmad Shalbaf, Behrooz Nobakhsh, Reza Rostami, Reza Kazemi

Repetitive transcranial magnetic stimulation (rTMS) has been proposed as an effective treatment for major depressive disorder (MDD). However, because of the suboptimal treatment outcome of rTMS, predicting the response to this technique is a crucial task. We developed a deep learning (DL) model to classify responders (R) and non-responders (NR). To this end, we assessed the pre-treatment EEG signals of 34 MDD patients and extracted effective connectivity (EC) among all electrodes in four frequency bands of the EEG signal. The two-dimensional EC maps are assembled into a rich connectivity image, and a sequence of these images is fed to the DL model. The DL framework was constructed from transfer learning (TL) models, namely the pre-trained convolutional neural networks (CNNs) VGG16, Xception, and EfficientNetB0. Long short-term memory (LSTM) cells equipped with an attention mechanism were then added on top of the TL models to fully exploit the spatiotemporal information of the EEG signal. Using leave-one-subject-out cross-validation (LOSO CV), Xception-BLSTM-Attention achieved the highest performance, with an accuracy of 98.86% and a specificity of 97.73%. Fusing these models into an ensemble based on optimized majority voting yielded an accuracy of 99.32% and a specificity of 98.34%. Therefore, the ensemble of TL-LSTM-Attention models can accurately predict the treatment outcome.
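The final fusion step, majority voting over the per-model predictions, can be illustrated with a small sketch. The per-model weights below are hypothetical stand-ins for whatever the paper's optimization produces; R is encoded as 1 and NR as 0.

```python
def weighted_majority_vote(predictions, weights):
    """Fuse per-model binary predictions by weighted majority voting.

    predictions: one list of 0/1 labels per model, all the same length.
    weights: one (assumed, pre-optimized) weight per model.
    Returns the fused label per test sample.
    """
    fused = []
    for sample_preds in zip(*predictions):  # one tuple per test sample
        score = {0: 0.0, 1: 0.0}
        for p, w in zip(sample_preds, weights):
            score[p] += w
        fused.append(max(score, key=score.get))
    return fused
```

With equal weights this reduces to plain majority voting; optimizing the weights lets stronger base models (e.g. Xception-BLSTM-Attention) dominate ties.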

Citations: 0
A Prediction Model for Normal Variation of Somatosensory Evoked Potential During Scoliosis Surgery.
IF 8, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-02-01. DOI: 10.1142/S0129065723500053
Ningbo Fei, Rong Li, Hongyan Cui, Yong Hu

Somatosensory evoked potential (SEP) monitoring has been commonly used intraoperatively to detect neurological deficits during scoliosis surgery. However, SEP usually shows enormous variation in response to patient-specific factors such as physiological parameters, leading to false warnings. This study proposes a prediction model to quantify SEP amplitude variation due to non-injury-related physiological changes in patients undergoing scoliosis surgery. Based on a hybrid network of attention-based long short-term memory (LSTM) and convolutional neural networks (CNNs), we develop a deep learning-based framework for predicting the SEP value in response to variation of physiological variables. The training and selection of model parameters were based on a 5-fold cross-validation scheme with mean square error (MSE) as the evaluation metric. On the test set, the proposed model obtained an MSE of 0.027[Formula: see text][Formula: see text] on left cortical SEP, 0.024[Formula: see text][Formula: see text] on left subcortical SEP, 0.031[Formula: see text][Formula: see text] on right cortical SEP, and 0.025[Formula: see text][Formula: see text] on right subcortical SEP. The proposed model could quantify the effect of physiological parameters on SEP amplitude in response to normal physiological variation during scoliosis surgery. The prediction of SEP amplitude provides a potentially varying reference for intraoperative SEP monitoring.
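The evaluation scheme above (5-fold cross-validation scored by MSE) is generic enough to sketch directly; the contiguous fold layout and helper names are assumptions, not the authors' code.

```python
def k_fold_indices(n_samples, k=5):
    """Split sample indices 0..n_samples-1 into k contiguous folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def mse(y_true, y_pred):
    """Mean square error, the metric used for model selection."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
```

Each fold serves once as the validation set while the model trains on the rest; the parameter set with the lowest average validation MSE is kept.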

Citations: 1
A Multimodal Fusion Approach for Human Activity Recognition.
IF 8, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-01-01. DOI: 10.1142/S0129065723500028
Dimitrios Koutrintzes, Evaggelos Spyrou, Eirini Mathe, Phivos Mylonas

The problem of human activity recognition (HAR) has increasingly attracted the efforts of the research community and has several applications. It consists of recognizing human motion and/or behavior within a given image or video sequence, using raw sensor measurements as input. In this paper, a multimodal approach to video-based HAR is proposed. It is based on 3D visual data collected with an RGB + depth camera, yielding both raw video and 3D skeletal sequences. These data are transformed into six different 2D image representations: four are in the spectral domain and a fifth is a pseudo-colored image, all five derived from the skeletal data. The last representation is a "dynamic" image, an artificially created image that summarizes the RGB data of the whole video sequence in a visually comprehensible way. To classify a given activity video, all the aforementioned 2D images are first extracted, and six trained convolutional neural networks are then used to extract visual features. The latter are fused into a single feature vector and fed into a support vector machine for classification into human activities. For evaluation, a challenging motion activity recognition dataset is used, and single-view, cross-view and cross-subject experiments are performed. Moreover, the proposed approach is compared with three other state-of-the-art methods, demonstrating superior performance in most experiments.
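The fusion step described above, concatenating the six per-network feature vectors into one vector for the SVM, amounts to simple concatenation; a minimal sketch (the helper name is an assumption):

```python
def fuse_features(per_network_features):
    """Late-stage feature fusion: concatenate the feature vector produced
    by each CNN into a single vector for the downstream classifier."""
    fused = []
    for vec in per_network_features:
        fused.extend(vec)
    return fused
```

The fused vector's dimensionality is the sum of the six individual feature dimensions, which is what the SVM is trained on.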

Citations: 1
Unraveling the Development of an Algorithm for Recognizing Primary Emotions Through Electroencephalography.
IF 8, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-01-01. DOI: 10.1142/S0129065722500575
Jennifer Sorinas, Juan C Fernandez Troyano, Jose Manuel Ferrández, Eduardo Fernandez

The large range of potential applications of an affective brain-computer interface (aBCI), not only for patients but also for healthy people, makes the need for a commonly accepted protocol for real-time EEG-based emotion recognition increasingly apparent. Using wavelet packets for spectral feature extraction, and attending to the nature of the EEG signal, we have specified some of the main parameters needed to implement robust classification of positive and negative emotions. Twelve seconds emerged as the most appropriate sliding-window size, and from that, a set of 20 target frequency-location variables has been proposed as the most relevant features carrying the emotional information. Lastly, QDA and KNN classifiers, together with a population rating criterion for stimulus labeling, are suggested as the most suitable approaches for EEG-based emotion recognition. The proposed model reached mean accuracies of 98% (s.d. 1.4) and 98.96% (s.d. 1.28) in a subject-dependent (SD) approach for the QDA and KNN classifiers, respectively. This new model represents a step forward towards real-time classification. Moreover, new insights regarding the subject-independent (SI) approximation are discussed, although the results were not conclusive.
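Segmenting an EEG recording into the 12-s analysis windows discussed above can be sketched as follows; the sampling rate and the non-overlapping step are assumptions for illustration.

```python
def sliding_windows(n_samples, fs, win_sec=12.0, step_sec=12.0):
    """Return (start, end) sample-index pairs for fixed-length windows.

    fs is the sampling rate in Hz; with step_sec == win_sec the windows
    do not overlap. Trailing samples shorter than one window are dropped.
    """
    win = int(win_sec * fs)
    step = int(step_sec * fs)
    out, start = [], 0
    while start + win <= n_samples:
        out.append((start, start + win))
        start += step
    return out
```

Each window is then the unit from which the 20 frequency-location features would be extracted before classification.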

Citations: 0
Automated Interictal Epileptiform Discharge Detection from Scalp EEG Using Scalable Time-series Classification Approaches.
IF 8, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-01-01. DOI: 10.1142/S0129065723500016
D Nhu, M Janmohamed, L Shakhatreh, O Gonen, P Perucca, A Gilligan, P Kwan, T J O'Brien, C W Tan, L Kuhlmann

Deep learning for automated interictal epileptiform discharge (IED) detection has been topical, with many papers published in recent years. All existing works view EEG signals as time series and develop specific models for IED classification; however, general time-series classification (TSC) methods have not been considered. Moreover, none of these methods were evaluated on any public datasets, making direct comparisons challenging. This paper explores two state-of-the-art convolutional TSC algorithms, InceptionTime and Minirocket, for IED detection. We fine-tuned and cross-evaluated them on a public dataset (Temple University Events - TUEV) and two private datasets, and provide ready metrics for benchmarking future work. We observed that the optimal parameters correlated with the clinical duration of an IED, achieving the best area under the precision-recall curve (AUPRC) of 0.98 and F1 of 0.80 on the private datasets, respectively. The AUPRC and F1 on the TUEV dataset were 0.99 and 0.97, respectively. While algorithms trained on the private sets maintained their performance when tested on the TUEV data, those trained on TUEV did not generalize well to the private data. These results stem from differences in class distributions across datasets and indicate a need for public datasets with greater diversity of IED waveforms, background activities and artifacts to facilitate standardization and benchmarking of algorithms.
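The F1 metric reported above, for binary IED detection with the discharge as the positive class, can be computed as below; a routine sketch, not the authors' evaluation code.

```python
def f1_score_binary(y_true, y_pred):
    """F1 score for binary labels, positive class encoded as 1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Unlike accuracy, F1 (and AUPRC) remains informative under the heavy class imbalance typical of IED data, which is why the paper reports them.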

Citations: 0
Spatio-Temporal Graph Attention Network for Neonatal Seizure Detection
IF 8, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-01-01. DOI: 10.2139/ssrn.4327675
K. Raeisi, M. Khazaei, G. Tamburro, Pierpaolo Croce, S. Comani, F. Zappasodi
Citations: 1
Edge Detection Method Based on Nonlinear Spiking Neural Systems.
IF 8, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-01-01. DOI: 10.1142/S0129065722500605
Ronghao Xian, Rikong Lugu, Hong Peng, Qian Yang, Xiaohui Luo, Jun Wang

Nonlinear spiking neural P (NSNP) systems are a class of neural-like computational models inspired by the nonlinear mechanism of spiking neurons. NSNP systems have a distinguishing feature: a nonlinear spiking mechanism. To handle edge detection in images, this paper proposes a variant, NSNP systems with two outputs (TO), termed NSNP-TO systems. Based on the NSNP-TO system, an edge detection framework, termed the ED-NSNP detector, is developed. The detection ability of the ED-NSNP detector relies on two convolutional kernels. To obtain good detection performance, particle swarm optimization (PSO) is used to optimize the parameters of the two convolutional kernels. The proposed ED-NSNP detector is evaluated on several open benchmark images and compared with seven baseline edge detection methods. The comparison results indicate the availability and effectiveness of the proposed ED-NSNP detector.
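A minimal PSO loop of the kind used above to tune the two kernels might look like this; the inertia/attraction weights, bounds, and the objective are conventional placeholders, not values from the paper.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, bounds=(-1.0, 1.0), seed=0):
    """Minimal particle swarm optimization over a box-constrained space."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best so far
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia / cognitive / social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the detector's setting, `f` would score a candidate pair of kernel parameter vectors by edge-detection quality on a validation image; here any objective works.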

Citations: 6
Mixture 2D Convolutions for 3D Medical Image Segmentation.
IF 8, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-01-01. DOI: 10.1142/S0129065722500599
Jianyong Wang, Lei Zhang, Zhang Yi

Three-dimensional (3D) medical image segmentation plays a crucial role in medical care applications. Although various two-dimensional (2D) and 3D neural network models have been applied to 3D medical image segmentation and achieved impressive results, a trade-off remains between efficiency and accuracy. To address this issue, a novel mixture convolutional network (MixConvNet) is proposed, in which traditional 2D/3D convolutional blocks are replaced with novel MixConv blocks. In a MixConv block, 3D convolution is decomposed into a mixture of 2D convolutions from different views. The MixConv block therefore fully utilizes the advantages of 2D convolution while maintaining the learning ability of 3D convolution. It acts as a 3D convolution and thus can process volumetric input directly and learn intra-slice features, which are absent in the traditional 2D convolutional block. By contrast, the proposed MixConv block contains only 2D convolutions; hence, it has significantly fewer trainable parameters and a smaller computation budget than a block containing 3D convolutions. Furthermore, the proposed MixConvNet is pre-trained with small input patches and fine-tuned with large input patches to further improve segmentation performance. In experiments on the Decathlon Heart and Sliver07 datasets, the proposed MixConvNet outperformed state-of-the-art methods such as UNet3D, VNet, and nnUnet.
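The idea of applying 2D convolutions "from different views" rests on re-slicing the volume along its three axes, which is just a set of transposes; a sketch (the axis naming and per-view layout are assumptions about the MixConv design):

```python
import numpy as np

def view_slices(volume):
    """Return the three slice stacks of a 3D volume, one per orthogonal view.

    Each returned array stacks 2D slices along its first axis, so a 2D
    convolution applied slice-wise sees the volume from that view.
    """
    return (volume,                      # axial: slices along axis 0
            volume.transpose(1, 0, 2),   # coronal: slices along axis 1
            volume.transpose(2, 0, 1))   # sagittal: slices along axis 2
```

Running a 2D convolution over each stack and mixing the three results lets the block capture inter-slice context in every direction while storing only 2D kernels.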

{"title":"Mixture 2D Convolutions for 3D Medical Image Segmentation.","authors":"Jianyong Wang,&nbsp;Lei Zhang,&nbsp;Zhang Yi","doi":"10.1142/S0129065722500599","DOIUrl":"https://doi.org/10.1142/S0129065722500599","url":null,"abstract":"<p><p>Three-dimensional (3D) medical image segmentation plays a crucial role in medical care applications. Although various two-dimensional (2D) and 3D neural network models have been applied to 3D medical image segmentation and achieved impressive results, a trade-off remains between efficiency and accuracy. To address this issue, a novel mixture convolutional network (MixConvNet) is proposed, in which traditional 2D/3D convolutional blocks are replaced with novel MixConv blocks. In the MixConv block, 3D convolution is decomposed into a mixture of 2D convolutions from different views. Therefore, the MixConv block fully utilizes the advantages of 2D convolution and maintains the learning ability of 3D convolution. It acts as 3D convolutions and thus can process volumetric input directly and learn intra-slice features, which are absent in the traditional 2D convolutional block. By contrast, the proposed MixConv block only contains 2D convolutions; hence, it has significantly fewer trainable parameters and less computation budget than a block containing 3D convolutions. Furthermore, the proposed MixConvNet is pre-trained with small input patches and fine-tuned with large input patches to improve segmentation performance further. 
In experiments on the Decathlon Heart dataset and Sliver07 dataset, the proposed MixConvNet outperformed the state-of-the-art methods such as UNet3D, VNet, and nnUnet.</p>","PeriodicalId":50305,"journal":{"name":"International Journal of Neural Systems","volume":"33 1","pages":"2250059"},"PeriodicalIF":8.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10533407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Dual-Modal Information Bottleneck Network for Seizure Detection.
IF 8 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-01-01 | DOI: 10.1142/S0129065722500617
Jiale Wang, Xinting Ge, Yunfeng Shi, Mengxue Sun, Qingtao Gong, Haipeng Wang, Wenhui Huang

In recent years, deep learning has shown very competitive performance in seizure detection. However, most currently used methods either convert electroencephalogram (EEG) signals into spectral images and employ 2D-CNNs, or split the one-dimensional (1D) features of EEG signals into many segments and employ 1D-CNNs. Moreover, these investigations are further constrained by the absence of consideration for temporal links between time-series segments or spectrogram images. Therefore, we propose a Dual-Modal Information Bottleneck (Dual-modal IB) network for EEG seizure detection. The network extracts EEG features from both the time-series and spectrogram dimensions, allowing information from different modalities to pass through the Dual-modal IB, which requires the model to gather and condense the most pertinent information in each modality and share only what is necessary. Specifically, we make full use of the information shared between the two modality representations to obtain key information for seizure detection and to remove irrelevant features between the two modalities. In addition, to explore the intrinsic temporal dependencies, we further introduce a bidirectional long short-term memory (BiLSTM) into the Dual-modal IB model, which models the temporal relationships in the information after each modality is extracted by a convolutional neural network (CNN). On the CHB-MIT dataset, the proposed framework achieves an average segment-based sensitivity of 97.42%, specificity of 99.32%, accuracy of 98.29%, an average event-based sensitivity of 96.02%, and a false detection rate (FDR) of 0.70/h. We release our code at https://github.com/LLLL1021/Dual-modal-IB.
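The dual-modal front end described above can be sketched minimally in NumPy (illustrative only — the IB objective, CNN branches, and BiLSTM are omitted, and the 256 Hz sampling rate, window, and hop sizes are assumptions, not values from the paper): the same EEG window is carried both as a raw time series and as an STFT magnitude spectrogram, the two representations the Dual-modal IB then compresses and fuses.

```python
import numpy as np

fs = 256                                        # assumed sampling rate (Hz)
t = np.arange(4 * fs) / fs                      # one 4-second EEG window
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

def stft_magnitude(x, win=256, hop=128):
    """Magnitude spectrogram from windowed FFT frames (Hann window)."""
    w = np.hanning(win)
    frames = [x[s:s + win] * w for s in range(0, x.size - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T   # (freq, time)

series = eeg.reshape(1, -1)                     # modality 1: raw time series
spec = stft_magnitude(eeg)                      # modality 2: spectrogram image

# The dominant ridge of the spectrogram sits at the 10 Hz bin
# (bin index = f * win / fs = 10), which a 2D branch would pick up.
peak_bin = int(np.argmax(spec.mean(axis=1)))
print(series.shape, spec.shape, peak_bin)       # → (1, 1024) (129, 7) 10
```

A 1D branch would consume `series`, a 2D branch `spec`, and the spectrogram's time axis (7 frames here) is what the BiLSTM would traverse to model temporal dependencies across frames.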

{"title":"Dual-Modal Information Bottleneck Network for Seizure Detection.","authors":"Jiale Wang,&nbsp;Xinting Ge,&nbsp;Yunfeng Shi,&nbsp;Mengxue Sun,&nbsp;Qingtao Gong,&nbsp;Haipeng Wang,&nbsp;Wenhui Huang","doi":"10.1142/S0129065722500617","DOIUrl":"https://doi.org/10.1142/S0129065722500617","url":null,"abstract":"<p><p>In recent years, deep learning has shown very competitive performance in seizure detection. However, most of the currently used methods either convert electroencephalogram (EEG) signals into spectral images and employ 2D-CNNs, or split the one-dimensional (1D) features of EEG signals into many segments and employ 1D-CNNs. Moreover, these investigations are further constrained by the absence of consideration for temporal links between time series segments or spectrogram images. Therefore, we propose a Dual-Modal Information Bottleneck (Dual-modal IB) network for EEG seizure detection. The network extracts EEG features from both time series and spectrogram dimensions, allowing information from different modalities to pass through the Dual-modal IB, requiring the model to gather and condense the most pertinent information in each modality and only share what is necessary. Specifically, we make full use of the information shared between the two modality representations to obtain key information for seizure detection and to remove irrelevant feature between the two modalities. In addition, to explore the intrinsic temporal dependencies, we further introduce a bidirectional long-short-term memory (BiLSTM) for Dual-modal IB model, which is used to model the temporal relationships between the information after each modality is extracted by convolutional neural network (CNN). For CHB-MIT dataset, the proposed framework can achieve an average segment-based sensitivity of 97.42%, specificity of 99.32%, accuracy of 98.29%, and an average event-based sensitivity of 96.02%, false detection rate (FDR) of 0.70/h. 
We release our code at https://github.com/LLLL1021/Dual-modal-IB.</p>","PeriodicalId":50305,"journal":{"name":"International Journal of Neural Systems","volume":"33 1","pages":"2250061"},"PeriodicalIF":8.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9098298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2