Dual-Modal Information Bottleneck Network for Seizure Detection

IF 6.6 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · International Journal of Neural Systems · Pub Date: 2023-01-01 · DOI: 10.1142/S0129065722500617
Jiale Wang, Xinting Ge, Yunfeng Shi, Mengxue Sun, Qingtao Gong, Haipeng Wang, Wenhui Huang
{"title":"缉获检测的双模态信息瓶颈网络。","authors":"Jiale Wang,&nbsp;Xinting Ge,&nbsp;Yunfeng Shi,&nbsp;Mengxue Sun,&nbsp;Qingtao Gong,&nbsp;Haipeng Wang,&nbsp;Wenhui Huang","doi":"10.1142/S0129065722500617","DOIUrl":null,"url":null,"abstract":"<p><p>In recent years, deep learning has shown very competitive performance in seizure detection. However, most of the currently used methods either convert electroencephalogram (EEG) signals into spectral images and employ 2D-CNNs, or split the one-dimensional (1D) features of EEG signals into many segments and employ 1D-CNNs. Moreover, these investigations are further constrained by the absence of consideration for temporal links between time series segments or spectrogram images. Therefore, we propose a Dual-Modal Information Bottleneck (Dual-modal IB) network for EEG seizure detection. The network extracts EEG features from both time series and spectrogram dimensions, allowing information from different modalities to pass through the Dual-modal IB, requiring the model to gather and condense the most pertinent information in each modality and only share what is necessary. Specifically, we make full use of the information shared between the two modality representations to obtain key information for seizure detection and to remove irrelevant feature between the two modalities. In addition, to explore the intrinsic temporal dependencies, we further introduce a bidirectional long-short-term memory (BiLSTM) for Dual-modal IB model, which is used to model the temporal relationships between the information after each modality is extracted by convolutional neural network (CNN). For CHB-MIT dataset, the proposed framework can achieve an average segment-based sensitivity of 97.42%, specificity of 99.32%, accuracy of 98.29%, and an average event-based sensitivity of 96.02%, false detection rate (FDR) of 0.70/h. We release our code at https://github.com/LLLL1021/Dual-modal-IB.</p>","PeriodicalId":50305,"journal":{"name":"International Journal of Neural Systems","volume":"33 1","pages":"2250061"},"PeriodicalIF":6.6000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Dual-Modal Information Bottleneck Network for Seizure Detection.\",\"authors\":\"Jiale Wang,&nbsp;Xinting Ge,&nbsp;Yunfeng Shi,&nbsp;Mengxue Sun,&nbsp;Qingtao Gong,&nbsp;Haipeng Wang,&nbsp;Wenhui Huang\",\"doi\":\"10.1142/S0129065722500617\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>In recent years, deep learning has shown very competitive performance in seizure detection. However, most of the currently used methods either convert electroencephalogram (EEG) signals into spectral images and employ 2D-CNNs, or split the one-dimensional (1D) features of EEG signals into many segments and employ 1D-CNNs. Moreover, these investigations are further constrained by the absence of consideration for temporal links between time series segments or spectrogram images. Therefore, we propose a Dual-Modal Information Bottleneck (Dual-modal IB) network for EEG seizure detection. The network extracts EEG features from both time series and spectrogram dimensions, allowing information from different modalities to pass through the Dual-modal IB, requiring the model to gather and condense the most pertinent information in each modality and only share what is necessary. 
Specifically, we make full use of the information shared between the two modality representations to obtain key information for seizure detection and to remove irrelevant feature between the two modalities. In addition, to explore the intrinsic temporal dependencies, we further introduce a bidirectional long-short-term memory (BiLSTM) for Dual-modal IB model, which is used to model the temporal relationships between the information after each modality is extracted by convolutional neural network (CNN). For CHB-MIT dataset, the proposed framework can achieve an average segment-based sensitivity of 97.42%, specificity of 99.32%, accuracy of 98.29%, and an average event-based sensitivity of 96.02%, false detection rate (FDR) of 0.70/h. We release our code at https://github.com/LLLL1021/Dual-modal-IB.</p>\",\"PeriodicalId\":50305,\"journal\":{\"name\":\"International Journal of Neural Systems\",\"volume\":\"33 1\",\"pages\":\"2250061\"},\"PeriodicalIF\":6.6000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Neural Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1142/S0129065722500617\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Neural Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1142/S0129065722500617","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 2

Abstract

In recent years, deep learning has shown very competitive performance in seizure detection. However, most currently used methods either convert electroencephalogram (EEG) signals into spectral images and employ 2D-CNNs, or split the one-dimensional (1D) features of EEG signals into many segments and employ 1D-CNNs. Moreover, these investigations are further constrained by the absence of consideration for temporal links between time-series segments or spectrogram images. Therefore, we propose a Dual-Modal Information Bottleneck (Dual-modal IB) network for EEG seizure detection. The network extracts EEG features from both the time-series and spectrogram dimensions, allowing information from different modalities to pass through the Dual-modal IB, which requires the model to gather and condense the most pertinent information in each modality and share only what is necessary. Specifically, we make full use of the information shared between the two modality representations to obtain key information for seizure detection and to remove irrelevant features between the two modalities. In addition, to explore the intrinsic temporal dependencies, we further introduce a bidirectional long short-term memory (BiLSTM) into the Dual-modal IB model, which is used to model the temporal relationships between the features extracted from each modality by a convolutional neural network (CNN). On the CHB-MIT dataset, the proposed framework achieves an average segment-based sensitivity of 97.42%, specificity of 99.32%, and accuracy of 98.29%, as well as an average event-based sensitivity of 96.02% with a false detection rate (FDR) of 0.70/h. We release our code at https://github.com/LLLL1021/Dual-modal-IB.
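For illustration only, the following is a minimal PyTorch sketch of a dual-branch detector in the spirit described above: a 1D-CNN branch over raw EEG segments, a 2D-CNN branch over their spectrograms, BiLSTMs over the segment sequence in each modality, and a simple variational bottleneck on the fused representation. All module names, layer sizes, the channel count, and the fusion scheme are illustrative assumptions, not the authors' released implementation (see the GitHub link above).

```python
# Minimal sketch of a dual-branch CNN + BiLSTM model with a variational
# bottleneck. Layer sizes and fusion are assumptions for illustration.
import torch
import torch.nn as nn


class DualModalSketch(nn.Module):
    def __init__(self, n_channels=23, hidden=128, z_dim=64):
        super().__init__()
        # Branch 1: 1D-CNN over the raw multichannel time series of one segment.
        self.cnn1d = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Branch 2: 2D-CNN over the segment's spectrogram (freq x time frames).
        self.cnn2d = nn.Sequential(
            nn.Conv2d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # BiLSTMs model dependencies across consecutive segments per modality.
        self.lstm_t = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.lstm_s = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        # Bottleneck: compress fused features into a small stochastic code.
        self.to_mu = nn.Linear(4 * hidden, z_dim)
        self.to_logvar = nn.Linear(4 * hidden, z_dim)
        self.classifier = nn.Linear(z_dim, 2)  # seizure vs. non-seizure

    def forward(self, x_time, x_spec):
        # x_time: (batch, segments, channels, samples)
        # x_spec: (batch, segments, channels, freq_bins, frames)
        b, s = x_time.shape[:2]
        f_t = self.cnn1d(x_time.flatten(0, 1)).squeeze(-1).view(b, s, -1)
        f_s = self.cnn2d(x_spec.flatten(0, 1)).flatten(1).view(b, s, -1)
        h_t, _ = self.lstm_t(f_t)
        h_s, _ = self.lstm_s(f_s)
        fused = torch.cat([h_t[:, -1], h_s[:, -1]], dim=-1)
        mu, logvar = self.to_mu(fused), self.to_logvar(fused)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
        return self.classifier(z), kl  # add beta * kl to the classification loss


model = DualModalSketch()
logits, kl = model(torch.randn(2, 5, 23, 1024), torch.randn(2, 5, 23, 64, 32))
```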
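As a complement, here is a small, hypothetical helper showing how segment-based sensitivity, specificity, and accuracy, and an event-based false detection rate (false alarms per hour) such as those reported above, are conventionally computed from binary labels. The exact event-matching rules used in the paper may differ.

```python
# Conventional seizure-detection metrics; function names are illustrative.
import numpy as np


def segment_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy over labeled EEG segments."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }


def fdr_per_hour(n_false_detections, recording_hours):
    """Event-based false detection rate: false alarms per hour of recording."""
    return n_false_detections / recording_hours


print(segment_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))
print(fdr_per_hour(7, 10.0))  # 7 false alarms over 10 h of EEG -> 0.70/h
```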

Source journal: International Journal of Neural Systems (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 28.80%
Articles published: 116
Review time: 24 months
Journal description: The International Journal of Neural Systems is a monthly, rigorously peer-reviewed transdisciplinary journal focusing on information processing in both natural and artificial neural systems. Special interests include machine learning, computational neuroscience and neurology. The journal prioritizes innovative, high-impact articles spanning multiple fields, including neurosciences and computer science and engineering. It adopts an open-minded approach to this multidisciplinary field, serving as a platform for novel ideas and enhanced understanding of collective and cooperative phenomena in computationally capable systems.