AFSleepNet: Attention-Based Multi-View Feature Fusion Framework for Pediatric Sleep Staging

IF 4.8 · Region 2 (Medicine) · Q2 (Engineering, Biomedical) · IEEE Transactions on Neural Systems and Rehabilitation Engineering · Pub Date: 2024-11-04 · DOI: 10.1109/TNSRE.2024.3490757
Yunfeng Zhu;Yunxiao Wu;Zhiya Wang;Ligang Zhou;Chen Chen;Zhifei Xu;Wei Chen
{"title":"AFSleepNet:基于注意力的多视角特征融合框架,用于儿科睡眠分期。","authors":"Yunfeng Zhu;Yunxiao Wu;Zhiya Wang;Ligang Zhou;Chen Chen;Zhifei Xu;Wei Chen","doi":"10.1109/TNSRE.2024.3490757","DOIUrl":null,"url":null,"abstract":"The widespread prevalence of sleep problems in children highlights the importance of timely and accurate sleep staging in the diagnosis and treatment of pediatric sleep disorders. However, most existing sleep staging methods rely on one-dimensional raw polysomnograms or two-dimensional spectrograms, which omit critical details due to single-view processing. This shortcoming is particularly apparent in pediatric sleep staging, where the lack of a specialized network fails to meet the needs of precision medicine. Therefore, we introduce AFSleepNet, a novel attention-based multi-view feature fusion network tailored for pediatric sleep analysis. The model utilizes multimodal data (EEG, EOG, EMG), combining one-dimensional convolutional neural networks to extract time-invariant features and bidirectional-long-short-term memory to learn the transition rules among sleep stages, as well as employing short-time Fourier transform to generate two-dimensional spectral maps. This network employs a fusion method with self-attention mechanism and innovative pre-training strategy. This strategy can maintain the feature extraction capabilities of AFSleepNet from different views, enhancing the robustness of the multi-view model while effectively preventing model overfitting, thereby achieving efficient and accurate automatic sleep stage analysis. A “leave-one-subject-out” cross-validation on CHAT and clinical datasets demonstrated the excellent performance of AFSleepNet, with mean accuracies of 87.5% and 88.1%, respectively. Superiority over existing methods improves the accuracy and reliability of pediatric sleep staging.","PeriodicalId":13419,"journal":{"name":"IEEE Transactions on Neural Systems and Rehabilitation Engineering","volume":"32 ","pages":"4022-4032"},"PeriodicalIF":4.8000,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10741586","citationCount":"0","resultStr":"{\"title\":\"AFSleepNet: Attention-Based Multi-View Feature Fusion Framework for Pediatric Sleep Staging\",\"authors\":\"Yunfeng Zhu;Yunxiao Wu;Zhiya Wang;Ligang Zhou;Chen Chen;Zhifei Xu;Wei Chen\",\"doi\":\"10.1109/TNSRE.2024.3490757\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The widespread prevalence of sleep problems in children highlights the importance of timely and accurate sleep staging in the diagnosis and treatment of pediatric sleep disorders. However, most existing sleep staging methods rely on one-dimensional raw polysomnograms or two-dimensional spectrograms, which omit critical details due to single-view processing. This shortcoming is particularly apparent in pediatric sleep staging, where the lack of a specialized network fails to meet the needs of precision medicine. Therefore, we introduce AFSleepNet, a novel attention-based multi-view feature fusion network tailored for pediatric sleep analysis. The model utilizes multimodal data (EEG, EOG, EMG), combining one-dimensional convolutional neural networks to extract time-invariant features and bidirectional-long-short-term memory to learn the transition rules among sleep stages, as well as employing short-time Fourier transform to generate two-dimensional spectral maps. 
This network employs a fusion method with self-attention mechanism and innovative pre-training strategy. This strategy can maintain the feature extraction capabilities of AFSleepNet from different views, enhancing the robustness of the multi-view model while effectively preventing model overfitting, thereby achieving efficient and accurate automatic sleep stage analysis. A “leave-one-subject-out” cross-validation on CHAT and clinical datasets demonstrated the excellent performance of AFSleepNet, with mean accuracies of 87.5% and 88.1%, respectively. Superiority over existing methods improves the accuracy and reliability of pediatric sleep staging.\",\"PeriodicalId\":13419,\"journal\":{\"name\":\"IEEE Transactions on Neural Systems and Rehabilitation Engineering\",\"volume\":\"32 \",\"pages\":\"4022-4032\"},\"PeriodicalIF\":4.8000,\"publicationDate\":\"2024-11-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10741586\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Neural Systems and Rehabilitation Engineering\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10741586/\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Neural Systems and Rehabilitation Engineering","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10741586/","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

The widespread prevalence of sleep problems in children highlights the importance of timely and accurate sleep staging in the diagnosis and treatment of pediatric sleep disorders. However, most existing sleep staging methods rely on one-dimensional raw polysomnograms or two-dimensional spectrograms, which omit critical details due to single-view processing. This shortcoming is particularly apparent in pediatric sleep staging, where the lack of a specialized network fails to meet the needs of precision medicine. Therefore, we introduce AFSleepNet, a novel attention-based multi-view feature fusion network tailored for pediatric sleep analysis. The model utilizes multimodal data (EEG, EOG, EMG), combining one-dimensional convolutional neural networks to extract time-invariant features and bidirectional long short-term memory to learn the transition rules among sleep stages, and employs the short-time Fourier transform to generate two-dimensional spectral maps. The network uses a fusion method with a self-attention mechanism and an innovative pre-training strategy. This strategy preserves the feature extraction capabilities of AFSleepNet across the different views, enhancing the robustness of the multi-view model while effectively preventing overfitting, thereby achieving efficient and accurate automatic sleep stage analysis. A "leave-one-subject-out" cross-validation on the CHAT and clinical datasets demonstrated the excellent performance of AFSleepNet, with mean accuracies of 87.5% and 88.1%, respectively. Its superiority over existing methods improves the accuracy and reliability of pediatric sleep staging.
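To make the pipeline described in the abstract more concrete, the sketch below implements a two-view stager of the same general shape in PyTorch: a raw-signal branch (1-D CNN followed by a BiLSTM over the epoch sequence), an STFT-spectrogram branch encoded by a 2-D CNN, and self-attention fusion over the two view embeddings before a five-stage classifier. It is a minimal illustration, not the authors' code: the module names (RawView, SpectralView, AttentionFusionStager), all layer sizes, the 3-channel (EEG/EOG/EMG) input, and the 30-s/100-Hz epoch format are assumptions, and the paper's pre-training strategy is omitted.

```python
# Minimal sketch of a two-view sleep stager in the spirit of the abstract.
# Layer sizes, channel count, and epoch format are illustrative assumptions.
import torch
import torch.nn as nn


class RawView(nn.Module):
    """1-D CNN over raw PSG channels, then a BiLSTM across the epoch sequence."""

    def __init__(self, in_channels: int = 3, feat_dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=50, stride=6), nn.ReLU(),
            nn.Conv1d(64, feat_dim, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),               # -> (B*T, feat_dim, 1)
        )
        self.bilstm = nn.LSTM(feat_dim, feat_dim // 2, batch_first=True,
                              bidirectional=True)  # models stage-transition context

    def forward(self, x):                          # x: (B, T, C, L) raw epochs
        b, t, c, l = x.shape
        f = self.cnn(x.reshape(b * t, c, l)).squeeze(-1).reshape(b, t, -1)
        f, _ = self.bilstm(f)                      # (B, T, feat_dim)
        return f


class SpectralView(nn.Module):
    """STFT spectrogram per epoch, encoded by a small 2-D CNN."""

    def __init__(self, n_fft: int = 256, feat_dim: int = 128):
        super().__init__()
        self.n_fft = n_fft
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, feat_dim, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):                          # x: (B, T, C, L)
        b, t, c, l = x.shape
        sig = x.reshape(b * t, c, l).mean(dim=1)   # collapse channels for the sketch
        spec = torch.stft(sig, n_fft=self.n_fft,
                          window=torch.hann_window(self.n_fft, device=sig.device),
                          return_complex=True).abs()
        spec = torch.log1p(spec).unsqueeze(1)      # (B*T, 1, freq_bins, frames)
        return self.cnn(spec).flatten(1).reshape(b, t, -1)  # (B, T, feat_dim)


class AttentionFusionStager(nn.Module):
    """Self-attention over the two view embeddings, then a 5-class stage head."""

    def __init__(self, feat_dim: int = 128, n_stages: int = 5):
        super().__init__()
        self.raw = RawView(feat_dim=feat_dim)
        self.spec = SpectralView(feat_dim=feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(feat_dim, n_stages)  # W, N1, N2, N3, REM

    def forward(self, x):                          # x: (B, T, C, L)
        views = torch.stack([self.raw(x), self.spec(x)], dim=2)  # (B, T, 2, D)
        b, t, v, d = views.shape
        tokens = views.reshape(b * t, v, d)
        fused, _ = self.attn(tokens, tokens, tokens)             # attend across views
        fused = fused.mean(dim=1).reshape(b, t, d)               # pool the two views
        return self.head(fused)                    # (B, T, n_stages) per-epoch logits


if __name__ == "__main__":
    # 2 recordings, 20-epoch sequences, 3 channels (EEG/EOG/EMG), 30 s at 100 Hz.
    dummy = torch.randn(2, 20, 3, 3000)
    print(AttentionFusionStager()(dummy).shape)    # torch.Size([2, 20, 5])
```

The leave-one-subject-out protocol mentioned in the abstract would sit outside this module: each fold holds out every epoch from one subject for evaluation and trains on the remaining subjects.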
Source journal: IEEE Transactions on Neural Systems and Rehabilitation Engineering
CiteScore: 8.60
Self-citation rate: 8.20%
Articles per year: 479
Review time: 6-12 weeks
Journal scope: Rehabilitative and neural aspects of biomedical engineering, including functional electrical stimulation, acoustic dynamics, human performance measurement and analysis, nerve stimulation, electromyography, motor control and stimulation; and hardware and software applications for rehabilitation engineering and assistive devices.