ADformer: A Multi-Granularity Transformer for EEG-Based Alzheimer's Disease Assessment

Yihe Wang, Nadia Mammone, Darina Petrovsky, Alexandros T. Tzallas, Francesco C. Morabito, Xiang Zhang
{"title":"ADformer:基于脑电图的阿尔茨海默病评估多粒度变换器","authors":"Yihe Wang, Nadia Mammone, Darina Petrovsky, Alexandros T. Tzallas, Francesco C. Morabito, Xiang Zhang","doi":"arxiv-2409.00032","DOIUrl":null,"url":null,"abstract":"Electroencephalogram (EEG) has emerged as a cost-effective and efficient\nmethod for supporting neurologists in assessing Alzheimer's disease (AD).\nExisting approaches predominantly utilize handcrafted features or Convolutional\nNeural Network (CNN)-based methods. However, the potential of the transformer\narchitecture, which has shown promising results in various time series analysis\ntasks, remains underexplored in interpreting EEG for AD assessment.\nFurthermore, most studies are evaluated on the subject-dependent setup but\noften overlook the significance of the subject-independent setup. To address\nthese gaps, we present ADformer, a novel multi-granularity transformer designed\nto capture temporal and spatial features to learn effective EEG\nrepresentations. We employ multi-granularity data embedding across both\ndimensions and utilize self-attention to learn local features within each\ngranularity and global features among different granularities. We conduct\nexperiments across 5 datasets with a total of 525 subjects in setups including\nsubject-dependent, subject-independent, and leave-subjects-out. Our results\nshow that ADformer outperforms existing methods in most evaluations, achieving\nF1 scores of 75.19% and 93.58% on two large datasets with 65 subjects and 126\nsubjects, respectively, in distinguishing AD and healthy control (HC) subjects\nunder the challenging subject-independent setup.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ADformer: A Multi-Granularity Transformer for EEG-Based Alzheimer's Disease Assessment\",\"authors\":\"Yihe Wang, Nadia Mammone, Darina Petrovsky, Alexandros T. Tzallas, Francesco C. Morabito, Xiang Zhang\",\"doi\":\"arxiv-2409.00032\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Electroencephalogram (EEG) has emerged as a cost-effective and efficient\\nmethod for supporting neurologists in assessing Alzheimer's disease (AD).\\nExisting approaches predominantly utilize handcrafted features or Convolutional\\nNeural Network (CNN)-based methods. However, the potential of the transformer\\narchitecture, which has shown promising results in various time series analysis\\ntasks, remains underexplored in interpreting EEG for AD assessment.\\nFurthermore, most studies are evaluated on the subject-dependent setup but\\noften overlook the significance of the subject-independent setup. To address\\nthese gaps, we present ADformer, a novel multi-granularity transformer designed\\nto capture temporal and spatial features to learn effective EEG\\nrepresentations. We employ multi-granularity data embedding across both\\ndimensions and utilize self-attention to learn local features within each\\ngranularity and global features among different granularities. We conduct\\nexperiments across 5 datasets with a total of 525 subjects in setups including\\nsubject-dependent, subject-independent, and leave-subjects-out. 
Our results\\nshow that ADformer outperforms existing methods in most evaluations, achieving\\nF1 scores of 75.19% and 93.58% on two large datasets with 65 subjects and 126\\nsubjects, respectively, in distinguishing AD and healthy control (HC) subjects\\nunder the challenging subject-independent setup.\",\"PeriodicalId\":501309,\"journal\":{\"name\":\"arXiv - CS - Computational Engineering, Finance, and Science\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computational Engineering, Finance, and Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.00032\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computational Engineering, Finance, and Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.00032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Electroencephalogram (EEG) has emerged as a cost-effective and efficient method for supporting neurologists in assessing Alzheimer's disease (AD). Existing approaches predominantly utilize handcrafted features or Convolutional Neural Network (CNN)-based methods. However, the potential of the transformer architecture, which has shown promising results in various time series analysis tasks, remains underexplored in interpreting EEG for AD assessment. Furthermore, most studies are evaluated on the subject-dependent setup but often overlook the significance of the subject-independent setup. To address these gaps, we present ADformer, a novel multi-granularity transformer designed to capture temporal and spatial features to learn effective EEG representations. We employ multi-granularity data embedding across both dimensions and utilize self-attention to learn local features within each granularity and global features among different granularities. We conduct experiments across 5 datasets with a total of 525 subjects in setups including subject-dependent, subject-independent, and leave-subjects-out. Our results show that ADformer outperforms existing methods in most evaluations, achieving F1 scores of 75.19% and 93.58% on two large datasets with 65 subjects and 126 subjects, respectively, in distinguishing AD and healthy control (HC) subjects under the challenging subject-independent setup.
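The abstract describes the architecture only at a high level. As a rough illustration of the idea (not the authors' implementation), the sketch below embeds an EEG recording at several temporal granularities, applies self-attention within each granularity to learn local features, and then attends across the per-granularity summaries to learn global features. The patch lengths, channel count, model width, mean-pooling, the binary AD-vs-HC head, and the omission of the spatial (channel-dimension) embedding branch are all assumptions made for brevity.

```python
# Illustrative sketch only -- not the authors' ADformer code. Patch lengths,
# model width, pooling, and the binary AD-vs-HC head are assumptions; the
# spatial (channel-granularity) embedding described in the abstract is omitted.
import torch
import torch.nn as nn


class MultiGranularityEEGEncoder(nn.Module):
    def __init__(self, n_channels: int, d_model: int = 64,
                 patch_lengths=(8, 16, 32), n_heads: int = 4):
        super().__init__()
        self.patch_lengths = patch_lengths
        # One linear patch embedding per temporal granularity.
        self.patch_embeds = nn.ModuleList(
            [nn.Linear(n_channels * p, d_model) for p in patch_lengths])
        # Intra-granularity self-attention: local features within one granularity.
        self.local_encoders = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads,
                                        dim_feedforward=4 * d_model,
                                        batch_first=True)
             for _ in patch_lengths])
        # Inter-granularity self-attention: global features across granularities.
        self.global_encoder = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.classifier = nn.Linear(d_model, 2)  # AD vs. healthy control (HC)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_timesteps) raw EEG
        b, c, t = x.shape
        summaries = []
        for patch_len, embed, encoder in zip(self.patch_lengths,
                                             self.patch_embeds,
                                             self.local_encoders):
            n_patches = t // patch_len
            # Cut the time axis into non-overlapping patches of this granularity.
            patches = x[:, :, :n_patches * patch_len]
            patches = patches.reshape(b, c, n_patches, patch_len)
            patches = patches.permute(0, 2, 1, 3).reshape(b, n_patches, c * patch_len)
            tokens = encoder(embed(patches))      # (b, n_patches, d_model)
            summaries.append(tokens.mean(dim=1))  # one summary token per granularity
        stacked = torch.stack(summaries, dim=1)   # (b, n_granularities, d_model)
        fused = self.global_encoder(stacked).mean(dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = MultiGranularityEEGEncoder(n_channels=19)  # 19-channel EEG assumed
    scores = model(torch.randn(2, 19, 256))            # 2 recordings, 256 time samples
    print(scores.shape)                                # torch.Size([2, 2])
```

The two-stage attention mirrors the abstract's split between local features within each granularity and global features among granularities; how the paper actually fuses the temporal and spatial embedding dimensions is not specified here, so this sketch should be read only as a schematic of the multi-granularity idea.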