ADformer: A Multi-Granularity Transformer for EEG-Based Alzheimer's Disease Assessment

Yihe Wang, Nadia Mammone, Darina Petrovsky, Alexandros T. Tzallas, Francesco C. Morabito, Xiang Zhang
{"title":"ADformer:基于脑电图的阿尔茨海默病评估多粒度变换器","authors":"Yihe Wang, Nadia Mammone, Darina Petrovsky, Alexandros T. Tzallas, Francesco C. Morabito, Xiang Zhang","doi":"arxiv-2409.00032","DOIUrl":null,"url":null,"abstract":"Electroencephalogram (EEG) has emerged as a cost-effective and efficient\nmethod for supporting neurologists in assessing Alzheimer's disease (AD).\nExisting approaches predominantly utilize handcrafted features or Convolutional\nNeural Network (CNN)-based methods. However, the potential of the transformer\narchitecture, which has shown promising results in various time series analysis\ntasks, remains underexplored in interpreting EEG for AD assessment.\nFurthermore, most studies are evaluated on the subject-dependent setup but\noften overlook the significance of the subject-independent setup. To address\nthese gaps, we present ADformer, a novel multi-granularity transformer designed\nto capture temporal and spatial features to learn effective EEG\nrepresentations. We employ multi-granularity data embedding across both\ndimensions and utilize self-attention to learn local features within each\ngranularity and global features among different granularities. We conduct\nexperiments across 5 datasets with a total of 525 subjects in setups including\nsubject-dependent, subject-independent, and leave-subjects-out. Our results\nshow that ADformer outperforms existing methods in most evaluations, achieving\nF1 scores of 75.19% and 93.58% on two large datasets with 65 subjects and 126\nsubjects, respectively, in distinguishing AD and healthy control (HC) subjects\nunder the challenging subject-independent setup.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ADformer: A Multi-Granularity Transformer for EEG-Based Alzheimer's Disease Assessment\",\"authors\":\"Yihe Wang, Nadia Mammone, Darina Petrovsky, Alexandros T. Tzallas, Francesco C. Morabito, Xiang Zhang\",\"doi\":\"arxiv-2409.00032\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Electroencephalogram (EEG) has emerged as a cost-effective and efficient\\nmethod for supporting neurologists in assessing Alzheimer's disease (AD).\\nExisting approaches predominantly utilize handcrafted features or Convolutional\\nNeural Network (CNN)-based methods. However, the potential of the transformer\\narchitecture, which has shown promising results in various time series analysis\\ntasks, remains underexplored in interpreting EEG for AD assessment.\\nFurthermore, most studies are evaluated on the subject-dependent setup but\\noften overlook the significance of the subject-independent setup. To address\\nthese gaps, we present ADformer, a novel multi-granularity transformer designed\\nto capture temporal and spatial features to learn effective EEG\\nrepresentations. We employ multi-granularity data embedding across both\\ndimensions and utilize self-attention to learn local features within each\\ngranularity and global features among different granularities. We conduct\\nexperiments across 5 datasets with a total of 525 subjects in setups including\\nsubject-dependent, subject-independent, and leave-subjects-out. 
Our results\\nshow that ADformer outperforms existing methods in most evaluations, achieving\\nF1 scores of 75.19% and 93.58% on two large datasets with 65 subjects and 126\\nsubjects, respectively, in distinguishing AD and healthy control (HC) subjects\\nunder the challenging subject-independent setup.\",\"PeriodicalId\":501309,\"journal\":{\"name\":\"arXiv - CS - Computational Engineering, Finance, and Science\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computational Engineering, Finance, and Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.00032\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computational Engineering, Finance, and Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.00032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Electroencephalogram (EEG) has emerged as a cost-effective and efficient method for supporting neurologists in assessing Alzheimer's disease (AD). Existing approaches predominantly utilize handcrafted features or Convolutional Neural Network (CNN)-based methods. However, the potential of the transformer architecture, which has shown promising results in various time series analysis tasks, remains underexplored in interpreting EEG for AD assessment. Furthermore, most studies are evaluated on the subject-dependent setup but often overlook the significance of the subject-independent setup. To address these gaps, we present ADformer, a novel multi-granularity transformer designed to capture temporal and spatial features to learn effective EEG representations. We employ multi-granularity data embedding across both dimensions and utilize self-attention to learn local features within each granularity and global features among different granularities. We conduct experiments across 5 datasets with a total of 525 subjects in setups including subject-dependent, subject-independent, and leave-subjects-out. Our results show that ADformer outperforms existing methods in most evaluations, achieving F1 scores of 75.19% and 93.58% on two large datasets with 65 subjects and 126 subjects, respectively, in distinguishing AD and healthy control (HC) subjects under the challenging subject-independent setup.
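The abstract describes the architecture only at a high level: EEG windows are embedded at multiple granularities along the temporal and spatial (channel) dimensions, self-attention learns local features within each granularity, and attention across granularities learns global features. The following is a minimal illustrative sketch of that idea in PyTorch, restricted to the temporal dimension; it is not the authors' ADformer implementation, and the module names, granularity choices, pooling, and dimensions are all assumptions made for illustration.

    import torch
    import torch.nn as nn

    class MultiGranularitySketch(nn.Module):
        """Illustrative sketch only (not the official ADformer): embed an EEG
        window at several temporal granularities via strided 1-D convolutions,
        run self-attention within each granularity (local features), then
        attend over per-granularity summaries (global features)."""

        def __init__(self, n_channels=19, d_model=64, granularities=(1, 2, 4), n_classes=2):
            super().__init__()
            # One patch embedding per temporal granularity (patch size = stride).
            self.embeds = nn.ModuleList(
                [nn.Conv1d(n_channels, d_model, kernel_size=g, stride=g) for g in granularities]
            )
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            # Intra-granularity encoders learn local features within each granularity.
            # (nn.TransformerEncoder deep-copies the layer, so no weights are shared.)
            self.local_encoders = nn.ModuleList(
                [nn.TransformerEncoder(layer, num_layers=1) for _ in granularities]
            )
            # A cross-granularity encoder learns global features among granularities.
            self.global_encoder = nn.TransformerEncoder(layer, num_layers=1)
            self.head = nn.Linear(d_model, n_classes)

        def forward(self, x):                                 # x: (batch, channels, time)
            summaries = []
            for embed, encoder in zip(self.embeds, self.local_encoders):
                tokens = embed(x).transpose(1, 2)             # (batch, n_tokens, d_model)
                tokens = encoder(tokens)                      # attention within one granularity
                summaries.append(tokens.mean(dim=1))          # pool to one summary per granularity
            g = torch.stack(summaries, dim=1)                 # (batch, n_granularities, d_model)
            g = self.global_encoder(g)                        # attention across granularities
            return self.head(g.mean(dim=1))                   # AD-vs-HC logits

    # Usage: a batch of 8 EEG windows, 19 channels, 256 samples each.
    model = MultiGranularitySketch()
    logits = model(torch.randn(8, 19, 256))
    print(logits.shape)                                       # torch.Size([8, 2])

The subject-dependent versus subject-independent distinction the abstract emphasizes is a property of the evaluation split rather than of the model: in a subject-independent setup, no subject contributes windows to both the train and test sets. A hedged sketch of such a split using scikit-learn's GroupShuffleSplit, with placeholder arrays standing in for real EEG data:

    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit

    # Placeholder data: 525 windows, each tagged with the subject it came from.
    X = np.random.randn(525, 19, 256)               # windows x channels x samples
    y = np.random.randint(0, 2, size=525)           # labels: AD (1) vs. HC (0)
    subjects = np.random.randint(0, 65, size=525)   # subject ID per window

    # Subject-independent split: windows from one subject never appear on
    # both sides, so the model must generalize to entirely unseen people.
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    train_idx, test_idx = next(splitter.split(X, y, groups=subjects))
    assert not set(subjects[train_idx]) & set(subjects[test_idx])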