Zhenye Zhao, Yibing Li, Yong Peng, Kenneth Camilleri, Wanzeng Kong
{"title":"Multi-view graph fusion of self-weighted EEG feature representations for speech imagery decoding.","authors":"Zhenye Zhao, Yibing Li, Yong Peng, Kenneth Camilleri, Wanzeng Kong","doi":"10.1016/j.jneumeth.2025.110413","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Electroencephalogram (EEG)-based speech imagery is an emerging brain-computer interface paradigm, which enables the speech disabled to naturally and intuitively communicate with external devices or other people. Currently, speech imagery research decoding performance is limited. One of the reasons is that there is still no consensus on which domain features are more discriminative.</p><p><strong>New method: </strong>To adaptively capture the complementary information from different domain features, we treat each domain as a view and propose a multi-view graph fusion of self-weighted EEG feature representations (MVGSF) model by learning a consensus graph from multi-view EEG features, based on which the imagery intentions can be effectively decoded. Considering that different EEG features in each view have different discriminative abilities, the view-dependent feature importance exploration strategy is incorporated in MVGSF.</p><p><strong>Results: </strong>(1) MVGSF exhibits outstanding performance on two public speech imagery datasets (2) The learned consensus graph from multi-view features effectively characterizes the relationships of EEG samples in a progressive manner. (3) Some task-related insights are explored including the feature importance-based identification of critical EEG channels and frequency bands in speech imagery decoding.</p><p><strong>Comparison with existing methods: </strong>We compared MVGSF with single-view counterparts, other multi-view models, and state-of-the-art models. 
MVGSF achieved the highest accuracy, with average accuracies of 78.93% on the 2020IBCIC3 dataset and 53.85% on the KaraOne dataset.</p><p><strong>Conclusions: </strong>MVGSF effectively integrates features from multiple domains to enhance decoding capabilities. Furthermore, through the learned feature importance, MVGSF has made certain contributions to identify the EEG spatial-frequency patterns in speech imagery decoding.</p>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":" ","pages":"110413"},"PeriodicalIF":2.7000,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Neuroscience Methods","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.jneumeth.2025.110413","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"BIOCHEMICAL RESEARCH METHODS","Score":null,"Total":0}
Multi-view graph fusion of self-weighted EEG feature representations for speech imagery decoding.
Background: Electroencephalogram (EEG)-based speech imagery is an emerging brain-computer interface paradigm that enables people with speech disabilities to communicate naturally and intuitively with external devices or other people. Currently, the decoding performance of speech imagery remains limited. One reason is that there is still no consensus on which domain features are more discriminative.
New method: To adaptively capture the complementary information in different domain features, we treat each domain as a view and propose a multi-view graph fusion of self-weighted EEG feature representations (MVGSF) model, which learns a consensus graph from multi-view EEG features and uses it to effectively decode imagery intentions. Because the EEG features within each view differ in discriminative ability, a view-dependent feature-importance exploration strategy is incorporated into MVGSF.
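The consensus-graph idea described above can be sketched in a few lines. The sketch below is a generic self-weighted multi-view fusion scheme (Gaussian-kernel view graphs, Frobenius-norm agreement with inverse-distance view weights), offered only as an illustration of the paradigm; it is not the authors' exact MVGSF objective, and the function names are hypothetical:

```python
import numpy as np

def view_graph(X, sigma=1.0):
    """Gaussian-kernel affinity graph for one feature view (samples x features)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return np.exp(-d2 / (2 * sigma ** 2))

def fuse_consensus(graphs, n_iter=20, eps=1e-8):
    """Learn a consensus graph S from per-view graphs A_v by alternating:
       w_v = 1 / (2 * ||S - A_v||_F)  (self-weighted view importance),
       S   = sum_v w_v * A_v / sum_v w_v  (weighted average under those weights)."""
    S = np.mean(graphs, axis=0)  # initialize with the unweighted average
    w = np.full(len(graphs), 1.0 / len(graphs))
    for _ in range(n_iter):
        w = np.array([1.0 / (2.0 * np.linalg.norm(S - A) + eps) for A in graphs])
        w /= w.sum()  # normalize so view weights sum to 1
        S = sum(wi * A for wi, A in zip(w, graphs))
    return S, w
```

In this scheme, views whose graphs lie closer to the evolving consensus automatically receive larger weights, which is the general sense in which "self-weighted" fusion captures complementary information across domains.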
Results: (1) MVGSF exhibits outstanding performance on two public speech imagery datasets. (2) The consensus graph learned from multi-view features effectively characterizes the relationships among EEG samples in a progressive manner. (3) Task-related insights are explored, including the feature-importance-based identification of critical EEG channels and frequency bands in speech imagery decoding.
Comparison with existing methods: We compared MVGSF with single-view counterparts, other multi-view models, and state-of-the-art models. MVGSF achieved the highest accuracy, with average accuracies of 78.93% on the 2020IBCIC3 dataset and 53.85% on the KaraOne dataset.
Conclusions: MVGSF effectively integrates features from multiple domains to enhance decoding capability. Furthermore, through the learned feature importance, MVGSF helps identify the EEG spatial-frequency patterns underlying speech imagery decoding.
Journal introduction:
The Journal of Neuroscience Methods publishes papers that describe new methods developed specifically for neuroscience research conducted in invertebrates, vertebrates, or humans. Major methodological improvements or important refinements of established neuroscience methods are also considered for publication. The Journal's scope includes all aspects of contemporary neuroscience research, including anatomical, behavioural, biochemical, cellular, computational, molecular, invasive and non-invasive imaging, optogenetic, and physiological research investigations.