Brain–computer interfaces (BCIs) play a pivotal role in facilitating human–machine interaction and elucidating brain mechanisms, with motor imagery (MI) being one of the most widely studied paradigms owing to its broad application potential. However, inherent inter-subject variability in physiological structure often constrains the accuracy of MI decoding models. To address this challenge, we construct a streamlined graph convolutional network (GCN) and develop an MI decoding model, termed GCN-multiDA. Specifically, the model employs a GCN to capture spatial dependencies in EEG signals and incorporates a graph pruning strategy based on the task-frequency index (TF), region-of-interest index (ROI), and topological index (Topo) to streamline the network. This design preserves neurophysiological relevance while enhancing decoding accuracy and reducing model complexity. Furthermore, drawing inspiration from multi-source personalized domain adaptation, we introduce a domain bias assessment measurement (DBAM) to align cross-domain feature distributions and mitigate inter-domain discrepancies, together with a classifier alignment module that enforces prediction consistency across domains, thereby enabling robust MI classification. Comprehensive experiments on four datasets (BCI Competition IV 2a and 2b, OpenBMI, and PhysioNet) demonstrate that GCN-multiDA consistently outperforms baseline models, improving mean accuracy by 2.66%, 2.53%, 1.32%, and 3.55%, respectively, and achieving the best performance in terms of the Kappa and rRMSE metrics. Ablation and sensitivity analyses further confirm that the pruning algorithm contributes substantially to the performance improvements across all datasets.
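
To make the pruning-plus-GCN idea concrete, the following minimal PyTorch sketch illustrates a single graph convolution over the EEG channel graph preceded by an index-based pruning step. It is not the authors' implementation: the combined importance score standing in for the TF/ROI/Topo indices, the keep ratio, and the names `prune_adjacency` and `GCNLayer` are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): index-based adjacency pruning
# followed by one graph convolution over the EEG channel graph.
import torch
import torch.nn as nn


def prune_adjacency(adj: torch.Tensor, scores: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Zero out the weakest edges according to a per-edge importance score.

    adj:    (C, C) channel adjacency matrix
    scores: (C, C) combined importance (a stand-in for weighted TF/ROI/Topo terms)
    """
    k = max(1, int(keep_ratio * adj.numel()))
    threshold = torch.topk(scores.flatten(), k).values.min()
    mask = (scores >= threshold).float()
    return adj * mask


class GCNLayer(nn.Module):
    """One graph convolution over channels: H' = ReLU(A_norm @ H @ W)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, features), adj: (channels, channels)
        deg = adj.sum(dim=-1).clamp(min=1e-6)
        adj_norm = adj / deg.unsqueeze(-1)          # row-normalized adjacency
        return torch.relu(self.linear(adj_norm @ x))


if __name__ == "__main__":
    C, F = 22, 64                                   # e.g., 22 channels as in BCI Competition IV 2a
    adj = torch.rand(C, C)
    scores = torch.rand(C, C)                       # placeholder for the combined pruning index
    pruned = prune_adjacency(adj, scores, keep_ratio=0.5)
    layer = GCNLayer(F, 32)
    out = layer(torch.randn(8, C, F), pruned)
    print(out.shape)                                # torch.Size([8, 22, 32])
```

In this sketch, pruning is applied once to the static channel adjacency before training; how the indices are combined and whether pruning is repeated during training are design choices left to the full model description.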