The attention mechanism enables more efficient data processing by driving neural networks to focus on the pertinent information. The resulting gains in performance have driven its wide adoption, including for bio-signals. Multiple researchers have explored its use with electroencephalography (EEG) in many scenarios, including motor imagery. Despite the myriad of implementations, performance varies from one subject to another because the signals are fragile. In this paper, we extend our previous research (Riyad and Adib 2024) by suggesting a new implementation. The proposal employs the Convolutional Block Attention Module as a backbone, with a few modifications adjusted for the nature of EEG. It uses three levels of attention, applied individually to the channel, time, and electrode dimensions, known as the Channel Attention Module (CAM), Time Attention Module (TAM), and Electrode Attention Module (EAM). This compartmentalization allows the attention sub-blocks to be arranged in diverse configurations, each with a specific order that affects feature extraction. We also study them within two structures: one with early spatial filtering that uses the new block once, and one with late spatial filtering that applies the attention twice. For the experiments, we test on dataset 2b of the BCI Competition IV. The results show that placing the CAM first and feeding its output to the TAM and EAM boosts performance drastically. For optimal results, the new attention should be used once at the beginning of the network. It also yields a more balanced classification across classes compared with the other configurations.
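The abstract describes three attention modules applied separately along the channel, time, and electrode dimensions, with the best-performing order being CAM first, then TAM and EAM. The sketch below is a deliberately simplified, hypothetical illustration of that idea (the actual modules, following CBAM, would use learned pooling-plus-MLP weights rather than a bare sigmoid over means, and all shapes here are assumptions, not the paper's):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    # x: (channels, electrodes, time); one weight per feature channel
    w = sigmoid(x.mean(axis=(1, 2)))          # shape (channels,)
    return x * w[:, None, None]

def time_attention(x):
    # one weight per time step
    w = sigmoid(x.mean(axis=(0, 1)))          # shape (time,)
    return x * w[None, None, :]

def electrode_attention(x):
    # one weight per electrode
    w = sigmoid(x.mean(axis=(0, 2)))          # shape (electrodes,)
    return x * w[None, :, None]

def attention_block(x):
    # Order reported as best in the abstract: CAM -> TAM -> EAM
    return electrode_attention(time_attention(channel_attention(x)))

# Hypothetical input: 8 feature maps, 3 electrodes, 250 time samples
x = np.random.randn(8, 3, 250)
y = attention_block(x)
print(y.shape)
```

Because each module only rescales the tensor along one axis, the output keeps the input's shape, which is what lets the sub-blocks be stacked in any order, as the paper's configuration study exploits.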