Motor Imagery Classification based on RNNs with Spatiotemporal-Energy Feature Extraction
D-D Zhang, Jianlong Zheng, J. Fathi, M. Sun, F. Deligianni, G. Yang
DOI: 10.31256/ukras17.55
Abstract
With the recent advances in artificial intelligence and robotics, the Brain Computer Interface (BCI) has become a rapidly evolving research area. Motor imagery (MI) based BCIs have several applications in neurorehabilitation and the control of robotic prostheses because they offer the potential to seamlessly translate human intentions into machine language. However, to achieve adequate performance, these systems require extensive training with high-density EEG systems, even for two-class paradigms. Effectively extracting and translating EEG features is a key challenge in BCI development. This paper presents a method based on Recurrent Neural Networks (RNNs) with spatiotemporal-energy feature extraction that significantly improves on the performance of existing methods. We present cross-validation results based on EEG data collected with a 16-channel, dry-electrode system to demonstrate the practical use of our algorithm.

Introduction

Robotic control based on brainwave decoding can be used in a range of scenarios, including patients with locked-in syndrome, rehabilitation after a stroke, virtual reality games and so on. In these cases, subjects may not be able to move their limbs. For this reason, the development of MI-task-based BCIs is very important [1]. During an MI task, the subjects imagine moving a specific part of their body without initiating the actual movement. This process engages the brain networks responsible for motor control, similarly to actual movements. Decoding brain waves is challenging, since EEG signals have limited spatial resolution and a low signal-to-noise ratio. Furthermore, experimental conditions, such as the subjects' concentration and prior experience with BCI, can confound the results. Thus far, several approaches have been proposed to classify MI-task data, but their performance is limited even for two-class paradigms that involve left- and right-hand MI tasks [2].

EEG-based BCI normally involves noise filtering, feature extraction and classification. Brain signals are normally analysed in cue-triggered or stimulus-triggered time windows. Related methods include identifying changes in Event Potentials (EPs), shifts in slow cortical potentials, quantifying oscillatory EEG components and so on [3]. These types of BCI operate with predefined time windows. Furthermore, inter- and intra-subject variability cannot be overlooked when finding a suitable feature representation model.

Recently, Deep Neural Networks (DNNs) have emerged with promising results in several applications. Their adaptive nature allows them to automatically extract relevant features from data without extensive preprocessing or prior knowledge about the signals [4]. Convolutional Neural Networks (CNNs) have been used to classify EEG features by transforming the temporal domain into the spatial domain [5]. However, the CNN structure is static and inherently not suitable for processing temporal patterns. Furthermore, the trend in BCI is to reduce the number of channels and thus construct a sparse spatial representation of the signal, which impedes the effectiveness of CNNs. To deal with time-series data, recurrent neural networks (RNNs) based on Long Short-Term Memory (LSTM) seem to be a better choice, since they can preserve the temporal characteristics of the signal [6]. In this paper, we propose a novel approach to decoding multichannel raw EEG data based on RNNs and spatiotemporal features extracted from the EEG signal; an illustrative sketch of such features follows.
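As an illustration of the kind of spatiotemporal-energy features described above, the following minimal sketch turns one multichannel EEG epoch into a sequence of per-window, per-channel band energies that an RNN could consume. The paper does not specify the exact features; the sampling rate, the mu/beta band choices, the window parameters, and the helper names below are assumptions for illustration only.

```python
# Illustrative sketch (assumed parameters), not the authors' exact pipeline.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250                                   # assumed sampling rate in Hz
BANDS = {"mu": (8, 12), "beta": (13, 30)}  # assumed MI-relevant frequency bands


def bandpass(x, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter applied along the time axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)


def energy_sequence(epoch, win=0.5, step=0.25, fs=FS):
    """Convert one epoch (channels x samples) into a feature sequence of shape
    (n_windows, channels * n_bands), where each entry is the log energy of one
    channel in one band within one sliding window."""
    n_ch, n_samp = epoch.shape
    w, s = int(win * fs), int(step * fs)
    feats = []
    for start in range(0, n_samp - w + 1, s):
        window_feats = []
        for low, high in BANDS.values():
            seg = bandpass(epoch[:, start:start + w], low, high, fs)
            window_feats.append(np.log(np.mean(seg ** 2, axis=-1) + 1e-10))
        feats.append(np.concatenate(window_feats))
    return np.stack(feats)


# Example: a 16-channel, 4-second epoch of synthetic data
epoch = np.random.randn(16, 4 * FS)
seq = energy_sequence(epoch)
print(seq.shape)  # (n_windows, 16 channels * 2 bands)
```

Each row of the resulting sequence summarises the spatial distribution of oscillatory energy in one short time window, so the temporal ordering of the rows is preserved for a recurrent model.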
Appropriate spatiotemporal feature extraction could play an important role in improving the learning rate of these DNNs. The presented results are based on an EEG dataset acquired using a dry, 16-channel, active-electrode g.tec Nautilus system. Although wet, active electrodes are the gold standard in EEG signal acquisition, they require long preparation times and conductive gel to reduce the skin-electrode impedance, which makes subjects feel uncomfortable [7]. Dry electrodes make it easier to bring BCI systems from the laboratory to the patients' home, but with the challenge of decoding low-quality signals. Therefore, there is a need for the development of more advanced methods for feature extraction and classification.
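For concreteness, the sketch below shows how such feature sequences could be fed to an LSTM-based recurrent classifier for the two-class (left/right hand) MI paradigm. This is a minimal sketch assuming PyTorch; the layer sizes, optimiser, and training step are illustrative assumptions rather than the authors' actual architecture or training procedure.

```python
# Minimal LSTM classifier sketch for two-class MI (assumed architecture).
import torch
import torch.nn as nn


class MILSTMClassifier(nn.Module):
    def __init__(self, n_features=32, hidden=64, n_classes=2):
        super().__init__()
        # One LSTM layer over the window sequence, followed by a linear head
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)     # use the final hidden state
        return self.head(h_n[-1])      # logits: (batch, n_classes)


# Illustrative single training step on synthetic data
model = MILSTMClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 15, 32)             # 8 epochs, 15 windows, 32 energy features each
y = torch.randint(0, 2, (8,))          # left/right hand labels
loss = criterion(model(x), y)
loss.backward()
optimiser.step()
```

In practice, such a model would be evaluated with cross-validation over subjects' epochs, as the abstract describes for the 16-channel dry-electrode recordings.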