Authors: Naoki Yoshimura, Toshihisa Tanaka, Yuta Inaba
DOI: 10.1109/SSP53291.2023.10208053
Venue: 2023 IEEE Statistical Signal Processing Workshop (SSP)
Published: 2023-07-02
Estimation of Imagined Rhythms from EEG by Spatiotemporal Convolutional Neural Networks
Estimating imagined music from the electroencephalogram (EEG) is a very challenging problem. In this paper, we focus on beats (pulse trains of single notes), one of the basic components of music, and attempt to estimate imagined beats from the EEG. First, we presented two types of beat patterns and asked 17 experimental participants to imagine them. Next, the imagined beat pulses were estimated from the EEG recorded during the task using spatiotemporal convolutional neural network models. We trained a CNN and an EEGNet with two loss functions, binary cross-entropy and focal loss, and evaluated performance using the AUC and the F1-measure. Although the AUCs of the CNN and the EEGNet are comparable, the EEGNet has far fewer parameters than the CNN. Moreover, the choice of loss function has a visible effect on the F1-measure. Overall, the EEGNet model trained with the focal loss performed efficiently in imagined-beat identification.
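The focal loss mentioned in the abstract down-weights easy, well-classified examples relative to binary cross-entropy, which helps when the positive class (beat pulses) is rare. A minimal NumPy sketch of binary focal loss is given below; the `gamma` and `alpha` defaults are the commonly used values from Lin et al. (2017), not necessarily the settings used in this paper.

```python
import numpy as np

def focal_loss(y_true, p_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss, FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    With gamma = 0 and alpha = 0.5 this reduces to 0.5 * binary
    cross-entropy; larger gamma suppresses the loss on confident,
    correct predictions. Defaults follow Lin et al. (2017); the
    paper's actual hyperparameters may differ.
    """
    p_pred = np.clip(p_pred, eps, 1.0 - eps)          # avoid log(0)
    p_t = np.where(y_true == 1, p_pred, 1.0 - p_pred)  # prob. of true class
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

For example, a confident correct prediction (p_t near 1) contributes almost nothing once gamma > 0, so training gradients concentrate on the hard, misclassified samples.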