Identification of perceived sentences using deep neural networks in EEG
Carlos Valle, Carolina Mendez-Orellana, Christian Herff, Maria Rodriguez-Fernandez
Journal of Neural Engineering, published 2024-10-30. DOI: 10.1088/1741-2552/ad88a3
Abstract
Objective. Decoding speech from brain activity can enable communication for individuals with speech disorders. Deep neural networks (DNNs) have shown great potential for speech decoding applications. However, the limited availability of large datasets of neural recordings from speech-impaired subjects poses a challenge. Leveraging data from healthy participants can mitigate this limitation and expedite the development of speech neuroprostheses while minimizing the need for patient-specific training data. Approach. In this study, we collected a substantial dataset of recordings from 56 healthy participants using 64 EEG channels. Multiple neural networks were trained to classify perceived Spanish-language sentences using subject-independent, mixed-subjects, and fine-tuning approaches. The dataset has been made publicly available to foster further research in this area. Main results. Our results show a remarkable level of accuracy in distinguishing sentence identity across 30 classes, demonstrating the feasibility of training DNNs to decode the identity of perceived sentences from EEG. Notably, the subject-independent approach achieved accuracy comparable to the mixed-subjects approach, although with higher variability among subjects. The fine-tuning approach yielded even higher accuracy, indicating that adapting to individual subject characteristics further improves performance. This suggests that DNNs learn features of brain activity that generalize across individuals while remaining adaptable to participant-specific data. Furthermore, our analyses indicate that EEGNet and DeepConvNet perform comparably, and both outperform ShallowConvNet for sentence identity decoding.
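The three evaluation schemes named above (subject-independent, mixed-subjects, and fine-tuning) differ only in how trials are split between training and testing. A minimal sketch of that split logic, using synthetic trial labels and hypothetical counts rather than the authors' actual pipeline:

```python
import random

# Hypothetical setup: 56 subjects, each with 60 trials labeled by
# sentence id (30 classes). Names and counts are illustrative only.
random.seed(0)
subjects = {s: [(f"trial_{s}_{t}", t % 30) for t in range(60)] for s in range(56)}

# Subject-independent: hold out whole subjects; train only on the others.
test_subjects = set(range(50, 56))
train_si = [tr for s, trials in subjects.items() if s not in test_subjects for tr in trials]
test_si = [tr for s in test_subjects for tr in subjects[s]]

# Mixed-subjects: pool all trials and split at the trial level,
# so every subject contributes data to both train and test.
pool = [tr for trials in subjects.values() for tr in trials]
random.shuffle(pool)
split = int(0.8 * len(pool))
train_mx, test_mx = pool[:split], pool[split:]

# Fine-tuning: start from the subject-independent training set, then
# adapt the network with a small amount of each held-out subject's data.
finetune = {s: subjects[s][:10] for s in test_subjects}  # few labeled trials
eval_ft = {s: subjects[s][10:] for s in test_subjects}   # evaluate on the rest
```

The key property of the subject-independent split is that no trial from a test subject is ever seen in training, which is the scenario relevant to patients who cannot provide their own training data.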
Finally, our Grad-CAM visualization analysis identifies the key regions influencing the network's predictions, offering valuable insights into the neural processes underlying language perception and comprehension. Significance. These findings advance our understanding of EEG-based decoding of speech perception and hold promise for the development of speech neuroprostheses, particularly in scenarios where subjects cannot provide their own training data.
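Grad-CAM, as used for the visualization analysis, weights each convolutional feature map by the spatially averaged gradient of the target-class score and keeps only the positive contributions. A minimal numpy sketch of that computation, with synthetic feature maps and gradients standing in for a real network layer (the actual architectures and layer shapes in the study differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: K feature maps over T time points from a conv layer,
# and the gradient of the target-class score w.r.t. those maps.
K, T = 8, 100
feature_maps = rng.standard_normal((K, T))
grads = rng.standard_normal((K, T))

# Grad-CAM: alpha_k is the mean gradient over map k; a ReLU of the
# alpha-weighted sum of feature maps gives the class-discriminative saliency.
alphas = grads.mean(axis=1)                 # one importance weight per map
cam = np.maximum(alphas @ feature_maps, 0)  # shape (T,): relevance per time point

# Normalize to [0, 1] for display.
if cam.max() > 0:
    cam = cam / cam.max()
```

Applied to EEG classifiers, the resulting saliency over channels and time points indicates which parts of the input the network relied on, which is what allows the visualization to be related back to neural processes of language perception.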