{"title":"Using Ear-EEG to Decode Auditory Attention in Multiple-speaker Environment","authors":"Haolin Zhu, Yujie Yan, Xiran Xu, Zhongshu Ge, Pei Tian, Xihong Wu, Jing Chen","doi":"arxiv-2409.08710","DOIUrl":null,"url":null,"abstract":"Auditory Attention Decoding (AAD) can help to determine the identity of the\nattended speaker during an auditory selective attention task, by analyzing and\nprocessing measurements of electroencephalography (EEG) data. Most studies on\nAAD are based on scalp-EEG signals in two-speaker scenarios, which are far from\nreal application. Ear-EEG has recently gained significant attention due to its\nmotion tolerance and invisibility during data acquisition, making it easy to\nincorporate with other devices for applications. In this work, participants\nselectively attended to one of the four spatially separated speakers' speech in\nan anechoic room. The EEG data were concurrently collected from a scalp-EEG\nsystem and an ear-EEG system (cEEGrids). Temporal response functions (TRFs) and\nstimulus reconstruction (SR) were utilized using ear-EEG data. Results showed\nthat the attended speech TRFs were stronger than each unattended speech and\ndecoding accuracy was 41.3\\% in the 60s (chance level of 25\\%). To further\ninvestigate the impact of electrode placement and quantity, SR was utilized in\nboth scalp-EEG and ear-EEG, revealing that while the number of electrodes had a\nminor effect, their positioning had a significant influence on the decoding\naccuracy. One kind of auditory spatial attention detection (ASAD) method,\nSTAnet, was testified with this ear-EEG database, resulting in 93.1% in\n1-second decoding window. The implementation code and database for our work are\navailable on GitHub: https://github.com/zhl486/Ear_EEG_code.git and Zenodo:\nhttps://zenodo.org/records/10803261.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08710","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Auditory Attention Decoding (AAD) can help determine the identity of the attended speaker during an auditory selective attention task by analyzing and processing electroencephalography (EEG) measurements. Most studies on AAD are based on scalp-EEG signals in two-speaker scenarios, which are far from real-world applications. Ear-EEG has recently gained significant attention due to its motion tolerance and invisibility during data acquisition, making it easy to combine with other devices in practical applications. In this work, participants selectively attended to the speech of one of four spatially separated speakers in an anechoic room. EEG data were concurrently collected from a scalp-EEG system and an ear-EEG system (cEEGrids). Temporal response functions (TRFs) and stimulus reconstruction (SR) were applied to the ear-EEG data. Results showed that the TRFs to the attended speech were stronger than those to each unattended speech, and decoding accuracy was 41.3% with a 60-s decoding window (chance level: 25%).
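For readers unfamiliar with SR-based decoding, the sketch below (Python with NumPy) illustrates a generic backward-model decoder of the kind described above: lagged EEG is mapped to a speech envelope by ridge regression, and the attended speaker is taken to be the one whose envelope correlates best with the reconstruction. This is not the authors' implementation; the lag range, regularization strength, and four-candidate setup are illustrative assumptions.

import numpy as np

def lagged_matrix(eeg, lags):
    # eeg: (n_channels, n_samples); lags: iterable of integer sample lags.
    # Stacks time-lagged copies of each channel into a (n_samples, n_channels*len(lags)) design matrix.
    n_ch, n_t = eeg.shape
    X = np.zeros((n_t, n_ch * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=1)
        if lag > 0:
            shifted[:, :lag] = 0      # zero out samples wrapped from the end
        elif lag < 0:
            shifted[:, lag:] = 0      # zero out samples wrapped from the start
        X[:, i * n_ch:(i + 1) * n_ch] = shifted.T
    return X

def train_decoder(eeg, attended_envelope, lags, lam=1e3):
    # Ridge-regression backward model: solve (X'X + lam*I) w = X'y.
    # lam is a placeholder; in practice it would be chosen by cross-validation.
    X = lagged_matrix(eeg, lags)
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ attended_envelope)

def decode_attention(eeg, candidate_envelopes, weights, lags):
    # Reconstruct the envelope from EEG and pick the candidate speaker
    # (e.g., one of four) with the highest Pearson correlation.
    rec = lagged_matrix(eeg, lags) @ weights
    corrs = [np.corrcoef(rec, env)[0, 1] for env in candidate_envelopes]
    return int(np.argmax(corrs)), corrs

In such a scheme, chance level with four candidate speakers is 25%, and accuracy is typically reported per decoding window (e.g., 60 s above).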
To further investigate the impact of electrode placement and quantity, SR was applied to both scalp-EEG and ear-EEG, revealing that while the number of electrodes had a minor effect, their positioning had a significant influence on decoding accuracy. One auditory spatial attention detection (ASAD) method, STAnet, was tested with this ear-EEG database, achieving 93.1% accuracy with a 1-second decoding window. The implementation code and database for our work are available on GitHub: https://github.com/zhl486/Ear_EEG_code.git and Zenodo: https://zenodo.org/records/10803261.