Language Model-Guided Classifier Adaptation for Brain-Computer Interfaces for Communication
Xinlin J Chen, Leslie M Collins, Boyla O Mainsah
Conference proceedings. IEEE International Conference on Systems, Man, and Cybernetics, vol. 2022, pp. 1642-1647, published 2022-10-01
DOI: 10.1109/smc53654.2022.9945561
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9910722/pdf/nihms-1862780.pdf
Citations: 0
Abstract
Brain-computer interfaces (BCIs), such as the P300 speller, can provide a means of communication for individuals with severe neuromuscular limitations. BCIs interpret electroencephalography (EEG) signals in order to translate embedded information about a user's intent into executable commands to control external devices. However, EEG signals are inherently noisy and nonstationary, posing a challenge to extended BCI use. Conventionally, a BCI classifier is trained via supervised learning in an offline calibration session; once trained, the classifier is deployed for online use and is not updated. As the statistics of a user's EEG data change over time, the performance of a static classifier may decline with extended use. It is therefore desirable to automatically adapt the classifier to current data statistics without requiring offline recalibration. In an existing semi-supervised learning approach, the classifier is trained on labeled EEG data and is then updated using incoming unlabeled EEG data and classifier-predicted labels. To reduce the risk of learning from incorrect predictions, a threshold is imposed to exclude unlabeled data with low-confidence label predictions from the expanded training set when retraining the adaptive classifier. In this work, we propose the use of a language model for spelling error correction and disambiguation to provide information about label correctness during semi-supervised learning. Results from simulations with multi-session P300 speller user EEG data demonstrate that our language-guided semi-supervised approach significantly improves spelling accuracy relative to conventional BCI calibration and threshold-based semi-supervised learning.
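The threshold-based semi-supervised step described above (retrain on labeled data plus incoming unlabeled data whose classifier-predicted labels exceed a confidence threshold) can be sketched as follows. This is a minimal illustration under assumed names, not the authors' implementation: the toy nearest-centroid classifier, its softmax-over-distances confidence score, and the `conf_threshold` parameter are all hypothetical stand-ins for the paper's P300 EEG classifier and confidence measure.

```python
import numpy as np

class CentroidClassifier:
    """Toy stand-in for a P300 EEG classifier: predicts the nearest class centroid."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict_proba(self, X):
        # Softmax over negative centroid distances as a crude confidence score.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        e = np.exp(-d)
        return e / e.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.classes_[self.predict_proba(X).argmax(axis=1)]


def semi_supervised_update(clf, X_lab, y_lab, X_unlab, conf_threshold=0.8):
    """Retrain on labeled data plus confidently self-labeled unlabeled data.

    Unlabeled samples whose predicted-label confidence falls below
    `conf_threshold` are excluded from the expanded training set, reducing
    the risk of learning from incorrect predictions.
    """
    proba = clf.predict_proba(X_unlab)
    conf = proba.max(axis=1)
    pseudo_labels = clf.classes_[proba.argmax(axis=1)]
    keep = conf >= conf_threshold  # confidence gate on pseudo-labels
    X_new = np.vstack([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, pseudo_labels[keep]])
    return clf.fit(X_new, y_new)
```

The paper's contribution replaces this purely classifier-internal confidence gate with label-correctness information from a language model (spelling error correction and disambiguation); the gating structure of the update loop stays the same.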