{"title":"CCSUMSP: A cross-subject Chinese speech decoding framework with unified topology and multi-modal semantic pre-training","authors":"Shuai Huang, Yongxiong Wang, Huan Luo","doi":"10.1016/j.inffus.2025.103022","DOIUrl":null,"url":null,"abstract":"<div><div>Decoding speech from brain signals has been a long-standing challenge in neuroscience and brain–computer interface research. While significant progress has been made in English speech decoding, cross-subject Chinese speech decoding remains understudied, despite its potential applications and unique linguistic characteristics. Chinese, with its logographic writing system and tonal nature, presents unique challenges for neural decoding, including complex visual processing of characters and the need to distinguish subtle tonal differences that can alter word meanings. In this paper, we propose Cross-Subject Chinese Speech Decoding Framework with Unified Topology and Multi-Modal Semantic Pre-training(CCSUMSP), a novel framework for cross-subject Chinese speech decoding from electroencephalogram (EEG) signals. There are three key innovations in our approach: (1) We develop a unified topological representation(UTR) that can accommodate various EEG montages, enabling better generalization across subjects and recording setups; (2) We design a multi-modal semantic pre-training strategy by using both EEG and eye-tracking data to capture richer linguistic information; (3) We introduce a dynamic multi-view decoder(DMD) where the weights of different brain regions can be adaptively adjusted based on input signals. In contrast to the state-of-the-art methods, this article presents significant improvements in cross-subject decoding accuracy and generalization by evaluating our framework on the ChineseEEG dataset. Moreover, through our work, we advance the field of EEG-based speech decoding and provide insights into the neural mechanisms underlying Chinese language processing. Finally, the framework we proposed is potentially employed in assistive communication technologies and neural rehabilitation for Chinese speakers.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"119 ","pages":"Article 103022"},"PeriodicalIF":14.7000,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525000958","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Decoding speech from brain signals has been a long-standing challenge in neuroscience and brain–computer interface research. While significant progress has been made in English speech decoding, cross-subject Chinese speech decoding remains understudied, despite its potential applications and unique linguistic characteristics. Chinese, with its logographic writing system and tonal nature, presents unique challenges for neural decoding, including complex visual processing of characters and the need to distinguish subtle tonal differences that can alter word meanings. In this paper, we propose the Cross-Subject Chinese Speech Decoding Framework with Unified Topology and Multi-Modal Semantic Pre-training (CCSUMSP), a novel framework for cross-subject Chinese speech decoding from electroencephalogram (EEG) signals. Our approach introduces three key innovations: (1) We develop a unified topological representation (UTR) that can accommodate various EEG montages, enabling better generalization across subjects and recording setups; (2) We design a multi-modal semantic pre-training strategy that uses both EEG and eye-tracking data to capture richer linguistic information; (3) We introduce a dynamic multi-view decoder (DMD) in which the weights of different brain regions are adaptively adjusted based on the input signals. Evaluated on the ChineseEEG dataset, our framework achieves significant improvements in cross-subject decoding accuracy and generalization over state-of-the-art methods. Our work advances the field of EEG-based speech decoding and provides insights into the neural mechanisms underlying Chinese language processing. Finally, the proposed framework has potential applications in assistive communication technologies and neural rehabilitation for Chinese speakers.
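The abstract's third innovation, adaptive weighting of brain-region views, can be illustrated with a small gating mechanism: each region's pooled EEG features are scored, the scores are normalized with a softmax, and the weighted views are fused before classification. The following PyTorch-style snippet is only a minimal, hypothetical sketch of that general idea; the class name, layer sizes, and tensor shapes are assumptions for illustration and do not reflect the authors' actual CCSUMSP implementation.

```python
# Hypothetical sketch of a dynamic multi-view decoder with adaptive brain-region
# weighting. All names, shapes, and layer sizes are illustrative assumptions,
# not the architecture described in the paper.
import torch
import torch.nn as nn


class DynamicMultiViewDecoder(nn.Module):
    """Adaptively weights per-region EEG feature views for each input sample."""

    def __init__(self, n_regions: int, feat_dim: int, n_classes: int):
        super().__init__()
        # Gating network: produces one relevance score per region view.
        self.gate = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 2),
            nn.ReLU(),
            nn.Linear(feat_dim // 2, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # region_feats: (batch, n_regions, feat_dim), one feature vector per brain region.
        scores = self.gate(region_feats)             # (batch, n_regions, 1)
        weights = torch.softmax(scores, dim=1)       # input-dependent region weights
        fused = (weights * region_feats).sum(dim=1)  # (batch, feat_dim)
        return self.classifier(fused)


if __name__ == "__main__":
    # Toy example: 8 brain-region views, 64-dim features, 10 target classes.
    decoder = DynamicMultiViewDecoder(n_regions=8, feat_dim=64, n_classes=10)
    x = torch.randn(4, 8, 64)                        # batch of 4 samples
    print(decoder(x).shape)                          # torch.Size([4, 10])
```

Because the softmax weights are computed from each input, the relative contribution of each brain region can differ from trial to trial, which is the behavior the abstract attributes to the DMD.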
About the journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses as well as those demonstrating applications to real-world problems are welcome.