CCSUMSP: A cross-subject Chinese speech decoding framework with unified topology and multi-modal semantic pre-training

Information Fusion · IF 15.5 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Region 1, Computer Science · Pub Date: 2025-07-01 · Epub Date: 2025-02-14 · DOI: 10.1016/j.inffus.2025.103022
Shuai Huang, Yongxiong Wang, Huan Luo
Information Fusion, Volume 119, July 2025, Article 103022. Citation count: 0.

Abstract

Decoding speech from brain signals has been a long-standing challenge in neuroscience and brain–computer interface research. While significant progress has been made in English speech decoding, cross-subject Chinese speech decoding remains understudied despite its potential applications and unique linguistic characteristics. Chinese, with its logographic writing system and tonal nature, poses distinctive challenges for neural decoding, including the complex visual processing of characters and the need to distinguish subtle tonal differences that can alter word meanings. In this paper, we propose the Cross-Subject Chinese Speech Decoding Framework with Unified Topology and Multi-Modal Semantic Pre-training (CCSUMSP), a novel framework for cross-subject Chinese speech decoding from electroencephalogram (EEG) signals. Our approach makes three key innovations: (1) we develop a unified topological representation (UTR) that accommodates various EEG montages, enabling better generalization across subjects and recording setups; (2) we design a multi-modal semantic pre-training strategy that uses both EEG and eye-tracking data to capture richer linguistic information; and (3) we introduce a dynamic multi-view decoder (DMD) in which the weights of different brain regions are adaptively adjusted based on the input signals. Evaluated on the ChineseEEG dataset, our framework achieves significant improvements in cross-subject decoding accuracy and generalization over state-of-the-art methods. Our work advances the field of EEG-based speech decoding and provides insights into the neural mechanisms underlying Chinese language processing. Finally, the proposed framework has potential applications in assistive communication technologies and neural rehabilitation for Chinese speakers.
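The abstract describes the unified topological representation (UTR) only at a high level: a representation that accommodates heterogeneous EEG montages so that data from different subjects and recording setups share one layout. As a loose illustrative sketch of that general idea (not the authors' actual method), one can project signals from any montage onto a fixed template layout via inverse-distance-weighted interpolation; the function name, the template coordinates, and the k-nearest-electrode scheme below are all assumptions for illustration.

```python
import numpy as np

def unify_montage(signals, channel_pos, template_pos, k=3):
    """Project EEG recorded with an arbitrary montage onto a fixed
    template layout using inverse-distance-weighted interpolation.

    signals      : (n_channels, n_samples) raw EEG
    channel_pos  : (n_channels, 2) 2-D electrode coordinates
    template_pos : (n_template, 2) coordinates of the shared template
    Returns      : (n_template, n_samples) signals on the unified layout
    """
    unified = np.zeros((len(template_pos), signals.shape[1]))
    for i, t in enumerate(template_pos):
        d = np.linalg.norm(channel_pos - t, axis=1)
        idx = np.argsort(d)[:k]        # k nearest recorded electrodes
        w = 1.0 / (d[idx] + 1e-6)      # inverse-distance weights
        w /= w.sum()
        unified[i] = w @ signals[idx]  # weighted mix of nearby channels
    return unified

# Two subjects with different montages map onto the same 4-node template,
# yielding same-shaped inputs for a shared cross-subject decoder.
rng = np.random.default_rng(0)
template = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sig_a, pos_a = rng.standard_normal((8, 100)), rng.random((8, 2))  # 8 channels
sig_b, pos_b = rng.standard_normal((5, 100)), rng.random((5, 2))  # 5 channels
ua = unify_montage(sig_a, pos_a, template)
ub = unify_montage(sig_b, pos_b, template)
assert ua.shape == ub.shape == (4, 100)
```

The point of the sketch is only the shape contract: regardless of how many electrodes a subject's montage has, the output lives on one shared topology, which is what makes cross-subject training and the region-weighting decoder possible downstream.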
Source journal: Information Fusion (Engineering & Technology: Computer Science, Theory & Methods)
CiteScore: 33.20 · Self-citation rate: 4.30% · Annual articles: 161 · Review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.