Authors: Yayun Wei, Lei Cao, Yilin Dong, Tianyu Liu
DOI: 10.1016/j.dsp.2024.104856
Journal: Digital Signal Processing, vol. 156, Article 104856
Published: 2024-11-07 (Journal Article); Impact Factor 2.9; JCR Q2, Engineering, Electrical & Electronic
MCNN-CMCA: A multiscale convolutional neural networks with cross-modal channel attention for physiological signal-based mental state recognition
Human mental state recognition (MSR) has significant implications for human-machine interaction. Although mental state recognition models based on single-modality signals, such as electroencephalogram (EEG) or peripheral physiological signals (PPS), have achieved encouraging progress, methods leveraging multimodal physiological signals remain underexplored. In this study, we present MCNN-CMCA, a generic model that employs multiscale convolutional neural networks (CNNs) with cross-modal channel attention to realize physiological signal-based MSR. Specifically, we first design an innovative cross-modal channel attention mechanism that adaptively adjusts the weight of each signal channel, effectively learning both intra-modality and inter-modality correlations and expanding channel information into the depth dimension. Additionally, the study utilizes multiscale temporal CNNs to obtain short-term and long-term time-frequency features across different modalities. Finally, the multimodal fusion module integrates the representations of all physiological signals, and the classification layer implements sparse connections by setting mask weights to 0. We evaluate the proposed method on the SEED-VIG, DEAP, and self-made datasets, achieving superior results compared to existing state-of-the-art methods. Furthermore, we conduct ablation studies to demonstrate the effectiveness of each component of MCNN-CMCA and show that multimodal physiological signals outperform single-modality signals.
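The two architectural ideas named in the abstract — channel attention that reweights concatenated modality channels, and a classification layer made sparse by a 0-valued weight mask — can be illustrated with a minimal NumPy sketch. This is a hedged illustration only: the bottleneck shapes, the sigmoid gating, and the helper names (`cross_modal_channel_attention`, `masked_linear`) are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def cross_modal_channel_attention(x, w1, w2):
    """Illustrative channel attention over concatenated modality channels.

    Squeeze each channel by global average pooling over time, pass the
    summary through a small bottleneck, and rescale every channel by a
    learned gate in (0, 1).
    x:  (channels, time) array of concatenated EEG + peripheral channels.
    w1, w2: bottleneck weight matrices (hypothetical, for illustration).
    """
    squeeze = x.mean(axis=1)                         # (channels,) per-channel summary
    hidden = np.maximum(0.0, w1 @ squeeze)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gates in (0, 1)
    return x * gates[:, None]                        # reweight each channel

def masked_linear(features, w, mask):
    """Sparse classifier connections: a fixed 0/1 mask zeroes out
    selected weights, as the abstract describes for the classification
    layer (mask weights set to 0)."""
    return (w * mask) @ features

rng = np.random.default_rng(0)
c, t = 6, 128                                        # e.g. 4 EEG + 2 peripheral channels
x = rng.standard_normal((c, t))
w1 = rng.standard_normal((3, c))
w2 = rng.standard_normal((c, 3))
attended = cross_modal_channel_attention(x, w1, w2)
print(attended.shape)                                # (6, 128)
```

In this reading, the sigmoid gates play the role of the adaptive per-channel weights, and because the gates are computed from all channels jointly, they can capture both intra-modality and inter-modality dependence.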
Journal introduction:
Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be the most innovative. The Journal invites top-quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal.
The journal has a special emphasis on statistical signal processing methodology such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing, including seismic signal processing
• chemoinformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy