Francesca M Barbero, Siddharth Talwar, Roberta P Calce, Bruno Rossion, Olivier Collignon
{"title":"互调频率揭示了整合面部和声音恐惧表情的共同神经组件。","authors":"Francesca M Barbero, Siddharth Talwar, Roberta P Calce, Bruno Rossion, Olivier Collignon","doi":"10.1016/j.cortex.2024.12.008","DOIUrl":null,"url":null,"abstract":"<p><p>Effective social communication depends on the integration of emotional expressions coming from the face and the voice. Although there are consistent reports on how seeing and hearing emotion expressions can be automatically integrated, direct signatures of multisensory integration in the human brain remain elusive. Here we implemented a multi-input electroencephalographic (EEG) frequency tagging paradigm to investigate neural populations integrating facial and vocal fearful expressions. High-density EEG was acquired in participants attending to dynamic fearful facial and vocal expressions tagged at different frequencies (f<sub>vis</sub>, f<sub>aud</sub>). Beyond EEG activity at the specific unimodal facial and vocal emotion presentation frequencies, activity at intermodulation frequencies (IM) arising at the sums and differences of the harmonics of the stimulation frequencies (mf<sub>vis</sub> ± nf<sub>aud</sub>) were observed, suggesting non-linear integration of the visual and auditory emotion information into a unified representation. These IM provide evidence that common neural populations integrate signal from the two sensory streams. Importantly, IMs were absent in a control condition with mismatched facial and vocal emotion expressions. Our results provide direct evidence from non-invasive recordings in humans for common neural populations that integrate fearful facial and vocal emotional expressions.</p>","PeriodicalId":10758,"journal":{"name":"Cortex","volume":"184 ","pages":"19-31"},"PeriodicalIF":3.2000,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Intermodulation frequencies reveal common neural assemblies integrating facial and vocal fearful expressions.\",\"authors\":\"Francesca M Barbero, Siddharth Talwar, Roberta P Calce, Bruno Rossion, Olivier Collignon\",\"doi\":\"10.1016/j.cortex.2024.12.008\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Effective social communication depends on the integration of emotional expressions coming from the face and the voice. Although there are consistent reports on how seeing and hearing emotion expressions can be automatically integrated, direct signatures of multisensory integration in the human brain remain elusive. Here we implemented a multi-input electroencephalographic (EEG) frequency tagging paradigm to investigate neural populations integrating facial and vocal fearful expressions. High-density EEG was acquired in participants attending to dynamic fearful facial and vocal expressions tagged at different frequencies (f<sub>vis</sub>, f<sub>aud</sub>). Beyond EEG activity at the specific unimodal facial and vocal emotion presentation frequencies, activity at intermodulation frequencies (IM) arising at the sums and differences of the harmonics of the stimulation frequencies (mf<sub>vis</sub> ± nf<sub>aud</sub>) were observed, suggesting non-linear integration of the visual and auditory emotion information into a unified representation. These IM provide evidence that common neural populations integrate signal from the two sensory streams. Importantly, IMs were absent in a control condition with mismatched facial and vocal emotion expressions. 
Our results provide direct evidence from non-invasive recordings in humans for common neural populations that integrate fearful facial and vocal emotional expressions.</p>\",\"PeriodicalId\":10758,\"journal\":{\"name\":\"Cortex\",\"volume\":\"184 \",\"pages\":\"19-31\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-12-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cortex\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1016/j.cortex.2024.12.008\",\"RegionNum\":2,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"BEHAVIORAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cortex","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1016/j.cortex.2024.12.008","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
Citations: 0
Abstract
Intermodulation frequencies reveal common neural assemblies integrating facial and vocal fearful expressions.
Effective social communication depends on the integration of emotional expressions coming from the face and the voice. Although there are consistent reports that seen and heard emotion expressions can be automatically integrated, direct signatures of multisensory integration in the human brain remain elusive. Here we implemented a multi-input electroencephalographic (EEG) frequency-tagging paradigm to investigate neural populations integrating facial and vocal fearful expressions. High-density EEG was acquired while participants attended to dynamic fearful facial and vocal expressions tagged at different frequencies (fvis, faud). Beyond EEG activity at the specific unimodal facial and vocal emotion presentation frequencies, activity at intermodulation frequencies (IMs) arising at the sums and differences of the harmonics of the stimulation frequencies (mfvis ± nfaud) was observed, suggesting non-linear integration of the visual and auditory emotion information into a unified representation. These IMs provide evidence that common neural populations integrate signals from the two sensory streams. Importantly, IMs were absent in a control condition with mismatched facial and vocal emotion expressions. Our results provide direct evidence from non-invasive recordings in humans for common neural populations that integrate fearful facial and vocal emotional expressions.
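As a minimal illustration of the intermodulation logic described in the abstract, the sketch below enumerates candidate IM frequencies mfvis ± nfaud (excluding the unimodal harmonics themselves), at which non-linear audio-visual integration would be expected to produce spectral peaks. The tagging frequencies and parameters used here are hypothetical placeholders, not the values used in the study.

```python
# Sketch: enumerate candidate intermodulation (IM) frequencies for a
# two-input frequency-tagging paradigm. Assumed, illustrative values only.

f_vis = 1.2  # hypothetical visual (facial expression) tagging frequency, Hz
f_aud = 1.5  # hypothetical auditory (vocal expression) tagging frequency, Hz

def intermodulation_frequencies(f1, f2, max_order=3, fmax=20.0):
    """Return candidate IM frequencies m*f1 +/- n*f2 (m, n >= 1),
    excluding frequencies that coincide with harmonics of f1 or f2."""
    harmonics = {round(k * f, 6) for f in (f1, f2) for k in range(1, 20)}
    ims = set()
    for m in range(1, max_order + 1):
        for n in range(1, max_order + 1):
            for f_im in (m * f1 + n * f2, abs(m * f1 - n * f2)):
                f_im = round(f_im, 6)
                if 0 < f_im <= fmax and f_im not in harmonics:
                    ims.add(f_im)
    return sorted(ims)

print(intermodulation_frequencies(f_vis, f_aud))
# e.g. [0.3, 0.6, 0.9, 1.8, 2.1, 2.7, ...] Hz for the assumed values above;
# EEG responses at such frequencies cannot arise from either input alone,
# which is why they index a shared, non-linear integration stage.
```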
Journal description:
CORTEX is an international journal devoted to the study of cognition and of the relationship between the nervous system and mental processes, particularly as these are reflected in the behaviour of patients with acquired brain lesions, normal volunteers, children with typical and atypical development, and in the activation of brain regions and systems as recorded by functional neuroimaging techniques. It was founded in 1964 by Ennio De Renzi.