Intermodulation frequencies reveal common neural assemblies integrating facial and vocal fearful expressions
Francesca M Barbero, Siddharth Talwar, Roberta P Calce, Bruno Rossion, Olivier Collignon
Cortex, vol. 184, pp. 19-31. Published 2024-12-26. DOI: 10.1016/j.cortex.2024.12.008
Citations: 0
Abstract
Effective social communication depends on the integration of emotional expressions coming from the face and the voice. Although there are consistent reports on how seeing and hearing emotion expressions can be automatically integrated, direct signatures of multisensory integration in the human brain remain elusive. Here we implemented a multi-input electroencephalographic (EEG) frequency tagging paradigm to investigate neural populations integrating facial and vocal fearful expressions. High-density EEG was acquired in participants attending to dynamic fearful facial and vocal expressions tagged at different frequencies (f_vis, f_aud). Beyond EEG activity at the specific unimodal facial and vocal emotion presentation frequencies, activity at intermodulation frequencies (IMs) arising at the sums and differences of the harmonics of the stimulation frequencies (m·f_vis ± n·f_aud) was observed, suggesting non-linear integration of the visual and auditory emotion information into a unified representation. These IMs provide evidence that common neural populations integrate signals from the two sensory streams. Importantly, IMs were absent in a control condition with mismatched facial and vocal emotion expressions. Our results provide direct evidence from non-invasive recordings in humans for common neural populations that integrate fearful facial and vocal emotional expressions.
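The logic of the intermodulation analysis described above can be sketched numerically: given two tagging frequencies, the candidate IM frequencies are the sums and absolute differences m·f_vis ± n·f_aud of their harmonics, excluding the harmonics themselves (which reflect unimodal responses). A minimal sketch, assuming illustrative tagging rates of 1.2 Hz and 1.0 Hz (not the values used in the study):

```python
def intermodulation_freqs(f_vis, f_aud, max_order=2):
    """Candidate IM frequencies m*f_vis +/- n*f_aud for 1 <= m, n <= max_order,
    excluding 0 Hz and the harmonics of either tagging frequency."""
    ims = set()
    for m in range(1, max_order + 1):
        for n in range(1, max_order + 1):
            ims.add(round(m * f_vis + n * f_aud, 6))
            diff = abs(m * f_vis - n * f_aud)
            if diff > 0:  # drop the DC component
                ims.add(round(diff, 6))
    # Harmonics of either input index unimodal responses, not integration
    harmonics = {round(k * f, 6)
                 for f in (f_vis, f_aud)
                 for k in range(1, 2 * max_order + 1)}
    return sorted(ims - harmonics)

# With the assumed rates, second-order IMs land at e.g. 0.2 Hz (f_vis - f_aud)
# and 2.2 Hz (f_vis + f_aud), well separated from the tagging harmonics.
print(intermodulation_freqs(1.2, 1.0))
```

This separation in the frequency domain is what lets the paradigm attribute spectral peaks at IM frequencies specifically to neural populations receiving both inputs.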
About the Journal
CORTEX is an international journal devoted to the study of cognition and of the relationship between the nervous system and mental processes, particularly as these are reflected in the behaviour of patients with acquired brain lesions, normal volunteers, children with typical and atypical development, and in the activation of brain regions and systems as recorded by functional neuroimaging techniques. It was founded in 1964 by Ennio De Renzi.