Intermodulation frequencies reveal common neural assemblies integrating facial and vocal fearful expressions.

Cortex · IF 3.2 · CAS Tier 2 (Psychology) · Q1 (Behavioral Sciences) · Pub Date: 2024-12-26 · DOI: 10.1016/j.cortex.2024.12.008
Francesca M Barbero, Siddharth Talwar, Roberta P Calce, Bruno Rossion, Olivier Collignon
{"title":"Intermodulation frequencies reveal common neural assemblies integrating facial and vocal fearful expressions.","authors":"Francesca M Barbero, Siddharth Talwar, Roberta P Calce, Bruno Rossion, Olivier Collignon","doi":"10.1016/j.cortex.2024.12.008","DOIUrl":null,"url":null,"abstract":"<p><p>Effective social communication depends on the integration of emotional expressions coming from the face and the voice. Although there are consistent reports on how seeing and hearing emotion expressions can be automatically integrated, direct signatures of multisensory integration in the human brain remain elusive. Here we implemented a multi-input electroencephalographic (EEG) frequency tagging paradigm to investigate neural populations integrating facial and vocal fearful expressions. High-density EEG was acquired in participants attending to dynamic fearful facial and vocal expressions tagged at different frequencies (f<sub>vis</sub>, f<sub>aud</sub>). Beyond EEG activity at the specific unimodal facial and vocal emotion presentation frequencies, activity at intermodulation frequencies (IM) arising at the sums and differences of the harmonics of the stimulation frequencies (mf<sub>vis</sub> ± nf<sub>aud</sub>) were observed, suggesting non-linear integration of the visual and auditory emotion information into a unified representation. These IM provide evidence that common neural populations integrate signal from the two sensory streams. Importantly, IMs were absent in a control condition with mismatched facial and vocal emotion expressions. Our results provide direct evidence from non-invasive recordings in humans for common neural populations that integrate fearful facial and vocal emotional expressions.</p>","PeriodicalId":10758,"journal":{"name":"Cortex","volume":"184 ","pages":"19-31"},"PeriodicalIF":3.2000,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cortex","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1016/j.cortex.2024.12.008","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
引用次数: 0

Abstract

Effective social communication depends on the integration of emotional expressions coming from the face and the voice. Although there are consistent reports on how seen and heard emotion expressions can be automatically integrated, direct signatures of multisensory integration in the human brain remain elusive. Here we implemented a multi-input electroencephalographic (EEG) frequency-tagging paradigm to investigate neural populations integrating facial and vocal fearful expressions. High-density EEG was acquired while participants attended to dynamic fearful facial and vocal expressions tagged at different frequencies (f_vis, f_aud). Beyond EEG activity at the specific unimodal facial and vocal emotion presentation frequencies, activity at intermodulation frequencies (IMs) arising at the sums and differences of the harmonics of the stimulation frequencies (m·f_vis ± n·f_aud) was observed, suggesting non-linear integration of the visual and auditory emotion information into a unified representation. These IMs provide evidence that common neural populations integrate signals from the two sensory streams. Importantly, IMs were absent in a control condition with mismatched facial and vocal emotion expressions. Our results provide direct evidence from non-invasive recordings in humans for common neural populations that integrate fearful facial and vocal emotional expressions.
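The intermodulation logic can be made concrete with a short sketch. This is a minimal illustration, not the authors' analysis code: the tagging frequencies used below (1.2 Hz for the face stream, 0.8 Hz for the voice stream) are hypothetical placeholders, and the function name is invented for the example.

```python
# Minimal sketch: enumerate intermodulation (IM) frequencies m*f_vis +/- n*f_aud.
# The frequencies 1.2 Hz and 0.8 Hz below are hypothetical placeholders, not
# the values used in the study.

def intermodulation_frequencies(f_vis, f_aud, max_order=2, f_max=20.0):
    """Return sorted IM frequencies m*f_vis +/- n*f_aud (m, n >= 1, up to
    max_order), excluding pure harmonics of either input, capped at f_max Hz."""
    ims = set()
    for m in range(1, max_order + 1):
        for n in range(1, max_order + 1):
            for f in (m * f_vis + n * f_aud, abs(m * f_vis - n * f_aud)):
                if 0 < f <= f_max:
                    ims.add(round(f, 4))
    # Unimodal harmonics (k*f_vis, k*f_aud) index unisensory responses, so
    # bins coinciding with them cannot count as evidence of integration.
    harmonics = {round(k * f, 4) for f in (f_vis, f_aud) for k in range(1, 100)}
    return sorted(ims - harmonics)

if __name__ == "__main__":
    # Hypothetical tagging: face stream at 1.2 Hz, voice stream at 0.8 Hz.
    print(intermodulation_frequencies(1.2, 0.8))
    # -> [0.4, 2.0, 2.8], e.g. f_vis + f_aud = 2.0 Hz and f_vis - f_aud = 0.4 Hz
```

In an analysis of this kind, spectral amplitude at these IM bins, but not at the unimodal harmonics themselves, is taken as the signature of non-linear integration by common neural populations.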

Source journal: Cortex (Medicine / Behavioral Sciences)
CiteScore: 7.00 · Self-citation rate: 5.60% · Annual articles: 250 · Review time: 74 days
About the journal: CORTEX is an international journal devoted to the study of cognition and of the relationship between the nervous system and mental processes, particularly as these are reflected in the behaviour of patients with acquired brain lesions, normal volunteers, children with typical and atypical development, and in the activation of brain regions and systems as recorded by functional neuroimaging techniques. It was founded in 1964 by Ennio De Renzi.
Latest articles in this journal:
- Corrigendum to "Overlapping but separate number representations in the intraparietal sulcus - Probing format- and modality-independence in sighted Braille readers" [Cortex 162 (May 2023) 65-80].
- Exploring specific alterations at the explicit and perceptual levels in sense of ownership, agency, and body schema in Functional Motor Disorder: A pilot comparative study with Irritable Bowel Syndrome.
- Trajectories of intrinsic connectivity one year post pediatric mild traumatic brain injury: Neural injury superimposed on neurodevelopment.
- Revisiting the electrophysiological correlates of valence and expectancy in reward processing - A multi-lab replication.
- Hemispheric asymmetries in the auditory cortex reflect discriminative responses to temporal details or summary statistics of stationary sounds.