Auditory cortex encodes lipreading information through spatially distributed activity

Current Biology · IF 8.1 · CAS Tier 1 (Biology) · JCR Q1 (Biochemistry & Molecular Biology) · Published: 2024-09-09 (Epub: 2024-08-16) · DOI: 10.1016/j.cub.2024.07.073
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11387126/pdf/
Ganesan Karthik, Cody Zhewei Cao, Michael I Demidenko, Andrew Jahn, William C Stacey, Vibhangini S Wasade, David Brang
{"title":"听觉皮层通过空间分布活动编码唇读信息","authors":"Ganesan Karthik, Cody Zhewei Cao, Michael I Demidenko, Andrew Jahn, William C Stacey, Vibhangini S Wasade, David Brang","doi":"10.1016/j.cub.2024.07.073","DOIUrl":null,"url":null,"abstract":"<p><p>Watching a speaker's face improves speech perception accuracy. This benefit is enabled, in part, by implicit lipreading abilities present in the general population. While it is established that lipreading can alter the perception of a heard word, it is unknown how these visual signals are represented in the auditory system or how they interact with auditory speech representations. One influential, but untested, hypothesis is that visual speech modulates the population-coded representations of phonetic and phonemic features in the auditory system. This model is largely supported by data showing that silent lipreading evokes activity in the auditory cortex, but these activations could alternatively reflect general effects of arousal or attention or the encoding of non-linguistic features such as visual timing information. This gap limits our understanding of how vision supports speech perception. To test the hypothesis that the auditory system encodes visual speech information, we acquired functional magnetic resonance imaging (fMRI) data from healthy adults and intracranial recordings from electrodes implanted in patients with epilepsy during auditory and visual speech perception tasks. Across both datasets, linear classifiers successfully decoded the identity of silently lipread words using the spatial pattern of auditory cortex responses. Examining the time course of classification using intracranial recordings, lipread words were classified at earlier time points relative to heard words, suggesting a predictive mechanism for facilitating speech. These results support a model in which the auditory system combines the joint neural distributions evoked by heard and lipread words to generate a more precise estimate of what was said.</p>","PeriodicalId":11359,"journal":{"name":"Current Biology","volume":null,"pages":null},"PeriodicalIF":8.1000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11387126/pdf/","citationCount":"0","resultStr":"{\"title\":\"Auditory cortex encodes lipreading information through spatially distributed activity.\",\"authors\":\"Ganesan Karthik, Cody Zhewei Cao, Michael I Demidenko, Andrew Jahn, William C Stacey, Vibhangini S Wasade, David Brang\",\"doi\":\"10.1016/j.cub.2024.07.073\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Watching a speaker's face improves speech perception accuracy. This benefit is enabled, in part, by implicit lipreading abilities present in the general population. While it is established that lipreading can alter the perception of a heard word, it is unknown how these visual signals are represented in the auditory system or how they interact with auditory speech representations. One influential, but untested, hypothesis is that visual speech modulates the population-coded representations of phonetic and phonemic features in the auditory system. This model is largely supported by data showing that silent lipreading evokes activity in the auditory cortex, but these activations could alternatively reflect general effects of arousal or attention or the encoding of non-linguistic features such as visual timing information. 
This gap limits our understanding of how vision supports speech perception. To test the hypothesis that the auditory system encodes visual speech information, we acquired functional magnetic resonance imaging (fMRI) data from healthy adults and intracranial recordings from electrodes implanted in patients with epilepsy during auditory and visual speech perception tasks. Across both datasets, linear classifiers successfully decoded the identity of silently lipread words using the spatial pattern of auditory cortex responses. Examining the time course of classification using intracranial recordings, lipread words were classified at earlier time points relative to heard words, suggesting a predictive mechanism for facilitating speech. These results support a model in which the auditory system combines the joint neural distributions evoked by heard and lipread words to generate a more precise estimate of what was said.</p>\",\"PeriodicalId\":11359,\"journal\":{\"name\":\"Current Biology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":8.1000,\"publicationDate\":\"2024-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11387126/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Current Biology\",\"FirstCategoryId\":\"99\",\"ListUrlMain\":\"https://doi.org/10.1016/j.cub.2024.07.073\",\"RegionNum\":1,\"RegionCategory\":\"生物学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/8/16 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"BIOCHEMISTRY & MOLECULAR BIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Current Biology","FirstCategoryId":"99","ListUrlMain":"https://doi.org/10.1016/j.cub.2024.07.073","RegionNum":1,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/8/16 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"BIOCHEMISTRY & MOLECULAR BIOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Watching a speaker's face improves speech perception accuracy. This benefit is enabled, in part, by implicit lipreading abilities present in the general population. While it is established that lipreading can alter the perception of a heard word, it is unknown how these visual signals are represented in the auditory system or how they interact with auditory speech representations. One influential, but untested, hypothesis is that visual speech modulates the population-coded representations of phonetic and phonemic features in the auditory system. This model is largely supported by data showing that silent lipreading evokes activity in the auditory cortex, but these activations could alternatively reflect general effects of arousal or attention, or the encoding of non-linguistic features such as visual timing information. This gap limits our understanding of how vision supports speech perception. To test the hypothesis that the auditory system encodes visual speech information, we acquired functional magnetic resonance imaging (fMRI) data from healthy adults and intracranial recordings from electrodes implanted in patients with epilepsy during auditory and visual speech perception tasks. Across both datasets, linear classifiers successfully decoded the identity of silently lipread words from the spatial pattern of auditory cortex responses. When we examined the time course of classification in the intracranial recordings, lipread words were classified at earlier time points than heard words, suggesting a predictive mechanism that facilitates speech perception. These results support a model in which the auditory system combines the joint neural distributions evoked by heard and lipread words to generate a more precise estimate of what was said.
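To make the decoding approach concrete, below is a minimal, illustrative sketch of this style of analysis in Python with scikit-learn, not the authors' actual pipeline: a linear classifier is trained to predict word identity from spatially distributed response patterns and scored with cross-validation, then the same classifier is fit independently at each time point to trace a decoding time course, as one would for intracranial recordings. All data, dimensions, and noise levels are synthetic placeholders.

```python
# Illustrative sketch of spatial-pattern ("MVPA"-style) word decoding.
# Everything here is synthetic; it is not the study's data or code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_words, trials_per_word, n_features = 4, 30, 200  # features ~ voxels or electrodes

# Each word evokes a distinct spatial pattern; single trials are noisy copies.
prototypes = rng.normal(size=(n_words, n_features))
X = np.vstack([
    prototypes[w] + rng.normal(scale=2.0, size=(trials_per_word, n_features))
    for w in range(n_words)
])
y = np.repeat(np.arange(n_words), trials_per_word)

# Cross-validated linear decoding of word identity from the spatial pattern.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(clf, X, y, cv=cv)
print(f"mean decoding accuracy: {acc.mean():.2f} (chance = {1 / n_words:.2f})")

# Time-resolved variant: fit the same classifier at each time point and
# compare when accuracy first exceeds chance across conditions.
n_times = 10
X_t = np.stack([X + rng.normal(scale=0.5, size=X.shape) for _ in range(n_times)])
timecourse = [cross_val_score(clf, X_t[t], y, cv=cv).mean() for t in range(n_times)]
print("accuracy by time point:", np.round(timecourse, 2))
```

In the study's framing, the key result corresponds to the first step: above-chance classification of silently lipread words from auditory-cortex spatial patterns. The time-resolved variant corresponds to the finding that lipread words become decodable earlier than heard words.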

Source journal: Current Biology (Biology · Biochemistry & Molecular Biology)
CiteScore: 11.80 · Self-citation rate: 2.20% · Annual articles: 869 · Review time: 46 days
About the journal: Current Biology is a comprehensive journal that publishes original research across the disciplines of biology. It provides a platform for scientists to disseminate groundbreaking findings and promotes interdisciplinary communication. The journal publishes articles of broad interest as well as accessible editorial pieces designed to inform non-specialist readers.