Cortical Tracking of Continuous Speech Under Bimodal Divided Attention.

Neurobiology of Language (IF 3.6, Q1 Linguistics). Pub Date: 2023-04-11; eCollection Date: 2023-01-01. DOI: 10.1162/nol_a_00100
Zilong Xie, Christian Brodbeck, Bharath Chandrasekaran
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10205152/pdf/
Citations: 0

Abstract

Speech processing often occurs amid competing inputs from other modalities, for example, listening to the radio while driving. We examined the extent to which dividing attention between auditory and visual modalities (bimodal divided attention) impacts neural processing of natural continuous speech from acoustic to linguistic levels of representation. We recorded electroencephalographic (EEG) responses when human participants performed a challenging primary visual task, imposing low or high cognitive load while listening to audiobook stories as a secondary task. The two dual-task conditions were contrasted with an auditory single-task condition in which participants attended to stories while ignoring visual stimuli. Behaviorally, the high load dual-task condition was associated with lower speech comprehension accuracy relative to the other two conditions. We fitted multivariate temporal response function encoding models to predict EEG responses from acoustic and linguistic speech features at different representation levels, including auditory spectrograms and information-theoretic models of sublexical-, word-form-, and sentence-level representations. Neural tracking of most acoustic and linguistic features remained unchanged with increasing dual-task load, despite unambiguous behavioral and neural evidence of the high load dual-task condition being more demanding. Compared to the auditory single-task condition, dual-task conditions selectively reduced neural tracking of only some acoustic and linguistic features, mainly at latencies >200 ms, while earlier latencies were surprisingly unaffected. These findings indicate that behavioral effects of bimodal divided attention on continuous speech processing occur not because of impaired early sensory representations but likely at later cognitive processing stages. Crossmodal attention-related mechanisms may not be uniform across different speech processing levels.
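The encoding-model approach described above regresses EEG against time-lagged copies of stimulus features, so that the fitted weights form a temporal response function (TRF) showing how the brain response unfolds after each feature. The paper used multivariate models with acoustic and linguistic predictors (the common toolkits estimate them with boosting or ridge regression); the sketch below is a minimal, hypothetical illustration of the general idea using ridge regression on a single synthetic feature, not the authors' actual pipeline.

```python
import numpy as np

def lag_matrix(stim, lags):
    """Design matrix of time-lagged copies of a 1-D stimulus feature:
    column j holds stim shifted by lags[j] samples."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stim[:n - lag]
    return X

def fit_trf(stim, eeg, lags, alpha=1.0):
    """Ridge-regression TRF: weights mapping lagged stimulus to EEG."""
    X = lag_matrix(stim, lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)

# Synthetic demo: the "EEG" is the stimulus delayed by 10 samples plus noise,
# so the recovered TRF should peak at a lag of 10 samples.
rng = np.random.default_rng(0)
stim = rng.standard_normal(5000)
eeg = np.roll(stim, 10) + 0.1 * rng.standard_normal(5000)
lags = np.arange(0, 30)
trf = fit_trf(stim, eeg, lags)
print(int(np.argmax(trf)))  # peak weight at the true 10-sample delay
```

In a real analysis the design matrix would stack many features (spectrogram bands, surprisal values, word onsets), lags would span something like -100 to 1000 ms at the EEG sampling rate, and model quality would be assessed by cross-validated prediction of held-out EEG, which is how latency-specific effects such as the >200 ms reductions reported here can be localized.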

Source journal: Neurobiology of Language (Social Sciences - Linguistics and Language)
CiteScore: 5.90 | Self-citation rate: 6.20% | Articles per year: 32 | Review time: 17 weeks