Unimodal speech perception predicts stable individual differences in audiovisual benefit for phonemes, words and sentences.

IF 2.3 · CAS Tier 2 (Physics and Astronomy) · Q2 (ACOUSTICS) · Journal of the Acoustical Society of America · Pub Date: 2025-03-01 · DOI: 10.1121/10.0034846
Jacqueline von Seth, Máté Aller, Matthew H Davis
Volume 157, Issue 3, pp. 1554-1576.
Citations: 0

Abstract

There are substantial individual differences in the benefit that can be obtained from visual cues during speech perception. Here, 113 normally hearing participants between the ages of 18 and 60 years old completed a three-part experiment investigating the reliability and predictors of individual audiovisual benefit for acoustically degraded speech. Audiovisual benefit was calculated as the relative intelligibility (at the individual-level) of approximately matched (at the group-level) auditory-only and audiovisual speech for materials at three levels of linguistic structure: meaningful sentences, monosyllabic words, and consonants in minimal syllables. This measure of audiovisual benefit was stable across sessions and materials, suggesting that a shared mechanism of audiovisual integration operates across levels of linguistic structure. Information transmission analyses suggested that this may be related to simple phonetic cue extraction: sentence-level audiovisual benefit was reliably predicted by the relative ability to discriminate place of articulation at the consonant-level. Finally, whereas unimodal speech perception was related to cognitive measures (matrix reasoning and vocabulary) and demographics (age and gender), audiovisual benefit was predicted only by unimodal speech perceptual abilities: Better lipreading ability and subclinically poorer hearing (speech reception thresholds) independently predicted enhanced audiovisual benefit. This work has implications for practices in quantifying audiovisual benefit and research identifying strategies to enhance multimodal communication in hearing loss.

Source journal: Journal of the Acoustical Society of America
CiteScore: 4.60 · Self-citation rate: 16.70% · Annual articles: 1433 · Review time: 4.7 months
Journal description: Since 1929 The Journal of the Acoustical Society of America has been the leading source of theoretical and experimental research results in the broad interdisciplinary study of sound. Subject coverage includes: linear and nonlinear acoustics; aeroacoustics, underwater sound and acoustical oceanography; ultrasonics and quantum acoustics; architectural and structural acoustics and vibration; speech, music and noise; psychology and physiology of hearing; engineering acoustics, transduction; bioacoustics, animal bioacoustics.
Latest articles in this journal:
Low-frequency broadband sound absorption of micro-perforated panel absorber with different extended necks.
Multi-frequency acoustic backscatter inversion for measuring multi-class sediment suspensions.
Bubble beginnings: The study that launched microbubble bioeffects.
Acoustic radiation force exerted by progressive waves on subwavelength inhomogeneous scatterers.
Dual-decoder neural network based for end-to-end prediction of acoustic transmission loss in deep-sea environments.