EXPRESS: Prior multisensory learning can facilitate auditory-only voice-identity and speech recognition in noise.

Quarterly Journal of Experimental Psychology · IF 1.5 · CAS Tier 3 (Psychology) · JCR Q4 (Physiology) · Pub Date: 2024-09-20 · DOI: 10.1177/17470218241278649
Corrina Maguinness, Sonja Schall, Brian Mathias, Martin Schoemann, Katharina von Kriegstein
Citations: 0

Abstract


Seeing the visual articulatory movements of a speaker, while hearing their voice, helps with understanding what is said. This multisensory enhancement is particularly evident in noisy listening conditions. Multisensory enhancement also occurs even in auditory-only conditions: auditory-only speech and voice-identity recognition are superior for speakers previously learned with their face, compared to control learning; an effect termed the "face-benefit." Whether the face-benefit can assist in maintaining robust perception in increasingly noisy listening conditions, similar to concurrent multisensory input, is unknown. Here, in two behavioural experiments, we examined this hypothesis. In each experiment, participants learned a series of speakers' voices together with their dynamic face or control image. Following learning, participants listened to auditory-only sentences spoken by the same speakers and recognised the content of the sentences (speech recognition, Experiment 1) or the voice-identity of the speaker (Experiment 2) in increasing levels of auditory noise. For speech recognition, we observed that 14 of 30 participants (47%) showed a face-benefit, while for voice-identity recognition, 19 of 25 participants (76%) showed a face-benefit. For those participants who demonstrated a face-benefit, the face-benefit increased with auditory noise levels. Taken together, the results support an audio-visual model of auditory communication and suggest that the brain can develop a flexible system in which learned facial characteristics are used to deal with varying auditory uncertainty.

Source journal: Quarterly Journal of Experimental Psychology
CiteScore: 3.50
Self-citation rate: 5.90%
Articles per year: 178
Review time: 3-8 weeks
Journal description: Promoting the interests of scientific psychology and its researchers, QJEP, the journal of the Experimental Psychology Society, is a leading journal with a long-standing tradition of publishing cutting-edge research. Several articles have become classic papers in the fields of attention, perception, learning, memory, language, and reasoning. The journal publishes original articles on any topic within the field of experimental psychology (including comparative research). These include substantial experimental reports, review papers, rapid communications (reporting novel techniques or groundbreaking results), comments (on articles previously published in QJEP or on issues of general interest to experimental psychologists), and book reviews. Experimental results are welcomed from all relevant techniques, including behavioural testing, brain imaging and computational modelling. QJEP offers a competitive publication timescale. Accepted Rapid Communications have priority in the publication cycle and usually appear in print within three months. We aim to publish all accepted (but uncorrected) articles online within seven days. Our Latest Articles page offers immediate publication of articles upon reaching their final form. The journal offers an open access option called Open Select, enabling authors to meet funder requirements to make their article free to read online for all in perpetuity. Authors also benefit from a broad and diverse subscription base that delivers the journal contents to a worldwide readership. Together these features ensure that the journal offers authors the opportunity to raise the visibility of their work to a global audience.