Prior multisensory learning can facilitate auditory-only voice-identity and speech recognition in noise

Corrina Maguinness, Sonja Schall, Brian Mathias, Martin Schoemann, Katharina von Kriegstein

Quarterly Journal of Experimental Psychology, published online 2024-09-20. DOI: https://doi.org/10.1177/17470218241278649
Citations: 0
Abstract
Seeing the visual articulatory movements of a speaker, while hearing their voice, helps with understanding what is said. This multisensory enhancement is particularly evident in noisy listening conditions. Multisensory enhancement occurs even in auditory-only conditions: auditory-only speech and voice-identity recognition are superior for speakers previously learned with their face, compared to control learning, an effect termed the "face-benefit." Whether the face-benefit can assist in maintaining robust perception in increasingly noisy listening conditions, similar to concurrent multisensory input, is unknown. Here, in two behavioural experiments, we examined this hypothesis. In each experiment, participants learned a series of speakers' voices together with their dynamic face or a control image. Following learning, participants listened to auditory-only sentences spoken by the same speakers and recognised the content of the sentences (speech recognition, Experiment 1) or the voice-identity of the speaker (Experiment 2) in increasing levels of auditory noise. For speech recognition, 14 of 30 participants (47%) showed a face-benefit; for voice-identity recognition, 19 of 25 participants (76%) did. For those participants who demonstrated a face-benefit, the magnitude of the benefit increased with the level of auditory noise. Taken together, the results support an audio-visual model of auditory communication and suggest that the brain can develop a flexible system in which learned facial characteristics are used to deal with varying auditory uncertainty.
Journal Description
Promoting the interests of scientific psychology and its researchers, QJEP, the journal of the Experimental Psychology Society, is a leading journal with a long-standing tradition of publishing cutting-edge research. Several of its articles have become classic papers in the fields of attention, perception, learning, memory, language, and reasoning. The journal publishes original articles on any topic within the field of experimental psychology (including comparative research). These include substantial experimental reports, review papers, rapid communications (reporting novel techniques or ground-breaking results), comments (on articles previously published in QJEP or on issues of general interest to experimental psychologists), and book reviews. Experimental results are welcomed from all relevant techniques, including behavioural testing, brain imaging, and computational modelling.
QJEP offers a competitive publication time-scale. Accepted Rapid Communications have priority in the publication cycle and usually appear in print within three months. We aim to publish all accepted (but uncorrected) articles online within seven days. Our Latest Articles page offers immediate publication of articles upon reaching their final form.
The journal offers an open access option called Open Select, enabling authors to meet funder requirements by making their article free to read online for all in perpetuity. Authors also benefit from a broad and diverse subscription base that delivers the journal contents to a worldwide readership. Together, these features ensure that the journal offers authors the opportunity to raise the visibility of their work to a global audience.