Learning to recognize unfamiliar faces from fine-phonetic detail in visual speech
Alexandra Jesse
Attention, Perception, & Psychophysics (published 2025-03-20). DOI: 10.3758/s13414-025-03049-y
Citations: 0
Abstract
How speech is realized varies across talkers but can be somewhat consistent within a talker. Humans are sensitive to these idiosyncrasies when perceiving auditory speech, but also, in face-to-face communication, when perceiving visual speech. Our recent work has shown that humans can also use talker idiosyncrasies seen in how talkers produce sentences to rapidly learn to recognize unfamiliar talkers, suggesting that visual speech information can be used for both speech perception and talker recognition. However, in learning from sentences, learners may focus only on global information about the talker, such as talker-specific realizations of prosody and rate. The present study tested whether human perceivers can learn the identity of a talker based solely on fine-phonetic detail in the dynamic realization of visual speech. Participants learned to identify talkers from point-light displays showing them uttering isolated words. These point-light displays isolated the dynamic speech information while discarding static information about the talker's face. No sound was presented. Feedback was given only during training. The test included point-light displays of familiar words from training and of novel words. Participants learned to recognize sets of two and four talkers from the word-level dynamics of visual speech after very little exposure. The established representations allowed talker recognition independent of linguistic content; that is, even from novel words. Spoken words therefore contain sufficient indexical information in their fine-phonetic detail for perceivers to acquire dynamic facial representations of unfamiliar talkers that allow generalization across words. Dynamic representations of talking faces are formed for the recognition of unfamiliar faces.
Journal description:
The journal Attention, Perception, & Psychophysics is an official journal of the Psychonomic Society. It spans all areas of research in sensory processes, perception, attention, and psychophysics. Most articles published are reports of experimental work; the journal also presents theoretical, integrative, and evaluative reviews. Commentary on issues of importance to researchers appears in a special section of the journal. Founded in 1966 as Perception & Psychophysics, the journal assumed its present name in 2009.