Mathilde Fort, S. Kandel, Justine Chipot, C. Savariaux, L. Granjon, E. Spinelli
{"title":"看到一个单词最初的发音姿势会触发词汇访问","authors":"Mathilde Fort, S. Kandel, Justine Chipot, C. Savariaux, L. Granjon, E. Spinelli","doi":"10.1080/01690965.2012.701758","DOIUrl":null,"url":null,"abstract":"When the auditory information is deteriorated by noise in a conversation, watching the face of a speaker enhances speech intelligibility. Recent findings indicate that decoding the facial movements of a speaker accelerates word recognition. The objective of this study was to provide evidence that the mere presentation of the first two phonemes—that is, the articulatory gestures of the initial syllable—is enough visual information to activate a lexical unit and initiate the lexical access process. We used a priming paradigm combined with a lexical decision task. The primes were syllables that either shared the initial syllable with an auditory target or not. In Experiment 1, the primes were displayed in audiovisual, auditory-only or visual-only conditions. There was a priming effect in all conditions. Experiment 2 investigated the locus (prelexical vs. lexical or postlexical) of the facilitation effect observed in the visual-only condition by manipulating the target's word frequency. The facilitation produced by the visual prime was significant for low-frequency words but not for high-frequency words, indicating that the locus of the effect is not prelexical. This suggests that visual speech mostly contributes to the word recognition process when lexical access is difficult.","PeriodicalId":87410,"journal":{"name":"Language and cognitive processes","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01690965.2012.701758","citationCount":"36","resultStr":"{\"title\":\"Seeing the initial articulatory gestures of a word triggers lexical access\",\"authors\":\"Mathilde Fort, S. Kandel, Justine Chipot, C. Savariaux, L. 
Granjon, E. Spinelli\",\"doi\":\"10.1080/01690965.2012.701758\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"When the auditory information is deteriorated by noise in a conversation, watching the face of a speaker enhances speech intelligibility. Recent findings indicate that decoding the facial movements of a speaker accelerates word recognition. The objective of this study was to provide evidence that the mere presentation of the first two phonemes—that is, the articulatory gestures of the initial syllable—is enough visual information to activate a lexical unit and initiate the lexical access process. We used a priming paradigm combined with a lexical decision task. The primes were syllables that either shared the initial syllable with an auditory target or not. In Experiment 1, the primes were displayed in audiovisual, auditory-only or visual-only conditions. There was a priming effect in all conditions. Experiment 2 investigated the locus (prelexical vs. lexical or postlexical) of the facilitation effect observed in the visual-only condition by manipulating the target's word frequency. The facilitation produced by the visual prime was significant for low-frequency words but not for high-frequency words, indicating that the locus of the effect is not prelexical. 
This suggests that visual speech mostly contributes to the word recognition process when lexical access is difficult.\",\"PeriodicalId\":87410,\"journal\":{\"name\":\"Language and cognitive processes\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-09-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1080/01690965.2012.701758\",\"citationCount\":\"36\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Language and cognitive processes\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/01690965.2012.701758\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Language and cognitive processes","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/01690965.2012.701758","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Seeing the initial articulatory gestures of a word triggers lexical access
When auditory information is degraded by noise in a conversation, watching the speaker's face enhances speech intelligibility. Recent findings indicate that decoding a speaker's facial movements accelerates word recognition. The objective of this study was to provide evidence that the mere presentation of the first two phonemes (that is, the articulatory gestures of the initial syllable) supplies enough visual information to activate a lexical unit and initiate the lexical access process. We used a priming paradigm combined with a lexical decision task. The primes were syllables that either shared their initial syllable with an auditory target word or did not. In Experiment 1, the primes were presented in audiovisual, auditory-only, or visual-only conditions. A priming effect emerged in all three conditions. Experiment 2 investigated the locus (prelexical vs. lexical or postlexical) of the facilitation effect observed in the visual-only condition by manipulating the targets' word frequency. The facilitation produced by the visual prime was significant for low-frequency words but not for high-frequency words, indicating that the locus of the effect is not prelexical. This suggests that visual speech contributes to word recognition mostly when lexical access is difficult.