Seeing the initial articulatory gestures of a word triggers lexical access

Mathilde Fort, S. Kandel, Justine Chipot, C. Savariaux, L. Granjon, E. Spinelli
{"title":"Seeing the initial articulatory gestures of a word triggers lexical access","authors":"Mathilde Fort, S. Kandel, Justine Chipot, C. Savariaux, L. Granjon, E. Spinelli","doi":"10.1080/01690965.2012.701758","DOIUrl":null,"url":null,"abstract":"When the auditory information is deteriorated by noise in a conversation, watching the face of a speaker enhances speech intelligibility. Recent findings indicate that decoding the facial movements of a speaker accelerates word recognition. The objective of this study was to provide evidence that the mere presentation of the first two phonemes—that is, the articulatory gestures of the initial syllable—is enough visual information to activate a lexical unit and initiate the lexical access process. We used a priming paradigm combined with a lexical decision task. The primes were syllables that either shared the initial syllable with an auditory target or not. In Experiment 1, the primes were displayed in audiovisual, auditory-only or visual-only conditions. There was a priming effect in all conditions. Experiment 2 investigated the locus (prelexical vs. lexical or postlexical) of the facilitation effect observed in the visual-only condition by manipulating the target's word frequency. The facilitation produced by the visual prime was significant for low-frequency words but not for high-frequency words, indicating that the locus of the effect is not prelexical. This suggests that visual speech mostly contributes to the word recognition process when lexical access is difficult.","PeriodicalId":87410,"journal":{"name":"Language and cognitive processes","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01690965.2012.701758","citationCount":"36","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Language and cognitive processes","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/01690965.2012.701758","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 36

Abstract

When auditory information is degraded by noise in a conversation, watching the speaker's face enhances speech intelligibility. Recent findings indicate that decoding a speaker's facial movements accelerates word recognition. The objective of this study was to provide evidence that the mere presentation of the first two phonemes, that is, the articulatory gestures of the initial syllable, provides enough visual information to activate a lexical unit and initiate the lexical access process. We used a priming paradigm combined with a lexical decision task. The primes were syllables that either matched the initial syllable of an auditory target or did not. In Experiment 1, the primes were displayed in audiovisual, auditory-only, or visual-only conditions. There was a priming effect in all conditions. Experiment 2 investigated the locus (prelexical vs. lexical or postlexical) of the facilitation effect observed in the visual-only condition by manipulating the target's word frequency. The facilitation produced by the visual prime was significant for low-frequency words but not for high-frequency words, indicating that the locus of the effect is not prelexical. This suggests that visual speech contributes to word recognition mainly when lexical access is difficult.