How are visemes and graphemes integrated with speech sounds during spoken word recognition? ERP evidence for supra-additive responses during audiovisual compared to auditory speech processing
{"title":"How are visemes and graphemes integrated with speech sounds during spoken word recognition? ERP evidence for supra-additive responses during audiovisual compared to auditory speech processing","authors":"Chotiga Pattamadilok, Marc Sato","doi":"10.1016/j.bandl.2021.105058","DOIUrl":null,"url":null,"abstract":"<div><p>Both visual articulatory gestures and orthography provide information on the phonological content of speech. This EEG study investigated the integration between speech and these two visual inputs. A comparison of skilled readers’ brain responses elicited by a spoken word presented alone versus synchronously with a static image of a viseme or a grapheme of the spoken word’s onset showed that while neither visual input induced audiovisual integration on N1 acoustic component, both led to a supra-additive integration on P2, with a stronger integration between speech and graphemes on left-anterior electrodes. This pattern persisted in P350 time-window and generalized to all electrodes. The finding suggests a strong impact of spelling knowledge on phonetic processing and lexical access. It also indirectly indicates that the dynamic and predictive value present in natural lip movements but not in static visemes is particularly critical to the contribution of visual articulatory gestures to speech processing.</p></div>","PeriodicalId":55330,"journal":{"name":"Brain and Language","volume":null,"pages":null},"PeriodicalIF":2.1000,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Brain and Language","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0093934X21001528","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY","Score":null,"Total":0}
Citations: 2
Abstract
Both visual articulatory gestures and orthography provide information on the phonological content of speech. This EEG study investigated the integration between speech and these two visual inputs. A comparison of skilled readers’ brain responses elicited by a spoken word presented alone versus synchronously with a static image of a viseme or a grapheme of the spoken word’s onset showed that while neither visual input induced audiovisual integration on the N1 acoustic component, both led to a supra-additive integration on the P2, with a stronger integration between speech and graphemes on left-anterior electrodes. This pattern persisted in the P350 time-window and generalized to all electrodes. The finding suggests a strong impact of spelling knowledge on phonetic processing and lexical access. It also indirectly indicates that the dynamic and predictive value present in natural lip movements, but not in static visemes, is particularly critical to the contribution of visual articulatory gestures to speech processing.
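To make the supra-additivity criterion concrete, here is a minimal sketch of the additive-model test commonly used in audiovisual ERP research: the audiovisual (AV) response is compared against the sum of the unimodal auditory (A) and visual (V) responses, and a positive residual indicates supra-additive integration. This is not the authors' analysis pipeline; the array shapes, sampling rate, and P2 time-window below are illustrative assumptions.

# Illustrative sketch of the additive-model test for audiovisual ERPs:
# supra-additivity holds where the AV response exceeds A + V.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical epoched EEG data: (n_trials, n_channels, n_samples),
# sampled at 500 Hz with a 100 ms pre-stimulus baseline (placeholder noise
# stands in for recorded epochs).
sfreq, n_trials, n_channels, n_samples = 500, 80, 64, 400
epochs_a  = rng.normal(size=(n_trials, n_channels, n_samples))  # auditory alone
epochs_v  = rng.normal(size=(n_trials, n_channels, n_samples))  # visual alone
epochs_av = rng.normal(size=(n_trials, n_channels, n_samples))  # audiovisual

# Trial-averaged ERP per condition.
erp_a, erp_v, erp_av = (e.mean(axis=0) for e in (epochs_a, epochs_v, epochs_av))

# Additive-model residual: positive values mark supra-additive integration.
residual = erp_av - (erp_a + erp_v)

# Mean residual amplitude in an approximate P2 window (150-250 ms
# post-onset; the actual window would be defined by the study design).
t = np.arange(n_samples) / sfreq - 0.1            # time axis, onset at 0 s
p2_mask = (t >= 0.150) & (t <= 0.250)
p2_residual = residual[:, p2_mask].mean(axis=1)   # one value per channel

print(f"Mean P2 residual across channels: {p2_residual.mean():.3f} (a.u.)")

In practice the channel-wise residuals would be submitted to a statistical test (e.g., against zero, or contrasting electrode clusters such as the left-anterior sites reported here) rather than simply averaged.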
About the journal:
An interdisciplinary journal, Brain and Language publishes articles that elucidate the complex relationships among language, brain, and behavior. The journal covers the large variety of modern techniques in cognitive neuroscience, including functional and structural brain imaging, electrophysiology, cellular and molecular neurobiology, genetics, lesion-based approaches, and computational modeling. All articles must relate to human language and be relevant to the understanding of its neurobiological and neurocognitive bases. Published articles in the journal are expected to have significant theoretical novelty and/or practical implications, and use perspectives and methods from psychology, linguistics, and neuroscience along with brain data and brain measures.