{"title":"The role of audio-visual phrasal prosody in bootstrapping the acquisition of word order","authors":"Irene De la Cruz-Pavía","doi":"10.21437/speechprosody.2022-47","DOIUrl":null,"url":null,"abstract":"From early in development infants integrate auditory and visual facial information while processing language. The potential role of visual cues in the acquisition of grammar remains however virtually unexplored. Phrasal prosodic prominence correlates systematically with basic word order in natural languages. Co-verbal gestures—head and eyebrow motion—act in turn as markers of auditory prosody. Here, we examine whether co-verbal gestures could help infants parse the input into prosodic units such as phrases, and discover the basic word order of the native language. In a first study we show that adult talkers spontaneously produce co-verbal gestures signaling phrase boundaries across languages and speech styles: Japanese and English, adult- and infant-directed speech. A second study shows that adult speakers use co-verbal information, specifically head nods marking phrasal prosodic prominence, to parse an artificial language into phrase-like units that follow the native language’s word order. Finally, a third study shows that the presence of co-verbal gestures—i.e. head nods—also impacts 8-month-old infants’ segmentation preferences of a structurally ambiguous artificial language. However, infants’ ability to use this cue is still limited, suggesting that co-verbal gestures might be acquired later in development than visual speech, presumably due to their greater inter-/intra-speaker variability.","PeriodicalId":442842,"journal":{"name":"Speech Prosody 2022","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Speech Prosody 2022","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21437/speechprosody.2022-47","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
From early in development, infants integrate auditory and visual facial information while processing language. The potential role of visual cues in the acquisition of grammar remains, however, virtually unexplored. Phrasal prosodic prominence correlates systematically with basic word order in natural languages. Co-verbal gestures (head and eyebrow motion) act in turn as markers of auditory prosody. Here, we examine whether co-verbal gestures could help infants parse the input into prosodic units such as phrases, and thereby discover the basic word order of the native language. In a first study, we show that adult talkers spontaneously produce co-verbal gestures signaling phrase boundaries across languages and speech styles: Japanese and English, adult- and infant-directed speech. A second study shows that adult speakers use co-verbal information, specifically head nods marking phrasal prosodic prominence, to parse an artificial language into phrase-like units that follow the native language's word order. Finally, a third study shows that the presence of co-verbal gestures (i.e., head nods) also impacts 8-month-old infants' segmentation preferences for a structurally ambiguous artificial language. However, infants' ability to use this cue is still limited, suggesting that co-verbal gestures might be acquired later in development than visual speech, presumably due to their greater inter- and intra-speaker variability.