Cross-linguistic and acoustic-driven effects on multiscale neural synchrony to stress rhythms
Pub Date: 2024-09-01 | DOI: 10.1016/j.bandl.2024.105463
Deling He, Eugene H. Buder, Gavin M. Bidelman
We investigated how neural oscillations code the hierarchical nature of stress rhythms in speech and how stress processing varies with language experience. By measuring phase synchrony of multilevel EEG-acoustic tracking and intra-brain cross-frequency coupling, we show that the encoding of stress involves different neural signatures (delta rhythms = stress foot rate; theta rhythms = syllable rate), is stronger for amplitude than for duration stress cues, and induces nested delta-theta coherence mirroring the stress-syllable hierarchy in speech. Only native English speakers, but not Mandarin speakers, exhibited enhanced neural entrainment at the central stress (2 Hz) and syllable (4 Hz) rates intrinsic to natural English. English individuals with superior cortical stress-tracking capabilities also displayed stronger neural hierarchical coherence, highlighting a nuanced interplay between the internal nesting of brain rhythms and external entrainment rooted in language-specific speech rhythms. Our cross-language findings reveal that brain-speech synchronization is not purely a “bottom-up” process but benefits from “top-down” processing shaped by listeners’ language-specific experience.
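To make the two measures in this abstract concrete, the sketch below computes (i) a delta-band phase-locking value between a simulated EEG channel and a speech amplitude envelope (EEG-acoustic tracking) and (ii) a 1:2 delta-theta phase-phase coupling index (intra-brain cross-frequency coupling). It is a minimal illustration on synthetic signals; the sampling rate, band edges, and signal construction are assumptions rather than the study's parameters.

```python
# Minimal sketch (synthetic data, assumed parameters) of EEG-acoustic phase
# coherence and 1:2 delta-theta phase-phase coupling; not the authors' pipeline.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250                      # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)  # 60 s of data

# Simulated speech envelope with 2 Hz (stress) and 4 Hz (syllable) modulation,
# and a simulated EEG channel that partially tracks it.
rng = np.random.default_rng(0)
envelope = 1 + 0.5 * np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 4 * t)
eeg = 0.6 * envelope + 0.4 * rng.standard_normal(t.size)

def band_phase(x, lo, hi, fs):
    """Band-pass filter a signal and return its instantaneous (Hilbert) phase."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, x)))

# EEG-acoustic phase coherence (phase-locking value) at the stress (delta) rate.
phi_eeg_delta = band_phase(eeg, 1, 3, fs)
phi_env_delta = band_phase(envelope, 1, 3, fs)
plv_delta = np.abs(np.mean(np.exp(1j * (phi_eeg_delta - phi_env_delta))))

# Intra-brain 1:2 delta-theta coupling (n:m phase locking, 2 x 2 Hz = 1 x 4 Hz).
phi_eeg_theta = band_phase(eeg, 3, 5, fs)
plv_delta_theta = np.abs(np.mean(np.exp(1j * (2 * phi_eeg_delta - phi_eeg_theta))))

print(f"delta-band EEG-envelope PLV: {plv_delta:.2f}")
print(f"1:2 delta-theta coupling:    {plv_delta_theta:.2f}")
```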
{"title":"Cross-linguistic and acoustic-driven effects on multiscale neural synchrony to stress rhythms","authors":"Deling He , Eugene H. Buder , Gavin M. Bidelman","doi":"10.1016/j.bandl.2024.105463","DOIUrl":"10.1016/j.bandl.2024.105463","url":null,"abstract":"<div><p>We investigated how neural oscillations code the hierarchical nature of stress rhythms in speech and how stress processing varies with language experience. By measuring phase synchrony of multilevel EEG-acoustic tracking and intra-brain cross-frequency coupling, we show the encoding of stress involves different neural signatures (delta rhythms = stress foot rate; theta rhythms = syllable rate), is stronger for amplitude vs. duration stress cues, and induces nested delta-theta coherence mirroring the stress-syllable hierarchy in speech. Only native English, but not Mandarin, speakers exhibited enhanced neural entrainment at central stress (2 Hz) and syllable (4 Hz) rates intrinsic to natural English. English individuals with superior cortical-stress tracking capabilities also displayed stronger neural hierarchical coherence, highlighting a nuanced interplay between internal nesting of brain rhythms and external entrainment rooted in language-specific speech rhythms. Our cross-language findings reveal brain-speech synchronization is not purely a “bottom-up” but benefits from “top-down” processing from listeners’ language-specific experience.</p></div>","PeriodicalId":55330,"journal":{"name":"Brain and Language","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142146931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Native language background affects the perception of duration and pitch
Pub Date: 2024-09-01 | DOI: 10.1016/j.bandl.2024.105460
Siqi Lyu, Nele Põldver, Liis Kask, Luming Wang, Kairi Kreegipuu
Estonian is a quantity language with a primary duration cue and a secondary pitch cue, whereas Chinese is a tonal language with dominant use of pitch. Using a mismatch negativity experiment and a behavioral discrimination experiment, we investigated how native language background affects the perception of duration-only, pitch-only, and duration-plus-pitch information. Chinese participants perceived duration in Estonian as meaningless acoustic information due to the lack of phonological use of duration in their native language; however, they demonstrated better pitch discrimination ability than Estonian participants. On the other hand, Estonian participants outperformed Chinese participants in perceiving non-speech pure tones that resembled the Estonian quantity (i.e., containing both duration and pitch information). Our results indicate that native language background affects the perception of duration and pitch and that such an effect is not specific to processing speech sounds.
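For readers less familiar with the mismatch negativity (MMN) paradigm used here, the sketch below shows the textbook way the component is quantified: average the standard and deviant epochs, subtract them, and take the mean amplitude of the difference wave in a post-stimulus window. The data, window, and trial counts are illustrative assumptions, not the study's settings.

```python
# Minimal sketch (synthetic epochs, assumed window) of quantifying an MMN as the
# deviant-minus-standard difference wave; not the authors' analysis pipeline.
import numpy as np

fs = 500                              # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / fs)  # epoch from -100 to +500 ms
rng = np.random.default_rng(0)
n_trials = 200

def make_trials(mmn_amp):
    """Synthetic single-trial ERPs; deviants carry an extra negativity ~175 ms."""
    erp = -mmn_amp * np.exp(-((times - 0.175) ** 2) / (2 * 0.04 ** 2))
    return erp + rng.normal(0.0, 2.0, size=(n_trials, times.size))

standard = make_trials(0.0)
deviant = make_trials(1.5)

# Average across trials, then subtract standard from deviant.
diff_wave = deviant.mean(axis=0) - standard.mean(axis=0)

# The MMN is typically summarised as the mean amplitude in a window around its peak.
win = (times >= 0.10) & (times <= 0.25)
print(f"MMN mean amplitude (100-250 ms): {diff_wave[win].mean():.2f} µV")
```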
{"title":"Native language background affects the perception of duration and pitch","authors":"Siqi Lyu , Nele Põldver , Liis Kask , Luming Wang , Kairi Kreegipuu","doi":"10.1016/j.bandl.2024.105460","DOIUrl":"10.1016/j.bandl.2024.105460","url":null,"abstract":"<div><p>Estonian is a quantity language with both a primary duration cue and a secondary pitch cue, whereas Chinese is a tonal language with a dominant pitch use. Using a mismatch negativity experiment and a behavioral discrimination experiment, we investigated how native language background affects the perception of duration only, pitch only, and duration plus pitch information. Chinese participants perceived duration in Estonian as meaningless acoustic information due to a lack of phonological use of duration in their native language; however, they demonstrated a better pitch discrimination ability than Estonian participants. On the other hand, Estonian participants outperformed Chinese participants in perceiving the non-speech pure tones that resembled the Estonian quantity (i.e., containing both duration and pitch information). Our results indicate that native language background affects the perception of duration and pitch and that such an effect is not specific to processing speech sounds.</p></div>","PeriodicalId":55330,"journal":{"name":"Brain and Language","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0093934X2400083X/pdfft?md5=819e444451098ee2477b79a68810ca70&pid=1-s2.0-S0093934X2400083X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142136630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transcranial photobiomodulation on the left inferior frontal gyrus enhances Mandarin Chinese L1 and L2 complex sentence processing performances
Pub Date: 2024-08-27 | DOI: 10.1016/j.bandl.2024.105458
Mingchuan Yang, Yang Liu, Zhaoqian Yue, Guang Yang, Xu Jiang, Yimin Cai, Yuqi Zhang, Xiujie Yang, Dongwei Li, Luyao Chen
This study investigated the causal enhancing effect of transcranial photobiomodulation (tPBM) over the left inferior frontal gyrus (LIFG) on the processing of syntactically complex Mandarin Chinese sentences in first-language (L1) and second-language (L2) speakers. Two groups of participants (L1 and L2; thirty per group) received a double-blind, sham-controlled tPBM intervention over the LIFG, followed by a sentence processing task, a verbal working memory (WM) task requiring linear processing of unstructured sequences, and a visual WM task. Results revealed a consistent pattern for both groups: (a) tPBM enhanced sentence processing performance but not verbal WM or visual WM performance; (b) participants with lower sentence processing performance under sham tPBM benefited more from active tPBM. Taken together, the current study substantiated that tPBM enhances L1 and L2 sentence processing and that it could serve as a promising, cost-effective noninvasive brain stimulation (NIBS) tool for future applications aimed at upregulating the human language faculty.
{"title":"Transcranial photobiomodulation on the left inferior frontal gyrus enhances Mandarin Chinese L1 and L2 complex sentence processing performances","authors":"Mingchuan Yang , Yang Liu , Zhaoqian Yue , Guang Yang , Xu Jiang , Yimin Cai , Yuqi Zhang , Xiujie Yang , Dongwei Li , Luyao Chen","doi":"10.1016/j.bandl.2024.105458","DOIUrl":"10.1016/j.bandl.2024.105458","url":null,"abstract":"<div><p>This study investigated the causal enhancing effect of transcranial photobiomodulation (tPBM) over the left inferior frontal gyrus (LIFG) on syntactically complex Mandarin Chinese first language (L1) and second language (L2) sentence processing performances. Two (L1 and L2) groups of participants (thirty per group) were recruited to receive the double-blind, sham-controlled tPBM intervention via LIFG, followed by the sentence processing, the verbal working memory (WM), and the visual WM tasks. Results revealed a consistent pattern for both groups: (a) tPBM enhanced sentence processing performance but not verbal WM for linear processing of unstructured sequences and visual WM performances; (b) Participants with lower sentence processing performances under sham tPBM benefited more from active tPBM. Taken together, the current study substantiated that tPBM enhanced L1 and L2 sentence processing, and would serve as a promising and cost-effective noninvasive brain stimulation (NIBS) tool for future applications on upregulating the human language faculty.</p></div>","PeriodicalId":55330,"journal":{"name":"Brain and Language","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0093934X24000816/pdfft?md5=35a1e6fe98cd0da5fda77ab7364c5a7a&pid=1-s2.0-S0093934X24000816-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142087916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The bidirectional influence between emotional language and inhibitory control in Chinese: An ERP study
Pub Date: 2024-08-17 | DOI: 10.1016/j.bandl.2024.105457
Huili Wang, Xiaobing Sun, Xueyan Li, Beixian Gu, Yang Fu, Wenyu Liu
The bidirectional influence between emotional language and inhibitory processes has been studied in alphabetic languages, highlighting the need for additional investigation in nonalphabetic languages to explore potential cross-linguistic differences. The present ERP study investigated this bidirectional influence in Mandarin, a language with unique linguistic features and neural substrates. In Experiment 1, emotional adjectives preceded the Go/NoGo cue; the ERPs revealed that negative emotional language facilitated inhibitory control. In Experiment 2, with the Go/NoGo cue preceding the emotional language, inhibitory control facilitated the semantic integration of negative language in Chinese, whereas the inhibited state may not affect deeper refinement of the emotional content. No such interaction was observed for positive emotional language. These results suggest an interaction between inhibitory control and negative emotional language processing in Chinese, supporting the integrative emotion-cognition view.
{"title":"The bidirectional influence between emotional language and inhibitory control in Chinese: An ERP study","authors":"Huili Wang , Xiaobing Sun , Xueyan Li , Beixian Gu , Yang Fu , Wenyu Liu","doi":"10.1016/j.bandl.2024.105457","DOIUrl":"10.1016/j.bandl.2024.105457","url":null,"abstract":"<div><p>The bidirectional influence between emotional language and inhibitory processes has been studied in alphabetic languages, highlighting the need for additional investigation in nonalphabetic languages to explore potential cross-linguistic differences. The present ERP study investigated the bidirectional influence in the context of Mandarin, a language with unique linguistic features and neural substrates. In Experiment 1, emotional adjectives preceded the Go/NoGo cue. The ERPs revealed that negative emotional language facilitated inhibitory control. In Experiment 2, with a Go/NoGo cue preceding the emotional language, the study confirmed that inhibitory control facilitated the semantic integration of negative language in Chinese, whereas the inhibited state may not affect deeper refinement of the emotional content. However, no interaction was observed in positive emotional language processing. These results suggest an interaction between inhibitory control and negative emotional language processing in Chinese, supporting the integrative emotion-cognition view.</p></div>","PeriodicalId":55330,"journal":{"name":"Brain and Language","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142001415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Individual differences in visual pattern completion predict adaptation to degraded speech
Pub Date: 2024-08-01 | DOI: 10.1016/j.bandl.2024.105449
Julia R. Drouin, Charles P. Davis
Recognizing acoustically degraded speech relies on predictive processing, whereby incomplete auditory cues are mapped to stored linguistic representations via pattern recognition processes. While listeners vary in their ability to recognize degraded speech, performance improves when a written transcription is presented, allowing the partial sensory pattern to be completed against preexisting representations. Building on work characterizing predictive processing as pattern completion, we examined the relationship between domain-general pattern recognition and individual variation in degraded speech learning. Participants completed a visual pattern recognition task to measure individual-level tendency toward pattern completion. Participants were also trained to recognize noise-vocoded speech with written transcriptions and tested on speech recognition pre- and post-training using a retrieval-based transcription task. Listeners significantly improved in recognizing speech after training, and pattern completion on the visual task predicted improvement for novel items. The results implicate pattern completion as a domain-general learning mechanism that can facilitate speech adaptation in challenging contexts.
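To make the noise-vocoding manipulation concrete, here is a minimal channel-vocoder sketch: the signal is split into frequency bands, each band's amplitude envelope is extracted, and the envelopes modulate band-limited noise before the channels are summed. The channel count, band edges, and filter settings are illustrative assumptions, not a reconstruction of the study's stimuli.

```python
# Minimal noise-vocoder sketch (assumed channel count and band edges);
# illustrative only, not the study's stimulus-generation code.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=4000.0):
    """Replace spectral detail with band-limited noise modulated by band envelopes."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced channel edges
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        env = np.abs(hilbert(band))                              # band envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(signal.size))
        out += env * carrier                                     # envelope-modulated noise
    return out / np.max(np.abs(out))

# Example: vocode one second of a synthetic harmonic, speech-like signal.
fs = 16000
t = np.arange(0, 1, 1 / fs)
speech_like = sum(np.sin(2 * np.pi * f * t) for f in (220, 440, 880))
vocoded = noise_vocode(speech_like, fs)
```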
{"title":"Individual differences in visual pattern completion predict adaptation to degraded speech","authors":"Julia R. Drouin , Charles P. Davis","doi":"10.1016/j.bandl.2024.105449","DOIUrl":"10.1016/j.bandl.2024.105449","url":null,"abstract":"<div><p>Recognizing acoustically degraded speech relies on predictive processing whereby incomplete auditory cues are mapped to stored linguistic representations via pattern recognition processes. While listeners vary in their ability to recognize degraded speech, performance improves when a written transcription is presented, allowing completion of the partial sensory pattern to preexisting representations. Building on work characterizing predictive processing as pattern completion, we examined the relationship between domain-general pattern recognition and individual variation in degraded speech learning. Participants completed a visual pattern recognition task to measure individual-level tendency towards pattern completion. Participants were also trained to recognize noise-vocoded speech with written transcriptions and tested on speech recognition pre- and post-training using a retrieval-based transcription task. Listeners significantly improved in recognizing speech after training, and pattern completion on the visual task predicted improvement for novel items. The results implicate pattern completion as a domain-general learning mechanism that can facilitate speech adaptation in challenging contexts.</p></div>","PeriodicalId":55330,"journal":{"name":"Brain and Language","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141861744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Language and communication functioning in children and adolescents with agenesis of the corpus callosum
Pub Date: 2024-08-01 | DOI: 10.1016/j.bandl.2024.105448
Charlene Moser, Megan M. Spencer-Smith, Peter J. Anderson, Alissandra McIlroy, Amanda G. Wood, Richard J. Leventer, Vicki A. Anderson, Vanessa Siffredi
The corpus callosum, the largest white matter inter-hemispheric pathway, is involved in language and communication. In a cohort of 15 children and adolescents (8–15 years) with developmental absence of the corpus callosum (AgCC), this study aimed to describe language and everyday communication functioning and to explore the role of anatomical factors, social risk, and non-verbal IQ in these outcomes. Standardised measures of language and everyday communication functioning, intellectual ability, and social risk were used. AgCC classification and the volume of the anterior commissure, a potential alternative pathway, were extracted from T1-weighted images. Participants with AgCC showed reduced receptive and expressive language compared with test norms, and high rates of language and communication impairments. Complete AgCC, higher social risk, and lower non-verbal IQ were associated with communication difficulties. Anterior commissure volume was not associated with language or communication. Recognising this heterogeneity in language and communication functioning enhances our understanding and suggests specific targets for potential interventions.
{"title":"Language and communication functioning in children and adolescents with agenesis of the corpus callosum","authors":"Charlene Moser , Megan M. Spencer-Smith , Peter J. Anderson , Alissandra McIlroy , Amanda G. Wood , Richard J. Leventer , Vicki A. Anderson , Vanessa Siffredi","doi":"10.1016/j.bandl.2024.105448","DOIUrl":"10.1016/j.bandl.2024.105448","url":null,"abstract":"<div><p>The corpus callosum, the largest white matter inter-hemispheric pathway, is involved in language and communication. In a cohort of 15 children and adolescents (8–15 years) with developmental absence of the corpus callosum (AgCC), this study aimed to describe language and everyday communication functioning, and explored the role of anatomical factors, social risk, and non-verbal IQ in these outcomes. Standardised measures of language and everyday communication functioning, intellectual ability and social risk were used. AgCC classification and anterior commissure volume, a potential alternative pathway, were extracted from T1-weighted images. Participants with AgCC showed reduced receptive and expressive language compared with test norms, and high rates of language and communication impairments. Complete AgCC, higher social risk and lower non-verbal IQ were associated with communication difficulties. Anterior commissure volume was not associated with language and communication. Recognising heterogeneity in language and communication functioning enhances our understanding and suggests specific focuses for potential interventions.</p></div>","PeriodicalId":55330,"journal":{"name":"Brain and Language","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0093934X24000713/pdfft?md5=6e4bff27501090ac35c359ea65d91681&pid=1-s2.0-S0093934X24000713-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141861745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural underpinnings of sentence reading in deaf, native sign language users
Pub Date: 2024-08-01 | DOI: 10.1016/j.bandl.2024.105447
Justyna Kotowicz, Anna Banaszkiewicz, Gabriela Dzięgiel-Fivet, Karen Emmorey, Artur Marchewka, Katarzyna Jednoróg
The goal of this study was to investigate sentence-level reading circuits in deaf native signers, a unique group of deaf people immersed in a fully accessible linguistic environment from birth, and in hearing readers. Task-based fMRI, functional connectivity, and lateralization analyses were conducted. Both groups exhibited overlapping brain activity in left-hemisphere perisylvian regions in response to a semantic sentence task. We found increased activity in left occipitotemporal and right frontal and temporal regions in deaf readers. Lateralization analyses did not confirm greater rightward asymmetry in deaf individuals. Deaf readers exhibited weaker functional connectivity between the inferior frontal and middle temporal gyri and enhanced coupling between temporal and insular cortex. In conclusion, despite the shared functional activity within the semantic reading network across both groups, our results suggest greater reliance on cognitive control processes in deaf readers, possibly resulting in greater effort required to perform the task in this group.
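Two of the analyses mentioned here, lateralization and region-to-region functional connectivity, reduce to simple computations. The sketch below shows a conventional laterality index, LI = (L - R) / (L + R), and a Pearson-correlation connectivity estimate between two ROI time courses. The numbers, run length, and ROI labels are made up for illustration and do not come from the study.

```python
# Minimal sketch (made-up numbers) of a laterality index and ROI-to-ROI
# functional connectivity; not the authors' analysis pipeline.
import numpy as np

def laterality_index(left, right):
    """LI = (L - R) / (L + R); +1 is fully left-lateralised, -1 fully right."""
    return (left - right) / (left + right)

# Example: summed suprathreshold activation in homologous left/right language ROIs.
print(f"LI = {laterality_index(850.0, 430.0):.2f}")

# Functional connectivity as the correlation between two ROI time courses.
rng = np.random.default_rng(0)
n_volumes = 240                                          # assumed fMRI run length
ifg = rng.standard_normal(n_volumes)                     # inferior frontal gyrus
mtg = 0.5 * ifg + 0.5 * rng.standard_normal(n_volumes)   # middle temporal gyrus
print(f"IFG-MTG connectivity (Pearson r): {np.corrcoef(ifg, mtg)[0, 1]:.2f}")
```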
{"title":"Neural underpinnings of sentence reading in deaf, native sign language users","authors":"Justyna Kotowicz , Anna Banaszkiewicz , Gabriela Dzięgiel-Fivet , Karen Emmorey , Artur Marchewka , Katarzyna Jednoróg","doi":"10.1016/j.bandl.2024.105447","DOIUrl":"10.1016/j.bandl.2024.105447","url":null,"abstract":"<div><p>The goal of this study was to investigate sentence-level reading circuits in deaf native signers, a unique group of deaf people who are immersed in a fully accessible linguistic environment from birth, and hearing readers. Task-based fMRI, functional connectivity and lateralization analyses were conducted. Both groups exhibited overlapping brain activity in the left-hemispheric perisylvian regions in response to a semantic sentence task. We found increased activity in left occipitotemporal and right frontal and temporal regions in deaf readers. Lateralization analyses did not confirm more rightward asymmetry in deaf individuals. Deaf readers exhibited weaker functional connectivity between inferior frontal and middle temporal gyri and enhanced coupling between temporal and insular cortex. In conclusion, despite the shared functional activity within the semantic reading network across both groups, our results suggest greater reliance on cognitive control processes for deaf readers, possibly resulting in greater effort required to perform the task in this group.</p></div>","PeriodicalId":55330,"journal":{"name":"Brain and Language","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141857176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Production of relative clauses in Cantonese-speaking children with and without Developmental Language Disorder
Pub Date: 2024-07-01 | DOI: 10.1016/j.bandl.2024.105425
Jane Lai, Angel Chan, Evan Kidd
Developmental Language Disorder (DLD) has been explained either as deriving from an abstract representational deficit or as emerging from difficulties in acquiring and coordinating the multiple interacting cues that guide learning. These competing explanations are often difficult to decide between when tested on European languages. This paper reports an experimental study of relative clause (RC) production in Cantonese-speaking children with and without DLD, which enabled us to test multiple developmental predictions derived from one prominent theory, emergentism. Children with DLD (N = 22; aged 6;6–9;7) were compared with age-matched typically developing peers (N = 23) and language-matched, typically developing children (N = 21; aged 4;7–7;6) on a sentence repetition task. Results showed that children’s production across multiple RC types was influenced by structural frequency, general semantic complexity, and the linear order of constituents, with the DLD group performing worse than their age-matched and language-matched peers. The results are consistent with the emergentist explanation of DLD.
{"title":"Production of relative clauses in Cantonese-speaking children with and without Developmental Language Disorder","authors":"Jane Lai , Angel Chan , Evan Kidd","doi":"10.1016/j.bandl.2024.105425","DOIUrl":"10.1016/j.bandl.2024.105425","url":null,"abstract":"<div><p>Developmental Language Disorder (DLD) has been explained as either a deficit deriving from an abstract representational deficit or as emerging from difficulties in acquiring and coordinating multiple interacting cues guiding learning. These competing explanations are often difficult to decide between when tested on European languages. This paper reports an experimental study of relative clause (RC) production in Cantonese-speaking children with and without DLD, which enabled us to test multiple developmental predictions derived from one prominent theory − emergentism. Children with DLD (N = 22; aged 6;6–9;7) were compared with age-matched typically-developing peers (N = 23) and language-matched, typically-developing children (N = 21; aged 4;7–7;6) on a sentence repetition task. Results showed that children’s production across multiple RC types was influenced by structural frequency, general semantic complexity, and the linear order of constituents, with the DLD group performing worse than their age-matched and language-matched peers. The results are consistent with the emergentist explanation of DLD.</p></div>","PeriodicalId":55330,"journal":{"name":"Brain and Language","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141565186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An electrophysiological investigation of referential communication
Pub Date: 2024-07-01 | DOI: 10.1016/j.bandl.2024.105438
Veena D. Dwivedi, Janahan Selvanayagam
A key aspect of linguistic communication involves semantic reference to objects. Here, we investigate neural responses at object nouns when reference is disrupted, e.g., “The connoisseur tasted *that wine…” vs. “…*that roof…”. Without any previous linguistic context or visual gesture, use of the demonstrative determiner “that” renders interpretation at the noun incoherent. This incoherence is not based on knowledge of how the world plausibly works but instead on grammatical rules of reference. Whereas Event-Related Potential (ERP) responses to sentences such as “The connoisseur tasted the wine…” vs. “…the roof…” would result in an N400 effect, it is unclear what to expect for the doubly incoherent “…*that roof…”. Results revealed an N400 effect, as expected, preceded by a P200 component (instead of the predicted P600 effect). These independent ERP components in the doubly violated condition support the notion that semantic interpretation can be partitioned into grammatical vs. contextual constructs.
{"title":"An electrophysiological investigation of referential communication","authors":"Veena D. Dwivedi , Janahan Selvanayagam","doi":"10.1016/j.bandl.2024.105438","DOIUrl":"10.1016/j.bandl.2024.105438","url":null,"abstract":"<div><p>A key aspect of linguistic communication involves semantic reference to objects. Presently, we investigate neural responses at objects when reference is disrupted, e.g., <em>“The connoisseur tasted *that <u>wine</u>“…</em> vs. <em>“</em>…*<em>that <u>roof</u>…”</em> Without any previous linguistic context or visual gesture, use of the demonstrative determiner <em>“that”</em> renders interpretation at the noun as incoherent. This incoherence is not based on knowledge of how the world plausibly works but instead is based on grammatical rules of reference. Whereas Event-Related Potential (ERP) responses to sentences such as <em>“The connoisseur tasted the <u>wine</u> …”</em> vs. <em>“the <u>roof</u>”</em> would result in an N400 effect, it is unclear what to expect for doubly incoherent <em>“</em>…*<em>that <u>roof</u>…”</em>. Results revealed an N400 effect, as expected, preceded by a P200 component (instead of predicted P600 effect). These independent ERP components at the doubly violated condition support the notion that semantic interpretation can be partitioned into grammatical vs. contextual constructs.</p></div>","PeriodicalId":55330,"journal":{"name":"Brain and Language","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0093934X24000610/pdfft?md5=5f5e2fe644072d0809e7e14b2ad11e83&pid=1-s2.0-S0093934X24000610-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141472896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ERP evidence for cross-domain prosodic priming from music to speech
Pub Date: 2024-07-01 | DOI: 10.1016/j.bandl.2024.105439
Mingjiang Sun, Weijing Xing, Wenjing Yu, L. Robert Slevc, Weijun Li
Considerable work has investigated similarities between the processing of music and language, but it remains unclear whether typical, genuine music can influence speech processing via cross-domain priming. To investigate this, we measured ERPs to musical phrases and to syntactically ambiguous Chinese phrases that could be disambiguated by early or late prosodic boundaries. Musical primes also had either early or late prosodic boundaries and we asked participants to judge whether the prime and target have the same structure. Within musical phrases, prosodic boundaries elicited reduced N1 and enhanced P2 components (relative to the no-boundary condition) and musical phrases with late boundaries exhibited a closure positive shift (CPS) component. More importantly, primed target phrases elicited a smaller CPS compared to non-primed phrases, regardless of the type of ambiguous phrase. These results suggest that prosodic priming can occur across domains, supporting the existence of common neural processes in music and language processing.
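The priming effect reported here, a smaller closure positive shift (CPS) for primed than non-primed targets, is the kind of effect typically tested by comparing per-participant mean amplitudes in a post-boundary window. The sketch below illustrates that comparison with a paired t-test on simulated values; the sample size, window, and amplitudes are assumptions, not the study's data.

```python
# Minimal sketch (simulated per-subject amplitudes) of testing a CPS reduction
# for primed vs. non-primed targets; not the authors' statistics.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_subjects = 24  # assumed sample size for illustration

# Per-subject mean CPS amplitude (µV) in the post-boundary window, per condition.
cps_nonprimed = rng.normal(loc=2.0, scale=0.8, size=n_subjects)
cps_primed = cps_nonprimed - rng.normal(loc=0.6, scale=0.5, size=n_subjects)

t_stat, p_val = ttest_rel(cps_nonprimed, cps_primed)
print(f"non-primed vs. primed CPS: t({n_subjects - 1}) = {t_stat:.2f}, p = {p_val:.3f}")
```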
{"title":"ERP evidence for cross-domain prosodic priming from music to speech","authors":"Mingjiang Sun , Weijing Xing , Wenjing Yu , L. Robert Slevc , Weijun Li","doi":"10.1016/j.bandl.2024.105439","DOIUrl":"10.1016/j.bandl.2024.105439","url":null,"abstract":"<div><p>Considerable work has investigated similarities between the processing of music and language, but it remains unclear whether typical, genuine music can influence speech processing via cross-domain priming. To investigate this, we measured ERPs to musical phrases and to syntactically ambiguous Chinese phrases that could be disambiguated by early or late prosodic boundaries. Musical primes also had either early or late prosodic boundaries and we asked participants to judge whether the prime and target have the same structure. Within musical phrases, prosodic boundaries elicited reduced N1 and enhanced P2 components (relative to the no-boundary condition) and musical phrases with late boundaries exhibited a closure positive shift (CPS) component. More importantly, primed target phrases elicited a smaller CPS compared to non-primed phrases, regardless of the type of ambiguous phrase. These results suggest that prosodic priming can occur across domains, supporting the existence of common neural processes in music and language processing.</p></div>","PeriodicalId":55330,"journal":{"name":"Brain and Language","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141472835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}