Understanding the Effects of Constraint and Predictability in ERP
Pub Date: 2023-01-01. DOI: 10.1162/nol_a_00094. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10205153/pdf/
Kate Stone, Bruno Nicenboim, Shravan Vasishth, Frank Rösler
Intuitively, strongly constraining contexts should lead to stronger probabilistic representations of sentences in memory. Encountering unexpected words could therefore be expected to trigger costlier shifts in these representations than expected words. However, psycholinguistic measures commonly used to study probabilistic processing, such as the N400 event-related potential (ERP) component, are sensitive to word predictability but not to contextual constraint. Some research suggests that constraint-related processing cost may be measurable via an ERP positivity following the N400, known as the anterior post-N400 positivity (PNP). The PNP is argued to reflect update of a sentence representation and to be distinct from the posterior P600, which reflects conflict detection and reanalysis. However, constraint-related PNP findings are inconsistent. We sought to conceptually replicate Federmeier et al. (2007) and Kuperberg et al. (2020), who observed that the PNP, but not the N400 or the P600, was affected by constraint at unexpected but plausible words. Using a pre-registered design and statistical approach maximising power, we demonstrated a dissociated effect of predictability and constraint: strong evidence for predictability but not constraint in the N400 window, and strong evidence for constraint but not predictability in the later window. However, the constraint effect was consistent with a P600 and not a PNP, suggesting increased conflict between a strong representation and unexpected input rather than greater update of the representation. We conclude that either a simple strong/weak constraint design is not always sufficient to elicit the PNP, or that previous PNP constraint findings could be an artifact of smaller sample size.
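The window-based contrast at issue here can be made concrete with a small analysis sketch: average epoched EEG within an early window for the N400 and a later window for the PNP/P600, then compare conditions. The window bounds (300-500 ms and 600-1000 ms), array shapes, and condition coding below are illustrative assumptions, not the authors' pre-registered pipeline.

```python
import numpy as np

# Hypothetical epoched EEG: (n_trials, n_channels, n_samples), 500 Hz, epoch 0-1200 ms.
rng = np.random.default_rng(0)
fs = 500
times = np.arange(0, 1.2, 1 / fs)                 # seconds relative to word onset
epochs = rng.normal(0, 5, (400, 32, times.size))  # placeholder data in microvolts

# Illustrative 2x2 coding: predictability and constraint per trial.
predictable = rng.integers(0, 2, 400).astype(bool)
strong_constraint = rng.integers(0, 2, 400).astype(bool)

def window_mean(epochs, times, lo, hi):
    """Mean amplitude within a time window (returns trials x channels)."""
    mask = (times >= lo) & (times < hi)
    return epochs[:, :, mask].mean(axis=2)

n400 = window_mean(epochs, times, 0.300, 0.500)   # assumed N400 window
late = window_mean(epochs, times, 0.600, 1.000)   # assumed PNP/P600 window

# The dissociation reported in the abstract, as simple condition differences
# (sign conventions are schematic):
n400_predictability = n400[~predictable].mean() - n400[predictable].mean()
late_constraint = late[strong_constraint].mean() - late[~strong_constraint].mean()
print(f"N400 predictability effect: {n400_predictability:.2f} uV")
print(f"Late-window constraint effect: {late_constraint:.2f} uV")
```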
Causal Contributions of the Domain-General (Multiple Demand) and the Language-Selective Brain Networks to Perceptual and Semantic Challenges in Speech Comprehension
Pub Date: 2022-12-16. eCollection Date: 2022-01-01. DOI: 10.1162/nol_a_00081. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9893226/pdf/
Lucy J MacGregor, Rebecca A Gilbert, Zuzanna Balewski, Daniel J Mitchell, Sharon W Erzinçlioğlu, Jennifer M Rodd, John Duncan, Evelina Fedorenko, Matthew H Davis
Listening to spoken language engages domain-general multiple demand (MD; frontoparietal) regions of the human brain, in addition to domain-selective (frontotemporal) language regions, particularly when comprehension is challenging. However, there is limited evidence that the MD network makes a functional contribution to core aspects of understanding language. In a behavioural study of volunteers (n = 19) with chronic brain lesions, but without aphasia, we assessed the causal role of these networks in perceiving, comprehending, and adapting to spoken sentences made more challenging by acoustic degradation or lexico-semantic ambiguity. We measured perception of and adaptation to acoustically degraded (noise-vocoded) sentences with a word report task before and after training. Participants with greater damage to MD but not language regions required more vocoder channels to achieve 50% word report, indicating impaired perception. Perception improved following training, reflecting adaptation to acoustic degradation, but adaptation was unrelated to lesion location or extent. Comprehension of spoken sentences with semantically ambiguous words was measured with a sentence coherence judgement task. Accuracy was high and unaffected by lesion location or extent. Adaptation to semantic ambiguity was measured in a subsequent word association task, which showed that availability of lower-frequency meanings of ambiguous words increased following their comprehension (word-meaning priming). Word-meaning priming was reduced for participants with greater damage to language but not MD regions. Language and MD networks make dissociable contributions to challenging speech comprehension: Using recent experience to update word meaning preferences depends on language-selective regions, whereas the domain-general MD network plays a causal role in reporting words from degraded speech.
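The "50% word report" measure suggests a psychometric-function sketch: fit a logistic curve to report accuracy as a function of the number of vocoder channels and read off the channel count at 50% correct. The data points, two-parameter logistic, and fitting choices below are illustrative assumptions, not the study's analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical word-report accuracy (proportion correct) per vocoder channel count.
channels = np.array([1, 2, 3, 4, 6, 8, 12, 16], dtype=float)
accuracy = np.array([0.02, 0.08, 0.20, 0.40, 0.65, 0.80, 0.92, 0.96])

def logistic(x, x0, k):
    """Two-parameter logistic: x0 is the 50% point, k the slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, channels, accuracy, p0=[5.0, 1.0])
print(f"Estimated channels for 50% word report: {x0:.2f} (slope {k:.2f})")
```

Under this reading, a higher estimated x0 indexes poorer perception, the measure that related to MD damage in the study.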
{"title":"Causal Contributions of the Domain-General (Multiple Demand) and the Language-Selective Brain Networks to Perceptual and Semantic Challenges in Speech Comprehension.","authors":"Lucy J MacGregor, Rebecca A Gilbert, Zuzanna Balewski, Daniel J Mitchell, Sharon W Erzinçlioğlu, Jennifer M Rodd, John Duncan, Evelina Fedorenko, Matthew H Davis","doi":"10.1162/nol_a_00081","DOIUrl":"10.1162/nol_a_00081","url":null,"abstract":"<p><p>Listening to spoken language engages domain-general multiple demand (MD; frontoparietal) regions of the human brain, in addition to domain-selective (frontotemporal) language regions, particularly when comprehension is challenging. However, there is limited evidence that the MD network makes a functional contribution to core aspects of understanding language. In a behavioural study of volunteers (<i>n</i> = 19) with chronic brain lesions, but without aphasia, we assessed the causal role of these networks in perceiving, comprehending, and adapting to spoken sentences made more challenging by acoustic-degradation or lexico-semantic ambiguity. We measured perception of and adaptation to acoustically degraded (noise-vocoded) sentences with a word report task before and after training. Participants with greater damage to MD but not language regions required more vocoder channels to achieve 50% word report, indicating impaired perception. Perception improved following training, reflecting adaptation to acoustic degradation, but adaptation was unrelated to lesion location or extent. Comprehension of spoken sentences with semantically ambiguous words was measured with a sentence coherence judgement task. Accuracy was high and unaffected by lesion location or extent. Adaptation to semantic ambiguity was measured in a subsequent word association task, which showed that availability of lower-frequency meanings of ambiguous words increased following their comprehension (word-meaning priming). Word-meaning priming was reduced for participants with greater damage to language but not MD regions. Language and MD networks make dissociable contributions to challenging speech comprehension: Using recent experience to update word meaning preferences depends on language-selective regions, whereas the domain-general MD network plays a causal role in reporting words from degraded speech.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2022-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9893226/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9705360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) Framework for Understanding Musicality-Language Links Across the Lifespan
Pub Date: 2022-12-16. eCollection Date: 2022-01-01. DOI: 10.1162/nol_a_00079. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9893227/pdf/
Srishti Nayak, Peyton L Coleman, Enikő Ladányi, Rachana Nitin, Daniel E Gustavson, Simon E Fisher, Cyrille L Magne, Reyna L Gordon
Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has often been overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
{"title":"The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) Framework for Understanding Musicality-Language Links Across the Lifespan.","authors":"Srishti Nayak, Peyton L Coleman, Enikő Ladányi, Rachana Nitin, Daniel E Gustavson, Simon E Fisher, Cyrille L Magne, Reyna L Gordon","doi":"10.1162/nol_a_00079","DOIUrl":"10.1162/nol_a_00079","url":null,"abstract":"<p><p>Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has been often overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2022-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9893227/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10686983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neurobiology of Language: Volume 3 Reviewers List","authors":"","doi":"10.1162/nol_e_00096","DOIUrl":"https://doi.org/10.1162/nol_e_00096","url":null,"abstract":"","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44533448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supramodal Sentence Processing in the Human Brain: fMRI Evidence for the Influence of Syntactic Complexity in More Than 200 Participants
Pub Date: 2022-09-29. eCollection Date: 2022-01-01. DOI: 10.1162/nol_a_00076. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158636/pdf/
Julia Uddén, Annika Hultén, Jan-Mathijs Schoffelen, Nietzsche Lam, Karin Harbusch, Antal van den Bosch, Gerard Kempen, Karl Magnus Petersson, Peter Hagoort
This study investigated two questions: To what degree is sentence processing beyond single words independent of the input modality (speech vs. reading)? And which parts of the network recruited by both modalities are sensitive to syntactic complexity? We addressed these questions by having more than 200 participants read or listen to well-formed sentences or series of unconnected words. A largely left-hemisphere frontotemporoparietal network was found to be supramodal in nature, i.e., independent of input modality. In addition, the left inferior frontal gyrus (LIFG) and the left posterior middle temporal gyrus (LpMTG) were most clearly associated with left-branching complexity. The left anterior temporal lobe showed the greatest sensitivity to sentences that differed in right-branching complexity. Moreover, activity in LIFG and LpMTG increased from sentence onset to end, in parallel with an increase of left-branching complexity. While LIFG, bilateral anterior temporal lobe, posterior MTG, and left inferior parietal lobe all contribute to the supramodal unification processes, the results suggest that these regions differ in their respective contributions to the processing of syntactic complexity. The consequences of these findings for neurobiological models of language processing are discussed.
{"title":"Supramodal Sentence Processing in the Human Brain: fMRI Evidence for the Influence of Syntactic Complexity in More Than 200 Participants.","authors":"Julia Uddén, Annika Hultén, Jan-Mathijs Schoffelen, Nietzsche Lam, Karin Harbusch, Antal van den Bosch, Gerard Kempen, Karl Magnus Petersson, Peter Hagoort","doi":"10.1162/nol_a_00076","DOIUrl":"10.1162/nol_a_00076","url":null,"abstract":"<p><p>This study investigated two questions. One is: To what degree is sentence processing beyond single words independent of the input modality (speech vs. reading)? The second question is: Which parts of the network recruited by both modalities is sensitive to syntactic complexity? These questions were investigated by having more than 200 participants read or listen to well-formed sentences or series of unconnected words. A largely left-hemisphere frontotemporoparietal network was found to be supramodal in nature, i.e., independent of input modality. In addition, the left inferior frontal gyrus (LIFG) and the left posterior middle temporal gyrus (LpMTG) were most clearly associated with left-branching complexity. The left anterior temporal lobe showed the greatest sensitivity to sentences that differed in right-branching complexity. Moreover, activity in LIFG and LpMTG increased from sentence onset to end, in parallel with an increase of the left-branching complexity. While LIFG, bilateral anterior temporal lobe, posterior MTG, and left inferior parietal lobe all contribute to the supramodal unification processes, the results suggest that these regions differ in their respective contributions to syntactic complexity related processing. The consequences of these findings for neurobiological models of language processing are discussed.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2022-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158636/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9858907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Left Frontal White Matter Links to Rhythm Processing Relevant to Speech Production in Apraxia of Speech
Pub Date: 2022-09-22. eCollection Date: 2022-01-01. DOI: 10.1162/nol_a_00075. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158569/pdf/
Rose Bruffaerts, Jolien Schaeverbeke, Ahmed Radwan, Manon Grube, Silvy Gabel, An-Sofie De Weer, Eva Dries, Karen Van Bouwel, Timothy D Griffiths, Stefan Sunaert, Rik Vandenberghe
Recent mechanistic models argue for a key role of rhythm processing in both speech production and speech perception. Patients with the non-fluent variant (NFV) of primary progressive aphasia (PPA) with apraxia of speech (AOS) represent a specific study population in which this link can be examined. Previously, we observed impaired rhythm processing in NFV with AOS. We hypothesized that a shared neurocomputational mechanism structures auditory input (sound and speech) and output (speech production) in time, a "temporal scaffolding" mechanism. Since considerable white matter damage is observed in NFV, we tested here whether white matter changes are related to impaired rhythm processing. Forty-seven participants performed a psychoacoustic test battery: 12 patients with NFV and AOS, 11 patients with the semantic variant of PPA, and 24 cognitively intact age- and education-matched controls. Deformation-based morphometry was used to test whether white matter volume correlated with rhythmic abilities. In 34 participants, we also obtained tract-based metrics of the left Aslant tract, which is typically damaged in patients with NFV. Nine out of 12 patients with NFV displayed impaired rhythmic processing. Left frontal white matter atrophy adjacent to the supplementary motor area (SMA) correlated with poorer rhythmic abilities. The structural integrity of the left Aslant tract also correlated with rhythmic abilities. A colocalized, and perhaps shared, white matter substrate adjacent to the SMA is thus associated with both impaired rhythmic processing and motor speech impairment. Our results support the existence of a temporal scaffolding mechanism structuring perceptual input and speech output.
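A minimal sketch of the morphometry-style analysis described here: correlate a per-voxel structural measure with a behavioural score across participants. The array shapes, variable names, and plain Pearson correlation are illustrative assumptions (actual deformation-based morphometry involves registration and smoothing steps not shown).

```python
import numpy as np

rng = np.random.default_rng(5)
n_subjects, n_voxels = 47, 4000
volume = rng.normal(size=(n_subjects, n_voxels))   # e.g., local white matter volume
rhythm = rng.normal(size=n_subjects)               # rhythm-discrimination score

# Vectorised Pearson r between the score and every voxel simultaneously:
# standardise both sides, then take the mean cross-product per voxel.
v = (volume - volume.mean(axis=0)) / volume.std(axis=0)
z = (rhythm - rhythm.mean()) / rhythm.std()
r = v.T @ z / n_subjects
print(f"Strongest voxelwise correlation: r = {r.max():.3f}")
```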
Hierarchy, Not Lexical Regularity, Modulates Low-Frequency Neural Synchrony During Language Comprehension
Pub Date: 2022-09-22. eCollection Date: 2022-01-01. DOI: 10.1162/nol_a_00077. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158645/pdf/
Chia-Wen Lo, Tzu-Yun Tung, Alan Hezao Ke, Jonathan R Brennan
Neural responses appear to synchronize with sentence structure. However, researchers have debated whether this response in the delta band (0.5-3 Hz) really reflects hierarchical information or simply lexical regularities. Computational simulations in which sentences are represented simply as sequences of high-dimensional numeric vectors that encode lexical information seem to give rise to power spectra similar to those observed for sentence synchronization, suggesting that sentence-level cortical tracking findings may reflect sequential lexical or part-of-speech information, and not necessarily hierarchical syntactic information. Using electroencephalography (EEG) data and the frequency-tagging paradigm, we develop a novel experimental condition to tease apart the predictions of the lexical and the hierarchical accounts of the attested low-frequency synchronization. Under a lexical model, synchronization should be observed even when words are reversed within their phrases (e.g., "sheep white grass eat" instead of "white sheep eat grass"), because the same lexical items are preserved at the same regular intervals. Critically, such stimuli are not syntactically well-formed; thus a hierarchical model does not predict synchronization of phrase- and sentence-level structure in the reversed phrase condition. Computational simulations confirm these diverging predictions. EEG data from N = 31 native speakers of Mandarin show robust delta synchronization to syntactically well-formed isochronous speech. Importantly, no such pattern is observed for reversed phrases, consistent with the hierarchical, but not the lexical, accounts.
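The lexical-model simulation can be sketched along the lines the abstract describes: represent each word as a fixed high-dimensional vector, present words isochronously, and inspect the power spectrum of the resulting vector time series for normal versus phrase-reversed orders. The dimensionality, word rate, four-word lexicon, and power summary below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, word_rate, n_sentences = 50, 4.0, 200     # 4 words/s -> sentence rate 1 Hz
lexicon = {w: rng.normal(size=dim) for w in ["white", "sheep", "eat", "grass"]}

def signal(word_order):
    """Stack word vectors for repeated 4-word sentences (one sample per word)."""
    seq = [lexicon[w] for _ in range(n_sentences) for w in word_order]
    return np.array(seq)                        # shape: (n_words_total, dim)

def power(x):
    """Power spectrum of the vector time series, summed over dimensions."""
    spec = np.abs(np.fft.rfft(x - x.mean(axis=0), axis=0)) ** 2
    return spec.sum(axis=1)

freqs = np.fft.rfftfreq(4 * n_sentences, d=1 / word_rate)
p_normal = power(signal(["white", "sheep", "eat", "grass"]))
p_reversed = power(signal(["sheep", "white", "grass", "eat"]))

# The same lexical items recur at the same 1 Hz rate in both orders, so a purely
# lexical model predicts low-frequency peaks in both conditions; a hierarchical
# model predicts phrase/sentence synchrony only for the well-formed order.
for f in (1.0, 2.0):
    i = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz power  normal: {p_normal[i]:.1f}  reversed: {p_reversed[i]:.1f}")
```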
Differential Tracking of Linguistic vs. Mental State Content in Naturalistic Stimuli by Language and Theory of Mind (ToM) Brain Networks
Pub Date: 2022-07-14. eCollection Date: 2022-01-01. DOI: 10.1162/nol_a_00071. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158571/pdf/
Alexander M Paunov, Idan A Blank, Olessia Jouravlev, Zachary Mineroff, Jeanne Gallée, Evelina Fedorenko
Language and social cognition, especially the ability to reason about mental states, known as theory of mind (ToM), are deeply related in development and everyday use. However, whether these cognitive faculties rely on distinct, overlapping, or the same mechanisms remains debated. Some evidence suggests that, by adulthood, language and ToM draw on largely distinct, though plausibly interacting, cortical networks. However, the broad topography of these networks is similar, and some have emphasized the importance of social content and communicative intent in the linguistic signal for eliciting responses in the language areas. Here, we combine the power of individual-subject functional localization with the naturalistic-cognition inter-subject correlation approach to illuminate the language-ToM relationship. Using functional magnetic resonance imaging (fMRI), we recorded neural activity as participants (n = 43) listened to stories and dialogues with mental state content (+linguistic, +ToM), viewed silent animations and live action films with mental state content but no language (-linguistic, +ToM), or listened to an expository text (+linguistic, -ToM). The ToM network robustly tracked stimuli rich in mental state information regardless of whether mental states were conveyed linguistically or non-linguistically, while tracking a +linguistic / -ToM stimulus only weakly. In contrast, the language network tracked linguistic stimuli more strongly than (a) non-linguistic stimuli and (b) the ToM network did, and showed reliable tracking even for the linguistic condition devoid of mental state content. These findings suggest that, in spite of their indisputably close links, language and ToM dissociate robustly in their neural substrates (and thus plausibly in their cognitive mechanisms), including during the processing of rich naturalistic materials.
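The inter-subject correlation approach named here reduces to a few lines: correlate each subject's regional time course with the mean time course of the remaining subjects (leave-one-out ISC). The array sizes and simulated data below are hypothetical.

```python
import numpy as np

def leave_one_out_isc(timecourses):
    """Leave-one-out inter-subject correlation.

    timecourses: (n_subjects, n_timepoints) array for one region/network.
    Returns one ISC value per subject: the Pearson correlation between that
    subject's time course and the mean time course of all other subjects.
    """
    n = timecourses.shape[0]
    iscs = []
    for i in range(n):
        others = np.delete(timecourses, i, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(timecourses[i], others)[0, 1])
    return np.array(iscs)

# Hypothetical data: 43 subjects x 300 time points of one network's mean signal,
# built as a shared stimulus-driven component plus subject-specific noise.
rng = np.random.default_rng(2)
shared = rng.normal(size=300)
data = 0.5 * shared + rng.normal(size=(43, 300))
print(leave_one_out_isc(data).mean())   # higher ISC = stronger stimulus tracking
```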
{"title":"Differential Tracking of Linguistic vs. Mental State Content in Naturalistic Stimuli by Language and Theory of Mind (ToM) Brain Networks.","authors":"Alexander M Paunov, Idan A Blank, Olessia Jouravlev, Zachary Mineroff, Jeanne Gallée, Evelina Fedorenko","doi":"10.1162/nol_a_00071","DOIUrl":"10.1162/nol_a_00071","url":null,"abstract":"<p><p>Language and social cognition, especially the ability to reason about mental states, known as <i>theory of mind</i> (ToM), are deeply related in development and everyday use. However, whether these cognitive faculties rely on distinct, overlapping, or the same mechanisms remains debated. Some evidence suggests that, by adulthood, language and ToM draw on largely distinct-though plausibly interacting-cortical networks. However, the broad topography of these networks is similar, and some have emphasized the importance of social content / communicative intent in the linguistic signal for eliciting responses in the language areas. Here, we combine the power of individual-subject functional localization with the naturalistic-cognition inter-subject correlation approach to illuminate the language-ToM relationship. Using functional magnetic resonance imaging (fMRI), we recorded neural activity as participants (<i>n</i> = 43) listened to stories and dialogues with mental state content (+linguistic, +ToM), viewed silent animations and live action films with mental state content but no language (-linguistic, +ToM), or listened to an expository text (+linguistic, -ToM). The ToM network robustly tracked stimuli rich in mental state information regardless of whether mental states were conveyed linguistically or non-linguistically, while tracking a +linguistic / -ToM stimulus only weakly. In contrast, the language network tracked linguistic stimuli more strongly than (a) non-linguistic stimuli, and than (b) the ToM network, and showed reliable tracking even for the linguistic condition devoid of mental state content. These findings suggest that in spite of their indisputably close links, language and ToM dissociate robustly in their neural substrates-and thus plausibly cognitive mechanisms-including during the processing of rich naturalistic materials.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2022-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158571/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9509635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Weak Shadow of Early Life Language Processing Persists in the Right Hemisphere of the Mature Brain
Pub Date: 2022-05-20. eCollection Date: 2022-05-01. DOI: 10.1162/nol_a_00069. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9169899/pdf/
Kelly C Martin, Anna Seydell-Greenwald, Madison M Berl, William D Gaillard, Peter E Turkeltaub, Elissa L Newport
Studies of language organization show a striking change in cerebral dominance for language over development: We begin life with a left hemisphere (LH) bias for language processing, which is weaker than that in adults and which can be overcome if there is an LH injury. Over development, this LH bias becomes stronger and can no longer be reversed. Prior work has shown that this change results from a significant reduction in the magnitude of language activation in right hemisphere (RH) regions in adults compared to children. Here we investigate whether the spatial distribution of language activation, albeit weaker in magnitude, still persists in homotopic RH regions of the mature brain. Children aged 4-13 (n = 39) and young adults (n = 14) completed an auditory sentence comprehension fMRI (functional magnetic resonance imaging) task. To equate neural activity across the hemispheres, we applied fixed cutoffs for the number of active voxels that would be included in each hemisphere for each participant. To evaluate homotopicity, we generated left-right flipped versions of each activation map, calculated spatial overlap between the LH and RH activity in frontal and temporal regions, and tested for mean differences in the spatial overlap values between the age groups. We found that, in children as well as in adults, there was indeed a spatially intact shadow of language activity in the right frontal and temporal regions homotopic to the LH language regions. After an LH stroke in adulthood, recovering early-life activation in these regions might assist in enhancing recovery of language abilities.
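The flip-and-overlap procedure can be illustrated with a short sketch: keep a fixed number of top voxels per hemisphere, mirror the map left-right, and quantify LH/RH overlap. The map size, the 500-voxel cutoff, and the Dice coefficient as the overlap statistic are illustrative assumptions.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two boolean masks."""
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def top_voxel_mask(values, n):
    """Boolean mask of the n highest-valued voxels (a fixed cutoff)."""
    mask = np.zeros(values.size, dtype=bool)
    mask[np.argsort(values, axis=None)[-n:]] = True
    return mask.reshape(values.shape)

rng = np.random.default_rng(3)
tmap = rng.normal(size=(60, 72, 60))   # hypothetical activation map (x, y, z)
half = tmap.shape[0] // 2              # assume the first x-half is the LH
cutoff = 500                           # fixed number of voxels per hemisphere

lh = np.zeros_like(tmap, dtype=bool)
rh = np.zeros_like(tmap, dtype=bool)
lh[:half] = top_voxel_mask(tmap[:half], cutoff)
rh[half:] = top_voxel_mask(tmap[half:], cutoff)

rh_flipped = rh[::-1]                  # mirror RH activity into LH coordinates
print(f"LH vs. mirrored-RH Dice overlap: {dice(lh, rh_flipped):.3f}")
```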
{"title":"A Weak Shadow of Early Life Language Processing Persists in the Right Hemisphere of the Mature Brain.","authors":"Kelly C Martin, Anna Seydell-Greenwald, Madison M Berl, William D Gaillard, Peter E Turkeltaub, Elissa L Newport","doi":"10.1162/nol_a_00069","DOIUrl":"10.1162/nol_a_00069","url":null,"abstract":"<p><p>Studies of language organization show a striking change in cerebral dominance for language over development: We begin life with a left hemisphere (LH) bias for language processing, which is weaker than that in adults and which can be overcome if there is a LH injury. Over development this LH bias becomes stronger and can no longer be reversed. Prior work has shown that this change results from a significant reduction in the magnitude of language activation in right hemisphere (RH) regions in adults compared to children. Here we investigate whether the <i>spatial distribution</i> of language activation, albeit weaker in magnitude, still persists in homotopic RH regions of the mature brain. Children aged 4-13 (<i>n</i> = 39) and young adults (<i>n</i> = 14) completed an auditory sentence comprehension fMRI (functional magnetic resonance imaging) task. To equate neural activity across the hemispheres, we applied fixed cutoffs for the number of active voxels that would be included in each hemisphere for each participant. To evaluate homotopicity, we generated left-right flipped versions of each activation map, calculated spatial overlap between the LH and RH activity in frontal and temporal regions, and tested for mean differences in the spatial overlap values between the age groups. We found that, in children as well as in adults, there was indeed a spatially intact shadow of language activity in the right frontal and temporal regions homotopic to the LH language regions. After a LH stroke in adulthood, recovering early-life activation in these regions might assist in enhancing recovery of language abilities.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2022-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9169899/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10843837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Canonical Sentence Processing and the Inferior Frontal Cortex: Is There a Connection?
Pub Date: 2022-04-13. eCollection Date: 2022-01-01. DOI: 10.1162/nol_a_00067. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158581/pdf/
Nicholas Riccardi, Chris Rorden, Julius Fridriksson, Rutvik H Desai
The role of left inferior frontal cortex (LIFC) in canonical sentence comprehension is controversial. Many studies have found involvement of LIFC in sentence production or complex sentence comprehension, but negative or mixed results are often found in comprehension of simple or canonical sentences. We used voxel-, region-, and connectivity-based lesion symptom mapping (VLSM, RLSM, CLSM) in left-hemisphere chronic stroke survivors to investigate canonical sentence comprehension while controlling for lexical-semantic, executive, and phonological processes. We investigated how damage and disrupted white matter connectivity of LIFC and two other language-related regions, the left anterior temporal lobe (LATL) and posterior temporal-inferior parietal area (LpT-iP), affected sentence comprehension. VLSM and RLSM revealed that LIFC damage was not associated with canonical sentence comprehension measured by a sensibility judgment task. LIFC damage was associated instead with impairments in a lexical semantic similarity judgment task with high semantic/executive demands. Damage to the LpT-iP, specifically posterior middle temporal gyrus (pMTG), predicted worse sentence comprehension after controlling for visual lexical access, semantic knowledge, and auditory-verbal short-term memory (STM), but not auditory single-word comprehension, suggesting pMTG is vital for auditory language comprehension. CLSM revealed that disruption of left-lateralized white-matter connections from LIFC to LATL and LpT-iP was associated with worse sentence comprehension, controlling for performance in tasks related to lexical access, auditory word comprehension, and auditory-verbal STM. However, the LIFC connections were accounted for by the lexical semantic similarity judgment task, which had high semantic/executive demands. This suggests that LIFC connectivity is relevant to canonical sentence comprehension when task-related semantic/executive demands are high.
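The VLSM logic described here can be sketched directly: at each voxel, split patients into lesioned and spared groups and test whether their behavioral scores differ. The simulated lesion masks, scores, minimum lesion count, and uncorrected t-tests are illustrative assumptions; real analyses add covariates (e.g., lesion volume) and multiple-comparison correction.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
n_patients, n_voxels = 60, 5000
lesions = rng.random((n_patients, n_voxels)) < 0.15  # binary lesion masks
scores = rng.normal(size=n_patients)                 # sentence comprehension scores

tvals = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    hit = lesions[:, v]
    # Only test voxels lesioned in enough patients for a stable comparison.
    if hit.sum() >= 5 and (~hit).sum() >= 5:
        tvals[v] = ttest_ind(scores[~hit], scores[hit]).statistic

# Voxels with the largest t values are where damage predicts poorer scores.
print(f"Max voxelwise t: {np.nanmax(tvals):.2f}")
```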
{"title":"Canonical Sentence Processing and the Inferior Frontal Cortex: Is There a Connection?","authors":"Nicholas Riccardi, Chris Rorden, Julius Fridriksson, Rutvik H Desai","doi":"10.1162/nol_a_00067","DOIUrl":"10.1162/nol_a_00067","url":null,"abstract":"<p><p>The role of left inferior frontal cortex (LIFC) in canonical sentence comprehension is controversial. Many studies have found involvement of LIFC in sentence production or complex sentence comprehension, but negative or mixed results are often found in comprehension of simple or canonical sentences. We used voxel-, region-, and connectivity-based lesion symptom mapping (VLSM, RLSM, CLSM) in left-hemisphere chronic stroke survivors to investigate canonical sentence comprehension while controlling for lexical-semantic, executive, and phonological processes. We investigated how damage and disrupted white matter connectivity of LIFC and two other language-related regions, the left anterior temporal lobe (LATL) and posterior temporal-inferior parietal area (LpT-iP), affected sentence comprehension. VLSM and RLSM revealed that LIFC damage was not associated with canonical sentence comprehension measured by a sensibility judgment task. LIFC damage was associated instead with impairments in a lexical semantic similarity judgment task with high semantic/executive demands. Damage to the LpT-iP, specifically posterior middle temporal gyrus (pMTG), predicted worse sentence comprehension after controlling for visual lexical access, semantic knowledge, and auditory-verbal short-term memory (STM), but not auditory single-word comprehension, suggesting pMTG is vital for auditory language comprehension. CLSM revealed that disruption of left-lateralized white-matter connections from LIFC to LATL and LpT-iP was associated with worse sentence comprehension, controlling for performance in tasks related to lexical access, auditory word comprehension, and auditory-verbal STM. However, the LIFC connections were accounted for by the lexical semantic similarity judgment task, which had high semantic/executive demands. This suggests that LIFC connectivity is relevant to canonical sentence comprehension when task-related semantic/executive demands are high.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2022-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158581/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9504772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}