Neurobiology of Language: Volume 4 Reviewers List
Patti Adank, Georgios P. D. Argyropoulos, K. Armeni, Christoph Aurnhammer, Nilgoun Bahar, Jana Basnakova, Laura Batterink, Idan Blank, Lindsay Bowman, Jonathan Brennan, Trevor Brothers, Adam Buchwald, Chiara Cantiani, Stefano Cappa, Micaela Chan, Luyao Chen, Yuchun Chen, A. Chrabaszcz, Laurent Cohen, H. Coslett, Jacqueline Cummine, Anila D’Mello, A. Daliri, Nicola Del Maschio, Andrew Tesla DeMarco, D. den Ouden, Michele T. Diaz, Anthony Steven Dick, Guosheng Ding, Nai Ding, Irene Echeverria-Altuna, Mark Eckert, Allyson Ettinger, Z. Eviatar, Heather Flowers, Robert Frank, Stefan Frank, Jon Gauthier, Giulia Gennari, Fatemeh Geranmayeh, Laura Giglio
Neurobiology of Language | DOI: 10.1162/nol_e_00130 | Published: 2023-12-01
The cerebellum is sensitive to the lexical properties of words during spoken language comprehension
Hannah Mechtenberg, Christopher C. Heffner, Emily B. Myers, Sara Guediche
Neurobiology of Language | DOI: 10.1162/nol_a_00126 | Published: 2023-11-08

Abstract: Over the past few decades, research into the function of the cerebellum has expanded far beyond the motor domain. A growing number of studies are probing the role of specific cerebellar subregions, such as Crus I and Crus II, in higher-order cognitive functions, including receptive language processing. In the current fMRI study, we show evidence for the cerebellum’s sensitivity to variation in two well-studied psycholinguistic properties of words (lexical frequency and phonological neighborhood density) during passive, continuous listening to a podcast. To determine whether, and how, activity in the cerebellum correlates with these lexical properties, we modeled each word separately using an amplitude-modulated regressor, time-locked to the onset of each word. At the group level, significant effects of both lexical properties landed in the expected cerebellar subregions, Crus I and Crus II. The BOLD signal correlated with variation in both lexical properties, in patterns consistent with both language-specific and domain-general mechanisms. Activation patterns at the individual level also showed Crus I and Crus II as the most probable sites of the phonological neighborhood density and lexical frequency effects, though activation was also seen in other lobules (especially for frequency). Although the exact cerebellar mechanisms engaged during speech and language processing are not yet evident, these findings highlight the cerebellum’s role in word-level processing during continuous listening.
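The word-level modeling this abstract describes (an amplitude-modulated regressor time-locked to each word's onset) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual pipeline: the double-gamma HRF parameters are generic textbook values, and the onsets and lexical values are invented.

```python
import numpy as np

def double_gamma_hrf(tr, duration=32.0):
    """Generic double-gamma hemodynamic response, sampled at the TR."""
    t = np.arange(0, duration, tr)
    peak = t ** 5 * np.exp(-t)          # gamma peaking around 5-6 s
    undershoot = t ** 15 * np.exp(-t)   # gamma peaking around 15-16 s
    hrf = peak / peak.max() - undershoot / (6 * undershoot.max())
    return hrf / hrf.sum()

def amplitude_modulated_regressor(onsets, values, n_scans, tr):
    """Stick function at each word onset, scaled by the mean-centered
    lexical property (e.g., log frequency), convolved with the HRF."""
    sticks = np.zeros(n_scans)
    centered = np.asarray(values, dtype=float) - np.mean(values)
    for onset, v in zip(onsets, centered):
        sticks[int(round(onset / tr))] += v
    return np.convolve(sticks, double_gamma_hrf(tr))[:n_scans]

# toy example: 5 word onsets (seconds) with log-frequency values,
# 100 scans at TR = 1 s
reg = amplitude_modulated_regressor([2.0, 5.5, 9.0, 12.5, 16.0],
                                    [3.1, 1.2, 4.5, 2.0, 3.8], 100, 1.0)
print(reg.shape)  # (100,)
```

The resulting vector would enter the design matrix alongside an unmodulated word-onset regressor, so that the modulated term captures variance tied to the lexical property rather than to word occurrence itself.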
Information-Restricted Neural Language Models Reveal Different Brain Regions’ Sensitivity to Semantics, Syntax and Context
Alexandre Pasquiou, Yair Lakretz, Bertrand Thirion, Christophe Pallier
Neurobiology of Language | DOI: 10.1162/nol_a_00125 | Published: 2023-11-07

Abstract: A fundamental question in neurolinguistics concerns the brain regions involved in syntactic and semantic processing during speech comprehension, both at the lexical level (word processing) and at the supra-lexical level (sentence and discourse processing). To what extent are these regions separated or intertwined? To address this question, we introduce a novel approach exploiting neural language models to generate high-dimensional feature sets that separately encode semantic and syntactic information. More precisely, we train a lexical language model, GloVe, and a supra-lexical language model, GPT-2, on a text corpus from which we selectively removed either syntactic or semantic information. We then assess to what extent the features derived from these information-restricted models are still able to predict the fMRI time courses of humans listening to naturalistic text. Furthermore, to determine the windows of integration of brain regions involved in supra-lexical processing, we manipulate the size of the contextual information provided to GPT-2. The analyses show that, while most brain regions involved in language comprehension are sensitive to both syntactic and semantic features, the relative magnitudes of these effects vary across regions. Moreover, regions that are best fitted by semantic or syntactic features are more spatially dissociated in the left hemisphere than in the right, and the right hemisphere shows sensitivity to longer contexts than the left. The novelty of our approach lies in the ability to control the information encoded in the models’ embeddings by manipulating the training set. These “information-restricted” models complement previous studies that used language models to probe the neural bases of language, and shed new light on its spatial organization.
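The two corpus manipulations can be caricatured with toy functions: shuffling word order destroys syntactic information while preserving lexical content, and replacing content words with part-of-speech tags preserves the syntactic scaffolding while removing lexical semantics. The tiny POS dictionary below is hypothetical, and the paper's actual procedures are more sophisticated; this only sketches the idea.

```python
import random

# Hypothetical toy lexicon; a real pipeline would use a POS tagger.
CONTENT_POS = {"dog": "NOUN", "ball": "NOUN", "chased": "VERB", "red": "ADJ"}

def remove_syntax(sentence, seed=0):
    """Destroy word order (syntactic information) while keeping the words."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def remove_semantics(sentence):
    """Replace content words with their POS tag, keeping function words
    and word order, so that only syntactic scaffolding survives."""
    return " ".join(CONTENT_POS.get(w, w) for w in sentence.split())

s = "the dog chased the red ball"
print(remove_syntax(s))     # same words, scrambled order
print(remove_semantics(s))  # "the NOUN VERB the ADJ NOUN"
```

A model trained only on one stripped corpus can then, by construction, have learned only the surviving kind of information, which is what licenses the paper's interpretation of its brain predictivity.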
Grammatical Parallelism in Aphasia: A Lesion-Symptom Mapping Study
William Matchin, Dirk-Bart den Ouden, Alexandra Basilakos, Brielle Caserta Stark, Julius Fridriksson, Gregory Hickok
Neurobiology of Language | DOI: 10.1162/nol_a_00117 | Published: 2023-10-31

Abstract: Sentence structure, or syntax, is potentially a uniquely creative aspect of the human mind. Neuropsychological experiments in the 1970s suggested parallel syntactic production and comprehension deficits in agrammatic Broca's aphasia, thought to result from damage to syntactic mechanisms in Broca's area in the left frontal lobe. This hypothesis was sometimes termed overarching agrammatism, converging with developments in linguistic theory concerning central syntactic mechanisms supporting language production and comprehension. However, the evidence supporting an association among receptive syntactic deficits, expressive agrammatism, and damage to frontal cortex is equivocal. In addition, the relationship among a distinct grammatical production deficit in aphasia, paragrammatism, and receptive syntax has not been assessed. We used lesion-symptom mapping in three partially overlapping groups of left-hemisphere stroke patients to investigate these issues: grammatical production deficits in a primary group of 53 subjects and syntactic comprehension in two larger samples (N = 130 and N = 218) that overlapped with the primary group. Paragrammatic production deficits were significantly associated with multiple analyses of syntactic comprehension, particularly when incorporating lesion volume as a covariate, but agrammatic production deficits were not. The lesion correlates of impaired syntactic comprehension were significantly associated with damage to temporal lobe regions, which were also implicated in paragrammatism, but not with the inferior and middle frontal regions implicated in expressive agrammatism. Our results provide strong evidence against the overarching agrammatism hypothesis. By contrast, they suggest an alternative grammatical parallelism hypothesis rooted in paragrammatism and a central syntactic system in the posterior temporal lobe.
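The mass-univariate core of lesion-symptom mapping can be sketched as follows: for each voxel, compare the behavioral scores of patients with versus without a lesion at that voxel. This is a bare-bones illustration with invented data; real VLSM analyses add permutation-based thresholds and covariates such as lesion volume, as the study above does.

```python
import numpy as np

def voxelwise_lsm(lesions, scores):
    """For each voxel, a two-sample (Welch) t statistic comparing scores
    of patients with vs. without a lesion there."""
    lesions = np.asarray(lesions, dtype=bool)   # patients x voxels
    scores = np.asarray(scores, dtype=float)    # one score per patient
    tmap = np.full(lesions.shape[1], np.nan)
    for v in range(lesions.shape[1]):
        a, b = scores[lesions[:, v]], scores[~lesions[:, v]]
        if len(a) < 2 or len(b) < 2:
            continue  # voxel lesioned (or spared) in nearly everyone
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        tmap[v] = (a.mean() - b.mean()) / se
    return tmap

# toy data: 8 patients, 3 voxels; lesions at voxel 0 go with lower scores
lesions = [[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 0, 0],
           [0, 1, 0], [0, 0, 1], [0, 1, 0], [0, 0, 1]]
scores = [2.0, 1.5, 2.5, 1.0, 8.0, 7.5, 9.0, 8.5]
print(voxelwise_lsm(lesions, scores))  # strongly negative t at voxel 0
```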
Leukoaraiosis Is Not Associated With Recovery From Aphasia in the First Year After Stroke
Alexandra C Brito, Deborah F Levy, Sarah M Schneck, Jillian L Entrup, Caitlin F Onuscheck, Marianne Casilio, Michael de Riesthal, L Taylor Davis, Stephen M Wilson
Neurobiology of Language | DOI: 10.1162/nol_a_00115 | Published: 2023-10-31

Abstract: After a stroke, individuals with aphasia often recover to a certain extent over time. This recovery process may depend on the health of surviving brain regions. Leukoaraiosis (white matter hyperintensities on MRI reflecting cerebral small vessel disease) is one indication of compromised brain health and is associated with cognitive and motor impairment. Previous studies have suggested that leukoaraiosis may be a clinically relevant predictor of aphasia outcomes and recovery, although findings have been inconsistent. We investigated the relationship between leukoaraiosis and aphasia in the first year after stroke. We recruited 267 patients with acute left hemispheric stroke and coincident fluid-attenuated inversion recovery (FLAIR) MRI. Patients were evaluated for aphasia within 5 days of stroke, and 174 patients presented with aphasia acutely. Of these, 84 patients were evaluated at ∼3 months post-stroke or later to assess longer-term speech and language outcomes. Multivariable regression models were fit to the data to identify any relationships between leukoaraiosis and initial aphasia severity, extent of recovery, or longer-term aphasia severity. We found that leukoaraiosis was present to varying degrees in 90% of patients. However, leukoaraiosis did not predict initial aphasia severity, aphasia recovery, or longer-term aphasia severity. The lack of any relationship between leukoaraiosis severity and aphasia recovery may reflect the anatomical distribution of cerebral small vessel disease, which is largely medial to the white matter pathways that are critical for speech and language function.
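The logic of a multivariable null result like this one can be illustrated with a minimal ordinary-least-squares sketch: fit an outcome on a predictor of interest plus covariates, and examine whether the predictor's coefficient differs from zero. The variable names and data are hypothetical, and OLS via `lstsq` is a bare-bones stand-in for the study's regression models.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept column; returns coefficients
    as [intercept, b_leukoaraiosis, b_age, b_lesion_volume]."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# toy data (hypothetical variables): outcome driven by lesion volume and age,
# with no true leukoaraiosis effect, echoing the null result reported above
rng = np.random.default_rng(2)
n = 200
leuko, age, vol = rng.standard_normal((3, n))
outcome = -0.3 * age - 0.5 * vol + 0.4 * rng.standard_normal(n)
beta = fit_ols(np.column_stack([leuko, age, vol]), outcome)
print(np.round(beta, 2))  # leukoaraiosis coefficient near 0
```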
Cerebellar neuromodulation impacts reading fluency in young adults
Marissa M Lee, Lauren M McGrath, Catherine J Stoodley
Neurobiology of Language | DOI: 10.1162/nol_a_00124 | Published: 2023-10-19

Abstract: The cerebellum is traditionally associated with the control of coordinated movement, but ample evidence suggests that the cerebellum also supports cognitive processing. Consistent with this, right-lateralized posterolateral cerebellar regions are engaged during a range of reading and reading-related tasks, but the specific role of the cerebellum during reading tasks is not clear. Based on the cerebellar contribution to automatizing movement, it has been hypothesized that the cerebellum is specifically involved in rapid, fluent reading. We aimed to determine whether the right posterolateral cerebellum is a specific modulator of reading fluency or whether cerebellar modulation is broader, also impacting reading accuracy, rapid automatized naming, and general processing speed. To do this, we examined the effect of transcranial direct current stimulation (tDCS) targeting the right posterolateral cerebellum (lobules VI/VII) on single-word reading fluency, reading accuracy, rapid automatized naming, and processing speed. Young adults with typical reading development (n = 25; 15 female and 10 male sex assigned at birth; aged 18–28 years, M = 19.92 ± 2.04 years) completed the reading and cognitive measures after 20 min of 2 mA anodal (excitatory), cathodal (inhibitory), or sham tDCS in a within-subjects design. Linear mixed effects models indicated that cathodal tDCS decreased single-word reading fluency scores (d = −0.36, p < 0.05) but did not significantly affect single-word reading accuracy, rapid automatized naming, or general processing speed measures. Our results suggest that the right posterolateral cerebellum is involved in reading fluency, consistent with a broader role of the cerebellum in fast, fluent cognition.
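The within-subjects logic of the design (each participant tested under every stimulation condition) can be sketched with a paired contrast on invented scores. The paper itself used linear mixed effects models, which generalize this to multiple conditions and random effects; a paired t statistic is a deliberately simplified stand-in.

```python
import numpy as np

def paired_t(x, y):
    """Paired t statistic for a within-subjects contrast: each subject
    contributes one score per condition, and we test the mean difference."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# toy data (not the study's): fluency scores for 6 subjects under sham
# vs. cathodal tDCS, with slightly lower scores under cathodal stimulation
sham     = [52, 48, 55, 50, 47, 53]
cathodal = [50, 46, 54, 47, 46, 51]
print(paired_t(cathodal, sham))  # negative: cathodal < sham
```

Because each subject serves as their own control, between-subject variability in baseline reading speed drops out of the difference scores, which is what gives within-subjects designs their sensitivity.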
Cortical structure in pre-readers at cognitive risk for dyslexia: Baseline differences and response to intervention
Maria Economou, Femke Vanden Bempt, Shauni Van Herck, Toivo Glatz, Jan Wouters, Pol Ghesquière, Jolijn Vanderauwera, Maaike Vandermosten
Neurobiology of Language | DOI: 10.1162/nol_a_00122 | Published: 2023-10-04

Abstract: Early childhood is a critical period for structural brain development as well as an important window for the identification and remediation of reading difficulties. Recent research supports the implementation of interventions in at-risk populations as early as kindergarten or first grade, yet the neurocognitive mechanisms following such interventions remain understudied. To address this, we investigated cortical structure by means of anatomical MRI before and after a 12-week tablet-based intervention in: (1) at-risk children receiving phonics-based training (n = 29; n = 16 complete pre-post datasets), (2) at-risk children engaging with active control training (n = 24; n = 15 complete pre-post datasets), and (3) typically developing children (n = 25; n = 14 complete pre-post datasets) receiving no intervention. At baseline, we found higher surface area of the right supramarginal gyrus in at-risk children compared to typically developing peers, extending previous evidence that early anatomical differences exist in children who may later develop dyslexia. Our longitudinal analysis revealed significant post-intervention thickening of the left supramarginal gyrus, present exclusively in the intervention group but not the active control or typical control groups. Altogether, this study contributes new knowledge to our understanding of the brain morphology associated with cognitive risk for dyslexia and response to early intervention, which in turn raises new questions on how early anatomy and plasticity may shape the trajectories of long-term literacy development.
Abstract Much of the language we encounter in our everyday lives comes in the form of conversation, yet the majority of research on the neural basis of language comprehension has used input from only one speaker at a time. 20 adults were scanned while passively observing audiovisual conversations using functional magnetic resonance imaging. In a block-design task, participants watched 20-second videos of puppets speaking either to another puppet (the “dialogue” condition) or directly to the viewer (“monologue”), while the audio was either comprehensible (played forward) or incomprehensible (played backward). Individually functionally-localized left-hemisphere language regions responded more to comprehensible than incomprehensible speech but did not respond differently to dialogue than monologue. In a second task, participants watched videos (1–3 minutes each) of two puppets conversing with each other, in which one puppet was comprehensible while the other’s speech was reversed. All participants saw the same visual input but were randomly assigned which character’s speech was comprehensible. In left-hemisphere cortical language regions, the timecourse of activity was correlated only among participants who heard the same character speaking comprehensibly, despite identical visual input across all participants. For comparison, some individually-localized theory of mind regions and right hemisphere homologues of language regions responded more to dialogue than monologue in the first task, and in the second task, activity in some regions was correlated across all participants regardless of which character was speaking comprehensibly. Together, these results suggest that canonical left-hemisphere cortical language regions are not sensitive to differences between observed dialogue and monologue.
{"title":"Left-hemisphere cortical language regions respond equally to observed dialogue and monologue","authors":"Halie Olson, Emily Chen, Kirsten Lydic, Rebecca Saxe","doi":"10.1162/nol_a_00123","url":"https://doi.org/10.1162/nol_a_00123","journal":"Neurobiology of Language","publicationDate":"2023-10-03"}
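The second task's key analysis is intersubject correlation: activity timecourses should correlate only among participants who heard the same character comprehensibly, not across the whole group. A minimal sketch of that logic on hypothetical data (all array names, sizes, and noise levels are illustrative, not the study's actual data or pipeline):

```python
import numpy as np

def intersubject_correlation(timecourses):
    """Mean pairwise Pearson correlation across subjects' ROI timecourses.

    timecourses: array of shape (n_subjects, n_timepoints).
    """
    z = timecourses - timecourses.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    corr = (z @ z.T) / timecourses.shape[1]          # subject x subject correlation matrix
    off_diag = corr[~np.eye(len(corr), dtype=bool)]  # drop self-correlations
    return float(off_diag.mean())

# Hypothetical data: subjects who heard the same comprehensible character
# share a stimulus-driven signal; a control group does not.
rng = np.random.default_rng(0)
stimulus_signal = rng.standard_normal(200)
same_character = stimulus_signal + 0.5 * rng.standard_normal((10, 200))
control_group = rng.standard_normal((10, 200))

print(intersubject_correlation(same_character))  # substantially above zero
print(intersubject_correlation(control_group))   # near zero
```

The logic generalizes directly: computing this separately for the two random-assignment groups versus the pooled group is what distinguishes a region tracking comprehensible speech from one tracking the shared visual input.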
Carina Kauf, Greta Tuckute, Roger P. Levy, Jacob Andreas, Evelina Fedorenko
Abstract Representations from artificial neural network (ANN) language models have been shown to predict human brain activity in the language network. To understand what aspects of linguistic stimuli contribute to ANN-to-brain similarity, we used an fMRI data set of responses to n = 627 naturalistic English sentences (Pereira et al., 2018) and systematically manipulated the stimuli for which ANN representations were extracted. In particular, we (i) perturbed sentences’ word order, (ii) removed different subsets of words, or (iii) replaced sentences with other sentences of varying semantic similarity. We found that the lexical-semantic content of the sentence (largely carried by content words) rather than the sentence’s syntactic form (conveyed via word order or function words) is primarily responsible for the ANN-to-brain similarity. In follow-up analyses, we found that perturbation manipulations that adversely affect brain predictivity also lead to more divergent representations in the ANN’s embedding space and decrease the ANN’s ability to predict upcoming tokens in those stimuli. Further, results are robust to whether the mapping model is trained on intact or perturbed stimuli and whether the ANN sentence representations are conditioned on the same linguistic context that humans saw. The critical result—that lexical-semantic content is the main contributor to the similarity between ANN representations and neural ones—aligns with the idea that the goal of the human language system is to extract meaning from linguistic strings. Finally, this work highlights the strength of systematic experimental manipulations for evaluating how close we are to accurate and generalizable models of the human language network.
{"title":"Lexical-Semantic Content, Not Syntactic Structure, Is the Main Contributor to ANN-Brain Similarity of fMRI Responses in the Language Network","authors":"Carina Kauf, Greta Tuckute, Roger P. Levy, Jacob Andreas, Evelina Fedorenko","doi":"10.1162/nol_a_00116","url":"https://doi.org/10.1162/nol_a_00116","journal":"Neurobiology of Language","publicationDate":"2023-09-21"}
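The core analysis here is an encoding model: a regularized linear mapping from ANN sentence representations to voxel responses, scored by held-out predictivity, with perturbations that degrade the representations lowering that score. A minimal numpy-only sketch under stated assumptions (random arrays stand in for real embeddings and fMRI data; the closed-form ridge, sizes, and split are illustrative, not the paper's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for ANN sentence embeddings and fMRI responses
n_sentences, emb_dim, n_voxels = 627, 64, 50
intact_emb = rng.standard_normal((n_sentences, emb_dim))
true_map = rng.standard_normal((emb_dim, n_voxels))
brain = intact_emb @ true_map + 0.5 * rng.standard_normal((n_sentences, n_voxels))

# A crude stand-in "perturbation": unrelated embeddings, destroying the
# information the mapping would otherwise exploit
perturbed_emb = rng.standard_normal((n_sentences, emb_dim))

def brain_predictivity(X, Y, alpha=1.0, n_train=500):
    """Closed-form ridge from embeddings to voxels; mean held-out correlation."""
    X_tr, X_te, Y_tr, Y_te = X[:n_train], X[n_train:], Y[:n_train], Y[n_train:]
    beta = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(X.shape[1]), X_tr.T @ Y_tr)
    pred = X_te @ beta
    corrs = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
    return float(np.mean(corrs))

print(brain_predictivity(intact_emb, brain))     # high held-out predictivity
print(brain_predictivity(perturbed_emb, brain))  # predictivity collapses
```

Comparing the predictivity score across stimulus manipulations, as in the paper's word-order, word-subset, and sentence-replacement conditions, is what localizes which stimulus properties carry the ANN-to-brain similarity.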
Pub Date: 2023-09-18. eCollection Date: 2023-01-01. DOI: 10.1162/nol_a_00114
Mackenzie Philips, Sarah M Schneck, Deborah F Levy, Stephen M Wilson
Imaging studies of language processing in clinical populations can be complicated to interpret for several reasons, one being the difficulty of matching the effortfulness of processing across individuals or tasks. To better understand how effortful linguistic processing is reflected in functional activity, we investigated the neural correlates of task difficulty in linguistic and non-linguistic contexts in the auditory modality and then compared our findings to a recent analogous experiment in the visual modality in a different cohort. Nineteen neurologically normal individuals were scanned with fMRI as they performed a linguistic task (semantic matching) and a non-linguistic task (melodic matching), each with two levels of difficulty. We found that left hemisphere frontal and temporal language regions, as well as the right inferior frontal gyrus, were modulated by linguistic demand and not by non-linguistic demand. This was broadly similar to what was previously observed in the visual modality. In contrast, the multiple demand (MD) network, a set of brain regions thought to support cognitive flexibility in many contexts, was modulated neither by linguistic demand nor by non-linguistic demand in the auditory modality. This finding was in striking contradistinction to what was previously observed in the visual modality, where the MD network was robustly modulated by both linguistic and non-linguistic demand. Our findings suggest that while the language network is modulated by linguistic demand irrespective of modality, modulation of the MD network by linguistic demand is not inherent to linguistic processing, but rather depends on specific task factors.
{"title":"Modality-Specificity of the Neural Correlates of Linguistic and Non-Linguistic Demand.","authors":"Mackenzie Philips, Sarah M Schneck, Deborah F Levy, Stephen M Wilson","doi":"10.1162/nol_a_00114","url":"https://doi.org/10.1162/nol_a_00114","journal":"Neurobiology of Language","publicationDate":"2023-09-18","openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10575553/pdf/"}
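The difficulty manipulation reduces to a GLM contrast: regress each region's timecourse on easy- and hard-condition regressors and test whether the hard-minus-easy contrast is positive. A minimal sketch on simulated data (the boxcar design, effect sizes, and noise level are hypothetical, not the study's actual design, and real analyses would also convolve regressors with a hemodynamic response function):

```python
import numpy as np

rng = np.random.default_rng(2)
n_scans = 240

def boxcar(onsets, duration, n):
    """Simple on/off regressor: 1 during each block, 0 elsewhere."""
    x = np.zeros(n)
    for o in onsets:
        x[o:o + duration] = 1.0
    return x

# Hypothetical easy and hard blocks of a linguistic (semantic matching) task
ling_easy = boxcar([10, 90, 170], 15, n_scans)
ling_hard = boxcar([40, 120, 200], 15, n_scans)
X = np.column_stack([ling_easy, ling_hard, np.ones(n_scans)])  # design + intercept

# Simulated "language region" voxel: responds more strongly in hard blocks
y = 1.0 * ling_easy + 2.0 * ling_hard + 0.3 * rng.standard_normal(n_scans)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
contrast = np.array([-1.0, 1.0, 0.0])  # hard minus easy
effect = contrast @ beta               # positive => modulated by linguistic demand
print(effect)
```

Running the same contrast for the non-linguistic (melodic matching) regressors, and separately within language-network and multiple-demand regions, mirrors the comparison the study reports.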