Eleanor Huizeling, Sophie Arana, Peter Hagoort, Jan-Mathijs Schoffelen
Typical adults read remarkably quickly. Such fast reading is facilitated by brain processes that are sensitive to both word frequency and contextual constraints. Whether these attributes have additive or interactive effects on language processing in the brain is debated. We investigated this issue by analysing existing magnetoencephalography (MEG) data from 99 participants reading intact and scrambled sentences. Using a cross-validated model comparison scheme, we found that lexical frequency predicted the word-by-word elicited MEG signal in a widespread cortical network, irrespective of sentential context. In contrast, index (ordinal word position) was more strongly encoded in sentence words, in left frontotemporal areas. This confirms that frequency influences word processing independently of predictability, and that contextual constraints affect word-by-word brain responses. Under a conservative multiple-comparisons correction, only the interaction between lexical frequency and surprisal survived, in anterior temporal and frontal cortex; the interactions of lexical frequency with entropy and with index did not. Interestingly, however, the uncorrected index × frequency interaction revealed an effect in left frontal and temporal cortex that reversed in time and space for intact compared to scrambled sentences. Finally, we provide evidence to suggest that, in sentences, lexical frequency and predictability may independently influence early (<150 ms) and late stages of word processing, but also interact during late stages of word processing (>150-250 ms), thus helping to reconcile previously contradictory eye-tracking and electrophysiological findings. Current neurocognitive models of reading would benefit from accounting for these differing effects of lexical frequency and predictability on different stages of word processing.
"Lexical Frequency and Sentence Context Influence the Brain's Response to Single Words." Neurobiology of Language, 2022. https://doi.org/10.1162/nol_a_00054 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158670/pdf/)
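The cross-validated model comparison the study describes can be illustrated with a toy sketch: fit a regression of a per-word neural signal with and without a lexical-frequency regressor, and compare held-out R². Everything below — the data, effect sizes, and variable names — is synthetic and assumed for illustration; this is not the authors' analysis pipeline.

```python
# Hedged sketch (not the authors' pipeline): cross-validated model comparison
# testing whether adding a lexical-frequency regressor improves prediction of
# a per-word neural signal. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_words = 400
log_freq = rng.normal(size=n_words)           # simulated log lexical frequency
index = rng.integers(1, 10, size=n_words)     # simulated ordinal word position
# Simulated MEG amplitude: depends on frequency (and weakly on index) plus noise
signal = 0.8 * log_freq + 0.1 * index + rng.normal(scale=1.0, size=n_words)

def cv_r2(X, y, k=5):
    """K-fold cross-validated R^2 for ordinary least squares."""
    folds = np.array_split(rng.permutation(len(y)), k)
    ss_res = ss_tot = 0.0
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        pred = X[test] @ beta
        ss_res += np.sum((y[test] - pred) ** 2)
        ss_tot += np.sum((y[test] - y[test].mean()) ** 2)
    return 1 - ss_res / ss_tot

ones = np.ones((n_words, 1))
X_base = np.column_stack([ones, index])            # baseline: intercept + index
X_full = np.column_stack([ones, index, log_freq])  # + lexical frequency

r2_base, r2_full = cv_r2(X_base, signal), cv_r2(X_full, signal)
print(r2_full > r2_base)  # the frequency regressor should improve held-out fit
```

Comparing models by held-out prediction, rather than in-sample fit, is what lets the scheme penalize regressors that add no genuine explanatory power.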
Cas W Coopmans, Helen de Hoop, Peter Hagoort, Andrea E Martin
Recent research has established that cortical activity "tracks" the presentation rate of syntactic phrases in continuous speech, even though phrases are abstract units that do not have direct correlates in the acoustic signal. We investigated whether cortical tracking of phrase structures is modulated by the extent to which these structures compositionally determine meaning. To this end, we recorded electroencephalography (EEG) from 38 native speakers who listened to naturally spoken Dutch stimuli in different conditions, which parametrically modulated the degree to which syntactic structure and lexical semantics determine sentence meaning. Tracking was quantified through mutual information between the EEG data and either the speech envelopes or abstract annotations of syntax, all of which were filtered in the frequency band corresponding to the presentation rate of phrases (1.1-2.1 Hz). Overall, these mutual information analyses showed stronger tracking of phrases in regular sentences than in stimuli whose lexical-syntactic content is reduced, but no consistent differences in tracking between sentences and stimuli that contain a combination of syntactic structure and lexical content. While there were no effects of compositional meaning on the degree of phrase-structure tracking, analyses of event-related potentials elicited by sentence-final words did reveal meaning-induced differences between conditions. Our findings suggest that cortical tracking of structure in sentences indexes the internal generation of this structure, a process that is modulated by the properties of its input, but not by the compositional interpretation of its output.
"Effects of Structure and Meaning on Cortical Tracking of Linguistic Units in Naturalistic Speech." Neurobiology of Language, 2022. https://doi.org/10.1162/nol_a_00070 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158633/pdf/)
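The tracking metric described above — mutual information between band-limited brain signals and abstract annotations — can be sketched on synthetic signals. The binning scheme, signal parameters, and noise level here are illustrative assumptions, not the study's actual estimator or data.

```python
# Hedged illustration (not the study's code): quantifying "tracking" as the
# mutual information (MI) between two band-limited time series, estimated by
# discretizing each signal into equal-occupancy bins. Synthetic data only.
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of MI (in bits) between two 1-D signals."""
    # Equal-occupancy binning via interior quantile edges
    qx = np.searchsorted(np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]), x)
    qy = np.searchsorted(np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]), y)
    joint, _, _ = np.histogram2d(qx, qy, bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
t = np.arange(0, 60, 1 / 100)                 # 60 s at 100 Hz
phrase_rate = 1.6                             # Hz, within the 1.1-2.1 Hz band
annotation = np.sin(2 * np.pi * phrase_rate * t)      # abstract phrase annotation
tracked = annotation + 0.5 * rng.normal(size=t.size)  # "EEG" that tracks phrases
untracked = rng.normal(size=t.size)                   # "EEG" that does not

print(mutual_information(annotation, tracked) >
      mutual_information(annotation, untracked))
```

Because MI is sensitive to any statistical dependence, not just linear correlation, it can register tracking of abstract annotations whose relationship to the neural signal need not be linear.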
During language processing, people make rapid use of contextual information to promote comprehension of upcoming words. When new words are learned implicitly, information contained in the surrounding context can provide constraints on their possible meaning. In the current study, EEG was recorded as participants listened to a series of three sentences, each containing an identical target pseudoword, with the aim of using contextual information in the surrounding language to identify a meaning representation for the novel word. In half of the trials, sentences were semantically coherent so that participants could develop a single representation for the novel word that fit all contexts. Other trials contained unrelated sentence contexts so that meaning associations were not possible. We observed greater theta band enhancement over left-hemisphere central and posterior electrodes in response to pseudowords processed across semantically related compared to unrelated contexts. Additionally, relative alpha and beta band suppression was increased prior to pseudoword onset in trials where contextual information more readily promoted pseudoword-meaning associations. Under the hypothesis that theta enhancement indexes processing demands during lexical access, the current study provides evidence for selective online memory retrieval for novel words learned implicitly in a spoken context.
Jacob Pohaku Momsen, Alyson D Abel. "Neural oscillations reflect meaning identification for novel words in context." Neurobiology of Language, 2022. https://doi.org/10.1162/nol_a_00052 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9632687/pdf/)
Meng-Huan Wu, Andrew J Anderson, Robert A Jacobs, Rajeev D S Raizada
Analogical reasoning, for example, inferring that teacher is to chalk as mechanic is to wrench, plays a fundamental role in human cognition. However, whether brain activity patterns of individual words are encoded in a way that could facilitate analogical reasoning is unclear. Recent advances in computational linguistics have shown that information about analogical problems can be accessed by simple addition and subtraction of word embeddings (e.g., wrench = mechanic + chalk - teacher). Critically, this property emerges in artificial neural networks that were not trained to produce analogies but instead were trained to produce general-purpose semantic representations. Here, we test whether such an emergent property can be observed in representations in human brains, as well as in artificial neural networks. fMRI activation patterns were recorded while participants viewed isolated words but did not perform analogical reasoning tasks. Analogy relations were constructed from word pairs that were categorically or thematically related, and we tested whether the predicted fMRI pattern calculated with simple arithmetic was more correlated with the pattern of the target word than with the patterns of other words. We observed that the predicted fMRI patterns contain information about not only the identity of the target word but also its category and theme (e.g., teaching-related). In summary, this study demonstrated that information about analogy questions can be reliably accessed with the addition and subtraction of fMRI patterns, and that, similar to word embeddings, this property holds for task-general patterns elicited when participants were not explicitly told to perform analogical reasoning.
"Analogy-Related Information Can Be Accessed by Simple Addition and Subtraction of fMRI Activation Patterns, Without Participants Performing any Analogy Task." Neurobiology of Language, 2022. https://doi.org/10.1162/nol_a_00045 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158578/pdf/)
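The embedding-arithmetic property the study exploits (wrench = mechanic + chalk - teacher) is easy to demonstrate with toy vectors; in the study itself, fMRI activation patterns play the role of the word vectors. The vocabulary and coordinates below are invented purely for illustration.

```python
# Toy sketch of the embedding-arithmetic analogy test described above.
# Vectors and vocabulary are made up; real studies use trained embeddings
# (and, here, fMRI activation patterns in place of word vectors).
import numpy as np

emb = {
    # 2-D toy space: axis 0 ~ "person-ness", axis 1 ~ "tool-ness"
    "teacher":  np.array([1.0, 0.0]),
    "chalk":    np.array([0.9, 1.0]),
    "mechanic": np.array([0.0, 0.1]),
    "wrench":   np.array([-0.1, 1.1]),
    "apple":    np.array([5.0, -3.0]),
}

def predict(a, b, c):
    """Solve 'a is to b as c is to ?' via b - a + c, nearest cosine neighbor."""
    target = emb[b] - emb[a] + emb[c]
    def cos(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(emb[w], target))

print(predict("teacher", "chalk", "mechanic"))  # -> wrench
```

The same comparison generalizes from exact identity to category or theme: instead of asking whether the nearest neighbor is the target word, one can ask whether it falls in the target's category, which is the weaker test the fMRI patterns also passed.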
Katarina Bendtz, Sarah Ericsson, Josephine Schneider, Julia Borg, Jana Bašnáková, Julia Uddén
Face-to-face communication requires skills that go beyond core language abilities. In dialogue, we routinely make inferences beyond the literal meaning of utterances and distinguish between different speech acts based on, e.g., contextual cues. It is, however, not known whether such communicative skills potentially overlap with core language skills or other capacities, such as theory of mind (ToM). In this functional magnetic resonance imaging (fMRI) study we investigate these questions by capitalizing on individual variation in pragmatic skills in the general population. Based on behavioral data from 199 participants, we selected participants with higher vs. lower pragmatic skills for the fMRI study (N = 57). In the scanner, participants listened to dialogues including a direct or an indirect target utterance. The paradigm allowed participants at the whole group level to (passively) distinguish indirect from direct speech acts, as evidenced by a robust activity difference between these speech acts in an extended language network including ToM areas. Individual differences in pragmatic skills modulated activation in two additional regions outside the core language regions (one cluster in the left lateral parietal cortex and intraparietal sulcus and one in the precuneus). The behavioral results indicate segregation of pragmatic skill from core language and ToM. In conclusion, contextualized and multimodal communication requires a set of interrelated pragmatic processes that are neurocognitively segregated: (1) from core language and (2) partly from ToM.
"Individual Differences in Indirect Speech Act Processing Found Outside the Language Network." Neurobiology of Language, 2022. https://doi.org/10.1162/nol_a_00066 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158615/pdf/)
Statistical learning (SL) is hypothesized to play an important role in language development. However, the measures typically used to assess SL, particularly at the level of individual participants, are largely indirect and have low sensitivity. Recently, a neural metric based on frequency-tagging has been proposed as an alternative measure for studying SL. We tested the sensitivity of frequency-tagging measures for studying SL in individual participants in an artificial language paradigm, using non-invasive electroencephalography (EEG) recordings of neural activity in humans. Importantly, we used carefully constructed controls to address potential acoustic confounds of the frequency-tagging approach, and compared the sensitivity of EEG-based metrics to both explicit and implicit behavioral tests of SL. Group-level results confirm that frequency-tagging can provide a robust indication of SL for an artificial language, above and beyond potential acoustic confounds. However, this metric had very low sensitivity at the level of individual participants, with significant effects found only in 30% of participants. Comparison of the neural metric to previously established behavioral measures for assessing SL showed a significant yet weak correspondence with performance on an implicit task, which was above chance in 70% of participants, but no correspondence with the more common explicit 2-alternative forced-choice task, where performance did not exceed chance level. Given the proposed ubiquitous nature of SL, our results highlight some of the operational and methodological challenges of obtaining robust metrics for assessing SL, as well as the potential confounds that should be taken into account when using the frequency-tagging approach in EEG studies.
Danna Pinto, Anat Prior, Elana Zion Golumbic. "Assessing the Sensitivity of EEG-Based Frequency-Tagging as a Metric for Statistical Learning." Neurobiology of Language, 2022. https://doi.org/10.1162/nol_a_00061 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158570/pdf/)
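The frequency-tagging logic can be sketched as follows: if syllables group into tri-syllabic words, a response that reflects word learning shows extra spectral power at the word rate (one third of the syllable rate). The rates, durations, and noise level below are illustrative assumptions, not the study's parameters.

```python
# Hedged sketch of frequency-tagging (assumed parameters, not the study's):
# syllables arrive at 4 Hz; if a listener has grouped them into tri-syllabic
# "words", the response carries extra spectral power at the 4/3 Hz word rate.
# We simulate the response as one amplitude value per syllable.
import numpy as np

fs_syll = 4.0                       # syllables per second
n_syll = 900                        # 225 s of stimulation
rng = np.random.default_rng(2)

noise = rng.normal(scale=0.5, size=n_syll)
word_onsets = np.zeros(n_syll)
word_onsets[::3] = 1.0              # enhanced response at word-initial syllables

structured = word_onsets + noise                     # listener learned the words
unstructured = rng.permutation(word_onsets) + noise  # no consistent grouping

def power_at(sig, freq):
    """Spectral power of a syllable-rate series at a given frequency (Hz)."""
    spectrum = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=1 / fs_syll)
    return spectrum[np.argmin(np.abs(freqs - freq))]

word_rate = fs_syll / 3
print(power_at(structured, word_rate) > power_at(unstructured, word_rate))
```

The acoustic-confound concern the abstract raises maps directly onto this sketch: if the stimulus itself contains energy at the word rate (e.g., co-articulation cues), a word-rate peak need not reflect learning, which is why matched acoustic controls are essential.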
Jonathan H Drucker, Charles M Epstein, Keith M McGregor, Kyle Hortman, Kaundinya S Gopinath, Bruce Crosson
1 Hz repetitive transcranial magnetic stimulation (rTMS) was used to decrease excitability of right pars triangularis (R PTr) to determine whether increased R PTr activity during picture naming in older adults hampers word finding. We hypothesized that decreasing R PTr excitability would reduce interference with word finding, facilitating faster picture naming. Fifteen older and 16 younger adults received two rTMS sessions. In one, speech onset latencies for picture naming were measured after both sham and active R PTr stimulation. In the other session, sham and active stimulation of a control region, right pars opercularis (R POp), were administered before picture naming. Order of active vs. sham stimulation within session was counterbalanced. Younger adults showed no significant effects of stimulation. In older adults, a trend indicated that participants named pictures more quickly after active than sham R PTr stimulation. However, older adults also showed longer responses during R PTr than R POp sham stimulation. When order of active vs. sham stimulation was modeled, older adults receiving active stimulation first had significantly faster responding after active than sham R PTr stimulation and significantly faster responding after R PTr than R POp stimulation, consistent with experimental hypotheses. However, older adults receiving sham stimulation first showed no significant differences between conditions. Findings are best understood, based on previous studies, when the interaction between the excitatory effects of picture naming and the inhibitory effects of 1 Hz rTMS on R PTr is considered. Implications regarding right frontal activity in older adults and for design of future experiments are discussed.
"Reduced Interference and Serial Dependency Effects for Naming in Older but Not Younger Adults after 1 Hz rTMS of Right Pars Triangularis." Neurobiology of Language, 2022. https://doi.org/10.1162/nol_a_00063 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158568/pdf/)
Yue Gao, Xiangzhi Meng, Zilin Bai, Xin Liu, Manli Zhang, Hehui Li, Guosheng Ding, Li Liu, James R Booth
Whether reading in different writing systems recruits language-unique or language-universal neural processes is a long-standing debate. Many studies have shown the left arcuate fasciculus (AF) to be involved in phonological and reading processes. In contrast, little is known about the role of the right AF in reading, but some have suggested that it may play a role in visual spatial aspects of reading or the prosodic components of language. The right AF may be more important for reading in Chinese due to its logographic and tonal properties, but this hypothesis has yet to be tested. We recruited a group of Chinese-English bilingual children (8.2 to 12.0 years old) to explore the common and unique relation of reading skill in English and Chinese to fractional anisotropy (FA) in the bilateral AF. We found that both English and Chinese reading skills were positively correlated with FA in the rostral part of the left AF-direct segment. Additionally, English reading skill was positively correlated with FA in the caudal part of the left AF-direct segment, which was also positively correlated with phonological awareness. In contrast, Chinese reading skill was positively correlated with FA in certain segments of the right AF, which was positively correlated with visual spatial ability, but not tone discrimination ability. Our results suggest that there are language-universal substrates of reading across languages, but that certain left AF nodes support phonological mechanisms important for reading in English, whereas certain right AF nodes support visual spatial mechanisms important for reading in Chinese.
"Left and Right Arcuate Fasciculi Are Uniquely Related to Word Reading Skills in Chinese-English Bilingual Children." Neurobiology of Language, 2022. doi:10.1162/nol_a_00051
Aura A L Cruz Heredia, Bethany Dickerson, Ellen Lau
Sustained anterior negativities have been the focus of much neurolinguistic research concerned with the language-memory interface, but what neural computations do they actually reflect? During the comprehension of sentences with long-distance dependencies between elements (such as object wh-questions), prior event-related potential work has demonstrated sustained anterior negativities (SANs) across the dependency region. SANs have traditionally been interpreted as an index of the working memory resources responsible for storing the first element (e.g., wh-phrase) until the second element (e.g., verb) is encountered and the two can be integrated. However, it is also known that humans pursue top-down approaches in processing long-distance dependencies: predicting units and structures before actually encountering them. This study tests the hypothesis that SANs are a more general neural index of syntactic prediction. Across three experiments, we evaluated SANs in traditional wh-dependency contrasts, but also in sentences in which subordinating adverbials (e.g., although) trigger a prediction for a second clause, compared to temporal adverbials (e.g., today) that do not. We find no SAN associated with subordinating adverbials, contra the syntactic prediction hypothesis. More surprisingly, we observe SANs across matrix questions but not embedded questions. Since both involved identical long-distance dependencies, these results are also inconsistent with the traditional syntactic working memory account of the SAN. We suggest that the more general hypothesis that sustained neural activity supports working memory can be maintained, however, if the sustained anterior negativity reflects working memory encoding at the non-linguistic discourse representation level, rather than at the sentence level.
"Towards Understanding Sustained Neural Activity Across Syntactic Dependencies." Neurobiology of Language, 2022. doi:10.1162/nol_a_00050
Neuro- and psycholinguistic experimentation supports the early decomposition of morphologically complex words within the ventral processing stream, which MEG has localized to the M170 response in the (left) visual word form area (VWFA). Decomposition into an exhaustive parse of visual morpheme forms extends beyond words like farmer to those imitating complexity (e.g., brother; Lewis et al., 2011), and to "unique" stems occurring in only one word but following the syntax and semantics of their affix (e.g., vulnerable; Gwilliams & Marantz, 2018). Evidence comes primarily from suffixation; other morphological processes have been under-investigated. This study explores circumfixation, infixation, and reduplication in Tagalog. In addition to investigating whether these are parsed like suffixation, we address an outstanding question concerning semantically empty morphemes. Some words in Tagalog resemble English winter as decomposition is not supported (wint-er); these apparently reduplicated pseudoreduplicates lack the syntactic and semantic features of reduplicated forms. However, unlike winter, these words exhibit phonological behavior predicted only if they involve a reduplicating morpheme. If these are decomposed, this provides evidence that words are analyzed as complex, like English vulnerable, when the grammar demands it. In a lexical decision task with MEG, we find that VWFA activity correlates with stem:word transition probability for circumfixed, infixed, and reduplicated words. Furthermore, a Bayesian analysis suggests that pseudoreduplicates with reduplicate-like phonology are also decomposed; other pseudoreduplicates are not. These findings are consistent with an interpretation that decomposition is modulated by phonology in addition to syntax and semantics.
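The stem:word transition probability is commonly operationalized as the frequency of the whole word divided by the summed frequency of all words sharing its stem. A minimal sketch under that standard assumption, with invented counts (not the authors' corpus or code):

```python
# Hedged sketch: one common operationalization of stem-to-word transition
# probability, P(word | stem) = freq(word) / summed freq of the stem's family.
# The frequency counts below are made up for illustration.
def transition_probability(word, family_freqs):
    """Return freq(word) divided by the total frequency of its stem family."""
    return family_freqs[word] / sum(family_freqs.values())

farm_family = {"farm": 300, "farmer": 120, "farming": 80, "farms": 100}
tp = transition_probability("farmer", farm_family)  # 120 / 600
```

Higher values mean the stem is more predictive of this particular continuation, the kind of graded variable that can be regressed against VWFA response amplitude.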
"Early Form-Based Morphological Decomposition in Tagalog: MEG Evidence from Reduplication, Infixation, and Circumfixation." Samantha Wray, Linnaea Stockall, Alec Marantz. Neurobiology of Language, 2022. doi:10.1162/nol_a_00062