Cerebral White Matter Mediation of Age-Related Differences in Picture Naming Across Adulthood.
Pub Date: 2022-03-30 | eCollection Date: 2022-03-01 | DOI: 10.1162/nol_a_00065
Sara B W Troutman, David J Madden, Michele T Diaz
As people age, one of the most common complaints is difficulty with word retrieval. A wealth of behavioral research confirms such age-related language production deficits, yet the structural neural differences that underlie them remain an open area of exploration. Therefore, the present study used a large sample of healthy adults across adulthood to investigate how age-related white matter differences in three key left-hemisphere language tracts may contribute to age-related differences in language ability. Specifically, we used diffusion tensor imaging to measure fractional anisotropy (FA) and radial diffusivity (RD), which are indicators of white matter structure. We then used a series of path models to test whether white matter from the superior longitudinal fasciculus (SLF), the inferior longitudinal fasciculus, and the frontal aslant tract (FAT) mediated age-related differences in one form of language production, picture naming. We found that FA, as well as RD, from the SLF and FAT mediated the relation between age and picture naming performance, whereas a control tract (corticospinal) was not a mediator. Moreover, differences between mediation of picture naming and a control naming condition suggest that the left SLF has a greater role in higher-order aspects of naming, such as semantic and lexical selection, whereas the left FAT is more sensitive to sensorimotor aspects of fluency or speech motor planning. These results suggest that dorsal white matter contributes to age-related differences in generating speech and may be particularly important in supporting word retrieval across adulthood.
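For readers unfamiliar with mediation, the path models described here boil down to estimating an indirect effect (age → tract microstructure → naming) and testing it against zero. Below is a minimal sketch of that logic in Python on synthetic data; the variable names, effect sizes, and percentile-bootstrap test are illustrative assumptions, not the authors' pipeline.

```python
# Single-mediator path model (age -> white matter FA -> naming), in the
# spirit of the mediation analysis described above. Synthetic data; all
# numbers are invented for the demo.
import numpy as np

rng = np.random.default_rng(0)
n = 300
age = rng.uniform(20, 80, n)
fa = 0.6 - 0.002 * age + rng.normal(0, 0.02, n)       # tract FA declines with age
naming = 50 + 40 * fa - 0.05 * age + rng.normal(0, 2, n)

def ols_slope(x, y):
    """Slope of y regressed on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][-1]

def indirect_effect(age, fa, naming):
    a = ols_slope(age, fa)                              # path a: age -> FA
    X = np.column_stack([np.ones_like(age), age, fa])
    b = np.linalg.lstsq(X, naming, rcond=None)[0][2]    # path b: FA -> naming | age
    return a * b                                        # mediated (indirect) effect

# Percentile bootstrap CI for a*b, the usual test of mediation
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(indirect_effect(age[idx], fa[idx], naming[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(age, fa, naming):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap interval excludes zero, the tract is said to mediate the age-naming relation; the study fits such models per tract and per diffusion metric.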
{"title":"Cerebral White Matter Mediation of Age-Related Differences in Picture Naming Across Adulthood.","authors":"Sara B W Troutman, David J Madden, Michele T Diaz","doi":"10.1162/nol_a_00065","DOIUrl":"10.1162/nol_a_00065","url":null,"abstract":"<p><p>As people age, one of the most common complaints is difficulty with word retrieval. A wealth of behavioral research confirms such age-related language production deficits, yet the structural neural differences that relate to age-related language production deficits remains an open area of exploration. Therefore, the present study used a large sample of healthy adults across adulthood to investigate how age-related white matter differences in three key left-hemisphere language tracts may contribute to age-related differences in language ability. Specifically, we used diffusion tensor imaging to measure fractional anisotropy (FA) and radial diffusivity (RD) which are indicators of white matter structure. We then used a series of path models to test whether white matter from the superior longitudinal fasciculus (SLF), the inferior longitudinal fasciculus, and the frontal aslant tract (FAT) mediated age-related differences in one form of language production, picture naming. We found that FA, as well as RD from the SLF and FAT mediated the relation between age and picture naming performance, whereas a control tract (corticospinal) was not a mediator. Moreover, differences between mediation of picture naming and a control naming condition suggest that left SLF has a greater role in higher-order aspects of naming, such as semantic and lexical selection whereas left FAT is more sensitive to sensorimotor aspects of fluency or speech motor planning. These results suggest that dorsal white matter contributes to age-related differences in generating speech and may be particularly important in supporting word retrieval across adulthood.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2022-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9169883/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10252281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Precentral Gyrus Contributions to the Early Time-Course of Grapheme-to-Phoneme Conversion.
Pub Date: 2022-02-10 | eCollection Date: 2022-01-01 | DOI: 10.1162/nol_a_00047
Erik Kaestner, Xiaojing Wu, Daniel Friedman, Patricia Dugan, Orrin Devinsky, Chad Carlson, Werner Doyle, Thomas Thesen, Eric Halgren
In models of silent reading, visual orthographic information is transduced into an auditory phonological code through grapheme-to-phoneme conversion (GPC). This process is often identified with lateral temporal-parietal regions associated with auditory phoneme encoding. However, the role of articulatory phonemic representations and the precentral gyrus in GPC is ambiguous. Though the precentral gyrus is implicated in many functional MRI studies of reading, it is not clear whether the time course of activity in this region is consistent with its involvement in GPC. We recorded cortical electrophysiology during a bimodal match/mismatch task from eight patients with perisylvian subdural electrodes to examine the time course of neural activity during a task that necessitated GPC. Patients made a match/mismatch decision between a three-letter string and the following auditory bi-phoneme. We characterized the distribution and timing of evoked broadband high gamma (70-170 Hz) as well as phase-locking between electrodes. The precentral gyrus emerged with a high concentration of broadband high-gamma responses to visual and auditory language, as well as mismatch effects. The pars opercularis, supramarginal gyrus, and superior temporal gyrus were also involved. The precentral gyrus showed strong phase-locking with the caudal fusiform gyrus during letter-string presentation and with surrounding perisylvian cortex during the bimodal visual-auditory comparison period. These findings hint at a role for precentral cortex in transducing visual into auditory codes during silent reading.
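The two signal measures named here, broadband high-gamma amplitude and inter-electrode phase-locking, are standard to compute. A hedged sketch on synthetic traces follows; the sampling rate, filter order, and phase-locking band are assumptions, not values from the study.

```python
# Broadband high gamma (70-170 Hz) via the Hilbert envelope, and an
# inter-electrode phase-locking value (PLV). Synthetic data for illustration.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 1000.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
e1 = rng.normal(size=t.size)                 # stand-ins for two electrode traces
e2 = rng.normal(size=t.size)

def band(x, lo, hi):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# High gamma: band-limit to 70-170 Hz, then take the analytic amplitude
hg = np.abs(hilbert(band(e1, 70, 170)))

# PLV between electrodes in an assumed narrow band: mean resultant of
# the phase differences; 1 = perfect locking, 0 = none
p1 = np.angle(hilbert(band(e1, 4, 8)))
p2 = np.angle(hilbert(band(e2, 4, 8)))
plv = np.abs(np.mean(np.exp(1j * (p1 - p2))))
print(f"mean high-gamma envelope: {hg.mean():.3f}, PLV: {plv:.3f}")
```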
{"title":"The Precentral Gyrus Contributions to the Early Time-Course of Grapheme-to-Phoneme Conversion.","authors":"Erik Kaestner, Xiaojing Wu, Daniel Friedman, Patricia Dugan, Orrin Devinsky, Chad Carlson, Werner Doyle, Thomas Thesen, Eric Halgren","doi":"10.1162/nol_a_00047","DOIUrl":"10.1162/nol_a_00047","url":null,"abstract":"<p><p>As part of silent reading models, visual orthographic information is transduced into an auditory phonological code in a process of grapheme-to-phoneme conversion (GPC). This process is often identified with lateral temporal-parietal regions associated with auditory phoneme encoding. However, the role of articulatory phonemic representations and the precentral gyrus in GPC is ambiguous. Though the precentral gyrus is implicated in many functional MRI studies of reading, it is not clear if the time course of activity in this region is consistent with the precentral gyrus being involved in GPC. We recorded cortical electrophysiology during a bimodal match/mismatch task from eight patients with perisylvian subdural electrodes to examine the time course of neural activity during a task that necessitated GPC. Patients made a match/mismatch decision between a 3-letter string and the following auditory bi-phoneme. We characterized the distribution and timing of evoked broadband high gamma (70-170 Hz) as well as phase-locking between electrodes. The precentral gyrus emerged with a high concentration of broadband high gamma responses to visual and auditory language as well as mismatch effects. The pars opercularis, supramarginal gyrus, and superior temporal gyrus were also involved. The precentral gyrus showed strong phase-locking with the caudal fusiform gyrus during letter-string presentation and with surrounding perisylvian cortex during the bimodal visual-auditory comparison period. These findings hint at a role for precentral cortex in transducing visual into auditory codes during silent reading.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2022-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158576/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9875135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Linear Superposition Model of Envelope and Frequency Following Responses May Help Identify Generators Based on Latency.
Pub Date: 2022-01-01 | DOI: 10.1162/nol_a_00072
Tobias Teichert, G Nike Gnanateja, Srivatsun Sadagopan, Bharath Chandrasekaran
Envelope and frequency-following responses (FFR_ENV and FFR_TFS) are scalp-recorded electrophysiological potentials that closely follow the periodicity of complex sounds such as speech. These signals have been established as important biomarkers in speech and learning disorders. However, despite important advances, it has remained challenging to map altered FFR_ENV and FFR_TFS to altered processing in specific brain regions. Here we explore the utility of a deconvolution approach based on the assumption that FFR_ENV and FFR_TFS reflect the linear superposition of responses that are triggered by the glottal pulse in each cycle of the fundamental frequency (F0 responses). We tested the deconvolution method by applying it to FFR_ENV and FFR_TFS of rhesus monkeys to human speech and click trains with time-varying pitch patterns. Our analyses show that F0_ENV responses could be measured with high signal-to-noise ratio and featured several spectro-temporally and topographically distinct components that likely reflect the activation of the brainstem (<5 ms; 200-1000 Hz), midbrain (5-15 ms; 100-250 Hz), and cortex (15-35 ms; ~90 Hz). In contrast, F0_TFS responses contained only one spectro-temporal component that likely reflected activity in the midbrain. In summary, our results support the notion that the latency of F0 components maps meaningfully onto successive processing stages. This opens the possibility that pathologically altered FFR_ENV or FFR_TFS may be linked to altered F0_ENV or F0_TFS, and from there to specific processing stages and, ultimately, spatially targeted interventions.
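The superposition assumption has a concrete computational reading: if every glottal pulse triggers the same F0 response kernel, the recorded FFR is the pulse train convolved with that kernel, and the kernel can be recovered by least-squares deconvolution. A toy Python illustration follows; all signal parameters (pulse timing, kernel shape, noise level) are invented for the demo, not taken from the study.

```python
# Least-squares deconvolution of an FFR modeled as the linear superposition
# of a fixed F0 response triggered at every glottal pulse.
import numpy as np

fs = 2000                        # sampling rate (Hz), assumed
dur = 1.0                        # seconds
rng = np.random.default_rng(2)

# Pulse train with time-varying F0 (100 -> 140 Hz), like a rising pitch
times, t = [], 0.0
while t < dur:
    f0 = 100 + 40 * (t / dur)
    times.append(t)
    t += 1.0 / f0
pulses = np.zeros(int(fs * dur))
pulses[(np.array(times) * fs).astype(int)] = 1.0

# Ground-truth F0 response kernel (40 ms), then a synthetic scalp signal
klen = int(0.040 * fs)
k = np.arange(klen) / fs
kernel = np.exp(-k / 0.01) * np.sin(2 * np.pi * 300 * k)
ffr = np.convolve(pulses, kernel)[: pulses.size] + rng.normal(0, 0.05, pulses.size)

# Deconvolution: design matrix columns are lagged copies of the pulse train
X = np.column_stack([np.roll(pulses, lag) for lag in range(klen)])
for lag in range(klen):          # zero out np.roll's wrap-around samples
    X[:lag, lag] = 0.0
est = np.linalg.lstsq(X, ffr, rcond=None)[0]
print("kernel recovery r =", np.corrcoef(est, kernel)[0, 1].round(3))
```

The estimated kernel's components can then be inspected by latency and frequency, which is how the paper argues for distinct brainstem, midbrain, and cortical contributions.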
{"title":"A Linear Superposition Model of Envelope and Frequency Following Responses May Help Identify Generators Based on Latency.","authors":"Tobias Teichert, G Nike Gnanateja, Srivatsun Sadagopan, Bharath Chandrasekaran","doi":"10.1162/nol_a_00072","DOIUrl":"https://doi.org/10.1162/nol_a_00072","url":null,"abstract":"<p><p>Envelope and frequency-following responses (FFR<sub>ENV</sub> and FFR<sub>TFS</sub>) are scalp-recorded electrophysiological potentials that closely follow the periodicity of complex sounds such as speech. These signals have been established as important biomarkers in speech and learning disorders. However, despite important advances, it has remained challenging to map altered FFR<sub>ENV</sub> and FFR<sub>TFS</sub> to altered processing in specific brain regions. Here we explore the utility of a deconvolution approach based on the assumption that FFR<sub>ENV</sub> and FFR<sub>TFS</sub> reflect the linear superposition of responses that are triggered by the glottal pulse in each cycle of the fundamental frequency (F0 responses). We tested the deconvolution method by applying it to FFR<sub>ENV</sub> and FFR<sub>TFS</sub> of rhesus monkeys to human speech and click trains with time-varying pitch patterns. Our analyses show that F0<sub>ENV</sub> responses could be measured with high signal-to-noise ratio and featured several spectro-temporally and topographically distinct components that likely reflect the activation of brainstem (<5 ms; 200-1000 Hz), midbrain (5-15 ms; 100-250 Hz), and cortex (15-35 ms; ~90 Hz). In contrast, F0<sub>TFS</sub> responses contained only one spectro-temporal component that likely reflected activity in the midbrain. In summary, our results support the notion that the latency of F0 components map meaningfully onto successive processing stages. This opens the possibility that pathologically altered FFR<sub>ENV</sub> or FFR<sub>TFS</sub> may be linked to altered F0<sub>ENV</sub> or F0<sub>TFS</sub> and from there to specific processing stages and ultimately spatially targeted interventions.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10003646/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9112292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Activation in Right Dorsolateral Prefrontal Cortex Underlies Stuttering Anticipation.
Pub Date: 2022-01-01 | DOI: 10.1162/nol_a_00073
Eric S Jackson, Swethasri Dravida, Xian Zhang, J Adam Noah, Vincent Gracco, Joy Hirsch
People who stutter learn to anticipate many of their overt stuttering events. Despite the critical role of anticipation, particularly how responses to anticipation shape stuttering behaviors, the neural bases of anticipation are unknown. We used a novel approach to identify anticipated and unanticipated words, which were produced by 22 adult stutterers in a delayed-response task while hemodynamic activity was measured using functional near-infrared spectroscopy (fNIRS). Twenty-two control participants were included such that each individualized set of anticipated and unanticipated words was produced by one stutterer and one control participant. We conducted an analysis of the right dorsolateral prefrontal cortex (R-DLPFC) based on converging lines of evidence from the stuttering and cognitive control literatures. We also assessed connectivity between the R-DLPFC and the right supramarginal gyrus (R-SMG), two key nodes of the frontoparietal network (FPN), to assess the role of cognitive control, and particularly error-likelihood monitoring, in stuttering anticipation. All analyses focused on the five-second anticipation phase preceding the go signal to produce speech. The results indicate that anticipated words are associated with elevated activation in the R-DLPFC, and that, compared to non-stutterers, stutterers exhibit greater activity in the R-DLPFC irrespective of anticipation. Further, anticipated words are associated with reduced connectivity between the R-DLPFC and R-SMG. These findings highlight the potential roles of the R-DLPFC and the greater FPN as a neural substrate of stuttering anticipation. The results also support previous accounts of error-likelihood monitoring and action-stopping in stuttering anticipation. Overall, this work offers numerous directions for future research, with clinical implications for targeted neuromodulation.
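As a rough illustration of the connectivity measure implied here, one can correlate hemodynamic time series from two fNIRS channels over the anticipation window. The sketch below uses simulated data and an assumed 10 Hz sampling rate; the abstract does not specify the authors' actual connectivity pipeline.

```python
# Correlation between oxygenated-hemoglobin traces from two fNIRS channels
# over a 5 s anticipation window. Simulated data; illustrative only.
import numpy as np

fs = 10.0                                   # fNIRS sampling rate (Hz), assumed
t = np.arange(0, 5, 1 / fs)                 # 5 s anticipation phase
rng = np.random.default_rng(6)

shared = rng.normal(size=t.size)            # common signal component
hbo_dlpfc = 0.7 * shared + rng.normal(0, 0.5, t.size)
hbo_smg = 0.7 * shared + rng.normal(0, 0.5, t.size)

r = np.corrcoef(hbo_dlpfc, hbo_smg)[0, 1]
print(f"R-DLPFC / R-SMG anticipation-phase connectivity r = {r:.2f}")
```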
{"title":"Activation in Right Dorsolateral Prefrontal Cortex Underlies Stuttering Anticipation.","authors":"Eric S Jackson, Swethasri Dravida, Xian Zhang, J Adam Noah, Vincent Gracco, Joy Hirsch","doi":"10.1162/nol_a_00073","DOIUrl":"https://doi.org/10.1162/nol_a_00073","url":null,"abstract":"<p><p>People who stutter learn to anticipate many of their overt stuttering events. Despite the critical role of anticipation, particularly how responses to anticipation shape stuttering behaviors, the neural bases associated with anticipation are unknown. We used a novel approach to identify anticipated and unanticipated words, which were produced by 22 adult stutterers in a delayed-response task while hemodynamic activity was measured using functional near infrared spectroscopy (fNIRS). Twenty-two control participants were included such that each individualized set of anticipated and unanticipated words was produced by one stutterer and one control participant. We conducted an analysis on the right dorsolateral prefrontal cortex (R-DLPFC) based on converging lines of evidence from the stuttering and cognitive control literatures. We also assessed connectivity between the R-DLPFC and right supramarginal gyrus (R-SMG), two key nodes of the frontoparietal network (FPN), to assess the role of cognitive control, and particularly error-likelihood monitoring, in stuttering anticipation. All analyses focused on the five-second anticipation phase preceding the go signal to produce speech. The results indicate that anticipated words are associated with elevated activation in the R-DLPFC, and that compared to non-stutterers, stutterers exhibit greater activity in the R-DLPFC, irrespective of anticipation. Further, anticipated words are associated with reduced connectivity between the R-DLPFC and R-SMG. These findings highlight the potential roles of the R-DLPFC and the greater FPN as a neural substrate of stuttering anticipation. The results also support previous accounts of error-likelihood monitoring and action-stopping in stuttering anticipation. Overall, this work offers numerous directions for future research with clinical implications for targeted neuromodulation.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158639/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9705201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Early Development of Neural Speech Encoding Depends on Age but Not Native Language Status: Evidence From Lexical Tone.
Pub Date: 2022-01-01 | DOI: 10.1162/nol_a_00049
Nikolay Novitskiy, Akshay R Maggu, Ching Man Lai, Peggy H Y Chan, Kay H Y Wong, Hugh Simon Lam, Tak Yeung Leung, Ting Fan Leung, Patrick C M Wong
We investigated the development of early-latency and long-latency brain responses to native and non-native speech to shed light on the neurophysiological underpinnings of perceptual narrowing and early language development. Specifically, we postulated a two-level process to explain the decrease in sensitivity to non-native phonemes toward the end of infancy. Neurons at the earlier stages of the ascending auditory pathway mature rapidly during infancy, facilitating the encoding of both native and non-native sounds. This growth enables neurons at the later stages of the auditory pathway to assign phonological status to speech according to the infant's native language environment. To test this hypothesis, we collected early-latency and long-latency neural responses to native and non-native lexical tones from 85 Cantonese-learning children aged between 23 days and 24 months, 16 days. As expected, a broad range of presumably subcortical early-latency neural encoding measures grew rapidly and substantially during the first two years for both native and non-native tones. By contrast, long-latency cortical electrophysiological changes occurred on a much slower timescale and showed sensitivity to nativeness at around six months. Our study provides a comprehensive understanding of early language development by revealing the complementary roles of earlier and later stages of speech processing in the developing brain.
{"title":"Early Development of Neural Speech Encoding Depends on Age but Not Native Language Status: Evidence From Lexical Tone.","authors":"Nikolay Novitskiy, Akshay R Maggu, Ching Man Lai, Peggy H Y Chan, Kay H Y Wong, Hugh Simon Lam, Tak Yeung Leung, Ting Fan Leung, Patrick C M Wong","doi":"10.1162/nol_a_00049","DOIUrl":"https://doi.org/10.1162/nol_a_00049","url":null,"abstract":"<p><p>We investigated the development of early-latency and long-latency brain responses to native and non-native speech to shed light on the neurophysiological underpinnings of perceptual narrowing and early language development. Specifically, we postulated a two-level process to explain the decrease in sensitivity to non-native phonemes toward the end of infancy. Neurons at the earlier stages of the ascending auditory pathway mature rapidly during infancy facilitating the encoding of both native and non-native sounds. This growth enables neurons at the later stages of the auditory pathway to assign phonological status to speech according to the infant's native language environment. To test this hypothesis, we collected early-latency and long-latency neural responses to native and non-native lexical tones from 85 Cantonese-learning children aged between 23 days and 24 months, 16 days. As expected, a broad range of presumably subcortical early-latency neural encoding measures grew rapidly and substantially during the first two years for both native and non-native tones. By contrast, long-latency cortical electrophysiological changes occurred on a much slower scale and showed sensitivity to nativeness at around six months. Our study provided a comprehensive understanding of early language development by revealing the complementary roles of earlier and later stages of speech processing in the developing brain.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10178623/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9875134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Tracking in Infancy Predicts Language Development in Children With and Without Family History of Autism.
Pub Date: 2022-01-01 | DOI: 10.1162/nol_a_00074
Katharina H Menn, Emma K Ward, Ricarda Braukmann, Carlijn van den Boomen, Jan Buitelaar, Sabine Hunnius, Tineke M Snijders
During speech processing, neural activity in non-autistic adults and infants tracks the speech envelope. Recent research in adults indicates that this neural tracking relates to linguistic knowledge and may be reduced in autism. Such reduced tracking, if already present in infancy, could impede language development. In the current study, we focused on children with a family history of autism, who often show a delay in first language acquisition. We investigated whether differences in tracking of sung nursery rhymes during infancy relate to language development and autism symptoms in childhood. We assessed speech-brain coherence at either 10 or 14 months of age in a total of 22 infants with a high likelihood of autism due to family history and 19 infants without a family history of autism. We analyzed the relationship between speech-brain coherence in these infants and their vocabulary at 24 months, as well as autism symptoms at 36 months. Our results showed significant speech-brain coherence in the 10- and 14-month-old infants. We found no evidence for a relationship between speech-brain coherence and later autism symptoms. Importantly, speech-brain coherence at the stressed-syllable rate (1-3 Hz) predicted later vocabulary. Follow-up analyses showed evidence for a relationship between tracking and vocabulary only in 10-month-olds, not in 14-month-olds, and indicated possible differences between the likelihood groups. Thus, early tracking of sung nursery rhymes is related to language development in childhood.
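Speech-brain coherence of the kind measured here is typically magnitude-squared coherence between the speech amplitude envelope and an EEG channel, averaged over the band of interest. A minimal sketch on synthetic signals, assuming only the 1-3 Hz stressed-syllable band quoted in the abstract:

```python
# Magnitude-squared coherence between a speech envelope and an EEG channel,
# averaged over 1-3 Hz. Synthetic signals; parameters are assumptions.
import numpy as np
from scipy.signal import coherence

fs = 250.0                                          # EEG sampling rate, assumed
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(3)

envelope = 1 + np.sin(2 * np.pi * 2 * t)            # 2 Hz "syllable" rhythm
eeg = 0.4 * envelope + rng.normal(0, 1, t.size)     # EEG weakly tracks the envelope

f, cxy = coherence(envelope, eeg, fs=fs, nperseg=int(4 * fs))
in_band = (f >= 1) & (f <= 3)
print(f"1-3 Hz speech-brain coherence: {cxy[in_band].mean():.3f}")
```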
{"title":"Neural Tracking in Infancy Predicts Language Development in Children With and Without Family History of Autism.","authors":"Katharina H Menn, Emma K Ward, Ricarda Braukmann, Carlijn van den Boomen, Jan Buitelaar, Sabine Hunnius, Tineke M Snijders","doi":"10.1162/nol_a_00074","DOIUrl":"https://doi.org/10.1162/nol_a_00074","url":null,"abstract":"<p><p>During speech processing, neural activity in non-autistic adults and infants tracks the speech envelope. Recent research in adults indicates that this neural tracking relates to linguistic knowledge and may be reduced in autism. Such reduced tracking, if present already in infancy, could impede language development. In the current study, we focused on children with a family history of autism, who often show a delay in first language acquisition. We investigated whether differences in tracking of sung nursery rhymes during infancy relate to language development and autism symptoms in childhood. We assessed speech-brain coherence at either 10 or 14 months of age in a total of 22 infants with high likelihood of autism due to family history and 19 infants without family history of autism. We analyzed the relationship between speech-brain coherence in these infants and their vocabulary at 24 months as well as autism symptoms at 36 months. Our results showed significant speech-brain coherence in the 10- and 14-month-old infants. We found no evidence for a relationship between speech-brain coherence and later autism symptoms. Importantly, speech-brain coherence in the stressed syllable rate (1-3 Hz) predicted later vocabulary. Follow-up analyses showed evidence for a relationship between tracking and vocabulary only in 10-month-olds but not in 14-month-olds and indicated possible differences between the likelihood groups. Thus, early tracking of sung nursery rhymes is related to language development in childhood.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158647/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9504377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of Sleep on Language and Motor Consolidation: Evidence of Domain General and Specific Mechanisms.
Pub Date: 2022-01-01 | DOI: 10.1162/nol_a_00060
Dafna Ben-Zion, Ella Gabitov, Anat Prior, Tali Bitan
The current study explores the effects of time and sleep on the consolidation of a novel language learning task containing both item-specific knowledge and the extraction of grammatical regularities. We also compare consolidation effects in language and motor sequence learning tasks, to ask whether consolidation mechanisms are domain general. Young adults learned to apply plural inflections to novel words based on morphophonological rules embedded in the input, and learned to type a motor sequence on a keyboard. Participants were randomly assigned to one of two groups, practicing each task during either the morning or evening hours. Both groups were retested 12 and 24 hours post-training. Performance on frequent trained items in the language task stabilized only following sleep, consistent with a hippocampal mechanism for item-specific learning. However, regularity extraction, indicated by generalization to untrained items in the linguistic task, as well as performance on motor sequence learning, improved 24 hours post-training irrespective of the timing of sleep. This consolidation process is consistent with a frontostriatal skill-learning mechanism common across the language and motor domains. This conclusion is further reinforced by cross-domain correlations at the individual level between improvement across 24 hours in the motor task and in the low-frequency trained items in the linguistic task, which involve regularity extraction. Taken together, our results at the group and individual levels suggest that some aspects of consolidation are shared across the motor and language domains and, more specifically, between motor sequence learning and grammar learning.
{"title":"Effects of Sleep on Language and Motor Consolidation: Evidence of Domain General and Specific Mechanisms.","authors":"Dafna Ben-Zion, Ella Gabitov, Anat Prior, Tali Bitan","doi":"10.1162/nol_a_00060","DOIUrl":"https://doi.org/10.1162/nol_a_00060","url":null,"abstract":"<p><p>The current study explores the effects of time and sleep on the consolidation of a novel language learning task containing both item-specific knowledge and the extraction of grammatical regularities. We also compare consolidation effects in language and motor sequence learning tasks, to ask whether consolidation mechanisms are domain general. Young adults learned to apply plural inflections to novel words based on morphophonological rules embedded in the input, and learned to type a motor sequence using a keyboard. Participants were randomly assigned into one of two groups, practicing each task during either the morning or evening hours. Both groups were retested 12 and 24 hours post-training. Performance on frequent trained items in the language task stabilized only following sleep, consistent with a hippocampal mechanism for item-specific learning. However, regularity extraction, indicated by generalization to untrained items in the linguistic task, as well as performance on motor sequence learning, improved 24 hours post-training, irrespective of the timing of sleep. This consolidation process is consistent with a frontostriatal skill-learning mechanism, common across the language and motor domains. This conclusion is further reinforced by cross-domain correlations at the individual level between improvement across 24 hours in the motor task and in the low-frequency trained items in the linguistic task, which involve regularity extraction. Taken together, our results at the group and individual levels suggest that some aspects of consolidation are shared across the motor and language domains, and more specifically, between motor sequence learning and grammar learning.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158628/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9504777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can You Hear What's Coming? Failure to Replicate ERP Evidence for Phonological Prediction.
Pub Date: 2022-01-01 | DOI: 10.1162/nol_a_00078
Victoria R Poulton, Mante S Nieuwland
Prediction-based theories of language comprehension assume that listeners predict both the meaning and phonological form of likely upcoming words. In alleged event-related potential (ERP) demonstrations of phonological prediction, prediction-mismatching words elicit a phonological mismatch negativity (PMN), a frontocentral negativity that precedes the centroparietal N400 component. However, the classification and replicability of the PMN have proven controversial, with ongoing debate on whether the PMN is a distinct component or merely an early part of the N400. In this electroencephalography (EEG) study, we therefore attempted to replicate the PMN effect and its separability from the N400, using a participant sample size (N = 48) that was more than double that of previous studies. Participants listened to sentences containing either a predictable word or an unpredictable word with or without phonological overlap with the predictable word. Preregistered analyses revealed a widely distributed negative-going ERP in response to unpredictable words in both the early (150-250 ms) and the N400 (300-500 ms) time windows. Bayes factor analysis yielded moderate evidence against a different scalp distribution of the effects in the two time windows. Although our findings do not speak against phonological prediction during sentence comprehension, they do speak against the PMN effect specifically as a marker of phonological prediction mismatch. Instead of a PMN effect, our results demonstrate the early onset of the auditory N400 effect associated with unpredictable words. Our failure to replicate further highlights the risk associated with commonly employed data-contingent analyses (e.g., analyses involving time windows or electrodes that were selected based on visual inspection) and small sample sizes in the cognitive neuroscience of language.
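The scalp-distribution question is at heart a model comparison: does a time-window-by-electrode-site interaction earn its keep over a main-effects model? The abstract does not specify the authors' Bayesian method; one common rough route is Wagenmakers' (2007) BIC approximation to the Bayes factor, sketched below on synthetic mean amplitudes.

```python
# BIC-approximated Bayes factor for "no window-by-site interaction"
# (i.e., same scalp distribution in both time windows). Synthetic data;
# treating site as a linear trend is a simplifying assumption.
import numpy as np

rng = np.random.default_rng(4)
n = 48                                     # participants, matching the study's N
sites = 3                                  # e.g., frontal/central/parietal, assumed
# amp[subject, window, site]: topography identical across windows (null is true)
amp = rng.normal(0.0, 1.0, (n, 2, sites)) + np.array([0.5, 0.2, -0.1])

y = amp.reshape(-1)
win = np.tile(np.repeat([0.0, 1.0], sites), n)         # time-window factor
site = np.tile(np.arange(sites, dtype=float), 2 * n)   # site as linear trend

def bic(y, X):
    """BIC of an OLS fit (Gaussian likelihood, up to a constant)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return len(y) * np.log(np.mean(resid**2)) + X.shape[1] * np.log(len(y))

X0 = np.column_stack([np.ones_like(y), win, site])     # main effects only
X1 = np.column_stack([X0, win * site])                 # + window x site interaction
bf01 = np.exp((bic(y, X1) - bic(y, X0)) / 2)           # >1 favors same topography
print(f"BF01 for no window-by-site interaction: {bf01:.2f}")
```

A BF01 between roughly 3 and 10 is conventionally read as moderate evidence for the null, matching the kind of conclusion drawn in the abstract.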
{"title":"Can You Hear What's Coming? Failure to Replicate ERP Evidence for Phonological Prediction.","authors":"Victoria R Poulton, Mante S Nieuwland","doi":"10.1162/nol_a_00078","DOIUrl":"https://doi.org/10.1162/nol_a_00078","url":null,"abstract":"<p><p>Prediction-based theories of language comprehension assume that listeners predict both the meaning and phonological form of likely upcoming words. In alleged event-related potential (ERP) demonstrations of phonological prediction, prediction-mismatching words elicit a phonological mismatch negativity (PMN), a frontocentral negativity that precedes the centroparietal N400 component. However, classification and replicability of the PMN has proven controversial, with ongoing debate on whether the PMN is a distinct component or merely an early part of the N400. In this electroencephalography (EEG) study, we therefore attempted to replicate the PMN effect and its separability from the N400, using a participant sample size (<i>N</i> = 48) that was more than double that of previous studies. Participants listened to sentences containing either a predictable word or an unpredictable word with/without phonological overlap with the predictable word. Preregistered analyses revealed a widely distributed negative-going ERP in response to unpredictable words in both the early (150-250 ms) and the N400 (300-500 ms) time windows. Bayes factor analysis yielded moderate evidence against a different scalp distribution of the effects in the two time windows. Although our findings do not speak against phonological prediction during sentence comprehension, they do speak against the PMN effect specifically as a marker of phonological prediction mismatch. Instead of an PMN effect, our results demonstrate the early onset of the auditory N400 effect associated with unpredictable words. Our failure to replicate further highlights the risk associated with commonly employed data-contingent analyses (e.g., analyses involving time windows or electrodes that were selected based on visual inspection) and small sample sizes in the cognitive neuroscience of language.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158594/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9858912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Event-Related Potential Correlates of Learning to Produce Novel Foreign Phonemes.
Pub Date: 2022-01-01 | DOI: 10.1162/nol_a_00080
Henry Railo, Anni Varjonen, Minna Lehtonen, Pilleriin Sikka
Learning to pronounce a foreign phoneme requires an individual to acquire a motor program that enables the reproduction of the new acoustic target sound. This process is largely based on the use of auditory feedback to detect pronunciation errors and adjust vocalization. While early auditory evoked neural activity underlies automatic detection of and adaptation to vocalization errors, little is known about the neural correlates of acquiring novel speech targets. To investigate the neural processes that mediate the learning of foreign phoneme pronunciation, we recorded event-related potentials while participants (N = 19) pronounced native or foreign phonemes. Behavioral results indicated that the participants' pronunciation of the foreign phoneme improved during the experiment. Early auditory responses (N1 and P2 waves, approximately 85-290 ms after sound onset) revealed no differences between foreign and native phonemes. In contrast, the amplitude of the frontocentrally distributed late slow wave (LSW, 320-440 ms) was modulated by the pronunciation of the foreign phonemes, and the effect changed during the experiment, paralleling the improvement in pronunciation. These results suggest that the LSW may reflect higher-order monitoring processes that signal successful pronunciation and support the learning of novel phonemes.
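Window-based ERP measures like the N1/P2 and the late slow wave reduce to averaging epochs and taking the mean amplitude in a latency window. A small sketch with simulated epochs, using the latency windows quoted in the abstract; epoch counts and signal shapes are invented.

```python
# Mean-amplitude ERP measures in fixed latency windows, computed from
# trial-averaged epochs. Simulated single trials for illustration.
import numpy as np

fs = 500.0                                            # sampling rate (Hz), assumed
times = np.arange(-0.2, 0.8, 1 / fs)                  # s, relative to sound onset
rng = np.random.default_rng(5)
n_trials = 100

# Simulated trials: a slow positivity peaking ~380 ms plus noise
erp_shape = 2e-6 * np.exp(-((times - 0.38) ** 2) / (2 * 0.05**2))
epochs = erp_shape + rng.normal(0, 5e-6, (n_trials, times.size))

evoked = epochs.mean(axis=0)                          # average across trials

def mean_amplitude(evoked, times, t0, t1):
    win = (times >= t0) & (times <= t1)
    return evoked[win].mean()

for name, (t0, t1) in {"N1/P2": (0.085, 0.290), "LSW": (0.320, 0.440)}.items():
    print(f"{name}: {mean_amplitude(evoked, times, t0, t1) * 1e6:.2f} uV")
```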
{"title":"Event-Related Potential Correlates of Learning to Produce Novel Foreign Phonemes.","authors":"Henry Railo, Anni Varjonen, Minna Lehtonen, Pilleriin Sikka","doi":"10.1162/nol_a_00080","DOIUrl":"https://doi.org/10.1162/nol_a_00080","url":null,"abstract":"<p><p>Learning to pronounce a foreign phoneme requires an individual to acquire a motor program that enables the reproduction of the new acoustic target sound. This process is largely based on the use of auditory feedback to detect pronunciation errors to adjust vocalization. While early auditory evoked neural activity underlies automatic detection and adaptation to vocalization errors, little is known about the neural correlates of acquiring novel speech targets. To investigate the neural processes that mediate the learning of foreign phoneme pronunciation, we recorded event-related potentials when participants (<i>N</i> = 19) pronounced native or foreign phonemes. Behavioral results indicated that the participants' pronunciation of the foreign phoneme improved during the experiment. Early auditory responses (N1 and P2 waves, approximately 85-290 ms after the sound onset) revealed no differences between foreign and native phonemes. In contrast, the amplitude of the frontocentrally distributed late slow wave (LSW, 320-440 ms) was modulated by the pronunciation of the foreign phonemes, and the effect changed during the experiment, paralleling the improvement in pronunciation. These results suggest that the LSW may reflect higher-order monitoring processes that signal successful pronunciation and help learn novel phonemes.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158638/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9858908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conceptual Combination in the LATL With and Without Syntactic Composition.
Pub Date: 2022-01-01 | DOI: 10.1162/nol_a_00048
Alicia Parrish, Liina Pylkkänen
The relationship among syntactic, semantic, and conceptual processes in language comprehension is a central question in the neurobiology of language. Several studies have suggested that conceptual combination in particular can be localized to the left anterior temporal lobe (LATL), while syntactic processes are more often associated with the posterior temporal lobe or inferior frontal gyrus. However, LATL activity can also correlate with syntactic computations, particularly in narrative comprehension. Here we investigated the degree to which LATL conceptual combination is dependent on syntax, specifically asking whether rapid (∼200 ms) magnetoencephalography effects of conceptual combination in the LATL can occur in the absence of licit syntactic phrase closure and in the absence of a semantically plausible output for the composition. We find that such effects do occur: LATL effects of conceptual combination were observed even when there was no syntactic phrase closure or plausible meaning. But syntactic closure did have an additive effect, such that LATL signals were highest for expressions that composed both conceptually and syntactically. Our findings conform to an account in which LATL conceptual composition is influenced by local syntactic composition but is also able to operate without it.
{"title":"Conceptual Combination in the LATL With and Without Syntactic Composition.","authors":"Alicia Parrish, Liina Pylkkänen","doi":"10.1162/nol_a_00048","DOIUrl":"https://doi.org/10.1162/nol_a_00048","url":null,"abstract":"<p><p>The relationship among syntactic, semantic, and conceptual processes in language comprehension is a central question to the neurobiology of language. Several studies have suggested that conceptual combination in particular can be localized to the left anterior temporal lobe (LATL), while syntactic processes are more often associated with the posterior temporal lobe or inferior frontal gyrus. However, LATL activity can also correlate with syntactic computations, particularly in narrative comprehension. Here we investigated the degree to which LATL conceptual combination is dependent on syntax, specifically asking whether rapid (∼200 ms) magnetoencephalography effects of conceptual combination in the LATL can occur in the absence of licit syntactic phrase closure and in the absence of a semantically plausible output for the composition. We find that such effects do occur: LATL effects of conceptual combination were observed even when there was no syntactic phrase closure or plausible meaning. But syntactic closure did have an additive effect such that LATL signals were the highest for expressions that composed both conceptually and syntactically. Our findings conform to an account in which LATL conceptual composition is influenced by local syntactic composition but is also able to operate without it.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10158584/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9875130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}