The brain structures and cognitive abilities necessary for successful monitoring of one's own speech errors remain unknown. We aimed to inform self-monitoring models by examining the neural and behavioral correlates of phonological and semantic error detection in individuals with post-stroke aphasia. First, we determined whether error detection was related to other abilities proposed to contribute to monitoring according to various theories, including naming ability, fluency, word-level auditory comprehension, sentence-level auditory comprehension, and executive function. Regression analyses revealed that fluency and executive function scores were independent predictors of phonological error detection, whereas a measure of word-level comprehension was related to semantic error detection. Next, we used multivariate lesion-symptom mapping to determine lesion locations associated with reduced error detection. Reduced overall error detection was related to damage to a region of frontal white matter extending into dorsolateral prefrontal cortex (DLPFC). Detection of phonological errors was related to damage to the same areas, but the lesion-behavior association was stronger, suggesting that the localization of overall error detection was driven primarily by phonological error detection. These findings demonstrate that monitoring of different error types relies on distinct cognitive functions and provide causal evidence for the importance of frontal white matter tracts and DLPFC in self-monitoring of speech.
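As a rough illustration of the regression step described above, the sketch below fits an ordinary least squares model predicting a phonological error detection score from naming, fluency, comprehension, and executive measures. All variable names and the simulated data are hypothetical placeholders, not the study's measures or results.

```python
# Hedged sketch: OLS regression of error detection on candidate predictors.
# The data here are simulated placeholders, not values from the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40  # hypothetical number of participants with aphasia
df = pd.DataFrame({
    "naming": rng.normal(size=n),
    "fluency": rng.normal(size=n),
    "word_comprehension": rng.normal(size=n),
    "sentence_comprehension": rng.normal(size=n),
    "executive": rng.normal(size=n),
})
# Simulated outcome: proportion of phonological errors detected.
df["phon_detection"] = (
    0.3 * df["fluency"] + 0.3 * df["executive"] + rng.normal(scale=0.5, size=n)
)

X = sm.add_constant(df[["naming", "fluency", "word_comprehension",
                        "sentence_comprehension", "executive"]])
model = sm.OLS(df["phon_detection"], X).fit()
print(model.summary())  # inspect which predictors remain independent
```

The same model form could be refit with a semantic error detection score as the outcome to ask whether a different set of predictors carries the effect.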
During language comprehension, online neural processing is strongly influenced by the constraints of the prior context. While the N400 ERP response (300-500 ms) is known to be sensitive to a word's semantic predictability, less is known about a set of late positive-going ERP responses (600-1000 ms) that can be elicited when an incoming word violates strong predictions about upcoming content (late frontal positivity) or about what is possible given the prior context (late posterior positivity/P600). Across three experiments, we systematically manipulated the length of the prior context and the source of lexical constraint to determine their influence on comprehenders' online neural responses to these two types of prediction violations. In Experiment 1, within minimal contexts, both lexical prediction violations and semantically anomalous words produced a larger N400 than expected continuations (James unlocked the door/laptop/gardener), but no late positive effects were observed. Critically, the late posterior positivity/P600 to semantic anomalies appeared when these same sentences were embedded within longer discourse contexts (Experiment 2a), and the late frontal positivity appeared in response to lexical prediction violations when the preceding context was rich and globally constraining (Experiment 2b). We interpret these findings within a hierarchical generative framework of language comprehension. This framework highlights the role of comprehension goals and the broader linguistic context in shaping both top-down prediction and the decision to update or reanalyze the prior context when these predictions are violated.
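For readers unfamiliar with window-based ERP measures, the sketch below shows how mean amplitudes in the two a priori windows mentioned above (N400: 300-500 ms; late positivities: 600-1000 ms) might be extracted from epoched EEG data. The array shapes, sampling rate, and placeholder data are assumptions for illustration, not the experiments' actual recording parameters or pipeline.

```python
# Hedged sketch: mean ERP amplitude in a priori time windows.
# The epochs array is simulated; in practice it would come from preprocessed EEG.
import numpy as np

sfreq = 500.0                      # assumed sampling rate (Hz)
tmin = -0.1                        # assumed epoch start relative to word onset (s)
n_trials, n_channels, n_times = 80, 32, 551
epochs = np.random.randn(n_trials, n_channels, n_times)  # placeholder data
times = tmin + np.arange(n_times) / sfreq

def window_mean(epochs, times, start, stop):
    """Mean amplitude per trial and channel within [start, stop] seconds."""
    mask = (times >= start) & (times <= stop)
    return epochs[:, :, mask].mean(axis=-1)

n400 = window_mean(epochs, times, 0.300, 0.500)      # N400 window
late_pos = window_mean(epochs, times, 0.600, 1.000)  # late positivity window

# Condition contrasts (e.g., anomalous minus expected continuations) would
# then be computed on these per-trial window means and submitted to statistics.
print(n400.shape, late_pos.shape)
```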
Determining how the cognitive components of reading (orthographic, phonological, and semantic representations) are instantiated in the brain has been a longstanding goal of psychology and human cognitive neuroscience. The two most prominent computational models of reading instantiate different cognitive processes, implying different neural processes. Artificial neural network (ANN) models of reading posit non-symbolic, distributed representations. The dual-route cascaded (DRC) model instead comprises two processing routes: one representing symbolic rules of spelling-sound correspondence, the other representing orthographic and phonological lexicons. Behavioral data do not adjudicate between these models, and they have never before been directly compared in terms of neural plausibility. We used representational similarity analysis to compare the predictions of these models to neural data from participants reading aloud. Both the ANN and DRC model representations corresponded with neural activity. However, ANN model representations correlated with activity in more reading-relevant areas of cortex. When contributions from the DRC model were statistically controlled, partial correlations revealed that the ANN model accounted for significant variance in the neural data. The opposite analysis, examining the variance explained by the DRC model with contributions from the ANN model factored out, revealed no correspondence with neural activity. Our results suggest that ANNs trained using distributed representations provide a better correspondence between cognitive and neural coding. Additionally, this framework provides a principled approach for comparing computational models of cognitive function to gain insight into neural representations.
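The sketch below illustrates the general logic of the representational similarity analysis described above: build representational dissimilarity matrices (RDMs) for each model and for the neural data, correlate them, and then partial out one model's RDM when testing the other. All inputs are random placeholders and the helper function is illustrative; this is not the study's actual pipeline.

```python
# Hedged sketch: RSA with partial correlation between model and neural RDMs.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_items, ann_dim, drc_dim, n_voxels = 100, 50, 30, 200

# Placeholder representations: one row per word/item.
ann_reps = rng.normal(size=(n_items, ann_dim))   # distributed ANN codes
drc_reps = rng.normal(size=(n_items, drc_dim))   # DRC-style codes
neural = rng.normal(size=(n_items, n_voxels))    # ROI activity patterns

# Condensed RDMs (1 - Pearson correlation between item patterns).
rdm_ann = pdist(ann_reps, metric="correlation")
rdm_drc = pdist(drc_reps, metric="correlation")
rdm_neural = pdist(neural, metric="correlation")

# Simple correspondence: rank correlation between model and neural RDMs.
print("ANN vs neural:", spearmanr(rdm_ann, rdm_neural).correlation)
print("DRC vs neural:", spearmanr(rdm_drc, rdm_neural).correlation)

def partial_corr(x, y, control):
    """Correlation of x and y after regressing the control RDM out of both."""
    Z = np.column_stack([np.ones_like(control), control])
    x_res = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    y_res = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(x_res, y_res)[0, 1]

# Each model's correspondence with the neural RDM, controlling for the other.
print("ANN | DRC:", partial_corr(rdm_ann, rdm_neural, rdm_drc))
print("DRC | ANN:", partial_corr(rdm_drc, rdm_neural, rdm_ann))
```

In practice the RDMs would be computed per region or searchlight, and significance would be assessed with permutation tests across items or participants.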
Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which the brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study, we used fMRI to measure brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task. Listeners were 29 young adults (aged 19-30 years) and 32 older adults (aged 65-81 years) without self-reported hearing difficulty. We found largely similar patterns of activity during word perception for both young and older adults, centered on the bilateral superior temporal gyrus. As expected, the repetition condition resulted in significantly more activity in areas related to motor planning and execution (including the premotor cortex and supplementary motor area) than the attentive listening condition. Importantly, however, older adults showed significantly less activity in probabilistically defined auditory cortex than young adults when listening to individual words in both the attentive listening and repetition tasks. Age differences in auditory cortex activity were seen selectively for words (no age differences were present for 1-channel vocoded speech, used as a control condition) and could not easily be explained by accuracy on the task, movement in the scanner, or hearing sensitivity (available for a subset of participants). These findings indicate largely similar patterns of brain activity for young and older adults when listening to words in quiet, but suggest less recruitment of auditory cortex by older adults.
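As an illustration of the group comparison implied above, the sketch below takes per-participant auditory-cortex region-of-interest values (e.g., contrast estimates for word listening averaged within a probabilistic auditory mask) and compares young and older groups with an independent-samples t-test. The values are simulated placeholders, not the study's data.

```python
# Hedged sketch: group comparison of auditory-cortex ROI activity.
# ROI values are simulated; in practice they would be contrast estimates
# averaged within a probabilistically defined auditory cortex mask.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
young_roi = rng.normal(loc=1.0, scale=0.4, size=29)   # 29 young adults
older_roi = rng.normal(loc=0.7, scale=0.4, size=32)   # 32 older adults

t, p = stats.ttest_ind(young_roi, older_roi)
print(f"Young vs. older auditory ROI activity: t = {t:.2f}, p = {p:.3f}")

# Covariates such as task accuracy, head motion, or hearing thresholds could
# be added in a regression framework to test whether they account for the
# group difference.
```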

