How does the brain adjust its decision processes to ensure timely decision completion? Computational modelling and electrophysiological investigations have pointed to dynamic 'urgency' processes that serve to progressively reduce the quantity of evidence required to reach choice commitment as time elapses. In humans, such urgency dynamics have been observed exclusively in neural signals that accumulate evidence for a specific motor plan. Across three complementary experiments in humans (male and female), we characterise an electrophysiological signal that traces dynamic urgency and exhibits unique properties not observed in effector-selective signals. Firstly, it provides a representation of urgency alone, growing only as a function of time and not evidence strength. Secondly, when choice reports must be withheld until a response cue, this signal peaks and decays long before response execution, mirroring the early termination dynamics of a motor-independent evidence accumulation signal. These properties suggest that the brain may use urgency signals not only to expedite motor planning but also to hasten cognitive deliberation. These data demonstrate that urgency processes operate in a variety of perceptual choice scenarios and that they can be monitored in a model-independent manner via non-invasive brain signals.

Significance Statement
Computational models suggest that, when decisions are time-constrained, the brain progressively lowers the amount of evidence it requires to reach choice commitment, thus increasingly sacrificing accuracy for timely decision completion. In humans, neurophysiological investigations have identified signatures of these 'urgency' effects exclusively in areas of the brain that plan the decision-reporting actions.
Here, we characterise a human electroencephalogram signature of urgency that exhibits several novel properties: it traces the urgency component of the decision and terminates upon choice commitment even when the decision-reporting action is deferred until later. These observations suggest that urgency can serve to hasten the deliberation process and not just the movements that a decision entails.
Pitch and time are the essential dimensions defining musical melody. Recent electrophysiological studies have explored the neural encoding of musical pitch and time by leveraging probabilistic models of their sequences, but few have studied how these features might interact. This study examines these interactions by introducing "chimeric music," which pairs two distinct melodies and exchanges their pitch contours and note onset times to create two new melodies, distorting musical patterns while maintaining the marginal statistics of the original pieces' pitch and temporal sequences. Through this manipulation, we aimed to dissect music processing and the interaction between pitch and time. Employing the temporal response function framework, we analyzed the neural encoding of melodic expectation and musical downbeats in participants with varying levels of musical training. Our findings from 27 participants of either sex revealed differences in the encoding of melodic expectation between original and chimeric stimuli in both dimensions, with a significant impact of musical experience. This suggests that decoupling the pitch and temporal structure affects expectation processing. In our analysis of downbeat encoding, we found an enhanced neural response when participants heard a note that aligned with the downbeat during music listening. In chimeric music, responses to downbeats were larger when the note was also a downbeat in the original music that provided the pitch sequence, indicating an effect of pitch structure on beat perception. This study advances our understanding of the neural underpinnings of music, emphasizing the significance of pitch-time interaction in the neural encoding of music.
During successful language comprehension, speech sounds (phonemes) are encoded within a series of neural patterns that evolve over time. Here we tested whether these neural dynamics of speech encoding are altered for individuals with a language disorder. We recorded EEG responses from the human brains of 39 individuals with post-stroke aphasia (13♀/26♂) and 24 healthy age-matched controls (i.e., older adults; 8♀/16♂) during 25 min of natural story listening. We estimated the duration of phonetic feature encoding, the speed of evolution across neural populations, and the spatial location of encoding over EEG sensors. First, we establish that phonetic features are robustly encoded in EEG responses of healthy older adults. Second, when comparing individuals with aphasia to healthy controls, we find significantly decreased phonetic encoding in the aphasic group after a shared initial processing pattern (0.08-0.25 s after phoneme onset). Phonetic features were less strongly encoded over left-lateralized electrodes in the aphasia group compared to controls, with no difference in speed of neural pattern evolution. Finally, we observed that healthy controls, but not individuals with aphasia, encode phonetic features longer when uncertainty about word identity is high, indicating that this mechanism (encoding phonetic information until word identity is resolved) is crucial for successful comprehension. Together, our results suggest that aphasia may entail a failure to maintain lower-order information long enough to recognize lexical items.
Recent studies have suggested that statistical image features play an important role in both natural scene and object recognition, although spatial layout and shape information remain important as well. In the present study, to investigate the roles of low- and high-level statistical image features in natural scene and object recognition, we conducted categorization tasks using a wide variety of natural scene and object images, along with two types of synthesized images: Portilla-Simoncelli (PS) synthesized images, which preserve low-level statistical features, and style-synthesized (SS) images, which retain higher-level statistical features. Behavioral experiments revealed that human observers (of either sex) could categorize style-synthesized versions of natural scene and object images with high accuracy. Furthermore, we recorded visual evoked potentials (VEPs) for the original, SS, and PS images and decoded natural scene and object categories using a support vector machine. Consistent with the behavioral results, natural scene categories were decoded with high accuracy within 200 ms after stimulus onset. In contrast, object categories were successfully decoded only from VEPs for original images at later latencies. Finally, we examined whether style features could classify natural scene and object categories. The classification accuracy for natural scene categories showed a similar trend to the behavioral data, whereas that for object categories did not align with the behavioral results. Taken together, these findings suggest that although natural scene and object categories can be recognized relatively easily even when layout information is disrupted, the extent to which statistical features contribute to categorization differs between natural scenes and objects.
At autopsy, >95% of ALS cases display a redistribution of the essential RNA binding protein TDP-43 from the nucleus into cytoplasmic aggregates. The mislocalization and aggregation of TDP-43 is believed to be a key pathological driver in ALS. Due to its vital role in basic cellular mechanisms, direct depletion of TDP-43 is unlikely to lead to a promising therapy. Therefore, we have explored the utility of identifying genes that modify its mislocalization or aggregation. We have previously shown that loss of rad-23 improves locomotor deficits in TDP-43 Caenorhabditis elegans models of disease and increases the degradation rate of TDP-43 in cellular models. To understand the mechanism through which these protective effects occur, we generated an inducible mutant TDP-43 HEK293 cell line. We find that knockdown of RAD23A reduces insoluble TDP-43 levels in this model and in primary rat cortical neurons expressing human TDP-43A315T. Utilizing a discovery-based proteomics approach, we then explored how loss of RAD23A remodels the proteome. Through this proteomic screen, we identified USP13, a deubiquitinase, as a new potent modifier of TDP-43-induced aggregation and cytotoxicity. We find that knockdown of USP13 reduces the abundance of sarkosyl-insoluble mTDP-43 in both our HEK293 model and primary rat neurons, reduces cell death in primary rat motor neurons, and improves locomotor deficits in C. elegans ALS models.
The auditory system plays a crucial role as the brain's early warning system. Previous work has shown that the brain automatically monitors unfolding auditory scenes and rapidly detects new events. Here, we focus on understanding how automatic change detection interfaces with the networks that regulate arousal and attention, measuring pupil dilation (PD) as an indicator of listener arousal and microsaccades (MS) as an index of attentional sampling. Naive participants (N = 36, both sexes) were exposed to artificial "scenes" comprising multiple concurrent streams of pure tones while their ocular activity was monitored. The scenes were categorized as REG or RND, featuring isochronous (regular) or random temporal structures in the tone streams. Previous work showed that listeners are sensitive to predictable scene structure and use this information to facilitate change processing. Scene changes were introduced by either adding or removing a single tone stream. Results revealed distinct patterns in the recruitment of arousal and attention during auditory scene analysis. Sustained PD was reduced in REG scenes compared with RND, indicating reduced arousal in predictable contexts. However, no differences in sustained MS activity were observed between scene types, suggesting no differences in attentional engagement. Scene changes, though task-irrelevant, elicited PD as well as MS suppression, consistent with automatic attentional capture and increased arousal. Notably, only MS responses were modulated by scene regularity. This suggests that changes within predictable environments more effectively recruit attentional resources. Together, these findings offer novel insights into how automatic auditory scene analysis interacts with neural systems governing arousal and attention.

