Event boundaries help structure the content of episodic memories by segmenting continuous experiences into discrete events. Event boundaries may also serve to preserve meaningful information within an event, thereby actively separating important memories from interfering representations imposed by past and future events. Here, we tested the hypothesis that event boundaries organize emotional memory based on changing dynamics as events unfold. We developed a novel threat-reversal learning task in which participants encoded trial-unique exemplars from two semantic categories across three phases: preconditioning, fear acquisition, and reversal. Shock contingencies were established for one category (CS+) during acquisition and then switched to the other category (CS-) during reversal. Importantly, reversal was either separated from acquisition by a perceptible event boundary (Experiment 1) or occurred immediately after acquisition, with no perceptible context shift (Experiment 2). In a surprise recognition memory test the next day, memory performance in Experiment 1 tracked the learning contingencies from encoding: participants selectively recognized more threat-associated CS+ exemplars from before (retroactive) and during acquisition, but this pattern reversed toward CS- exemplars encoded during reversal. By contrast, participants who encoded continuously, without a boundary between conditioning and reversal, exhibited undifferentiated memory for exemplars from both categories encoded before acquisition and after reversal. Further analyses highlight nuanced effects of event boundaries on reversing conditioned fear, updating mnemonic generalization, and the emotional biasing of temporal source memory. These findings suggest that event boundaries provide anchor points that organize memory for distinctly meaningful information, thereby adaptively structuring memory based on the content of our experiences.
The perception of rhythmic patterns is crucial for the recognition of words in spoken languages, yet it remains unclear how these patterns are represented in the brain. Here, we tested the hypothesis that rhythmic patterns are encoded by neural activity phase-locked to the temporal modulation of these patterns in the speech signal. To test this hypothesis, we analyzed EEG responses evoked by long sequences of alternating syllables that were acoustically manipulated to be perceived as a series of different rhythmic groupings in English. We found that the magnitude of the EEG at the syllable and grouping rates of each sequence was significantly higher than the noise baseline, indicating that the neural parsing of syllables and rhythmic groupings operates at different timescales. Distributional differences between the scalp topographies associated with each timescale suggest a further mechanistic dissociation between the neural segmentation of syllables and groupings. In addition, we observed that the neural tracking of louder syllables, which in trochaic languages like English are associated with the beginning of rhythmic groupings, was more robust than the neural tracking of softer syllables. The results of further bootstrapping and brain-behavior analyses indicate that the perception of rhythmic patterns is modulated by the magnitude of grouping alternations in the neural signal. These findings suggest that the temporal coding of rhythmic patterns in stress-based languages like English is supported by temporal regularities in the speech signal that are linguistically relevant.
We perceive visual objects as unified even though different brain areas process different features. An attentional mechanism has been proposed to be involved in feature binding, as evidenced by observations of binding errors (i.e., illusory conjunctions) when attention is diverted. However, the neural underpinnings of this feature binding are not well understood. We examined the neural mechanisms of feature binding by recording EEG during an attentionally demanding discrimination task. Unlike prestimulus alpha oscillatory activity and early ERPs (i.e., the N1 and P1 components), the N1pc, reflecting stimulus-evoked spatial attention, was reduced for errors relative to correct responses and illusory conjunctions. In contrast, the later SPCN, reflecting visual short-term memory, was reduced for both illusory conjunctions and errors compared with correct responses. Furthermore, binding errors were associated with distinct posterior lateralized activity during a 200- to 300-msec window. These results implicate a temporal binding window that integrates visual features after stimulus-evoked attention but before encoding into visual short-term memory.