The field of stereotactic neurosurgery developed more than 70 years ago to address a therapy gap for patients with severe psychiatric disorders. In the decades since, it has matured tremendously, benefiting from advances in clinical and basic sciences. Deep brain stimulation (DBS) for severe, treatment-resistant psychiatric disorders is currently poised to transition from a stage of empiricism to one increasingly rooted in scientific discovery. The current drivers of this transition are advances in neuroimaging; rapidly emerging ones are neurophysiological: as we understand more about the neural basis of these disorders, we will be better able to use interventions such as invasive stimulation to restore dysfunctional circuits to health. Paralleling this transition is a steady increase in the consistency and quality of outcome data. Here, we focus on obsessive-compulsive disorder and depression, the two indications that have received the most attention in terms of trial volume and scientific effort.
Many animals can navigate toward a goal they cannot see based on an internal representation of that goal in the brain's spatial maps. These maps are organized around networks with stable fixed-point dynamics (attractors), anchored to landmarks, and reciprocally connected to motor control. This review summarizes recent progress in understanding these networks, focusing on studies in arthropods. One factor driving recent progress is the availability of the Drosophila connectome; however, it is increasingly clear that navigation depends on ongoing synaptic plasticity in these networks. Functional synapses appear to be continually reselected from the set of anatomical potential synapses based on the interaction of Hebbian learning rules, sensory feedback, attractor dynamics, and neuromodulation. This can explain how the brain's maps of space are rapidly updated; it may also explain how the brain can initialize goals as stable fixed points for navigation.
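The mechanism described above, functional synapses continually reselected from anatomical potential synapses by Hebbian co-activity and decay, can be illustrated with a toy simulation. This is a minimal sketch under assumed parameters (ring size, bump width, learning and decay rates are all illustrative, not taken from any specific circuit model): landmark cells project all-to-all "potential" synapses onto ring neurons, Hebbian strengthening of co-active pairs selects a functional mapping, and the same plasticity rapidly remaps the network when the landmark-heading relationship is rotated.

```python
import numpy as np

N = 16            # number of ring (heading) neurons; size is illustrative

def bump(center, width=2.0):
    """Gaussian activity bump on a ring, peaked at `center`."""
    idx = np.arange(N)
    d = np.minimum(np.abs(idx - center), N - np.abs(idx - center))
    return np.exp(-(d / width) ** 2)

# All-to-all "potential" synapses from landmark cells onto ring cells;
# Hebbian strengthening plus decay selects which become functional.
W = np.full((N, N), 0.1)
eta, decay = 0.2, 0.9

def train(offset, epochs):
    """Pair each landmark bearing with a heading shifted by `offset`."""
    global W
    for _ in range(epochs):
        for landmark in range(N):
            heading = (landmark + offset) % N
            W += eta * np.outer(bump(heading), bump(landmark))  # Hebbian co-activity
        W *= decay                         # unreinforced synapses fade away

train(offset=0, epochs=10)
aligned = int(np.argmax(W @ bump(5)))      # landmark at 5 drives heading estimate 5

# After the landmark array is "rotated" by 4 positions, the same rule
# rapidly remaps the functional synapses without rewiring the anatomy:
train(offset=4, epochs=30)
remapped = int(np.argmax(W @ bump(5)))     # landmark at 5 now drives heading estimate 9
```

The decay term is what implements reselection: synapses that stop being reinforced under the new sensory contingency lose out to the newly co-active set, so the map updates within a few training epochs.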
Flexible behavior requires that the creation, updating, and expression of memories depend on context. While the neural underpinnings of each of these processes have been intensively studied, recent advances in computational modeling have revealed a key challenge in context-dependent learning that had previously been largely ignored: under naturalistic conditions, context is typically uncertain, necessitating contextual inference. We review a theoretical approach to formalizing context-dependent learning in the face of contextual uncertainty and the core computations it requires. We show how this approach begins to organize a large body of disparate experimental observations, from multiple levels of brain organization (including circuits, systems, and behavior) and multiple brain regions (most prominently the prefrontal cortex, the hippocampus, and motor cortices), into a coherent framework. We argue that contextual inference may also be key to understanding continual learning in the brain. This theory-driven perspective places contextual inference as a core component of learning.
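The core computation can be sketched in a toy form: infer a posterior over which context generated the current observation, then gate each context's memory update by its posterior responsibility. Everything below (two contexts, a Gaussian observation model, and all parameter values) is an illustrative assumption, not the reviewed authors' model; the point is only that responsibility-gated learning protects the memory of an inactive context, which is the link to continual learning.

```python
import math, random

memories = [0.2, -0.2]   # per-context memories, e.g. learned perturbation strengths
prior = [0.5, 0.5]       # assumed flat prior over the two contexts
sigma = 0.3              # assumed observation noise
lr = 0.2                 # assumed learning rate

def step(obs):
    """Infer which context produced `obs`, then update each context's
    memory in proportion to its posterior responsibility."""
    lik = [math.exp(-(obs - m) ** 2 / (2 * sigma ** 2)) for m in memories]
    post = [p * l for p, l in zip(prior, lik)]
    z = sum(post)
    post = [q / z for q in post]
    for i, q in enumerate(post):
        memories[i] += lr * q * (obs - memories[i])  # responsibility-gated delta rule
    return post

random.seed(1)
for _ in range(60):                        # context A: observations near +1
    step(1.0 + random.gauss(0.0, sigma))
for _ in range(60):                        # context B: observations near -1
    step(-1.0 + random.gauss(0.0, sigma))
# After the switch, context B's memory is learned while context A's
# memory survives largely intact, rather than being overwritten.
```

Without the posterior gating (i.e., with a single memory updated by every observation), the switch to context B would overwrite what was learned in context A; contextual inference is what turns the same delta rule into a continual learner.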
Despite increasing evidence of its involvement in several key functions of the cerebral cortex, the vestibular sense rarely enters our consciousness. Indeed, the extent to which these internal signals are incorporated into cortical sensory representations, and how they are relied upon for sensory-driven decision-making, for example during spatial navigation, remains poorly understood. Recent experimental approaches in rodents have probed both the physiological and behavioral significance of vestibular signals and indicate that their widespread integration with vision improves both the cortical representation and perceptual accuracy of self-motion and orientation. Here, we summarize these recent findings, focusing on cortical circuits involved in visual perception and spatial navigation, and highlight the major remaining knowledge gaps. We suggest that vestibulo-visual integration reflects a process of constant updating of the brain's estimate of self-motion, and that the cortex uses this information for sensory perception and for predictions that support rapid, navigation-related decision-making.
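The claim that combining vestibular and visual signals improves perceptual accuracy is usually formalized as reliability-weighted (maximum-likelihood) cue combination: each cue is weighted by its inverse variance, and the fused estimate has lower variance than either cue alone. A toy simulation, with all numbers (heading, single-cue noise levels) chosen purely for illustration:

```python
import random

random.seed(0)
true_heading = 10.0                  # degrees; illustrative value
sigma_vis, sigma_vest = 4.0, 6.0     # assumed single-cue noise levels

# Inverse-variance weighting: the more reliable cue gets the larger weight.
w_vis = sigma_vest ** 2 / (sigma_vis ** 2 + sigma_vest ** 2)

def fused_estimate():
    visual = random.gauss(true_heading, sigma_vis)
    vestibular = random.gauss(true_heading, sigma_vest)
    return w_vis * visual + (1.0 - w_vis) * vestibular

samples = [fused_estimate() for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

# Predicted fused variance lies below either single-cue variance:
predicted_var = (sigma_vis ** 2 * sigma_vest ** 2) / (sigma_vis ** 2 + sigma_vest ** 2)
```

The simulated variance matches the prediction and falls below the better single cue's variance, which is the quantitative sense in which vestibulo-visual integration sharpens estimates of self-motion and orientation.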