Eye movements are often directed toward stimuli with specific features. Decades of neurophysiological research have determined that this behavior is subserved by a feature-reweighting of the neural activation encoding potential eye movements. Despite the considerable body of research examining feature-based target selection, no comprehensive theoretical account of the feature-reweighting mechanism has yet been proposed. Given that such a theory is fundamental to our understanding of the nature of oculomotor processing, we propose an oculomotor feature-reweighting mechanism here. We first summarize the considerable anatomical and functional evidence suggesting that oculomotor substrates that encode potential eye movements rely on the visual cortices for feature information. Next, we highlight the results from our recent behavioral experiments demonstrating that feature information manifests in the oculomotor system in order of featural complexity, regardless of whether the feature information is task-relevant. Based on the available evidence, we propose an oculomotor feature-reweighting mechanism whereby (1) visual information is projected into the oculomotor system only after a visual representation manifests in the highest stage of the cortical visual processing hierarchy necessary to represent the relevant features and (2) these dynamically recruited cortical module(s) then perform feature discrimination via shifting neural feature representations, while also maintaining parity between the feature representations in cortical and oculomotor substrates by dynamically reweighting oculomotor vectors. Finally, we discuss how our behavioral experiments may extend to other areas in vision science and their possible clinical applications.
The cellular biology of brains is relatively well understood, but neuroscientists have not yet generated a theory explaining how brains work. Explanations of how neurons collectively operate to produce what brains can do are tentative and incomplete. Without prior assumptions about brain mechanisms, I attempt here to identify major obstacles to progress in neuroscientific understanding of brains and central nervous systems. Most of the obstacles to our understanding are conceptual. Neuroscience lacks concepts and models rooted in experimental results explaining how neurons interact at all scales. The cerebral cortex is thought to control awake activities, which contrasts with recent experimental results. There is ambiguity in distinguishing task-related brain activities from spontaneous activities and organized intrinsic activities. Brains are regarded as driven by external and internal stimuli, in contrast to their considerable autonomy. Experimental results are explained by sensory inputs, behavior, and psychological concepts. Time and space are regarded as mutually independent variables for spiking, post-synaptic events, and other measured variables, in contrast to experimental results. Dynamical systems theory and models describing the evolution of variables with time as the independent variable are insufficient to account for central nervous system activities. Spatial dynamics may be a practical solution. The general hypothesis that measurements of changes in fundamental brain variables (action potentials, transmitter releases, post-synaptic transmembrane currents, etc.) propagating in central nervous systems reveal how they work carries no additional assumptions. Combinations of current techniques could reveal many aspects of the spatial dynamics of spiking, post-synaptic processing, and plasticity, in insects and rodents to start with. However, problems defining baseline and reference conditions hinder interpretation of the results.
Furthermore, the fact that pooling and averaging of data destroy their underlying dynamics implies that single-trial designs and statistics are necessary.
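The point about averaging can be illustrated with a minimal simulation (all parameter values invented for illustration): trials that each contain a clear oscillation, but with a phase that is not locked to trial onset, retain their full amplitude individually while their trial average is nearly flat.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 1000)  # 1 s sampled at 1 kHz
n_trials = 200

# Each trial carries a 10 Hz oscillation whose phase is random,
# i.e., dynamics present on every trial but not locked to trial onset.
trials = np.array(
    [np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
     for _ in range(n_trials)]
)

single_trial_amp = np.abs(trials).max(axis=1).mean()  # close to 1
average_amp = np.abs(trials.mean(axis=0)).max()       # close to 0

print(f"mean single-trial peak amplitude: {single_trial_amp:.2f}")
print(f"peak amplitude of trial average:  {average_amp:.2f}")
```

The dynamics survive in every single trial but cancel almost completely in the average, which is why single-trial statistics are required to recover them.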
The precise control of bite force and gape is vital for safe and effective breakdown and manipulation of food inside the oral cavity during feeding. Yet, the role of the orofacial sensorimotor cortex (OSMCx) in the control of bite force and gape is still largely unknown. The aim of this study was to elucidate how individual neurons and populations of neurons in multiple regions of OSMCx differentially encode bite force and static gape when subjects (Macaca mulatta) generated different levels of bite force at varying gapes. We examined neuronal activity recorded simultaneously from three microelectrode arrays implanted chronically in the primary motor (MIo), primary somatosensory (SIo), and cortical masticatory (CMA) areas of OSMCx. We used generalized linear models to evaluate encoding properties of individual neurons and utilized dimensionality reduction techniques to decompose population activity into components related to specific task parameters. Individual neurons encoded bite force more strongly than gape in all three OSMCx areas, although bite force was a better predictor of spiking activity in MIo vs. SIo. Population activity differentiated between levels of bite force and gape while preserving task-independent temporal modulation across the behavioral trial. While activation patterns of neuronal populations were comparable across OSMCx areas, the total variance explained by task parameters was context-dependent and differed across areas. These findings suggest that the cortical control of static gape during biting may rely on computations at the population level, whereas the strong encoding of bite force at the individual neuron level allows for the precise and rapid control of bite force.
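The single-neuron analysis described above can be sketched with a Poisson generalized linear model. The sketch below fits such a GLM (log link, Newton-Raphson) to a simulated neuron whose firing depends mainly on a "bite force" regressor; the parameter values, units, and dependence structure are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical task parameters (values and units invented).
force = rng.uniform(0, 10, n)            # bite force per trial sample
gape = rng.choice([6.0, 12.0, 18.0], n)  # static gape levels

# Simulated neuron: log firing rate depends mainly on bite force.
rate = np.exp(0.5 + 0.15 * force + 0.01 * gape)
spikes = rng.poisson(rate)

# Poisson GLM with a log link, fitted by Newton-Raphson.
X = np.column_stack([np.ones(n), force, gape])
beta = np.array([np.log(spikes.mean()), 0.0, 0.0])
for _ in range(25):
    mu = np.exp(np.clip(X @ beta, -20, 20))  # predicted rates
    grad = X.T @ (spikes - mu)               # score vector
    hess = X.T @ (X * mu[:, None])           # Fisher information
    beta += np.linalg.solve(hess, grad)

print("fitted (intercept, force, gape):", np.round(beta, 3))
```

Comparing the fitted coefficients (or, more rigorously, the model's predictive likelihood with and without each regressor) is one way a neuron can be classified as encoding bite force more strongly than gape.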
Successful adaptive behaviour relies on our ability to process a wide range of temporal intervals with a certain precision. Studies on the role of the cerebellum in temporal information processing have adopted the dogma that the cerebellum is involved only in sub-second processing. However, emerging evidence shows that the cerebellum might be involved in supra-second temporal processing as well. Here we review the reciprocal loops between cerebellum and cerebral cortex and provide a theoretical account of cerebro-cerebellar interactions, with a focus on how cerebellar output can modulate cerebral processing during learning of complex sequences. Finally, we propose that while the ability of the cerebellum to support millisecond timescales might be intrinsic to cerebellar circuitry, the ability to support supra-second timescales might result from cerebellar interactions with other brain regions, such as the prefrontal cortex.
Cognitive and behavioral processes are often accompanied by changes within well-defined frequency bands of the local field potential (LFP, i.e., the voltage induced by neuronal activity). These changes are detectable in the frequency domain using the Fourier transform and are often interpreted as neuronal oscillations. However, aside from some well-known exceptions, the processes underlying such changes are difficult to track in time, making their oscillatory nature hard to verify. In addition, many non-periodic neural processes can also have spectra that emphasize specific frequencies. Thus, the notion that spectral changes reflect oscillations is likely too restrictive. In this study, we use a simple yet versatile framework to understand the frequency spectra of neural recordings. Using simulations, we derive the Fourier spectra of periodic, quasi-periodic and non-periodic neural processes having diverse waveforms, illustrating how these attributes shape their spectral signatures. We then show how neural processes sum their energy in the local field potential in simulated and real-world recording scenarios. We find that the spectral power of neural processes is essentially determined by two aspects: (1) the distribution of neural events in time and (2) the waveform of the voltage induced by single neural events. Taken together, this work guides the interpretation of the Fourier spectrum of neural recordings and indicates that power increases in specific frequency bands do not necessarily reflect periodic neural activity.
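The two determinants named above follow directly from the convolution theorem: if the LFP is modeled as a train of events convolved with a single-event waveform, its spectrum is the pointwise product of the train's spectrum and the waveform's spectrum. A minimal numpy sketch (all parameters invented) makes this concrete, using a non-periodic Poisson event train whose spectrum nonetheless emphasizes the frequencies present in the event waveform:

```python
import numpy as np

fs = 1000                      # sampling rate (Hz)
n = 2 * fs                     # 2 s of simulated "recording"
rng = np.random.default_rng(2)

# (1) Distribution of events in time: a non-periodic Poisson train.
train = (rng.random(n) < 20 / fs).astype(float)  # ~20 events/s

# (2) Waveform of a single event: a brief damped deflection.
tau = np.arange(0, 0.05, 1 / fs)
wave = np.sin(2 * np.pi * 30 * tau) * np.exp(-tau / 0.01)

# The simulated LFP is the event train convolved with the waveform.
lfp = np.convolve(train, wave)
N = lfp.size

# Convolution theorem: the spectrum of the LFP equals the product of
# the spectra of the event train and the single-event waveform.
spectrum_direct = np.fft.rfft(lfp)
spectrum_product = np.fft.rfft(train, N) * np.fft.rfft(wave, N)
match = bool(np.allclose(spectrum_direct, spectrum_product))
print("spectra identical:", match)
```

Because the waveform here concentrates energy near 30 Hz, the simulated LFP shows elevated power in that band even though nothing in the process is periodic, illustrating the abstract's central caution.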
Introduction: In patients with severe auditory impairment, partial hearing restoration can be achieved by sensory prostheses for the electrical stimulation of the central nervous system. However, these state-of-the-art approaches suffer from limited spectral resolution: electrical field spread depends on the impedance of the surrounding medium, impeding spatially focused electrical stimulation in neural tissue. To overcome these limitations, optogenetic activation could be applied in such prostheses to achieve enhanced resolution through precise and differential stimulation of nearby neuronal ensembles. Previous experiments have provided a first proof of the behavioral detectability of optogenetic activation in the rodent auditory system, but little is known about the generation of complex and behaviorally relevant sensory patterns involving differential activation.
Methods: In this study, we developed and behaviorally tested an optogenetic implant to excite two spatially separated points along the tonotopy of the central nucleus of the murine inferior colliculus (ICc).
Results: Using a reward-based operant Go/No-Go paradigm, we show that differential optogenetic activation of a sub-cortical sensory pathway is possible and efficient. We demonstrate how animals that were previously trained in a frequency discrimination paradigm (a) rapidly respond to either sound or optogenetic stimulation, (b) generally detect optogenetic stimulation of two different neuronal ensembles, and (c) discriminate between them.
Discussion: Our results demonstrate that optogenetic excitatory stimulation at different points of the ICc tonotopy elicits a stable response behavior over time periods of several months. With this study, we provide the first proof of principle for sub-cortical differential stimulation of sensory systems using complex artificial cues in freely moving animals.
A hand passing in front of a camera produces a large and obvious disruption of a video. Yet the closure of the eyelid during a blink, which lasts for hundreds of milliseconds and occurs thousands of times per day, typically goes unnoticed. What are the neural mechanisms that mediate our uninterrupted visual experience despite frequent occlusion of the eyes? Here, we review the existing literature on the neurophysiology, perceptual consequences, and behavioral dynamics of blinks. We begin by detailing the kinematics of the eyelid that define a blink. We next discuss the ways in which blinks alter visual function by occluding the pupil, decreasing visual sensitivity, and moving the eyes. Then, to anchor our understanding, we review the similarities between blinks and other actions that lead to reductions in visual sensitivity, such as saccadic eye movements. The similarity between these two actions has led to suggestions that they share a common neural substrate. We consider the extent of overlap in their neural circuits and go on to explain how recent findings regarding saccade suppression cast doubt on the strong version of the shared mechanism hypothesis. We also evaluate alternative explanations of how blink-related processes modulate neural activity to maintain visual stability: a reverberating corticothalamic loop to maintain information in the face of lid closure; and a suppression of visual transients related to lid closure. Next, we survey the many areas throughout the brain that contribute to the execution of, regulation of, or response to blinks. Regardless of the underlying mechanisms, blinks drastically attenuate our visual abilities, yet these perturbations fail to reach awareness. We conclude by outlining opportunities for future work to better understand how the brain maintains visual perception in the face of eye blinks. 
Future work will likely benefit from incorporating theories of perceptual stability, neurophysiology, and novel behavioral paradigms to address issues central to our understanding of natural visual behavior and to the clinical rehabilitation of active vision.

