{"title":"As If Time Really Mattered: Temporal Strategies for Neural Coding of Sensory Information","authors":"Eaton Peabody","doi":"10.4324/9781315789347-16","DOIUrl":null,"url":null,"abstract":"Potential strategies for temporal neural processing in the brain and their implications for the design of artificial neural networks are considered. Current connectionist thinking holds that neurons send signals to each other by changes in their average rate of discharge. This implies that there is one output signal per neuron at any given time (scalar coding), and that all neuronal specificity is achieved solely by patterns of synaptic connections. However, information can be carried by temporal codes, in temporal patterns of neural discharges and by relative times of arrival of individual spikes. Temporal coding permits multiplexing of information in the time domain, which potentially increases the flexibility of neural networks. A broadcast model of information transmission is contrasted with the current notion of highly specific connectivity. Evidence for temporal coding in somatoception, audition, electroception, gustation, olfaction and vision is reviewed, and possible neural architectures for temporal information processing are discussed. 1. The role of timing in the brain The human brain is by far the most capable, the most versatile, and the most complex informationprocessing system known to science. For those concerned with problems of artificial intelligence there has long been the dream that once its functional principles are well understood, the design and construction of adaptive devices more powerful than any yet seen could follow in a straightforward manner. Despite great advances, the neurosciences are still far from understanding the nature of the \"neural code\" underlying the detailed workings of the brain. i.e. exactly which information-processing operations are involved. If we choose to view the brain in informational terms, as an adaptive signalling system embedded within an external environment, then the issue of which aspects of neural activity constitute the \"signals\" in the system is absolutely critical to understanding its functioning. It is a question which must be answered before all others, because all functional assumptions, interpretations, and models depend upon the appropriate choice of what processes neurons use to convey information. The role of the time patterns of neural discharges in the transmission and processing of information in the nervous system has been debated since the pulsatile nature of nervous transmission was recognized less than a century ago. Because external stimuli can be physically well-characterized and controlled, the encoding of sensory information has always played a pivotal role in more general conceptions of neural coding. 2. Coding by average discharge rate With the advent of single cell recording techniques in neurophysiology, it was generally assumed that neural information is encoded solely in the average neural discharge rates of neurons (Adrian 1928). This notion of a average discharge rate code, sometimes called the Frequency Coding principle1, has persisted and forms the basis for virtually all neural net design (Feldman 1990) and almost all neuroscientific investigations concerned with information processing (Barlow 1972). While there is much accumulated experimental evidence to support such a principle in many systems, it does not necessarily follow that only average rate codes are used in the nervous coding. 
From the advent of modern electrophysiology, there were always other conceptions of how sense information could be transmitted (Troland 1921; Troland 1929; Wever & Bray 1937; Boring 1942; Wever 1949). Many other types of codes produce signals which co-vary with average rates, and these other coding schemes may actually contain much higher quality information than average discharge rates. In the auditory nerve, for example, stimulus periodicities below a few kHz are much more precisely represented by interspike interval statistics than by discharge rates (Goldstein & Srulovicz 1977), but because both interval patterns and discharge rate patterns are observed together, it is difficult to determine directly which kinds of codes are functionally operant. However, since rate-coding has become the default assumption of practicing neuroscientists, the burden of proof generally falls on the alternatives. The principle of rate coding has a number of wide-ranging ramifications in the way that neural networks, both wet and dry are conceptualized. A mean rate code entails some time window over which spikes are counted, and depending upon the system, this window is usually thought to be on the order of tens to hundreds of milliseconds or more. Long integration windows can present problems in sensory systems where coherent, detailed percepts can be generated with short 1\"Frequency\" has two meanings, one associated with a rate of events, the other associated with a particular periodicity of events. Frequency Coding implies the former meaning. stimulus durations (e.g. tachistoscopically presented images, tone bursts). The meaningful use of an average discharge rate is also stretched when only a handful of spikes are discharged within an integration window, as often occurs in cortical neurons. Rate coding goes hand in hand with the doctrine of \"specific nerve energies,\" as it was laid out by Müller and Helmholtz (see discussion in (Boring 1933; Boring 1942)). The principle asserts that specific sensory modalities have specific types of sense receptors. Consequently it is by virtue of connection to a given type of receptor that a given neuron is interpreted to be sending a signal related to a particular quality (a visual signal as opposed to a smell). Helmholtz through his study of the cochlea elevated this principle to also include quality differences within a sense modality. Thus, in Helmholtz's view, because particular auditory nerve fibers are connected to receptors at specific places on the cochlear partition, and hence have different frequency sensitivities, they signal different pure tone pitches by virtue of their connectivity. Coding exclusively by average discharge rate necessitates this kind of \"labelled line\" or \"place\" coding because there is no other means internal to the spike train itself for conveying what kind of signal it is (e.g. a taste vs. a sound; the semantics of the message). While the doctrine of specific nerve energies does not mandate that average rate be the signal encoded in the spike train (e.g. see the discussion of Troland's resonance-frequency theory of hearing (Boring 1942)), it has generally been taken on faith that sensory coding could be accomplished solely by rate-place codes. Unless temporal patterns are immediately obvious and impossible to ignore, looking elsewhere into coding alternatives has generally been regarded by neuroscientists as wasted effort. 
In tandem with exclusive use of rate codes, it has often been assumed that there is no usable temporal structure in spike trains, i.e. spike trains can be functionally described as a Poisson process with one independent parameter, the mean rate of arrivals. As a result, in many higherlevel models of neuronal networks, the temporal dynamics of spike generation are ignored in favor of mean rates or discharge probabilities. One far reaching consequence of these high level functional descriptions is that the neural output signal in any given time period is conceived as a scalar quantity. This effectively rules out the multiplexing of signals in the time domain, which would require a finer grained representation of time and a different (e.g. Fourier) interpretation of the signal. Since only one output signal can be sent from each neural element, multiple input signals converging on a given element must be converted into one output signal. An analogy could be made to a telegraph network which recieves messages from a hundred stations, but can only transmit one message to all of its hundred connecting stations. Each additional signal must compete with all others at each node. In contrast, a station which has several frequency bands available can process meaningful information in one or two bands and relay the other messages unchanged. Even the assumption that all postsynaptic neurons receive the same message can be called into question, since conduction blocks in different branches of axon trees can filter the spike trains that arrive at the respective synapses (Bittner 1968; Raymond & Lettvin 1978; Waxman 1978; Raymond 1979; Wasserman 1992). Instead of one informationally-passive output line fanning out to send the same signal to all postsynaptic elements, a branching structure is created which sequentially filters the signals. Thus the shift from scalars to multidimensional signalling and the inclusion of axonal operations can drastically the functional topology of the network, and with it the flexibility of infomation processing. Largely because of the ordering in cortical maps of retinotopic positions, cochleotopic positions, and somatotopic positions, it has long been assumed that the cortex is a spatial pattern processor. This view of cortical structures was crystallized in a set of far-reaching and of provocative papers by David Marr (Marr 1970; McNaughton & Nadel 1990; Marr 1991). In these papers Marr proposed general information processing mechanisms for the major cortical structures in the brain: the cerebral cortex, the hippocampus (\"archicortex\") and the cerebellar cortex. While it seems abundantly clear that spatially ordered maps are functionally very important, there is no inherent reason why the cortex must be only a spatial processor, why it cannot also be structured so as to effect time-space transformations (Pitts & McCulloch 1947). Alternative timeplace architectures, such as those first articulated by Licklider (Licklider 1951) and Braitenberg (Braitenberg 1961; Braitenberg 1967) take advantage of spatial orderings to perform computations in the time domain. 
After a long period of relative neglect, the recent discoveries of neuronal synchronies in","PeriodicalId":82238,"journal":{"name":"Origins","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2018-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Origins","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4324/9781315789347-16","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Potential strategies for temporal neural processing in the brain and their implications for the design of artificial neural networks are considered. Current connectionist thinking holds that neurons send signals to each other by changes in their average rate of discharge. This implies that there is one output signal per neuron at any given time (scalar coding), and that all neuronal specificity is achieved solely by patterns of synaptic connections. However, information can be carried by temporal codes, in temporal patterns of neural discharges and by relative times of arrival of individual spikes. Temporal coding permits multiplexing of information in the time domain, which potentially increases the flexibility of neural networks. A broadcast model of information transmission is contrasted with the current notion of highly specific connectivity. Evidence for temporal coding in somatoception, audition, electroception, gustation, olfaction and vision is reviewed, and possible neural architectures for temporal information processing are discussed.

1. The role of timing in the brain

The human brain is by far the most capable, the most versatile, and the most complex information-processing system known to science. For those concerned with problems of artificial intelligence there has long been the dream that once its functional principles are well understood, the design and construction of adaptive devices more powerful than any yet seen could follow in a straightforward manner. Despite great advances, the neurosciences are still far from understanding the nature of the "neural code" underlying the detailed workings of the brain, i.e. exactly which information-processing operations are involved. If we choose to view the brain in informational terms, as an adaptive signalling system embedded within an external environment, then the issue of which aspects of neural activity constitute the "signals" in the system is absolutely critical to understanding its functioning. It is a question which must be answered before all others, because all functional assumptions, interpretations, and models depend upon the appropriate choice of what processes neurons use to convey information. The role of the time patterns of neural discharges in the transmission and processing of information in the nervous system has been debated since the pulsatile nature of nervous transmission was recognized less than a century ago. Because external stimuli can be physically well-characterized and controlled, the encoding of sensory information has always played a pivotal role in more general conceptions of neural coding.

2. Coding by average discharge rate

With the advent of single cell recording techniques in neurophysiology, it was generally assumed that neural information is encoded solely in the average discharge rates of neurons (Adrian 1928). This notion of an average discharge rate code, sometimes called the Frequency Coding principle[1], has persisted and forms the basis for virtually all neural net design (Feldman 1990) and almost all neuroscientific investigations concerned with information processing (Barlow 1972). While there is much accumulated experimental evidence to support such a principle in many systems, it does not necessarily follow that average rate codes are the only codes used by the nervous system. From the advent of modern electrophysiology, there were always other conceptions of how sense information could be transmitted (Troland 1921; Troland 1929; Wever & Bray 1937; Boring 1942; Wever 1949).
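To make concrete what an average discharge rate code actually computes, consider the following minimal sketch in Python (the spike trains, window length, and function name are illustrative assumptions, not taken from the original paper). A rate code reduces a spike train to a count per time window, so any two trains with the same count in that window are indistinguishable to it:

    import numpy as np

    def mean_rate(spike_times, window_start, window_end):
        """Average discharge rate: count spikes inside a window and divide by its duration.
        All temporal structure within the window is discarded."""
        spike_times = np.asarray(spike_times)
        in_window = (spike_times >= window_start) & (spike_times < window_end)
        return in_window.sum() / (window_end - window_start)

    # Two hypothetical spike trains with the same spike count but very different timing.
    regular = np.arange(0.0, 0.1, 0.01)                      # 10 evenly spaced spikes in 100 ms
    bursty = np.concatenate([np.linspace(0.000, 0.004, 5),   # a 5-spike burst at the start
                             np.linspace(0.090, 0.094, 5)])  # and another at the end

    print(mean_rate(regular, 0.0, 0.1))  # ~100 spikes/s
    print(mean_rate(bursty, 0.0, 0.1))   # ~100 spikes/s; the rate code cannot tell them apart

Both trains report the same rate even though one is evenly spaced and the other is bursty; that discarded timing structure is exactly where the temporal codes discussed below could carry information.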
Many other types of codes produce signals which co-vary with average rates, and these other coding schemes may actually contain much higher quality information than average discharge rates. In the auditory nerve, for example, stimulus periodicities below a few kHz are much more precisely represented by interspike interval statistics than by discharge rates (Goldstein & Srulovicz 1977), but because both interval patterns and discharge rate patterns are observed together, it is difficult to determine directly which kinds of codes are functionally operant. However, since rate coding has become the default assumption of practicing neuroscientists, the burden of proof generally falls on the alternatives.

The principle of rate coding has a number of wide-ranging ramifications for the way that neural networks, both wet and dry, are conceptualized. A mean rate code entails some time window over which spikes are counted, and depending upon the system, this window is usually thought to be on the order of tens to hundreds of milliseconds or more. Long integration windows can present problems in sensory systems where coherent, detailed percepts can be generated with short stimulus durations (e.g. tachistoscopically presented images, tone bursts). The meaningful use of an average discharge rate is also stretched when only a handful of spikes are discharged within an integration window, as often occurs in cortical neurons.

[1] "Frequency" has two meanings, one associated with a rate of events, the other associated with a particular periodicity of events. Frequency Coding implies the former meaning.

Rate coding goes hand in hand with the doctrine of "specific nerve energies" as it was laid out by Müller and Helmholtz (see discussion in Boring 1933; Boring 1942). The principle asserts that specific sensory modalities have specific types of sense receptors. Consequently, it is by virtue of connection to a given type of receptor that a given neuron is interpreted to be sending a signal related to a particular quality (a visual signal as opposed to a smell). Helmholtz, through his study of the cochlea, elevated this principle to also include quality differences within a sense modality. Thus, in Helmholtz's view, because particular auditory nerve fibers are connected to receptors at specific places on the cochlear partition, and hence have different frequency sensitivities, they signal different pure tone pitches by virtue of their connectivity. Coding exclusively by average discharge rate necessitates this kind of "labelled line" or "place" coding, because there is no other means internal to the spike train itself for conveying what kind of signal it is (e.g. a taste vs. a sound; the semantics of the message). While the doctrine of specific nerve energies does not mandate that average rate be the signal encoded in the spike train (e.g. see the discussion of Troland's resonance-frequency theory of hearing in Boring 1942), it has generally been taken on faith that sensory coding could be accomplished solely by rate-place codes. Unless temporal patterns are immediately obvious and impossible to ignore, looking elsewhere into coding alternatives has generally been regarded by neuroscientists as wasted effort.

In tandem with the exclusive use of rate codes, it has often been assumed that there is no usable temporal structure in spike trains, i.e. that spike trains can be functionally described as a Poisson process with one independent parameter, the mean rate of arrivals.
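To see what the Poisson assumption gives up, here is a minimal, purely illustrative sketch (the rates, the 250 Hz stimulus period, the jitter, and the helper function are assumptions chosen for illustration, not taken from the paper). A homogeneous Poisson train is fully characterized by its mean rate, whereas a train phase-locked to a periodic stimulus can have a similar mean rate and yet carry the stimulus period in its interspike intervals, much as in the auditory nerve example above:

    import numpy as np

    rng = np.random.default_rng(0)
    rate = 200.0          # mean firing rate in spikes/s (illustrative)
    duration = 2.0        # seconds of simulated spiking
    period = 1.0 / 250.0  # period of a hypothetical 250 Hz stimulus

    # Homogeneous Poisson train: exponential interspike intervals, fully described by `rate`.
    isis = rng.exponential(1.0 / rate, size=int(rate * duration * 2))
    poisson_train = np.cumsum(isis)
    poisson_train = poisson_train[poisson_train < duration]

    # Phase-locked train with a similar mean rate: a spike on roughly 80% of stimulus
    # cycles, with a little timing jitter, so interspike intervals cluster at the period.
    cycles = np.arange(0.0, duration, period)
    fired = rng.random(cycles.size) < rate * period
    locked_train = cycles[fired] + rng.normal(0.0, 0.0002, size=fired.sum())

    def modal_interval(train, bin_width=0.0002, max_isi=0.02):
        """Left edge of the most populated interspike-interval histogram bin."""
        intervals = np.diff(np.sort(train))
        counts, edges = np.histogram(intervals, bins=np.arange(0.0, max_isi, bin_width))
        return edges[np.argmax(counts)]

    print(len(poisson_train) / duration, len(locked_train) / duration)  # similar mean rates
    print(modal_interval(poisson_train))  # near zero: exponential intervals, no preferred period
    print(modal_interval(locked_train))   # near 0.004 s: the stimulus period shows up directly

A rate-based readout treats the two trains as roughly equivalent; an interval-based readout recovers the approximately 4 ms period from the phase-locked train.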
As a result, in many higher-level models of neuronal networks, the temporal dynamics of spike generation are ignored in favor of mean rates or discharge probabilities. One far-reaching consequence of these high-level functional descriptions is that the neural output signal in any given time period is conceived as a scalar quantity. This effectively rules out the multiplexing of signals in the time domain, which would require a finer-grained representation of time and a different (e.g. Fourier) interpretation of the signal. Since only one output signal can be sent from each neural element, multiple input signals converging on a given element must be converted into one output signal. An analogy could be made to a telegraph network which receives messages from a hundred stations but can only transmit one message to all of its hundred connecting stations. Each additional signal must compete with all others at each node. In contrast, a station which has several frequency bands available can process meaningful information in one or two bands and relay the other messages unchanged.

Even the assumption that all postsynaptic neurons receive the same message can be called into question, since conduction blocks in different branches of axon trees can filter the spike trains that arrive at the respective synapses (Bittner 1968; Raymond & Lettvin 1978; Waxman 1978; Raymond 1979; Wasserman 1992). Instead of one informationally-passive output line fanning out to send the same signal to all postsynaptic elements, a branching structure is created which sequentially filters the signals. Thus the shift from scalars to multidimensional signalling and the inclusion of axonal operations can drastically alter the functional topology of the network, and with it the flexibility of information processing.

Largely because of the ordering of retinotopic, cochleotopic, and somatotopic positions in cortical maps, it has long been assumed that the cortex is a spatial pattern processor. This view of cortical structures was crystallized in a set of far-reaching and provocative papers by David Marr (Marr 1970; McNaughton & Nadel 1990; Marr 1991). In these papers Marr proposed general information processing mechanisms for the major cortical structures in the brain: the cerebral cortex, the hippocampus ("archicortex") and the cerebellar cortex. While it seems abundantly clear that spatially ordered maps are functionally very important, there is no inherent reason why the cortex must be only a spatial processor, why it cannot also be structured so as to effect time-space transformations (Pitts & McCulloch 1947). Alternative time-place architectures, such as those first articulated by Licklider (Licklider 1951) and Braitenberg (Braitenberg 1961; Braitenberg 1967), take advantage of spatial orderings to perform computations in the time domain.

After a long period of relative neglect, the recent discoveries of neuronal synchronies in