Conscious Seeing and Invariant Recognition
Pub Date: 2021-06-25 | DOI: 10.1093/oso/9780190070557.003.0006
S. Grossberg
This chapter explains fundamental differences between seeing and recognition, notably how and why our brains use conscious seeing to control actions like looking and reaching, while we learn both view-, size-, and position-specific object recognition categories, and view-, size-, and position-invariant object recognition categories, as our eyes search a scene during active vision. The dorsal Where cortical stream and the ventral What cortical stream interact to regulate invariant category learning by solving the View-to-Object Binding problem, whereby inferotemporal, or IT, cortex associates only views of a single object with its learned invariant category. Feature-category resonances between V2/V4 and IT support category recognition. Symptoms of visual agnosia emerge when IT is lesioned. V2 and V4 interact to enable amodal completion of partially occluded objects behind their occluders, without requiring that all occluders look transparent. V4 represents the unoccluded surfaces of opaque objects and triggers a surface-shroud resonance with posterior parietal cortex, or PPC, that renders surfaces consciously visible and enables them to control actions. Clinical symptoms of visual neglect emerge when PPC is lesioned. A unified explanation is given of data about visual crowding, situational awareness, change blindness, motion-induced blindness, visual search, perceptual stability, and target swapping. Although visual boundaries and surfaces obey computationally complementary laws, feedback between boundaries and surfaces ensures their consistency and initiates figure-ground separation, while commanding our eyes to foveate sequences of salient features on object surfaces, thereby triggering invariant category learning. What-to-Where stream interactions enable Where’s Waldo searches for desired objects in cluttered scenes.
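The View-to-Object Binding idea described above can be caricatured in a few lines of Python: bind every view category sampled while spatial attention remains on one object surface to a single invariant category, and reset the binding when the attentional shroud collapses. The `bind_views` helper and its object/view labels are invented for illustration; the ARTSCAN family of models does this with learned shrouds and category nodes rather than symbolic labels.

```python
# Sketch of View-to-Object Binding (simplified): while a spatial attention
# "shroud" stays on one object, every view category learned from saccades
# on that object is bound to the same invariant category; when the shroud
# collapses, binding resets and the next object gets a fresh category.

def bind_views(fixations):
    """fixations: list of (object_id, view) pairs in gaze order.
    object_id stands in for 'the shroud stayed on the same surface'."""
    view_to_invariant = {}
    invariant = -1
    current_shroud = None
    for obj, view in fixations:
        if obj != current_shroud:        # shroud collapsed: reset binding
            invariant += 1
            current_shroud = obj
        view_to_invariant[view] = invariant
    return view_to_invariant

# Eyes explore object A from three views, then object B from two views.
binding = bind_views([("A", "a-front"), ("A", "a-side"), ("A", "a-top"),
                      ("B", "b-front"), ("B", "b-side")])
```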
How a Brain Sees: Neural Mechanisms
Pub Date: 2021-06-25 | DOI: 10.1093/oso/9780190070557.003.0004
S. Grossberg
Multiple paradoxical visual percepts are explained using boundary completion and surface filling-in properties, including discounting the illuminant; brightness constancy, contrast, and assimilation; the Craik-O’Brien-Cornsweet Effect; and Glass patterns. Boundaries act as both generators and barriers to filling-in using specific cooperative and competitive interactions. Oriented local contrast detectors, like cortical simple cells, create uncertainties that are resolved using networks of simple, complex, and hypercomplex cells, leading to unexpected insights such as why Roman typeface letter fonts use serifs. Further uncertainties are resolved by interactions with bipole grouping cells. These simple-complex-hypercomplex-bipole networks form a double filter and grouping network that provides unified explanations of texture segregation, hyperacuity, and illusory contour strength. Discounting the illuminant suppresses illumination contaminants so that feature contours can hierarchically induce surface filling-in. These three hierarchical resolutions of uncertainty explain neon color spreading. Why groupings do not penetrate occluding objects is explained, as are percepts of DaVinci stereopsis, the Koffka-Benussi and Kanizsa-Minguzzi rings, and pictures of graffiti artists and Mooney faces. The property of analog coherence is achieved by laminar neocortical circuits. Variations of a shared canonical laminar circuit have explained data about vision, speech, and cognition. The FACADE theory of 3D vision and figure-ground separation explains much more data than a Bayesian model can. The same cortical process that assures consistency of boundary and surface percepts, despite their complementary laws, also explains how figure-ground separation is triggered. It is also explained how cortical areas V2 and V4 regulate seeing and recognition without forcing all occluders to look transparent.
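The claim that boundaries act as both generators of and barriers to filling-in can be illustrated with a toy one-dimensional diffusion, loosely in the spirit of the Craik-O’Brien-Cornsweet display: contrast-driven feature-contour signals sit only next to a central edge, yet each boundary-gated compartment fills in to a uniform level, leaving one side uniformly darker and the other uniformly lighter. The lattice size, rate, and `fill_in` helper are illustrative assumptions, not the model's actual filling-in equations.

```python
# Filling-in sketch (illustrative): feature-contour signals diffuse along a
# 1-D surface lattice, but diffusion is blocked wherever a boundary stands.

def fill_in(features, boundaries, iters=5000, rate=0.2):
    """features: initial contour signals; boundaries: blocked links i,i+1."""
    x = features[:]
    for _ in range(iters):
        nxt = x[:]
        for i in range(len(x) - 1):
            if i in boundaries:               # boundary blocks this link
                continue
            flow = rate * (x[i] - x[i + 1])   # diffusion between neighbors
            nxt[i] -= flow
            nxt[i + 1] += flow
        x = nxt
    return x

n = 20
features = [0.0] * n
features[9], features[10] = -1.0, 1.0    # cusp-like contrast at the edge only
surface = fill_in(features, boundaries={9})  # boundary splits the lattice
```

Although the inputs differ from zero only at the central cusp, each compartment equilibrates to its own uniform level, which is the signature brightness difference of the Cornsweet percept.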
How We See and Recognize Object Motion
Pub Date: 2021-06-25 | DOI: 10.1093/oso/9780190070557.003.0008
S. Grossberg
This chapter explains why visual motion perception is not just perception of the changing positions of moving objects. Computationally complementary processes process static objects with different orientations, and moving objects with different motion directions, via parallel cortical form and motion streams through V2 and MT. The motion stream pools multiple oriented object contours to estimate object motion direction. Such pooling coarsens estimates of object depth, which require precise matches of oriented stimuli from both eyes. Negative aftereffects of form and motion stimuli illustrate these complementary properties. Feature tracking signals begin to overcome directional ambiguities due to the aperture problem. Motion capture by short-range and long-range directional filters, together with competitive interactions, processes feature tracking and ambiguous motion directional signals to generate a coherent representation of object motion direction and speed. Many properties of motion perception are explained, notably the barberpole illusion and properties of long-range apparent motion, including how apparent motion speed varies with flash interstimulus interval, distance, and luminance; apparent motion of illusory contours; phi and beta motion; split motion; gamma motion; Ternus motion; Korte’s Laws; the line motion illusion; induced motion; motion transparency; the chopsticks illusion; Johansson motion; and Duncker motion. Gaussian waves of apparent motion clarify how tracking occurs, and explain spatial attention shifts through time. This motion processor helps to quantitatively simulate neurophysiological data about motion-based decision-making in monkeys when it inputs to a model of how the lateral intraparietal, or LIP, area chooses a movement direction from the motion direction estimate. Bayesian decision-making models cannot explain these data.
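The Gaussian waves of apparent motion mentioned above can be sketched numerically: as the first flash's Gaussian activity trace wanes and the second's waxes, the peak of their sum travels continuously between the two flash locations. The widths, weights, and grid below are assumptions chosen only to make the traveling peak visible; they are not fitted model parameters.

```python
import math

# Apparent-motion "G-wave" sketch: two flashes leave Gaussian activity
# traces. Crossfading the traces makes the peak of their sum move smoothly
# from flash 1 to flash 2, i.e., continuous motion from discrete flashes.

def gauss(x, center, sigma=2.0):
    return math.exp(-((x - center) / sigma) ** 2)

def peak_location(w1, w2, x1=0.0, x2=2.0):
    xs = [i / 100 for i in range(-100, 301)]
    activity = [w1 * gauss(x, x1) + w2 * gauss(x, x2) for x in xs]
    return xs[activity.index(max(activity))]

# Crossfade the two flash traces and track the moving activity peak.
track = [peak_location(w1=1.0 - a, w2=a) for a in
         [0.05, 0.2, 0.35, 0.5, 0.65, 0.8, 0.95]]
```

Note that the traces must overlap enough (wide Gaussians relative to flash separation) for the sum to stay unimodal; with too little overlap the peak would jump rather than travel, which is one reason apparent motion degrades with flash distance.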
Learning to Attend, Recognize, and Predict the World
Pub Date: 2021-06-25 | DOI: 10.1093/oso/9780190070557.003.0005
S. Grossberg
This chapter begins to explain many of our most important perceptual and cognitive abilities, including how we rapidly learn to categorize and recognize so many objects and events in the world, how we remember and anticipate events that may occur in familiar situations, how we pay attention to events that particularly interest us, and how we become conscious of these events. These abilities enable us to engage in fantasy activities such as visual imagery, internalized speech, and planning. They support our ability to learn language quickly and to complete and consciously hear speech sounds in noise. The chapter begins to explain key differences between perception and recognition, and introduces Adaptive Resonance Theory, or ART, which is now the most advanced cognitive and neural theory of how our brains learn to attend, recognize, and predict objects and events in a changing world. ART cycles of resonance and reset solve the stability-plasticity dilemma so that we can learn quickly without new learning forcing catastrophic forgetting of previously learned memories. ART can learn quickly or slowly, with or without supervision, and can learn both many-to-one and one-to-many maps. It uses learned top-down expectations, attentional focusing, and mismatch-mediated hypothesis testing to do so, and is thus a self-organizing production system. ART can be derived from a simple thought experiment, and explains and predicts many psychological and neurobiological data about normal behavior. When these processes break down in specific ways, they cause symptoms of mental disorders such as schizophrenia, autism, amnesia, and Alzheimer’s disease.
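The resonance/reset cycle can be caricatured with an ART 1-style clustering sketch: a candidate category resonates only if the features it matches are a large enough fraction of the input (the vigilance test); otherwise it is reset and the search moves on, and an unmatched input recruits a new category. The set-intersection prototypes and match-size choice rule below are simplifications standing in for ART 1's actual choice and matching laws.

```python
# Minimal ART 1-style fast-learning sketch (illustrative, not the full model).
# Inputs are sets of active binary features; prototypes are feature sets.

def art1_learn(patterns, vigilance=0.7):
    """Cluster binary feature sets; returns (prototypes, category labels)."""
    prototypes = []
    labels = []
    for p in patterns:
        p = set(p)
        chosen = None
        # Search categories in order of match strength (largest overlap first).
        order = sorted(range(len(prototypes)),
                       key=lambda j: -len(prototypes[j] & p))
        for j in order:
            match = prototypes[j] & p
            # Vigilance test: is the matched portion a big enough fraction
            # of the input? If not, reset and try the next category.
            if len(match) / len(p) >= vigilance:
                prototypes[j] = match          # fast learning: prune prototype
                chosen = j
                break
        if chosen is None:                     # no resonance: new category
            prototypes.append(p)
            chosen = len(prototypes) - 1
        labels.append(chosen)
    return prototypes, labels

protos, labels = art1_learn([{0, 1, 2}, {0, 1, 2, 3}, {5, 6, 7}],
                            vigilance=0.6)
```

Raising `vigilance` toward 1 forces finer categories; lowering it yields broader ones, which is how the reset mechanism trades off stability against plasticity.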
Adaptively Timed Learning
Pub Date: 2021-06-25 | DOI: 10.1093/oso/9780190070557.003.0015
S. Grossberg
This chapter explains how humans and other animals learn to adaptively time their behaviors to match external environmental constraints. It hereby explains how nerve cells learn to bridge long time intervals of hundreds of milliseconds or even several seconds, and thereby associate events that are separated in time. This is accomplished by a spectrum of cells that each respond in overlapping time intervals and whose population response can bridge intervals much larger than any individual cell can. Such spectral timing occurs in circuits that include the lateral entorhinal cortex and hippocampal cortex. Trace conditioning, in which CS and US are separated in time, requires the hippocampus, whereas delay conditioning, in which they overlap, does not. The Weber law observed in trace conditioning naturally emerges from spectral timing dynamics, as later confirmed by data about hippocampal time cells. Hippocampal adaptive timing enables a cognitive-emotional resonance to be sustained long enough to become conscious of its feeling and its causal event, and to support BDNF-modulated memory consolidation. Spectral timing supports balanced exploratory and consummatory behaviors whereby restless exploration for immediate gratification is replaced by adaptively timed consummation. During expected disconfirmations of reward, orienting responses are inhibited until an adaptively timed response is released. Hippocampally mediated incentive motivation supports timed responding via the cerebellum. mGluR regulates adaptive timing in the hippocampus, cerebellum, and basal ganglia. Breakdowns of mGluR and dopamine modulation cause symptoms of autism and Fragile X syndrome. Inter-personal circular reactions enable social cognitive capabilities, including joint attention and imitation learning, to develop.
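The spectral timing principle, that a population of cells with staggered preferred delays can together bridge an interval far longer than any single cell covers, is easy to sketch numerically. The Gaussian activation profile, bump width, and delay spectrum below are assumptions for illustration only; the published model derives its spectrum from cells with a range of slow reaction rates.

```python
import math

# Spectral timing sketch: each cell's activation is a Gaussian bump peaked
# at its own preferred delay; the population sum stays high across a delay
# range far wider than any single cell's bump.

def cell_response(t, peak, width=0.12):
    return math.exp(-((t - peak) / width) ** 2)

peaks = [0.1 * k for k in range(1, 21)]      # preferred delays: 0.1 .. 2.0 s

def population_response(t):
    return sum(cell_response(t, p) for p in peaks)

# Compare how much of the 0.1-2.0 s interval each response covers
# (fraction of time points where activation exceeds 0.5).
times = [t / 100 for t in range(10, 200)]
single = [cell_response(t, 0.5) for t in times]
popul = [population_response(t) for t in times]
coverage_single = sum(1 for v in single if v > 0.5) / len(times)
coverage_pop = sum(1 for v in popul if v > 0.5) / len(times)
```

A single cell is active for only a few hundred milliseconds around its peak, while the population response spans the whole spectrum, which is what lets a downstream associative signal bridge a long CS-US gap.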
How a Brain Makes a Mind
Pub Date: 2021-06-25 | DOI: 10.1093/oso/9780190070557.003.0002
S. Grossberg
A historical overview is given of interdisciplinary work in physics and psychology by some of the greatest nineteenth-century scientists, and why the fields split, leading to a century of ferment before the current scientific revolution in mind-brain sciences began to understand how we autonomously adapt to a changing world. New nonlinear, nonlocal, and nonstationary intuitions and laws are needed to understand how brains make minds. Work of Helmholtz on vision illustrates why he left psychology. His concept of unconscious inference presaged modern ideas about learning, expectation, and matching that this book scientifically explains. The fact that brains are designed to control behavioral success has profound implications for the methods and models that can unify mind and brain. Backward learning in time, and serial learning, illustrate why neural networks are a natural language for explaining brain dynamics, including the correct functional stimuli and laws for short-term memory (STM), medium-term memory (MTM), and long-term memory (LTM) traces. In particular, brains process spatial patterns of STM and LTM, not just individual traces. A thought experiment leads to universal laws for how neurons, and more generally all cellular tissues, process distributed STM patterns in cooperative-competitive networks without experiencing contamination by noise or pattern saturation. The chapter illustrates how thinking this way leads to unified and principled explanations of huge databases. A brief history of the advantages and disadvantages of the binary, linear, and continuous-nonlinear sources of neural models is described, and how models like Deep Learning and the author’s contributions fit into it.
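The universal laws alluded to above include shunting cooperative-competitive networks whose activities stay bounded without losing the input pattern. A minimal sketch, using assumed generic constants A (decay) and B (saturation ceiling), is the equilibrium of a shunting on-center off-surround network: each activity becomes proportional to its input's share of the total, so the pattern is normalized and even a hundredfold brighter scene cannot saturate it.

```python
# Shunting on-center off-surround network (generic constants A, B):
#   dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum_{k != i} I_k
# At equilibrium, x_i = B*I_i / (A + sum_k I_k): the network computes input
# *ratios*, so relative pattern is preserved and activities never saturate.

def shunting_equilibrium(inputs, A=1.0, B=1.0):
    total = sum(inputs)
    return [B * i / (A + total) for i in inputs]

dim_scene = shunting_equilibrium([1.0, 2.0, 1.0])
bright_scene = shunting_equilibrium([100.0, 200.0, 100.0])  # 100x brighter
```

Both scenes yield the same relative pattern (the middle cell stays twice as active as its neighbors), while no activity ever exceeds the ceiling B, which is the noise-saturation tradeoff the thought experiment resolves.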
How Do We See a Changing World?
Pub Date: 2021-06-25 | DOI: 10.1093/oso/9780190070557.003.0007
S. Grossberg
This chapter begins an analysis of how we see changing visual images and scenes. It explains why moving objects do not create unduly persistent trails, or streaks, of visual images that could interfere with our ability to see what is there after they pass by. It does so by showing how the circuits already described for static visual form perception automatically reset themselves in response to changing visual cues, and thereby prevent undue persistence, when they are augmented with habituative transmitter gates, or MTM traces. The MTM traces gate specific connections among the hypercomplex cells that control completion of static boundaries. These MTM-gated circuits embody gated dipoles whose rebound properties automatically reset boundaries at appropriate times in response to changing visual inputs. A tradeoff between boundary resonance and reset is clarified by this analysis. This kind of resonance and reset cycle shares many properties with the resonance and reset cycle that controls the learning of recognition categories in Adaptive Resonance Theory. The MTM-gated circuits quantitatively explain the main properties of visual persistence that do occur, including persistence of real and illusory contours, persistence after offset of oriented adapting stimuli, and persistence due to spatial competition. Psychophysical data about afterimages and residual traces are also explained by the same mechanisms.
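A habituative transmitter gate and the antagonistic rebound it produces can be sketched with one gate equation per channel of a gated dipole: the gate depletes under its gating signal and recovers slowly, so when a phasic input switches off, the less-depleted OFF gate transiently dominates the output. All rate constants below are arbitrary illustrative choices, not fitted model parameters.

```python
# Habituative transmitter gate sketch (simplified gated dipole):
#   dz/dt = eps*(1 - z) - lam * S * z   (z depletes under signal S, recovers)
# Tonic arousal drives both channels; a phasic input J drives the ON channel.
# After J shuts off, the depleted ON gate lets the fresher OFF gate win
# transiently: an antagonistic rebound that can reset a boundary.

def simulate(steps=4000, dt=0.01, eps=0.05, lam=0.5, tonic=0.5, J=1.0):
    z_on = z_off = 1.0
    rebound = []                          # OFF-minus-ON gated output per step
    for t in range(steps):
        s_on = tonic + (J if 1000 <= t < 2000 else 0.0)  # phasic input window
        s_off = tonic
        z_on += dt * (eps * (1 - z_on) - lam * s_on * z_on)
        z_off += dt * (eps * (1 - z_off) - lam * s_off * z_off)
        rebound.append(s_off * z_off - s_on * z_on)
    return rebound

out = simulate()
```

While the input is on, the ON channel dominates; at offset the output reverses sign transiently and then decays as the ON gate replenishes, mimicking a reset signal whose size and duration depend on how long the input habituated the gate.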