State-dependent Filtering of the Ring Model
Jing Yan, Yunxuan Feng, Wei Dai, Yaoyu Zhang
arXiv:2408.01817 (arXiv - QuanBio - Neurons and Cognition), published 2024-08-03
Abstract
Robustness is a measure of the functional reliability of a system against perturbations. To achieve good and robust performance, a system must filter out external perturbations using its internal priors. These priors are usually distilled into the structure and the states of the system. Biophysical neural networks are known to be robust, but the exact mechanisms remain elusive. In this paper, we probe how orientation-selective neurons organized on a 1-D ring network respond to perturbations, in the hope of gaining insight into the robustness of the visual system in the brain. We analyze the steady state of the rate-based network and prove that the activation state of the neurons, rather than their firing rates, determines how the model responds to perturbations. We then identify the specific perturbation patterns that induce the largest responses for different configurations of activation states, and find them to be sinusoidal or sinusoidal-like, while other patterns are largely attenuated. Similar results are observed in a spiking ring model. Finally, we remap the perturbations in orientation back into the 2-D image space using Gabor functions. The resulting optimal perturbation patterns mirror adversarial attacks in deep learning that exploit the priors of the system. Our results suggest that, depending on the state configuration, these priors could underlie some illusory experiences as a cost of visual robustness.
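The kind of steady-state analysis the abstract describes can be sketched numerically. The snippet below is a minimal illustrative sketch, not the paper's exact model: the cosine connectivity kernel, the parameter values, and the particular activation configuration are all assumptions introduced here. It linearizes a ReLU rate network on a ring around a steady state, so that only the active/silent pattern of neurons (not their rates) enters the linear response operator, and then extracts the worst-case input perturbation as the top singular vector of that operator; for this setup it comes out sinusoidal-like.

```python
import numpy as np

# Hypothetical minimal ring model (illustrative assumptions, not the
# paper's exact setup). N orientation-selective neurons sit on a ring;
# connectivity is a translation-invariant cosine kernel.
N = 64
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

# Recurrent connectivity: W[i, j] = (J0 + J1 * cos(theta_i - theta_j)) / N
J0, J1 = -0.5, 1.0
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

# ReLU rate dynamics: tau * dr/dt = -r + phi(W r + h).  Linearized around
# a steady state, the response to an input perturbation dh is
#   dr = (I - D W)^{-1} D dh,
# where D = diag(phi'(u)) only records which neurons are active (phi' = 1)
# or silent (phi' = 0) -- the "activation state", not the firing rates.
active = np.cos(theta) > -0.3           # assumed activation configuration
D = np.diag(active.astype(float))
L = np.linalg.inv(np.eye(N) - D @ W) @ D   # linear response operator

# The input perturbation driving the largest response is the top right
# singular vector of L; silent neurons contribute no response at all.
U, s, Vt = np.linalg.svd(L)
optimal = Vt[0]

# Check how sinusoidal the optimal perturbation is: its Fourier power
# should concentrate on a single low spatial-frequency mode.
spectrum = np.abs(np.fft.rfft(optimal))
dominant_mode = int(np.argmax(spectrum))
power_fraction = spectrum[dominant_mode] ** 2 / np.sum(spectrum ** 2)
print("dominant Fourier mode of optimal perturbation:", dominant_mode)
print("fraction of power in dominant mode:", round(power_fraction, 3))
```

Because the silent neurons drop out of the response operator, changing the activation configuration (`active` above) changes which perturbation pattern is amplified most, which is the state dependence the abstract refers to.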