{"title":"Probabilistic representations as building blocks for higher-level vision","authors":"Andrey Chetverikov, Arni Kristjansson","doi":"10.51628/001c.55730","DOIUrl":null,"url":null,"abstract":"Current theories of perception suggest that the brain represents features of the world as probability distributions, but can such uncertain foundations provide the basis for everyday vision? Perceiving objects and scenes requires knowing not just how features (e.g., colors) are distributed but also where they are and which other features they are combined with. Using a Bayesian computational model, we recovered probabilistic representations used by human observers to search for odd stimuli among distractors. Importantly, we found that the brain integrates information between feature dimensions and spatial locations, leading to more precise representations compared to when information integration is not possible. We also uncovered representational asymmetries and biases, showing their spatial organization and explain how this structure argues against “summary statistics” accounts of visual representations. Our results confirm that probabilistically encoded visual features are bound with other features and to particular locations, providing a powerful demonstration of how probabilistic representations can be a foundation for higher-level vision.","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurons, behavior, data analysis and theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.51628/001c.55730","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Current theories of perception suggest that the brain represents features of the world as probability distributions, but can such uncertain foundations provide the basis for everyday vision? Perceiving objects and scenes requires knowing not only how features (e.g., colors) are distributed but also where they are located and which other features they are combined with. Using a Bayesian computational model, we recovered the probabilistic representations that human observers use to search for odd stimuli among distractors. Importantly, we found that the brain integrates information across feature dimensions and spatial locations, yielding more precise representations than when such integration is not possible. We also uncovered representational asymmetries and biases, showing their spatial organization, and we explain how this structure argues against “summary statistics” accounts of visual representations. Our results confirm that probabilistically encoded visual features are bound to other features and to particular locations, providing a powerful demonstration of how probabilistic representations can serve as a foundation for higher-level vision.
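For intuition, here is a minimal Python sketch of the kind of Bayesian observer the abstract alludes to: an agent that learns the distribution of distractor features over trials via conjugate normal updating and is predicted to find an odd-one-out target more slowly the more it resembles the learned distractor distribution. The normal-distribution assumption, the function names, and all parameter values are hypothetical illustrations, not the authors' published model.

```python
import numpy as np

# Hypothetical illustration, NOT the authors' published model: a minimal
# Bayesian observer for odd-one-out search. It learns the mean of a normal
# distractor feature distribution and predicts response times (RTs) from
# how much the target resembles learned distractors.

def update_normal_posterior(prior_mu, prior_var, samples, noise_var):
    """Conjugate normal-normal update of the belief about the mean of the
    distractor feature distribution, assuming known noise variance."""
    n = len(samples)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mu = post_var * (prior_mu / prior_var + np.sum(samples) / noise_var)
    return post_mu, post_var

def predicted_rt(target, post_mu, post_var, noise_var, base_rt=0.5, gain=0.3):
    """RT grows with the target's similarity to the posterior predictive
    distribution over distractor features: targets that look like plausible
    distractors are found more slowly."""
    pred_var = post_var + noise_var  # posterior predictive variance
    # Similarity in [0, 1]; equals 1 when the target sits at the learned mean.
    similarity = np.exp(-0.5 * (target - post_mu) ** 2 / pred_var)
    return base_rt + gain * similarity

# Example: distractor hues drawn from N(100, 15^2) over a block of 50 trials.
rng = np.random.default_rng(0)
distractors = rng.normal(100.0, 15.0, size=50)
mu, var = update_normal_posterior(prior_mu=0.0, prior_var=1e4,
                                  samples=distractors, noise_var=15.0 ** 2)
for t in (100.0, 130.0, 160.0):  # targets increasingly unlike the distractors
    rt = predicted_rt(t, mu, var, noise_var=15.0 ** 2)
    print(f"target={t:5.1f}  predicted RT={rt:.3f} s")
```

Note that this sketch runs in the forward direction, from an assumed distractor distribution to predicted search times; the modeling described in the abstract works in the inverse direction, recovering observers' internal probabilistic representations from their responses.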