{"title":"测试集成编码的灵活性:跨模态集成感知的局限性。","authors":"Greer Gillies, Keisuke Fukuda, Jonathan S Cant","doi":"10.1037/xge0001470","DOIUrl":null,"url":null,"abstract":"<p><p>Ensemble coding (the brain's ability to rapidly extract summary statistics from groups of items) has been demonstrated across a range of low-level (e.g., average color) to high-level (e.g., average facial expression) visual features, and even on information that cannot be gleaned solely from retinal input (e.g., object lifelikeness). There is also evidence that ensemble coding can interact with other cognitive systems such as long-term memory (LTM), as observers are able to derive the average cost of items. We extended this line of research to examine if different sensory modalities can interact during ensemble coding. Participants made judgments about the average sweetness of groups of different visually presented foods. We found that, when viewed simultaneously, observers were limited in the number of items they could incorporate into their cross-modal ensemble percepts. We speculate that this capacity limit is caused by the cross-modal translation of visual percepts into taste representations stored in LTM. This was supported by findings that (a) participants could use similar stimuli to form capacity-unlimited ensemble representations of average screen size and (b) participants could extract the average sweetness of displays when items were viewed in sequence, with no capacity limitation (suggesting that spatial attention constrains the number of necessary visual cues an observer can integrate in a given moment to trigger cross-modal retrieval of taste). Together, the results of our study demonstrate that there are limits to the flexibility of ensemble coding, especially when multiple cognitive systems need to interact to compress sensory information into an ensemble representation. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":15698,"journal":{"name":"Journal of Experimental Psychology: General","volume":" ","pages":"56-69"},"PeriodicalIF":3.7000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Testing the flexibility of ensemble coding: Limitations in cross-modal ensemble perception.\",\"authors\":\"Greer Gillies, Keisuke Fukuda, Jonathan S Cant\",\"doi\":\"10.1037/xge0001470\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Ensemble coding (the brain's ability to rapidly extract summary statistics from groups of items) has been demonstrated across a range of low-level (e.g., average color) to high-level (e.g., average facial expression) visual features, and even on information that cannot be gleaned solely from retinal input (e.g., object lifelikeness). There is also evidence that ensemble coding can interact with other cognitive systems such as long-term memory (LTM), as observers are able to derive the average cost of items. We extended this line of research to examine if different sensory modalities can interact during ensemble coding. Participants made judgments about the average sweetness of groups of different visually presented foods. We found that, when viewed simultaneously, observers were limited in the number of items they could incorporate into their cross-modal ensemble percepts. We speculate that this capacity limit is caused by the cross-modal translation of visual percepts into taste representations stored in LTM. 
This was supported by findings that (a) participants could use similar stimuli to form capacity-unlimited ensemble representations of average screen size and (b) participants could extract the average sweetness of displays when items were viewed in sequence, with no capacity limitation (suggesting that spatial attention constrains the number of necessary visual cues an observer can integrate in a given moment to trigger cross-modal retrieval of taste). Together, the results of our study demonstrate that there are limits to the flexibility of ensemble coding, especially when multiple cognitive systems need to interact to compress sensory information into an ensemble representation. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>\",\"PeriodicalId\":15698,\"journal\":{\"name\":\"Journal of Experimental Psychology: General\",\"volume\":\" \",\"pages\":\"56-69\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Experimental Psychology: General\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1037/xge0001470\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/9/21 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Experimental Psychology: General","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/xge0001470","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/9/21 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Testing the flexibility of ensemble coding: Limitations in cross-modal ensemble perception.
Ensemble coding (the brain's ability to rapidly extract summary statistics from groups of items) has been demonstrated across a range of low-level (e.g., average color) to high-level (e.g., average facial expression) visual features, and even on information that cannot be gleaned solely from retinal input (e.g., object lifelikeness). There is also evidence that ensemble coding can interact with other cognitive systems such as long-term memory (LTM), as observers are able to derive the average cost of items. We extended this line of research to examine if different sensory modalities can interact during ensemble coding. Participants made judgments about the average sweetness of groups of different visually presented foods. We found that, when viewed simultaneously, observers were limited in the number of items they could incorporate into their cross-modal ensemble percepts. We speculate that this capacity limit is caused by the cross-modal translation of visual percepts into taste representations stored in LTM. This was supported by findings that (a) participants could use similar stimuli to form capacity-unlimited ensemble representations of average screen size and (b) participants could extract the average sweetness of displays when items were viewed in sequence, with no capacity limitation (suggesting that spatial attention constrains the number of necessary visual cues an observer can integrate in a given moment to trigger cross-modal retrieval of taste). Together, the results of our study demonstrate that there are limits to the flexibility of ensemble coding, especially when multiple cognitive systems need to interact to compress sensory information into an ensemble representation. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Journal description:
The Journal of Experimental Psychology: General publishes articles describing empirical work that bridges the traditional interests of two or more communities of psychology. The work may touch on issues dealt with in JEP: Learning, Memory, and Cognition, JEP: Human Perception and Performance, JEP: Animal Behavior Processes, or JEP: Applied, but may also concern issues in other subdisciplines of psychology, including social processes, developmental processes, psychopathology, neuroscience, or computational modeling. Articles in JEP: General may be longer than the usual journal publication if necessary, but shorter articles that bridge subdisciplines will also be considered.