Phonemic representation of narrative speech in human cerebral cortex
Xue L Gong, Alexander G. Huth, F. Theunissen
2022 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2022.1304-0
A goal-driven Deep Reinforcement Learning Model Predicts Neural Representations Related to Human Visuomotor Control
Jong-Chun Lim, Sungbeen Park, Sungshin Kim
2022 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2022.1180-0
Dynamical Models of Decision Confidence in Visual Perception: Implementation and Comparison
Sebastian Hellmann, Michael Zehetleitner, Manuel Rausch
2022 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2022.1079-0
Efficiency of object recognition networks on an absolute scale
R. Murray, Devin Kehoe
2022 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2022.1156-0
Abstract: Deep neural networks have made rapid advances in object recognition, but progress has mostly been made through experimentation, with little guidance from normative theories. Here we use ideal observer theory and associated methods to compare current network performance to theoretical limits. We measure network performance and ideal observer performance on a modified ImageNet task, where model observers view samples from a limited number of object categories at several levels of external white Gaussian noise. We find that although current networks achieve 90% accuracy or better on the standard ImageNet task, the ideal observer performs vastly better on the more limited task we consider here. The networks' "calculation efficiency", a measure of the extent to which they use all available information to perform a task, is on the order of 10⁻⁵, an exceedingly small value. We consider reasons why efficiency may be so low, and outline further uses of ideal observers and noise methods to understand network performance.
Factorized convolution models for interpreting neuron-guided images synthesis
Binxu Wang, Carlos R. Ponce
2022 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2022.1034-0
The neurobiology of strategic competition
Yaoguang Jiang, M. Platt
2022 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2022.1270-0
Representation learning facilitates different levels of generalization
Fabian M. Renz, Shany Grossman, P. Dayan, Christian F. Doeller, Nicolas W. Schuck
2022 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2022.1126-0
Abstract: Cognitive maps represent relational structures and are taken to be important for generalization and optimal decision making in spatial as well as non-spatial domains. While many studies have investigated the benefits of cognitive maps, how these maps are learned from experience has remained less clear. We introduce a new graph-structured sequence task to better understand how cognitive maps are learned. Participants observed sequences of episodes followed by a reward, thereby learning about the underlying transition structure and fluctuating reward contingencies. Importantly, the task structure allowed participants to generalize value from some episode sequences to others, and generalizability was either signaled by episode similarity or had to be inferred more indirectly. Behavioral data demonstrated participants' ability to learn about signaled and unsignaled generalizability at different speeds, indicating that the formation of cognitive maps partially relies on exploiting observable similarities across episodes. We hypothesize that experience replay is a possible neural mechanism involved in learning cognitive maps as described here.
Contextual Influences on the Perception of Motion and Depth
Zhe-Xin Xu, G. DeAngelis
2022 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2022.1044-0