Latest publications from the 2022 Conference on Cognitive Computational Neuroscience

Phonemic representation of narrative speech in human cerebral cortex
DOI: 10.32470/ccn.2022.1304-0
Xue L Gong, Alexander G. Huth, F. Theunissen
Citations: 0
A goal-driven Deep Reinforcement Learning Model Predicts Neural Representations Related to Human Visuomotor Control
DOI: 10.32470/ccn.2022.1180-0
Jong-Chun Lim, Sungbeen Park, Sungshin Kim
Citations: 0
Generalization Demands Task-Appropriate Modular Neural Architectures
DOI: 10.32470/ccn.2022.1119-0
Ruiyi Zhang, X. Pitkow, D. Angelaki
Citations: 0
Dynamical Models of Decision Confidence in Visual Perception: Implementation and Comparison
DOI: 10.32470/ccn.2022.1079-0
Sebastian Hellmann, Michael Zehetleitner, Manuel Rausch
Citations: 0
Efficiency of object recognition networks on an absolute scale
DOI: 10.32470/ccn.2022.1156-0
R. Murray, Devin Kehoe
Deep neural networks have made rapid advances in object recognition, but progress has mostly been made through experimentation, with little guidance from normative theories. Here we use ideal observer theory and associated methods to compare current network performance to theoretical limits on performance. We measure network performance and ideal observer performance on a modified ImageNet task, where model observers view samples from a limited number of object categories, at several levels of external white Gaussian noise. We find that although current networks achieve 90% performance or better on the standard ImageNet task, the ideal observer performs vastly better on the more limited task we consider here. The networks' "calculation efficiency", a measure of the extent to which they use all available information to perform a task, is on the order of 10⁻⁵, an exceedingly small value. We consider reasons why efficiency may be so low, and outline further uses of ideal observers and noise methods to understand network performance.
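In ideal observer analysis, "calculation efficiency" is conventionally defined as the squared ratio of a model observer's sensitivity (d′) to the ideal observer's sensitivity under the same stimulus and noise conditions. The abstract does not give its exact computation, so the sketch below uses this standard definition, with hypothetical d′ values chosen only to illustrate how an efficiency near the reported 10⁻⁵ order arises:

```python
from statistics import NormalDist

def dprime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' from hit and false-alarm rates (inverse normal CDF)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def calculation_efficiency(dprime_observer: float, dprime_ideal: float) -> float:
    """Squared ratio of observer sensitivity to ideal sensitivity at the same noise level."""
    return (dprime_observer / dprime_ideal) ** 2

# Hypothetical numbers: a network far below the ideal observer.
eta = calculation_efficiency(dprime_observer=0.0095, dprime_ideal=3.0)
print(f"{eta:.1e}")  # prints 1.0e-05
```

An efficiency of 1 would mean the model uses all the information an ideal observer does; values shrink quadratically as sensitivity falls, which is why even a modest d′ gap yields an exceedingly small efficiency.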
Citations: 0
Factorized convolution models for interpreting neuron-guided images synthesis
DOI: 10.32470/ccn.2022.1034-0
Binxu Wang, Carlos R. Ponce
Citations: 1
The neurobiology of strategic competition
DOI: 10.32470/ccn.2022.1270-0
Yaoguang Jiang, M. Platt
Citations: 0
Representation learning facilitates different levels of generalization
DOI: 10.32470/ccn.2022.1126-0
Fabian M. Renz, Shany Grossman, P. Dayan, Christian F. Doeller, Nicolas W. Schuck
Cognitive maps represent relational structures and are taken to be important for generalization and optimal decision making in spatial as well as non-spatial domains. While many studies have investigated the benefits of cognitive maps, how these maps are learned from experience has remained less clear. We introduce a new graph-structured sequence task to better understand how cognitive maps are learned. Participants observed sequences of episodes followed by a reward, thereby learning about the underlying transition structure and fluctuating reward contingencies. Importantly, the task structure allowed participants to generalize value from some episode sequences to others, and generalizability was either signaled by episode similarity or had to be inferred more indirectly. Behavioral data demonstrated participants' ability to learn about signaled and unsignaled generalizability at different speeds, indicating that the formation of cognitive maps partially relies on exploiting observable similarities across episodes. We hypothesize that a possible neural mechanism involved in learning cognitive maps as described here is experience replay.
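The experience replay mechanism hypothesized in the abstract above is, in reinforcement learning terms, the offline re-use of stored transitions to propagate value through a transition structure. The authors do not publish a model here, so the following is only a generic sketch of replay with TD(0) value updates on a toy episode sequence (all parameters and state names are hypothetical):

```python
import random
from collections import defaultdict, deque

random.seed(0)

ALPHA, GAMMA = 0.5, 0.9          # learning rate and discount (arbitrary choices)
values = defaultdict(float)      # state-value estimates, default 0
buffer = deque(maxlen=1000)      # replay buffer of (state, reward, next_state)

def observe(state, reward, next_state):
    """Store one experienced transition in the replay buffer."""
    buffer.append((state, reward, next_state))

def replay(n_updates=200):
    """Offline replay: sample stored transitions and apply TD(0) updates."""
    for _ in range(n_updates):
        s, r, s_next = random.choice(buffer)
        td_target = r + GAMMA * values[s_next]
        values[s] += ALPHA * (td_target - values[s])

# Toy episode sequence A -> B -> C with reward only at the end.
observe("A", 0.0, "B")
observe("B", 0.0, "C")
observe("C", 1.0, "terminal")
replay()
# After replay, value propagates backward: V(C) > V(B) > V(A) > 0,
# even though A and B were never directly rewarded.
```

The point of the sketch is the qualitative behavior the hypothesis relies on: replaying transitions lets value reach states that were never directly rewarded, which is one way a relational map could support generalization across episode sequences.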
Citations: 0
Beyond task-optimized neural models: constraints from embodied cognition
DOI: 10.32470/ccn.2022.1154-0
Kaushik J. Lakshminarasimhan, Akis Stavropoulos, D. Angelaki
Citations: 0
Contextual Influences on the Perception of Motion and Depth
DOI: 10.32470/ccn.2022.1044-0
Zhe-Xin Xu, G. DeAngelis
Citations: 0