Functional connectivity of fractal and oscillatory cortical activity is distinct
Andrea Ibarra Chaoul, M. Siegel
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1341-0

Electrophysiological signals of cortical population activity contain oscillatory and fractal (1/frequency) components. However, the relationship between these components is unclear. To address this, we investigated human resting-state MEG recordings. We applied combined source analysis, signal orthogonalization and irregular-resampling autospectral analysis (IRASA) to separate oscillatory and fractal components of the MEG signals at the cortical source level. We then compared the spatial correlation structure of fractal and oscillatory components across the human cortex. We found that these correlation structures differed, which suggests different mechanisms underlying fractal and oscillatory population signal components.

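For readers unfamiliar with IRASA, its core trick is that resampling a signal by non-integer factors h and 1/h shifts oscillatory peaks in frequency while leaving the fractal 1/f component in place, so the median across resampling factors isolates the fractal part. A minimal numpy sketch; the resampling factors, segment length, and toy test signal are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def _psd(sig, fs, nfft=1024):
    # Welch-style average periodogram over non-overlapping Hann-windowed segments
    win = np.hanning(nfft)
    segs = [sig[i:i + nfft] for i in range(0, len(sig) - nfft + 1, nfft)]
    spec = np.mean([np.abs(np.fft.rfft(win * s)) ** 2 for s in segs], axis=0)
    return np.fft.rfftfreq(nfft, 1.0 / fs), spec / (fs * (win ** 2).sum())

def irasa(x, fs, hset=(1.1, 1.3, 1.5, 1.7, 1.9)):
    # Resampling by h stretches/compresses time, so an oscillatory peak moves
    # to f/h (upsampled) or f*h (downsampled), while a 1/f spectrum keeps its
    # shape; the median of the geometric-mean spectra is the fractal part.
    freqs, psd = _psd(np.asarray(x, float), fs)
    n = len(x)
    geo = []
    for h in hset:
        up = np.interp(np.arange(0, n, 1.0 / h), np.arange(n), x)
        down = np.interp(np.arange(0, n, h), np.arange(n), x)
        geo.append(np.sqrt(_psd(up, fs)[1] * _psd(down, fs)[1]))
    fractal = np.median(geo, axis=0)
    return freqs, fractal, psd - fractal  # difference = oscillatory part

# toy signal: Brownian (1/f^2) noise plus a 10 Hz rhythm
rng = np.random.default_rng(0)
fs = 500.0
t = np.arange(4096) / fs
x = np.cumsum(rng.standard_normal(4096)) + np.sin(2 * np.pi * 10.0 * t)
freqs, fractal, oscillatory = irasa(x, fs)
```

On this toy signal the 10 Hz peak should remain in `oscillatory`, while `fractal` tracks the smooth 1/f² floor.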
Stopping actions by suppressing striatal plateau potentials
M. M. Nejad, Daniel Trpevski, J. Kotaleski, R. Schmidt
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1205-0

Striatal projection neurons (SPNs) in the basal ganglia gradually increase their firing rate during movement initiation. Arkypallidal neurons in the globus pallidus briefly increase their firing rate upon a Stop signal, which cues movement cancellation. This increase potentially suppresses movement-related activity in the striatum by inhibiting SPNs. However, this brief inhibition from arkypallidal neurons may be too short to completely prevent the gradual firing-rate increase in SPNs. Here, we investigated the impact of the brief inhibition on the gradual increase in a multi-compartmental model of an SPN. We reproduced the movement-related firing pattern in the SPN model neuron with brief clustered excitation added to a baseline, subthreshold excitation. This clustered excitation evoked a dendritic plateau potential leading to a long-lasting depolarization at the soma, which enhanced somatic excitability and evoked spikes upon the baseline excitation that was formerly subthreshold. A brief inhibition, representing arkypallidal Stop responses, applied at the dendritic site where the clustered excitation was present, suppressed the somatic depolarization and attenuated the movement-related activity, similar to the firing pattern observed in rats during successful action suppression. We conclude that arkypallidal Stop responses can suppress movement-related activity in the striatum by suppressing dendritic plateau potentials.

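The mechanism described above (clustered excitation opening a long-lasting plateau that lifts otherwise-subthreshold baseline input past threshold, and a brief, well-timed inhibition collapsing it) can be caricatured in a few lines. All constants below are arbitrary illustrative choices, not parameters of the authors' multi-compartmental model:

```python
def simulate(stop_inhibition, T=1000):
    """Toy SPN: a brief burst of clustered dendritic excitation charges a
    slowly decaying plateau variable; the soma fires whenever baseline drive
    plus the plateau crosses threshold. A brief 'arkypallidal' inhibition
    during the plateau collapses it and aborts the movement-related firing."""
    plateau = 0.0
    spikes = 0
    for t in range(T):
        exc = 1.0 if 200 <= t < 220 else 0.0            # brief clustered excitation
        inh = stop_inhibition and 250 <= t < 270        # brief Stop inhibition
        plateau += -plateau / 300.0 + 0.5 * exc         # slow decay, fast charge
        plateau = min(plateau, 1.0)
        if inh:
            plateau = 0.0            # inhibition collapses the plateau
        v_soma = 0.55 + plateau      # baseline alone stays subthreshold
        if v_soma > 1.0:             # threshold
            spikes += 1
    return spikes

spikes_go = simulate(stop_inhibition=False)
spikes_stop = simulate(stop_inhibition=True)
```

The Stop run fires only during the initial plateau, mirroring the attenuated movement-related activity in the abstract.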
Visual representations supporting category-specific information about visual objects in the brain
Simon Faghel-Soubeyrand, Arjen Alink, E. Bamps, F. Gosselin, I. Charest
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1404-0

In recent years, multivariate pattern analysis (“decoding”) approaches have been increasingly used to investigate “when” and “where” our brains compute meaningful information about their visual environments. Studies using time-resolved decoding of M/EEG patterns have described numerous processes, such as object/face familiarity and the emergence of basic-to-abstract category information. Surprisingly, no study has, to our knowledge, revealed “what” (i.e., the actual visual information) our brain uses while these computations are examined by decoding algorithms. Here, we revealed the time course over which our brain extracts realistic category-specific information about visual objects (i.e., emotion-type and gender information from faces) with time-resolved decoding of high-density EEG patterns, using carefully controlled tasks and visual stimulation. We then derived temporal generalization matrices and showed that category-specific information is (1) first diffused across brain areas (250 to 350 ms) and (2) then encoded in a stable neural pattern suggestive of evidence accumulation (350 to 650 ms after face onset). Finally, we bridged time-resolved decoding with psychophysics and revealed the specific visual information (spatial frequency, feature position and orientation information) that supports these brain computations. In doing so, we uncovered interconnected dynamics between visual features and the accumulation and diffusion of category-specific information in the brain.

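Temporal generalization matrices of the kind derived above are built by training a decoder at each time point and testing it at every other; a sustained square of high accuracy signals a stable neural pattern. A minimal numpy sketch with a nearest-class-mean decoder on synthetic "EEG" data (the decoder and data are stand-ins chosen for self-containedness, not the authors' pipeline):

```python
import numpy as np

def temporal_generalization(X_train, y_train, X_test, y_test):
    """King & Dehaene-style temporal generalization. X: (trials, channels,
    times), y: binary labels. Returns tg[i, j] = accuracy when training a
    nearest-class-mean decoder at time i and testing it at time j."""
    n_times = X_train.shape[2]
    tg = np.zeros((n_times, n_times))
    for ti in range(n_times):
        m0 = X_train[y_train == 0, :, ti].mean(axis=0)
        m1 = X_train[y_train == 1, :, ti].mean(axis=0)
        for tj in range(n_times):
            Z = X_test[:, :, tj]
            pred = (np.linalg.norm(Z - m1, axis=1)
                    < np.linalg.norm(Z - m0, axis=1)).astype(int)
            tg[ti, tj] = np.mean(pred == y_test)
    return tg

# synthetic data: a class difference appears in channel 0 from time 5 onward
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4, 10))
y = rng.integers(0, 2, 200)
X[y == 1, 0, 5:] += 3.0
tg = temporal_generalization(X[:100], y[:100], X[100:], y[100:])
```

Because the synthetic pattern is stable from time 5 on, decoders trained late generalize across late time points (high off-diagonal accuracy), while early time points decode at chance.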
The orbitofrontal cortex as a negative feedback control system: computational modeling and fMRI
N. Zarr, Joshua W. Brown
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1070-0

In this work we address two interrelated issues. First, the computational roles of the orbitofrontal cortex (OFC) and hippocampus in value-based decision-making have been unclear, with various proposed roles in value representation, cognitive maps, and prospection. Second, reinforcement learning models have been slow to adapt to more general problems in which the reward values of states may change over time, thus requiring different Q values for a given state at different times. We have developed a model of artificial general intelligence that treats much of the brain as a high-dimensional control system in the framework of control theory. We show with computational modeling and combined fMRI and representational similarity analysis (RSA) that the model can autonomously learn to solve problems, and that it provides a clear computational account of how a number of brain regions, particularly the OFC, interact to guide behavior to achieve arbitrary goals.

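The control-theoretic idea above, behavior as negative feedback that drives the discrepancy between a goal state and the current state to zero so that arbitrary or changing goals need no re-learned Q values, reduces to a very small loop. A toy proportional-control sketch, not the model itself; the gain and step count are arbitrary:

```python
import numpy as np

def reach_goal(goal, state=0.0, gain=0.3, steps=50):
    """Toy negative-feedback controller: an error signal (goal minus current
    state) drives actions that shrink the error, so any goal value works
    without relearning. Returns the trajectory of the controlled state."""
    trace = []
    for _ in range(steps):
        error = goal - state     # discrepancy signal (the 'OFC' quantity here)
        state += gain * error    # action moves the state toward the goal
        trace.append(state)
    return np.array(trace)

trace1 = reach_goal(5.0)               # converge to one arbitrary goal...
trace2 = reach_goal(-2.0, state=3.0)   # ...and to another, with no retraining
```

The same controller reaches either goal, which is the contrast with state-value (Q) caching that the abstract draws.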
NMDA-Receptor Dysfunction Disrupts Serial Biases in Spatial Working Memory
H. Stein, Joao Barbosa, J. Dalmau, A. Compte
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1304-0

In working memory (WM) tasks, attractive biases toward previous items are evidence for continuous temporal integration of memories. These serial biases have been modeled as a product of synaptic short-term plasticity, which allows WM representations to endure in a synaptic trace and interfere with the next trial even when neural activity returns to baseline. We hypothesized that the NMDAR, a key component of both short-term potentiation (STP) and stable WM delay activity, would be of central importance to serial biases in a visuospatial WM task. Confirming this hypothesis, we found drastically reduced biases in patients with anti-NMDAR encephalitis and schizophrenia, two diseases that have been related to NMDAR hypofunction. We simulated serial biases in a spiking neural network supported by a Hebbian STP mechanism that builds up during persistent delay activity. We found a close correspondence between patient and model behavior when gradually lowering the level of STP, suggesting a disruption of short-term plasticity in associative cortices of schizophrenia and anti-NMDAR encephalitis patients. Further, we explored the capability of the model to explain reduced biases in light of the disinhibition theory of schizophrenia.

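At the behavioral level, the hypothesis predicts that the memorized location drifts toward the previously potentiated location at a rate set by the STP level, so lowering that level shrinks the serial bias. A deliberately coarse sketch of that prediction; the drift rate, delay length, and locations are arbitrary assumptions, and the actual result comes from a spiking network, not this equation:

```python
import numpy as np

def delayed_report(prev_loc, cur_loc, stp_level, steps=100, rate=0.001):
    """During the memory delay, the stored angle drifts toward the location
    potentiated on the previous trial, scaled by the STP level. Lowering
    'stp_level' mimics the hypothesized NMDAR-related STP disruption."""
    theta = cur_loc
    for _ in range(steps):
        # attraction toward the synaptic trace left at prev_loc
        theta += rate * stp_level * np.sin(prev_loc - theta)
    return theta

# previous item at 0 rad, current item at 0.5 rad
bias_control = delayed_report(0.0, 0.5, stp_level=1.0) - 0.5
bias_patient = delayed_report(0.0, 0.5, stp_level=0.2) - 0.5
```

Both biases are attractive (toward the previous item), but the low-STP "patient" bias is markedly smaller, the qualitative pattern reported in the abstract.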
How the Human Brain Solves the Symbol-Grounding Problem
Simone Viganò, V. Borghesani, M. Piazza
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1145-0

A fundamental issue in cognitive science is the so-called “symbol-grounding problem” (Harnad 1980): the question of how symbols acquire meaning. One simple view posits that, for concrete words, our brain solves the problem by creating associations between the neural representations of the surface forms of symbols (spoken or written words) and the one(s) evoked by the object, action, or event classes the symbols refer to (e.g., see Pulvermuller 2013; 2018). Evidence supporting this view comes from the observation that words related to well-known concepts such as numerical quantities (Piazza et al. 2007; Eger et al. 2009), colors (e.g., Simmons et al. 2007), manipulable objects (Chao et al. 1999), places (Kumar et al. 2017), or actions (Hauk 2004; 2011) automatically re-activate the same brain regions that are active during the perception/execution of those specific object features/actions. These data, however, are informative about the neural bases of symbol-grounded representations, but not about those underlying symbol grounding itself: (i) they fall short of assessing the role of memory systems implicated in this kind of symbol-to-concept associative learning, and (ii) they do not provide a full picture of the effects that symbol grounding has on the brain. Here, to investigate the neural changes generated by this process, we adopted an artificial learning paradigm in which 21 adult subjects learned to categorize novel multisensory objects by giving them specific symbolic labels.

How Aging Shapes Neural Representations of Space: fMRI Evidence for Broader Direction Tuning Functions in Older Adults
C. Koch, Shu-Chen Li, T. Polk, Nicolas W. Schuck
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1228-0

Human aging is characterized by losses in spatial cognition as well as reductions in the distinctiveness of category-specific fMRI activation patterns. One mechanism linking these two phenomena could be that broader neural tuning functions lead to more signal confusions when tuning-based representations of walking direction are read out. To test this idea, we developed a novel method that allowed us to investigate changes in fMRI-measured pattern similarity while participants navigated in different directions in a virtual spatial navigation task. We expected that adjacent directions are represented more similarly within direction-sensitive brain areas, reflecting a tuning-function-like signal. Importantly, heightened similarity might lead downstream areas to confuse neighboring directions more often. We therefore analyzed the predictions of a decoder trained on these representations, asking (1) whether decoder confusions between two directions increased in proportion to their angular similarity, and (2) how this differs between age groups. Evidence for tuning-function-like signals was found in the retrosplenial complex and primary visual cortex. Significant age differences in tuning width, however, were found only in the primary visual cortex. Our findings introduce a novel approach to measuring tuning specificity with fMRI and suggest that broader visual direction tuning in older adults might underlie age-related spatial navigation impairments.

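The tuning-width logic above can be checked in simulation: give model units von Mises direction tuning, decode direction by the nearest noise-free population template, and broader tuning should push decoder confusions onto neighboring directions. A toy sketch; unit counts, noise level, and the kappa values are illustrative assumptions unrelated to the fMRI data:

```python
import numpy as np

def direction_confusions(kappa, n_units=32, trials=400, noise=1.0, seed=0):
    """8 walking directions, a population of von Mises-tuned units (smaller
    kappa = broader tuning), nearest-template decoding of noisy responses.
    Returns an 8x8 confusion matrix of decoded vs true direction."""
    rng = np.random.default_rng(seed)
    prefs = np.linspace(0.0, 2 * np.pi, n_units, endpoint=False)
    dirs = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
    templates = np.exp(kappa * np.cos(dirs[:, None] - prefs[None, :]))
    conf = np.zeros((8, 8))
    for i in range(8):
        resp = templates[i] + noise * rng.standard_normal((trials, n_units))
        d2 = ((resp[:, None, :] - templates[None, :, :]) ** 2).sum(axis=-1)
        for p in d2.argmin(axis=1):
            conf[i, p] += 1.0 / trials
    return conf

def neighbor_rate(c):
    # average confusion mass landing on the two adjacent directions
    return sum(c[i, (i + 1) % 8] + c[i, (i - 1) % 8] for i in range(8)) / 8

conf_broad = direction_confusions(kappa=0.5)   # 'older adult'-like tuning
conf_sharp = direction_confusions(kappa=4.0)   # sharp tuning
```

Broad tuning lowers diagonal accuracy and concentrates the errors on angular neighbors, which is exactly the confusion-vs-angular-distance signature the study tests for.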
Novel Object Scale Differences in Deep Convolutional Neural Networks versus Human Object Recognition Areas
Astrid Zeman, C. V. Meel, H. O. D. Beeck
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1075-0

Deep Convolutional Neural Networks (CNNs) are lauded for their high accuracy in object classification, as well as their striking similarity to human brain activity and behaviour. Both humans and CNNs maintain high classification accuracy despite changes in the scale, rotation, and translation of objects. In this study, we present images of novel objects at different scales and compare representational similarity in the human brain versus CNNs. We measure human fMRI responses in primary visual cortex (V1) and the object-selective lateral occipital complex (LOC). We also measure the internal representations of CNNs that have been trained for large-scale object recognition. Novel objects lack consensus on their name and identity, and therefore do not clearly belong to any specific object category. These novel objects are individuated in LOC, but not in V1. V1 and LOC both significantly represent size and pixel information. In contrast, the late layers of CNNs are able to individuate objects but do not retain size information. Thus, while the human brain and CNNs are both able to recognise objects despite changes to their size, only the human brain retains this size information throughout the later stages of information processing.

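The comparison reported above is representational: does a representation's dissimilarity structure track object identity, object size, or both? A bare-bones RSA sketch with hypothetical "early" (size-dominated, V1-like) and "late" (identity-coded, size-invariant) feature sets, purely to illustrate the logic; none of these features come from the study:

```python
import numpy as np

def rdm_model_corr(features, labels):
    """Correlate a representation's RDM (pairwise Euclidean distances over
    conditions) with a binary model RDM that is 1 where labels differ."""
    labels = np.asarray(labels)
    iu = np.triu_indices(len(labels), k=1)
    rdm = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)[iu]
    model = (labels[:, None] != labels[None, :]).astype(float)[iu]
    return np.corrcoef(rdm, model)[0, 1]

# 4 hypothetical novel objects, each shown at 3 sizes
obj = np.repeat(np.arange(4), 3)
size = np.tile(np.arange(3), 4)
early = np.stack([size.astype(float), size + 0.1 * obj], axis=1)  # size-dominated
late = np.eye(4)[obj]                       # individuates objects, discards size
```

Here `late` correlates almost perfectly with the identity model and not with the size model, mirroring the late-CNN-layer result, while `early` shows the opposite pattern.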
Models of allocentric coding for reaching in naturalistic visual scenes
Parisa Abedi Khoozani, Paul R. Schrater, Dominik M. Endres, K. Fiehler, Gunnar Blohm
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1372-0

To reach for objects, humans rely on the positions of target objects relative to surrounding objects (allocentric) as well as to their own bodies (egocentric). Previous studies demonstrated that scene configuration and object relevancy to the task modulate the combination weights of allocentric and egocentric information. Egocentric coding for reaching has been studied extensively; however, how allocentric information is coupled and used in reaching is unknown. Using a computational approach, we show that clustering mechanisms for allocentric coding, combined with causal Bayesian integration of allocentric and egocentric information, can account for the observed reaching behavior. To further characterize allocentric coding, we propose two strategies: global versus distributed landmark clustering (GLC vs. DLC). Both models can replicate the current data, but each has distinct implications. GLC efficiently encodes the scene relative to a single virtual reference but loses all local structure information. By contrast, DLC stores more redundant inter-object relationship information and is consequently more sensitive to changes in the scene. Further experiments must differentiate between the two proposed strategies.

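The causal Bayesian integration invoked above combines reliability-weighted cue fusion with a judgment of whether the landmark is relevant to the target at all. A minimal sketch of those two ingredients; the mixture-with-egocentric-fallback form and all numbers are illustrative assumptions, not the paper's full model:

```python
def integrate(ego, allo, sig_e, sig_a, p_same=0.8):
    """Fuse egocentric and allocentric (landmark-based) position estimates
    with inverse-variance reliability weights, then discount the fused
    estimate by the probability p_same that the landmark is actually
    relevant; otherwise fall back on the egocentric estimate alone."""
    w = (1.0 / sig_e ** 2) / (1.0 / sig_e ** 2 + 1.0 / sig_a ** 2)
    fused = w * ego + (1.0 - w) * allo
    return p_same * fused + (1.0 - p_same) * ego

est_equal = integrate(0.0, 1.0, sig_e=1.0, sig_a=1.0, p_same=1.0)   # even split
est_landmark = integrate(0.0, 1.0, sig_e=1.0, sig_a=0.1, p_same=1.0)  # trust landmark
est_unrelated = integrate(0.0, 1.0, sig_e=1.0, sig_a=0.1, p_same=0.0)  # ignore it
```

This captures how task relevance (via `p_same`) and cue reliability jointly modulate the allocentric weight, the behavioral effect the abstract describes.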
Pub Date : 1900-01-01 DOI: 10.32470/ccn.2019.1169-0
Krista Bond, Alexis Porter, T. Verstynen
Humans and other mammals flexibly select actions in noisy, uncertain contexts, quickly using feedback to adapt their decision policies to either explore other options or exploit what they know. Drawing inspiration from the plasticity of cortico-basal ganglia-thalamic circuitry, we recently developed a cognitive model of decision-making that uses both a value-driven learning signal to update an internal estimate of state action-value (i.e., conflict in the probability of reward between two choices) and a change-point-driven learning signal that adapts to changes in reward contingencies (i.e., a previously high-value target becoming devalued). In this work, we expand on previous results from our group (Bond, Dunovan, & Verstynen, 2018) to more carefully detail how these environmental signals drive changes in the decision process. Across nine separate behavioral testing sessions, we independently manipulated the level of value conflict and the volatility of action-outcome contingencies. Using a hierarchical drift diffusion model, we found that the belief in the value difference between options had the greatest influence on decision processes, impacting drift rate, while estimates of environmental change had a smaller, but detectable, influence on the decision threshold. Taken together, these findings bolster our previous work showing how separate environmental signals impact different aspects of the decision algorithm.
{"title":"A potential reset mechanism for the modulation of decision processes under uncertainty","authors":"Krista Bond, Alexis Porter, T. Verstynen","doi":"10.32470/ccn.2019.1169-0","DOIUrl":"https://doi.org/10.32470/ccn.2019.1169-0","url":null,"abstract":"Humans and other mammals flexibly select actions in noisy, uncertain contexts, quickly using feedback to adapt their decision policies to either explore other options or exploit what they know. Drawing inspiration from the plasticity of cortico-basal ganglia-thalamic circuitry, we recently developed a cognitive model of decision-making that uses both a value-driven learning signal to update an internal estimate of state action-value (i.e., conflict in the probability of reward between two choices) and a change-point-driven learning signal that adapts to changes in reward contingencies (i.e., a previously high-value target becoming devalued). In this work, we expand on previous results from our group (Bond, Dunovan, & Verstynen, 2018) to more carefully detail how these environmental signals drive changes in the decision process. Across nine separate behavioral testing sessions, we independently manipulated the level of value conflict and the volatility of action-outcome contingencies. Using a hierarchical drift diffusion model, we found that the belief in the value difference between options had the greatest influence on decision processes, impacting drift rate, while estimates of environmental change had a smaller, but detectable, influence on the decision threshold. Taken together, these findings bolster our previous work showing how separate environmental signals impact different aspects of the decision algorithm.","PeriodicalId":281121,"journal":{"name":"2019 Conference on Cognitive Computational Neuroscience","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130318531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
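The dissociation the abstract reports (value difference modulating drift rate, change-point estimates modulating the decision threshold) can be illustrated with a minimal drift-diffusion simulation. This is a generic Euler-scheme sketch with hypothetical parameter values, not the authors' hierarchical model:

```python
import numpy as np

def simulate_ddm(drift, threshold, n_trials=500, dt=0.001, noise=1.0, seed=0):
    """Euler-Maruyama simulation of a drift-diffusion process.

    On each trial, evidence accumulates at rate `drift` plus Gaussian
    noise until it crosses +threshold (correct choice) or -threshold.
    Returns the fraction of upper-bound crossings and the mean decision
    time in seconds.
    """
    rng = np.random.default_rng(seed)
    choices, rts = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices.append(x > 0)
        rts.append(t)
    return float(np.mean(choices)), float(np.mean(rts))

# A larger believed value difference (higher drift) yields faster, more
# accurate choices; a change-point signal would instead act on `threshold`.
acc_hi, rt_hi = simulate_ddm(drift=2.0, threshold=1.0)
acc_lo, rt_lo = simulate_ddm(drift=0.5, threshold=1.0)
```

In this framing, the two learning signals act on separate parameters: value conflict scales `drift`, while estimated environmental volatility would lower `threshold`, trading accuracy for faster adaptation after a change point.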