Bayesian inference for an exploration-exploitation model of human gaze control
Noa Malem-Shinitski, S. Seelig, S. Reich, Ralf Engbert
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1246-0
Understanding human gaze, and the saccadic selection process underlying it, is an important question in cognitive neuroscience, with many interesting applications in areas from psychology to computer vision. One way to advance our understanding is to develop generative models that capture the spatial interaction between fixations and the temporal structure of a sequence of fixations, known as a scanpath. Such models are scarce in the literature, and even fewer attempt to model inter-subject variability. In this work, we present a new parametric model for scanpath generation. We develop a discrete-time probabilistic generative model, with a Markovian structure, where at each step the next fixation location is selected using one of two strategies: exploitation or exploration. We implement efficient Bayesian inference for hyperparameter estimation using an HMC-within-Gibbs approach. Our model is able to capture inter-observer variability in terms of saccade length and direction, as demonstrated by fitting the model to a dataset of scanpaths from 35 subjects freely viewing 30 natural scene images.
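The two-strategy Markovian generative process described above can be sketched in a few lines. This is a minimal illustrative sampler, not the authors' model: the mixing probability `rho`, the Gaussian exploitation kernel of scale `sigma`, and the use of a saliency map are all assumptions made here for concreteness.

```python
import numpy as np

def next_fixation(current, saliency, rho=0.5, sigma=1.0, rng=None):
    """Sample the next fixation on a discrete grid.

    With probability rho, 'exploit': prefer locations near the current
    fixation (saliency weighted by a Gaussian around it). Otherwise
    'explore': sample from the saliency map alone. Parameter names are
    illustrative, not taken from the paper.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    if rng.random() < rho:  # exploitation: locally weighted saliency
        d2 = (ys - current[0]) ** 2 + (xs - current[1]) ** 2
        weights = saliency * np.exp(-d2 / (2 * sigma ** 2))
    else:                   # exploration: global saliency
        weights = saliency.copy()
    p = weights.ravel() / weights.sum()
    idx = rng.choice(h * w, p=p)
    return divmod(idx, w)   # (row, col) of the chosen location

# Generate a short scanpath on a toy 16x16 saliency map.
rng = np.random.default_rng(0)
sal = rng.random((16, 16))
path = [(8, 8)]
for _ in range(10):
    path.append(next_fixation(path[-1], sal, rho=0.7, sigma=2.0, rng=rng))
```

Fitting `rho` and `sigma` per observer (e.g. with the HMC-within-Gibbs scheme the abstract mentions) is what would capture inter-observer variability in saccade statistics.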
A mechanistic account of transferring structural knowledge across cognitive maps
Shirley Mark, R. Moran, Thomas Parr, S. Kennerley, Timothy Edward John Behrens
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1324-0
Animals can transfer knowledge that was learnt previously and infer when this knowledge is relevant. Frequently, the relations between elements in an environment or task follow a hidden underlying structure. We suggest that animals represent these underlying structures using abstract basis sets that generalize over particularities of the current environment, such as its stimuli and size. We show that this type of representation allows inference of important task states, correct behavioural policy and the existence of unobserved routes. We further conducted two experiments in which participants learned three maps over two successive days, and asked how the structural knowledge acquired during the first day affected participants' behaviour during the second day. In line with our model, we show that participants with a correct structural prior are able to infer the existence of unobserved routes and the appropriate behavioural policy, supporting the idea that abstract structural knowledge can be acquired and generalised across different cognitive maps.
A Calculus for Brain Computation
C. Papadimitriou, S. Vempala, Daniel Mitropolsky, Michael Collins, W. Maass, L. Abbott
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1381-0
Do brains compute? How do brains learn? How are intelligence and language achieved in the human brain? In this pursuit, we develop a formal calculus and associated programming language for brain computation, based on the assembly hypothesis, first proposed by Hebb: the basic unit of memory and computation in the brain is an assembly, a sparse distribution over neurons. We show that assemblies can be realized efficiently and neuroplausibly by using random projection, inhibition, and plasticity. Repeated applications of this RP&C primitive (random projection and cap) lead to (1) stable assembly creation through projection; (2) association and pattern completion; and finally (3) merge, where two assemblies form a higher-level assembly and, eventually, hierarchies. Further, these operations are composable, allowing the creation of stable computational circuits and structures. We argue that this functionality, in the presence of merge in particular, might underlie language and syntax in humans.
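The RP&C primitive named in the abstract above can be sketched directly: project a sparse activity pattern through random connectivity, then "cap" by letting only the top-k driven neurons fire, as a stand-in for inhibition. A minimal sketch without the plasticity step; area sizes, sparsity, and cap size are illustrative choices, not the paper's.

```python
import numpy as np

def rp_and_cap(x, W, k):
    """One RP&C step: random projection followed by a cap.

    x : binary activity vector of the source area.
    W : random sparse 0/1 projection matrix to the target area.
    k : cap size -- only the k most strongly driven neurons fire.
    """
    drive = W @ x
    winners = np.argsort(drive)[-k:]   # top-k survive the cap (inhibition)
    y = np.zeros(W.shape[0])
    y[winners] = 1.0
    return y

rng = np.random.default_rng(1)
n, k = 1000, 50
W = (rng.random((n, n)) < 0.05).astype(float)   # sparse random connectivity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0        # a k-sparse input assembly

y1 = rp_and_cap(x, W, k)
y2 = rp_and_cap(x, W, k)  # same input, no plasticity -> same target assembly
```

In the full calculus, Hebbian plasticity on the winning synapses is what makes the projected assembly stabilize across repeated presentations.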
Extreme Translation Tolerance in Humans and Machines
R. Blything, Ivan I. Vankov, Casimir J. H. Ludwig, J. Bowers
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1091-0
What mechanism supports our ability to recognize objects over a wide range of different retinal locations? Most research in psychology and neuroscience suggests that learning to identify a novel object at one retinal location only supports the ability to identify that object at nearby retinal locations, and to date, neural network models of object identification show a similar restriction in generalization. As a consequence, it is widely assumed that objects need to be learned at multiple locations. We challenge this view and show that the capacity to generalize across retinal locations (what we call on-line translation tolerance) has been underestimated in humans and artificial neural networks. Two eye-tracking studies demonstrate that novel objects can be recognized following translations of 9° and even 18°. Additionally, computational studies showed that convolutional neural networks can achieve similarly robust generalization when a mechanism (Global Average Pooling) was built in to generate larger receptive fields.
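The Global Average Pooling mechanism mentioned above has a simple core: each feature map is collapsed to a single average, discarding spatial position, so a feature detected anywhere contributes identically. A toy demonstration of that invariance (channel count and map size are arbitrary):

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse each HxW feature map (shape: channels x H x W) to one
    number by averaging, discarding spatial position entirely."""
    return feature_maps.mean(axis=(1, 2))

fm = np.zeros((3, 8, 8))
fm[:, 1, 1] = 1.0                            # a "feature" near the top-left
shifted = np.roll(fm, (5, 5), axis=(1, 2))   # same feature, translated

a = global_average_pool(fm)
b = global_average_pool(shifted)             # identical descriptor
```

In a real CNN the pooled descriptor feeds the classifier, which is why a GAP layer can confer translation tolerance that fully connected readouts over spatial maps lack.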
Models of allocentric coding for reaching in naturalistic visual scenes
Parisa Abedi Khoozani, Paul R. Schrater, Dominik M. Endres, K. Fiehler, Gunnar Blohm
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1372-0
To reach for objects, humans rely on the positions of target objects relative to surrounding objects (allocentric) as well as to their own bodies (egocentric). Previous studies demonstrated that scene configuration and object relevancy to the task modulate the combination weights of allocentric and egocentric information. Egocentric coding for reaching has been studied extensively; however, how allocentric information is coupled and used in reaching is unknown. Using a computational approach, we show that clustering mechanisms for allocentric coding, combined with causal Bayesian integration of allocentric and egocentric information, can account for the observed reaching behavior. To further understand allocentric coding, we propose two strategies: global vs. distributed landmark clustering (GLC vs. DLC). Both models can replicate the current data, but each has distinct implications. GLC efficiently encodes the scene relative to a single virtual reference but loses all local structure information. In contrast, DLC stores more redundant inter-object relationship information and is consequently more sensitive to changes in the scene. Further experiments must differentiate between the two proposed strategies.
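The causal Bayesian integration step in the abstract above can be illustrated with a generic 1-D cue-combination sketch. This is not the paper's model: the common-cause probability `p_same`, the inverse-variance weighting, and the model-averaging rule are standard ingredients of causal-inference cue combination, used here only to make the idea concrete.

```python
def integrate_cues(x_ego, var_ego, x_allo, var_allo, p_same=0.8):
    """Model-averaged position estimate from egocentric and allocentric
    cues (illustrative 1-D sketch, not the paper's model).

    If the cues share a cause (probability p_same), fuse them with
    reliability (inverse-variance) weights; otherwise fall back on the
    egocentric cue alone. The final estimate averages the two models.
    """
    w = (1 / var_ego) / (1 / var_ego + 1 / var_allo)
    fused = w * x_ego + (1 - w) * x_allo           # common-cause estimate
    return p_same * fused + (1 - p_same) * x_ego   # model averaging

# Equally reliable cues, fully trusted common cause -> midpoint estimate.
est = integrate_cues(x_ego=0.0, var_ego=1.0, x_allo=2.0, var_allo=1.0,
                     p_same=1.0)
```

Lowering `p_same` shifts the estimate back toward the egocentric cue, which is one way task relevancy could modulate the allocentric weight.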
Novel Object Scale Differences in Deep Convolutional Neural Networks versus Human Object Recognition Areas
Astrid Zeman, C. V. Meel, H. O. D. Beeck
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1075-0
Deep Convolutional Neural Networks (CNNs) are lauded for their high accuracy in object classification, as well as their striking similarity to the human brain and behaviour. Both humans and CNNs maintain high classification accuracy despite changes in the scale, rotation, and translation of objects. In this study, we present images of novel objects at different scales and compare representational similarity in the human brain versus CNNs. We measure human fMRI responses in primary visual cortex (V1) and the object-selective lateral occipital complex (LOC). We also measure the internal representations of CNNs that have been trained for large-scale object recognition. Novel objects lack consensus on their name and identity, and therefore do not clearly belong to any specific object category. These novel objects are individuated in LOC, but not V1. V1 and LOC both significantly represent size and pixel information. In contrast, the late layers of CNNs are able to individuate objects but do not retain size information. Thus, while the human brain and CNNs are both able to recognise objects in spite of changes to their size, only the human brain retains this size information throughout the later stages of information processing.
Using deep neural network features to predict voxelwise activity in ultra-high field fMRI
Rebekka Heinen, Lorena Deuker, Thomas Naselaris, N. Axmacher
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1165-0
Deep neural network features can be used to train encoding models that accurately predict brain activity in the visual cortex. Using these features together with ultra-high field fMRI could open up a new set of opportunities, ranging from human vision to areas such as learning and memory consolidation. Is it possible to apply encoding models based on deep neural network features to high-resolution fMRI data? We investigated this using the feature-weighted receptive field (fwrf) model on ultra-high field fMRI during a natural image viewing task. Applying the fwrf model to our data, we were able to predict brain activity along the ventral visual stream (VVS). In line with previous studies, we found a shift from low to high network layers when predicting brain activity in early visual areas compared to higher regions of the VVS. We conclude that encoding models based on neural network features can be applied to ultra-high field fMRI data, suggesting similar processing of visual scenes in neural networks and the human visual association cortex. Our results suggest that these models can be used to study not only vision but also other processes such as memory and imagination.
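The basic shape of such an encoding model is a regularized linear map from network-layer features to voxel responses. The sketch below is a plain ridge regression on synthetic data, far simpler than the feature-weighted receptive field model (which additionally learns a spatial pooling field per voxel), and all sizes and the regularization strength are arbitrary.

```python
import numpy as np

def fit_encoding_model(features, voxels, lam=1.0):
    """Ridge regression from features (n_images x n_features) to voxel
    responses (n_images x n_voxels); returns weights
    (n_features x n_voxels). A generic linearized encoding model."""
    X, Y = features, voxels
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# Synthetic check: recover a known linear feature-to-voxel mapping.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))          # "network features" per image
true_W = rng.standard_normal((10, 3))       # ground-truth weights, 3 voxels
Y = X @ true_W + 0.01 * rng.standard_normal((200, 3))

W = fit_encoding_model(X, Y, lam=0.1)
pred = X @ W
r = np.corrcoef(pred[:, 0], Y[:, 0])[0, 1]  # prediction accuracy, voxel 0
```

Fitting one such model per voxel and per network layer, then asking which layer predicts best, is what produces the low-to-high layer shift along the VVS reported above.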
Modeling the N400 brain potential as Semantic Bayesian Surprise
Lea Musiolek, F. Blankenburg, D. Ostwald, Milena Rabovsky
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1184-0
In research on human language comprehension, the N400 component of the event-related brain potential (ERP) has attracted attention as an electrophysiological indicator of meaning processing in the brain. However, despite much research, the specific functional basis of the N400 remains widely debated. Recent neural network modeling work suggests that N400 amplitudes can be simulated as the stimulus-induced change in internally represented probabilities of aspects of meaning (Rabovsky, Hansen, & McClelland, 2018). Here, we assess this idea based on single-trial N400 amplitudes measured in an oddball-like roving paradigm with written words from different semantic categories varying in semantic feature overlap. We model the N400 as Semantic Surprise, the change in the probability distribution of a stimulus’s semantic features for each trial. Simple condition-based analyses produced a significant effect of category switch on N400 amplitude, and the trial-by-trial modeling similarly revealed negative effects of Semantic Surprise on N400 amplitude. From fitting a forgetting parameter for each participant, we also gleaned insights into the rates of forgetting of past input to the semantic system. Thus, we provide a computationally explicit account of N400 amplitudes, which links the N400 and thus the neurocognitive processes involved in human language comprehension to the Bayesian brain hypothesis.
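One way to make "change in the probability distribution per trial, with forgetting" concrete is a forgetful count model scored by KL divergence. This is an illustrative sketch, not the authors' exact model: the Dirichlet pseudo-count `alpha`, the exponential `forget` factor, and the choice of KL as the surprise measure are assumptions made here.

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def semantic_surprise(trials, n_cats, forget=0.9, alpha=1.0):
    """Per-trial surprise: KL divergence between the category
    distribution before and after each stimulus, with exponentially
    forgotten counts (illustrative, not the paper's exact model)."""
    counts = np.full(n_cats, alpha)   # Dirichlet pseudo-counts
    out = []
    for c in trials:
        prior = counts / counts.sum()
        counts *= forget              # forget past input
        counts[c] += 1.0              # observe the current category
        post = counts / counts.sum()
        out.append(kl(post, prior))
    return out

# A category switch after a run of repeats yields a larger surprise
# than yet another repeat -- the oddball-like N400 pattern.
s = semantic_surprise([0, 0, 0, 0, 1], n_cats=2)
```

Fitting `forget` per participant is the analogue of the forgetting-rate analysis described in the abstract.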
Synchronized and Propagating States of Human Auditory Processing
Joon-Young Moon, K. Müsch, C. Schroeder, C. Honey
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1306-0
Human brain dynamics combine external drivers (e.g. sensory information) and internal drivers (e.g. expectations and memories). How do the patterns of inter-regional coupling change when the balance of external and internal information is altered? To investigate this question, we analyzed intracranial (ECoG) recordings from human listeners exposed to an auditory narrative. We measured the latencies of coupling across consecutive stages of cortical auditory processing, and we investigated if and how the latencies varied as a function of stimulus drive. We found that the latencies along the auditory pathway vary between no delay (“synchronized state”) and a small, nonzero delay (~20 ms, “propagating state”) depending on the external stimulation. The long-latency propagating state was most often observed in the absence of external information, during the silent boundaries between sentences. Moreover, propagating states were associated with transient increases in alpha-band (8-12 Hz) oscillatory processes. Both synchronized and propagating states were reproduced in a coupled oscillator model by altering the strength of the external drive. The data and model suggest that cortical networks transition between i) synchronized dynamics driven by an external stimulus, and ii) long-latency propagating dynamics in the absence of an external stimulus.
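A minimal two-oscillator sketch shows how coupling strength controls the lag between stages; it is not the authors' model, and treating "external drive" as the effective coupling `K` is an assumption made here. For a follower obeying d&theta;₂/dt = &omega;₂ + K·sin(&theta;₁ − &theta;₂), the phase-locked lag is arcsin(&Delta;&omega;/K): large `K` gives a near-zero lag (a "synchronized" regime), small `K` a nonzero lag (a "propagating" regime).

```python
import numpy as np

def steady_lag(delta_omega, K, lag0=0.0, dt=1e-3, steps=50_000):
    """Simulate the lag dynamics  d(lag)/dt = delta_omega - K*sin(lag)
    for a unidirectionally coupled phase oscillator until it settles
    at the locked value arcsin(delta_omega / K)."""
    lag = lag0
    for _ in range(steps):
        lag += dt * (delta_omega - K * np.sin(lag))
    return lag

weak = steady_lag(delta_omega=0.5, K=1.0)     # weak drive: sizable lag
strong = steady_lag(delta_omega=0.5, K=10.0)  # strong drive: near-zero lag
```

The qualitative point matches the abstract: increasing the drive pushes consecutive stages from a lagged, propagating regime toward zero-lag synchrony.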
The orbitofrontal cortex as a negative feedback control system: computational modeling and fMRI
N. Zarr, Joshua W. Brown
2019 Conference on Cognitive Computational Neuroscience. DOI: 10.32470/ccn.2019.1070-0
In this work we address two inter-related issues. First, the computational roles of the orbitofrontal cortex (OFC) and hippocampus in value-based decision-making have been unclear, with various proposed roles in value representation, cognitive maps, and prospection. Second, reinforcement learning models have been slow to adapt to more general problems in which the reward values of states may change over time, thus requiring different Q values for a given state at different times. We have developed a model of artificial general intelligence that treats much of the brain as a high-dimensional control system in the framework of control theory. We show with computational modeling and combined fMRI and representational similarity analysis (RSA) that the model can autonomously learn to solve problems, and provides a clear computational account of how a number of brain regions, particularly the OFC, interact to guide behavior to achieve arbitrary goals.
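The generic negative-feedback loop that the title casts the OFC as implementing can be sketched in one dimension: act on the discrepancy between goal and current state until it vanishes. The gain, the linear dynamics, and the scalar state are illustrative choices, not the paper's model.

```python
def feedback_controller(goal, state, gain=0.3, steps=40):
    """Drive a 1-D state toward a goal by repeatedly acting on the
    discrepancy (goal - state); a goal change simply redefines the
    error, so no separate value function per goal is needed."""
    trace = [state]
    for _ in range(steps):
        error = goal - state            # evaluated discrepancy
        state = state + gain * error    # action reduces the error
        trace.append(state)
    return trace

trace = feedback_controller(goal=1.0, state=0.0)
```

Because the controller targets whatever goal is currently specified, it sidesteps the problem noted above of needing different Q values for a state as reward contingencies change.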