Non-sensory thalamic nuclei interact with the cortex through thalamocortical and cortico-basal ganglia-thalamocortical loops. Reciprocal connections between the mediodorsal thalamus (MD) and the prefrontal cortex are particularly important in cognition, while the reciprocal connections of the ventromedial (VM), ventral anterior (VA), and ventrolateral (VL) thalamus with the prefrontal and motor cortex are necessary for sensorimotor information processing. However, limited and often oversimplified understanding of the connectivity of the MD, VA, and VL nuclei in primates has hampered the development of accurate models that explain their contribution to cognitive and sensorimotor functions. The current prevalent view suggests that the MD connects with the prefrontal cortex, while the VA and VL primarily connect with the premotor and motor cortices. However, past studies have also reported diverse connections that enable these nuclei to integrate information across a multitude of brain systems. In this review, we provide a comprehensive overview of the anatomical connectivity of the primate MD, VA, and VL with the cortex. By synthesizing recent findings, we aim to offer a valuable resource for students, newcomers to the field, and experts developing new theories or models of thalamic function. Our review highlights the complexity of these connections and underscores the need for further research to fully understand the diverse roles of these thalamic nuclei in primates.
{"title":"Anatomical Connections of Primate Mediodorsal and Motor Thalamic Nuclei with the Cortex","authors":"Bianca Sieveritz, Roozbeh Kiani","doi":"arxiv-2409.02065","DOIUrl":"https://doi.org/arxiv-2409.02065","url":null,"abstract":"Non-sensory thalamic nuclei interact with the cortex through thalamocortical\u0000and cortico-basal ganglia-thalamocortical loops. Reciprocal connections between\u0000the mediodorsal thalamus (MD) and the prefrontal cortex are particularly\u0000important in cognition, while the reciprocal connections of the ventromedial\u0000(VM), ventral anterior (VA), and ventrolateral (VL) thalamus with the\u0000prefrontal and motor cortex are necessary for sensorimotor information\u0000processing. However, limited and often oversimplified understanding of the\u0000connectivity of the MD, VA, and VL nuclei in primates have hampered development\u0000of accurate models that explain their contribution to cognitive and\u0000sensorimotor functions. The current prevalent view suggests that the MD\u0000connects with the prefrontal cortex, while the VA and VL primarily connect with\u0000the premotor and motor cortices. However, past studies have also reported\u0000diverse connections that enable these nuclei to integrate information across a\u0000multitude of brain systems. In this review, we provide a comprehensive overview\u0000of the anatomical connectivity of the primate MD, VA, and VL with the cortex.\u0000By synthesizing recent findings, we aim to offer a valuable resource for\u0000students, newcomers to the field, and experts developing new theories or models\u0000of thalamic function. Our review highlights the complexity of these connections\u0000and underscores the need for further research to fully understand the diverse\u0000roles of these thalamic nuclei in primates.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"137 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ayesha Vermani, Matthew Dowling, Hyungju Jeon, Ian Jordan, Josue Nassar, Yves Bernaerts, Yuan Zhao, Steven Van Vaerenbergh, Il Memming Park
Function and dysfunctions of neural systems are tied to the temporal evolution of neural states. The current limitations in showing their causal role stem largely from the absence of tools capable of probing the brain's internal state in real-time. This gap restricts the scope of experiments vital for advancing both fundamental and clinical neuroscience. Recent advances in real-time machine learning technologies, particularly in analyzing neural time series as nonlinear stochastic dynamical systems, are beginning to bridge this gap. These technologies enable immediate interpretation of and interaction with neural systems, offering new insights into neural computation. However, several significant challenges remain. Issues such as slow convergence rates, high-dimensional data complexities, structured noise, non-identifiability, and a general lack of inductive biases tailored for neural dynamics are key hurdles. Overcoming these challenges is crucial for the full realization of real-time neural data analysis for the causal investigation of neural computation and advanced perturbation-based brain-machine interfaces. In this paper, we provide a comprehensive perspective on the current state of the field, focusing on these persistent issues and outlining potential paths forward. We emphasize the importance of large-scale integrative neuroscience initiatives and the role of meta-learning in overcoming these challenges. These approaches represent promising research directions that could redefine the landscape of neuroscience experiments and brain-machine interfaces, facilitating breakthroughs in understanding brain function and treating neurological disorders.
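The real-time, sample-by-sample inference described above can be illustrated with a deliberately simplified sketch: a linear-Gaussian Kalman filter that updates a low-dimensional latent state as each new observation vector arrives. This is only a toy stand-in for the nonlinear stochastic dynamical systems the paper has in mind; the dimensions, dynamics, and noise covariances below are hypothetical placeholders.

```python
import numpy as np

# Toy linear-Gaussian latent dynamics: x_t = A x_{t-1} + w,  y_t = C x_t + v.
# A real-time filter refines its state estimate with every incoming sample.
rng = np.random.default_rng(0)
dim_x, dim_y = 2, 20                                # assumed latent / observed sizes
A = np.array([[0.99, -0.05], [0.05, 0.99]])         # slowly rotating latent dynamics
C = rng.normal(size=(dim_y, dim_x))                 # random observation matrix
Q = 0.01 * np.eye(dim_x)                            # process noise covariance
R = 0.5 * np.eye(dim_y)                             # observation noise covariance

def kalman_step(x_est, P_est, y):
    """One online update: predict with the dynamics, correct with the new sample."""
    x_pred = A @ x_est
    P_pred = A @ P_est @ A.T + Q
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(dim_x) - K @ C) @ P_pred
    return x_new, P_new

# Filter a simulated stream one sample at a time, as a closed-loop experiment would.
x_true, x_est, P_est = np.zeros(dim_x), np.zeros(dim_x), np.eye(dim_x)
for t in range(500):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(dim_x), Q)
    y = C @ x_true + rng.multivariate_normal(np.zeros(dim_y), R)
    x_est, P_est = kalman_step(x_est, P_est, y)
```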
{"title":"Real-Time Machine Learning Strategies for a New Kind of Neuroscience Experiments","authors":"Ayesha Vermani, Matthew Dowling, Hyungju Jeon, Ian Jordan, Josue Nassar, Yves Bernaerts, Yuan Zhao, Steven Van Vaerenbergh, Il Memming Park","doi":"arxiv-2409.01280","DOIUrl":"https://doi.org/arxiv-2409.01280","url":null,"abstract":"Function and dysfunctions of neural systems are tied to the temporal\u0000evolution of neural states. The current limitations in showing their causal\u0000role stem largely from the absence of tools capable of probing the brain's\u0000internal state in real-time. This gap restricts the scope of experiments vital\u0000for advancing both fundamental and clinical neuroscience. Recent advances in\u0000real-time machine learning technologies, particularly in analyzing neural time\u0000series as nonlinear stochastic dynamical systems, are beginning to bridge this\u0000gap. These technologies enable immediate interpretation of and interaction with\u0000neural systems, offering new insights into neural computation. However, several\u0000significant challenges remain. Issues such as slow convergence rates,\u0000high-dimensional data complexities, structured noise, non-identifiability, and\u0000a general lack of inductive biases tailored for neural dynamics are key\u0000hurdles. Overcoming these challenges is crucial for the full realization of\u0000real-time neural data analysis for the causal investigation of neural\u0000computation and advanced perturbation based brain machine interfaces. In this\u0000paper, we provide a comprehensive perspective on the current state of the\u0000field, focusing on these persistent issues and outlining potential paths\u0000forward. We emphasize the importance of large-scale integrative neuroscience\u0000initiatives and the role of meta-learning in overcoming these challenges. These\u0000approaches represent promising research directions that could redefine the\u0000landscape of neuroscience experiments and brain-machine interfaces,\u0000facilitating breakthroughs in understanding brain function, and treatment of\u0000neurological disorders.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"58 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In a previous paper, we showed that an ontology of quantum mechanics in terms of states and events with internal phenomenal aspects, that is, a form of panprotopsychism, is well suited to explaining the phenomenal aspects of consciousness. There we proved that the palette and grain combination problems of panpsychism and panprotopsychism arise from implicit hypotheses, based on classical physics, about supervenience that are inappropriate at the quantum level, where an exponential number of emergent properties and states arise. In this article, we address what is probably the first and most important combination problem of panpsychism: the subject-summing problem originally posed by William James. We begin by identifying the physical counterparts of the subjects of experience within the quantum panprotopsychic approach presented in that article. To achieve this, we turn to the notion of subject of experience inspired by the idea of prehension proposed by Whitehead and show that this notion can be adapted to the quantum ontology of objects and events. Due to the indeterminacy of quantum mechanics and its causal openness, this ontology also seems to be suitable for the analysis of the remaining aspects of the structure combination problem, which concerns how the structuration of consciousness could have evolved from primitive animals to humans. The analysis imposes conditions on possible implementations of quantum cognition mechanisms in the brain and suggests new problems and strategies to address them, in particular with regard to the structuring of experiences in animals with different degrees of evolutionary development.
{"title":"Quantum panprotopsychism and the structure and subject-summing combination problem","authors":"Rodolfo Gambini, Jorge Pullin","doi":"arxiv-2409.01368","DOIUrl":"https://doi.org/arxiv-2409.01368","url":null,"abstract":"In a previous paper, we have shown that an ontology of quantum mechanics in\u0000terms of states and events with internal phenomenal aspects, that is, a form of\u0000panprotopsychism, is well suited to explaining the phenomenal aspects of\u0000consciousness. We have proved there that the palette and grain combination\u0000problems of panpsychism and panprotopsychism arise from implicit hypotheses\u0000based on classical physics about supervenience that are inappropriate at the\u0000quantum level, where an exponential number of emergent properties and states\u0000arise. In this article, we address what is probably the first and most\u0000important combination problem of panpsychism: the subject-summing problem\u0000originally posed by William James. We begin by identifying the physical\u0000counterparts of the subjects of experience within the quantum panprotopsychic\u0000approach presented in that article. To achieve this, we turn to the notion of\u0000subject of experience inspired by the idea of prehension proposed by Whitehead\u0000and show that this notion can be adapted to the quantum ontology of objects and\u0000events. Due to the indeterminacy of quantum mechanics and its causal openness,\u0000this ontology also seems to be suitable for the analysis of the remaining\u0000aspects of the structure combination problem, which shows how the structuration\u0000of consciousness could have evolved from primitive animals to humans. The\u0000analysis imposes conditions on possible implementations of quantum cognition\u0000mechanisms in the brain and suggests new problems and strategies to address\u0000them. In particular, with regard to the structuring of experiences in animals\u0000with different degrees of evolutionary development.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"89 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xiangxu Yu, Mindi Ruan, Chuanbo Hu, Wenqi Li, Lynn K. Paul, Xin Li, Shuo Wang
In this study, we present a quantitative and comprehensive analysis of social gaze in people with autism spectrum disorder (ASD). Diverging from traditional first-person camera perspectives based on eye-tracking technologies, this study utilizes a third-person perspective database from the Autism Diagnostic Observation Schedule, 2nd Edition (ADOS-2) interview videos, encompassing ASD participants and neurotypical individuals as a reference group. Employing computational models, we extracted and processed gaze-related features from the videos of both participants and examiners. The experimental samples were divided into three groups based on the presence of social gaze abnormalities and ASD diagnosis. This study quantitatively analyzed four gaze features: gaze engagement, gaze variance, gaze density map, and gaze diversion frequency. Furthermore, we developed a classifier trained on these features to identify gaze abnormalities in ASD participants. Together, we demonstrated the effectiveness of analyzing social gaze in people with ASD in naturalistic settings, showcasing the potential of third-person video perspectives in enhancing ASD diagnosis through gaze analysis.
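As a hedged illustration of the final step (the abstract does not specify which classifier or feature encoding was used), one could train a simple model on per-participant summaries of the four gaze features; everything below, including the feature values and labels, is synthetic and purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic per-participant vectors standing in for the four gaze measures:
# [gaze engagement, gaze variance, gaze density-map summary, gaze diversion frequency]
n_per_group = 40
typical = rng.normal(loc=[0.7, 0.20, 0.6, 2.0], scale=0.1, size=(n_per_group, 4))
atypical = rng.normal(loc=[0.5, 0.35, 0.4, 4.0], scale=0.1, size=(n_per_group, 4))

X = np.vstack([typical, atypical])
y = np.array([0] * n_per_group + [1] * n_per_group)   # 1 = social gaze abnormality

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)             # cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```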
{"title":"Video-based Analysis Reveals Atypical Social Gaze in People with Autism Spectrum Disorder","authors":"Xiangxu Yu, Mindi Ruan, Chuanbo Hu, Wenqi Li, Lynn K. Paul, Xin Li, Shuo Wang","doi":"arxiv-2409.00664","DOIUrl":"https://doi.org/arxiv-2409.00664","url":null,"abstract":"In this study, we present a quantitative and comprehensive analysis of social\u0000gaze in people with autism spectrum disorder (ASD). Diverging from traditional\u0000first-person camera perspectives based on eye-tracking technologies, this study\u0000utilizes a third-person perspective database from the Autism Diagnostic\u0000Observation Schedule, 2nd Edition (ADOS-2) interview videos, encompassing ASD\u0000participants and neurotypical individuals as a reference group. Employing\u0000computational models, we extracted and processed gaze-related features from the\u0000videos of both participants and examiners. The experimental samples were\u0000divided into three groups based on the presence of social gaze abnormalities\u0000and ASD diagnosis. This study quantitatively analyzed four gaze features: gaze\u0000engagement, gaze variance, gaze density map, and gaze diversion frequency.\u0000Furthermore, we developed a classifier trained on these features to identify\u0000gaze abnormalities in ASD participants. Together, we demonstrated the\u0000effectiveness of analyzing social gaze in people with ASD in naturalistic\u0000settings, showcasing the potential of third-person video perspectives in\u0000enhancing ASD diagnosis through gaze analysis.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ralf M. Haefner, Jeff Beck, Cristina Savin, Mehrdad Salmasi, Xaq Pitkow
This perspective piece is the result of a Generative Adversarial Collaboration (GAC) tackling the question 'How does neural activity represent probability distributions?'. We have addressed three major obstacles to progress on answering this question: first, we provide a unified language for defining competing hypotheses. Second, we explain the fundamentals of three prominent proposals for probabilistic computations -- Probabilistic Population Codes (PPCs), Distributed Distributional Codes (DDCs), and Neural Sampling Codes (NSCs) -- and describe similarities and differences in that common language. Third, we review key empirical data previously taken as evidence for at least one of these proposals, and describe how it may or may not be explainable by alternative proposals. Finally, we describe some key challenges in resolving the debate, and propose potential directions to address them through a combination of theory and experiments.
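To make one of these proposals concrete, here is a minimal sketch of a probabilistic population code in its textbook Poisson form (not necessarily the exact formulation the collaboration debates): with independent Poisson neurons and tuning curves f_i(s), the log posterior over the stimulus is linear in the observed spike counts, so a single population response yields a full distribution rather than a point estimate. The tuning parameters below are arbitrary.

```python
import numpy as np

# Poisson probabilistic population code (PPC): decode a posterior over a stimulus s
# from one vector of spike counts r, given Gaussian tuning curves f_i(s).
rng = np.random.default_rng(2)

s_grid = np.linspace(-10, 10, 201)                    # hypothesis grid over s
prefs = np.linspace(-10, 10, 50)                      # preferred stimuli of 50 neurons
gain, width = 10.0, 2.0
tuning = gain * np.exp(-0.5 * ((s_grid[None, :] - prefs[:, None]) / width) ** 2)

s_true = 3.0
rates = gain * np.exp(-0.5 * ((s_true - prefs) / width) ** 2)
r = rng.poisson(rates)                                # one population response

# log p(r | s) for independent Poisson neurons, dropping terms that depend only on r
log_like = r @ np.log(tuning + 1e-12) - tuning.sum(axis=0)
posterior = np.exp(log_like - log_like.max())         # flat prior assumed
posterior /= posterior.sum()

print("posterior mean:", (s_grid * posterior).sum())  # should land near s_true = 3.0
```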
{"title":"How does the brain compute with probabilities?","authors":"Ralf M. Haefner, Jeff Beck, Cristina Savin, Mehrdad Salmasi, Xaq Pitkow","doi":"arxiv-2409.02709","DOIUrl":"https://doi.org/arxiv-2409.02709","url":null,"abstract":"This perspective piece is the result of a Generative Adversarial\u0000Collaboration (GAC) tackling the question `How does neural activity represent\u0000probability distributions?'. We have addressed three major obstacles to\u0000progress on answering this question: first, we provide a unified language for\u0000defining competing hypotheses. Second, we explain the fundamentals of three\u0000prominent proposals for probabilistic computations -- Probabilistic Population\u0000Codes (PPCs), Distributed Distributional Codes (DDCs), and Neural Sampling\u0000Codes (NSCs) -- and describe similarities and differences in that common\u0000language. Third, we review key empirical data previously taken as evidence for\u0000at least one of these proposal, and describe how it may or may not be\u0000explainable by alternative proposals. Finally, we describe some key challenges\u0000in resolving the debate, and propose potential directions to address them\u0000through a combination of theory and experiments.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptions and actions, thoughts and memories result from coordinated activity in hundreds or even thousands of neurons in the brain. It is an old dream of the physics community to provide a statistical mechanics description for these and other emergent phenomena of life. These aspirations appear in a new light because of developments in our ability to measure the electrical activity of the brain, sampling thousands of individual neurons simultaneously over hours or days. We review the progress that has been made in bringing theory and experiment together, focusing on maximum entropy methods and a phenomenological renormalization group. These approaches have uncovered new, quantitatively reproducible collective behaviors in networks of real neurons, and provide examples of rich parameter-free predictions that agree in detail with experiment.
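As a pointer to what the maximum entropy approach involves in practice, the sketch below fits a pairwise (Ising-like) maximum entropy model to binarized activity of a handful of neurons by matching means and pairwise correlations. It is a toy: the data are random placeholders, and the model is evaluated by exact enumeration, whereas real applications use Monte Carlo methods and hundreds or thousands of cells.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 5                                                    # toy network of 5 binary units
data = (rng.random((2000, n)) < 0.3).astype(float)       # placeholder binarized spikes

emp_mean = data.mean(axis=0)
emp_corr = data.T @ data / len(data)

states = np.array(list(product([0.0, 1.0], repeat=n)))   # all 2^n binary patterns

def model_moments(h, J):
    """Exact means and pairwise correlations of P(x) proportional to exp(h.x + 0.5 x.J.x)."""
    energies = states @ h + 0.5 * np.einsum("ki,ij,kj->k", states, J, states)
    p = np.exp(energies - energies.max())
    p /= p.sum()
    return p @ states, states.T @ (states * p[:, None])

h = np.zeros(n)
J = np.zeros((n, n))                  # diagonal kept zero: for 0/1 units, x_i^2 = x_i is absorbed by h
for _ in range(2000):                 # gradient ascent on the log-likelihood (moment matching)
    m_mean, m_corr = model_moments(h, J)
    h += 0.1 * (emp_mean - m_mean)
    dJ = 0.1 * (emp_corr - m_corr)
    np.fill_diagonal(dJ, 0.0)
    J += dJ                           # stays symmetric because both moment matrices are

print("largest mismatch in pairwise correlations:",
      np.abs(emp_corr - model_moments(h, J)[1]).max())
```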
{"title":"Statistical mechanics for networks of real neurons","authors":"Leenoy Meshulam, William Bialek","doi":"arxiv-2409.00412","DOIUrl":"https://doi.org/arxiv-2409.00412","url":null,"abstract":"Perceptions and actions, thoughts and memories result from coordinated\u0000activity in hundreds or even thousands of neurons in the brain. It is an old\u0000dream of the physics community to provide a statistical mechanics description\u0000for these and other emergent phenomena of life. These aspirations appear in a\u0000new light because of developments in our ability to measure the electrical\u0000activity of the brain, sampling thousands of individual neurons simultaneously\u0000over hours or days. We review the progress that has been made in bringing\u0000theory and experiment together, focusing on maximum entropy methods and a\u0000phenomenological renormalization group. These approaches have uncovered new,\u0000quantitatively reproducible collective behaviors in networks of real neurons,\u0000and provide examples of rich parameter--free predictions that agree in detail\u0000with experiment.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We address the linguistic problem of the sequential arrangement of a head and its dependents from an information theoretic perspective. In particular, we consider the optimal placement of a head that maximizes the predictability of the sequence. We assume that dependents are statistically independent given a head, in line with the open-choice principle and the core assumptions of dependency grammar. We demonstrate the optimality of harmonic order, i.e., placing the head last maximizes the predictability of the head whereas placing the head first maximizes the predictability of dependents. We also show that postponing the head is the optimal strategy to maximize its predictability while bringing it forward is the optimal strategy to maximize the predictability of dependents. We unravel the advantages of the strategy of maximizing the predictability of the head over maximizing the predictability of dependents. Our findings shed light on the placements of the head adopted by real languages or emerging in different kinds of experiments.
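A toy numerical check of the central claim, under the paper's stated independence assumption (the probabilities below are invented): when dependents are conditionally independent given the head, placing the head last lets it be predicted from every dependent, so its conditional entropy can only be lower than when it comes first and must be predicted from nothing.

```python
import numpy as np
from itertools import product

# Two head types and two binary dependents, conditionally independent given the head.
p_head = np.array([0.6, 0.4])                       # invented prior over head types
p_dep_given_head = np.array([[0.9, 0.2],            # P(dep_k = 1 | head) for k = 1, 2
                             [0.8, 0.3]])

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Joint P(head, d1, d2) under the conditional-independence assumption.
joint = {}
for h, d1, d2 in product([0, 1], repeat=3):
    p1 = p_dep_given_head[0, h] if d1 else 1 - p_dep_given_head[0, h]
    p2 = p_dep_given_head[1, h] if d2 else 1 - p_dep_given_head[1, h]
    joint[(h, d1, d2)] = p_head[h] * p1 * p2

# Head first: nothing has been seen yet, so its uncertainty is the full H(head).
H_head_first = entropy(p_head)

# Head last: both dependents have been seen, so its uncertainty is H(head | d1, d2).
H_head_last = 0.0
for d1, d2 in product([0, 1], repeat=2):
    p_d = sum(joint[(h, d1, d2)] for h in [0, 1])
    p_h_given_d = np.array([joint[(h, d1, d2)] / p_d for h in [0, 1]])
    H_head_last += p_d * entropy(p_h_given_d)

print(f"H(head), head placed first: {H_head_first:.3f} bits")
print(f"H(head | dependents), head placed last: {H_head_last:.3f} bits")  # smaller
```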
{"title":"Predictability maximization and the origins of word order harmony","authors":"Ramon Ferrer-i-Cancho","doi":"arxiv-2408.16570","DOIUrl":"https://doi.org/arxiv-2408.16570","url":null,"abstract":"We address the linguistic problem of the sequential arrangement of a head and\u0000its dependents from an information theoretic perspective. In particular, we\u0000consider the optimal placement of a head that maximizes the predictability of\u0000the sequence. We assume that dependents are statistically independent given a\u0000head, in line with the open-choice principle and the core assumptions of\u0000dependency grammar. We demonstrate the optimality of harmonic order, i.e.,\u0000placing the head last maximizes the predictability of the head whereas placing\u0000the head first maximizes the predictability of dependents. We also show that\u0000postponing the head is the optimal strategy to maximize its predictability\u0000while bringing it forward is the optimal strategy to maximize the\u0000predictability of dependents. We unravel the advantages of the strategy of\u0000maximizing the predictability of the head over maximizing the predictability of\u0000dependents. Our findings shed light on the placements of the head adopted by\u0000real languages or emerging in different kinds of experiments.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"459 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142226736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is a mystery how the brain decodes color vision purely from the optic nerve signals it receives, with a core inferential challenge being how it disentangles internal perception with the correct color dimensionality from the unknown encoding properties of the eye. In this paper, we introduce a computational framework for modeling this emergence of human color vision by simulating both the eye and the cortex. Existing research often overlooks how the cortex develops color vision or represents color space internally, assuming that the color dimensionality is known a priori; however, we argue that the visual cortex has the capability and the challenge of inferring the color dimensionality purely from fluctuations in the optic nerve signals. To validate our theory, we introduce a simulation engine for biological eyes based on established vision science and generate optic nerve signals resulting from looking at natural images. Further, we propose a model of cortical learning based on a self-supervised principle and show that this model naturally learns to generate color vision by disentangling retinal invariants from the sensory signals. When the retina contains N types of color photoreceptors, our simulation shows that N-dimensional color vision naturally emerges, verified through formal colorimetry. Using this framework, we also present the first simulation work that successfully boosts the color dimensionality, as observed in gene therapy on squirrel monkeys, and demonstrates the possibility of enhancing human color vision from 3D to 4D.
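The key inferential step, recovering the dimensionality of color purely from fluctuations in photoreceptor signals, can be illustrated with a highly simplified, hypothetical sketch (random Gaussian spectral sensitivities and random spectra, nothing like the paper's biologically detailed eye simulation or its self-supervised cortical model): the responses of a mosaic of cones of N types to varied spectra occupy only an N-dimensional subspace, which simple linear analysis exposes.

```python
import numpy as np

rng = np.random.default_rng(4)
n_wavelengths, n_cone_types, n_cones, n_samples = 31, 3, 200, 5000

# Hypothetical smooth spectral sensitivities for the N cone types (Gaussian bumps).
centers = np.linspace(5, 25, n_cone_types)
wl = np.arange(n_wavelengths)
sensitivities = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 4.0) ** 2)

# Each cone in the mosaic has one of the N types; its identity is unknown downstream.
cone_type = rng.integers(0, n_cone_types, size=n_cones)

# Random nonnegative light spectra drive the mosaic.
spectra = np.abs(rng.normal(size=(n_samples, n_wavelengths)))
responses = spectra @ sensitivities[cone_type].T      # (n_samples, n_cones)

# Fluctuations across the mosaic are explained by only N latent dimensions:
# the singular value spectrum collapses after the first N components.
centered = responses - responses.mean(axis=0)
svals = np.linalg.svd(centered, compute_uv=False)
print("top 6 singular values:", np.round(svals[:6], 2))  # roughly 3 dominant values
```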
{"title":"A Computational Framework for Modeling Emergence of Color Vision in the Human Brain","authors":"Atsunobu Kotani, Ren Ng","doi":"arxiv-2408.16916","DOIUrl":"https://doi.org/arxiv-2408.16916","url":null,"abstract":"It is a mystery how the brain decodes color vision purely from the optic\u0000nerve signals it receives, with a core inferential challenge being how it\u0000disentangles internal perception with the correct color dimensionality from the\u0000unknown encoding properties of the eye. In this paper, we introduce a\u0000computational framework for modeling this emergence of human color vision by\u0000simulating both the eye and the cortex. Existing research often overlooks how\u0000the cortex develops color vision or represents color space internally, assuming\u0000that the color dimensionality is known a priori; however, we argue that the\u0000visual cortex has the capability and the challenge of inferring the color\u0000dimensionality purely from fluctuations in the optic nerve signals. To validate\u0000our theory, we introduce a simulation engine for biological eyes based on\u0000established vision science and generate optic nerve signals resulting from\u0000looking at natural images. Further, we propose a model of cortical learning\u0000based on self-supervised principle and show that this model naturally learns to\u0000generate color vision by disentangling retinal invariants from the sensory\u0000signals. When the retina contains N types of color photoreceptors, our\u0000simulation shows that N-dimensional color vision naturally emerges, verified\u0000through formal colorimetry. Using this framework, we also present the first\u0000simulation work that successfully boosts the color dimensionality, as observed\u0000in gene therapy on squirrel monkeys, and demonstrates the possibility of\u0000enhancing human color vision from 3D to 4D.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142226737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The rapid growth of the drone industry, particularly in the use of small unmanned aerial systems (sUAS) and unmanned aerial vehicles (UAVs), requires the development of advanced training protocols for remote pilots. Remote pilots must develop a combination of technical and cognitive skills to manage the complexities of modern drone operations. This paper explores the integration of neurotechnology, specifically auricular vagus nerve stimulation (aVNS), as a method to enhance remote pilot training and performance. The scientific literature shows aVNS can safely improve cognitive functions such as attention, learning, and memory. It has also been shown to be useful for managing stress responses. For safe and efficient sUAS/UAV operation, it is essential for pilots to maintain high levels of vigilance and decision-making under pressure. By modulating sympathetic stress and cortical arousal, aVNS can prime cognitive faculties before training, help maintain focus during training, and improve stress recovery post-training. Furthermore, aVNS has demonstrated the potential to enhance multitasking and cognitive control. This may help remote pilots during complex sUAS operations by potentially reducing the risk of impulsive decision-making or cognitive errors. This paper advocates for the inclusion of aVNS in remote pilot training programs, proposing that it can provide significant benefits in improving cognitive readiness, skill and knowledge acquisition, and operational safety and efficiency. Future research should focus on optimizing aVNS protocols for drone pilots while assessing long-term benefits to industrial safety and workforce readiness in real-world scenarios.
{"title":"Auricular Vagus Nerve Stimulation for Enhancing Remote Pilot Training and Operations","authors":"William J. Tyler","doi":"arxiv-2408.16755","DOIUrl":"https://doi.org/arxiv-2408.16755","url":null,"abstract":"The rapid growth of the drone industry, particularly in the use of small\u0000unmanned aerial systems (sUAS) and unmanned aerial vehicles (UAVs), requires\u0000the development of advanced training protocols for remote pilots. Remote pilots\u0000must develop a combination of technical and cognitive skills to manage the\u0000complexities of modern drone operations. This paper explores the integration of\u0000neurotechnology, specifically auricular vagus nerve stimulation (aVNS), as a\u0000method to enhance remote pilot training and performance. The scientific\u0000literature shows aVNS can safely improve cognitive functions such as attention,\u0000learning, and memory. It has also been shown useful to manage stress responses.\u0000For safe and efficient sUAS/UAV operation, it is essential for pilots to\u0000maintain high levels of vigilance and decision-making under pressure. By\u0000modulating sympathetic stress and cortical arousal, aVNS can prime cognitive\u0000faculties before training, help maintain focus during training and improve\u0000stress recovery post-training. Furthermore, aVNS has demonstrated the potential\u0000to enhance multitasking and cognitive control. This may help remote pilots\u0000during complex sUAS operations by potentially reducing the risk of impulsive\u0000decision-making or cognitive errors. This paper advocates for the inclusion of\u0000aVNS in remote pilot training programs by proposing that it can provide\u0000significant benefits in improving cognitive readiness, skill and knowledge\u0000acquisition, as well as operational safety and efficiency. Future research\u0000should focus on optimizing aVNS protocols for drone pilots while assessing\u0000long-term benefits to industrial safety and workforce readiness in real-world\u0000scenarios.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prakash Chandra Kavi, Gorka Zamora Lopez, Daniel Ari Friedman
The emergence of cognition requires a framework that bridges evolutionary principles with neurocomputational mechanisms. This paper introduces the "thoughtseed" framework, proposing that cognition arises from the dynamic interaction of self-organizing units of embodied knowledge called "thoughtseeds." We leverage evolutionary theory, "neuronal packets," and the "Inner Screen" hypothesis within the Free Energy Principle, and propose a four-level hierarchical model of the cognitive agent's internal states: Neuronal Packet Domains (NPDs), Knowledge Domains (KDs), the thoughtseeds network, and meta-cognition. The dynamic interplay within this hierarchy, mediated by nested Markov blankets and reciprocal message passing, facilitates the emergence of thoughtseeds as coherent patterns of activity that guide perception, action, and learning. The framework further explores the role of the organism's Umwelt and the principles of active inference, especially the generative model at each nested level, in shaping the selection and activation of thoughtseeds, leading to adaptive behavior through surprise minimization. The "Inner Screen" is posited as the locus of conscious experience, where the content of the dominant thoughtseed is projected, maintaining a unitary conscious experience. Active thoughtseeds are proposed as the fundamental units of thought that contribute to the "content of consciousness." We present a mathematical framework grounded in active inference and dynamical systems theory. The thoughtseed framework represents an initial but promising step towards a novel, biologically grounded model for understanding the organizing principles and emergence of embodied cognition, offering a unified account of cognitive phenomena, from basic physiological regulation to higher-order thought processes, and potentially bridging neuroscience and contemplative traditions.
{"title":"Thoughtseeds: Evolutionary Priors, Nested Markov Blankets, and the Emergence of Embodied Cognition","authors":"Prakash Chandra Kavi, Gorka Zamora Lopez, Daniel Ari Friedman","doi":"arxiv-2408.15982","DOIUrl":"https://doi.org/arxiv-2408.15982","url":null,"abstract":"The emergence of cognition requires a framework that bridges evolutionary\u0000principles with neurocomputational mechanisms. This paper introduces the\u0000\"thoughtseed\" framework, proposing that cognition arises from the dynamic\u0000interaction of self-organizing units of embodied knowledge called\u0000\"thoughtseeds.\" We leverage evolutionary theory, \"neuronal packets,\" and the\u0000\"Inner Screen\" hypothesis within Free Energy Principle, and propose a\u0000four-level hierarchical model of the cognitive agent's internal states:\u0000Neuronal Packet Domains (NPDs), Knowledge Domains (KDs), thoughtseeds network,\u0000and meta-cognition. The dynamic interplay within this hierarchy, mediated by\u0000nested Markov blankets and reciprocal message passing, facilitates the\u0000emergence of thoughtseeds as coherent patterns of activity that guide\u0000perception, action, and learning. The framework further explores the role of\u0000the organism's Umwelt and the principles of active inference, especially the\u0000generative model at each nested level, in shaping the selection and activation\u0000of thoughtseeds, leading to adaptive behavior through surprise minimization.\u0000The \"Inner Screen\" is posited as the locus of conscious experience, where the\u0000content of the dominant thoughtseed is projected, maintaining a unitary\u0000conscious experience. Active thoughtseeds are proposed as the fundamental units\u0000of thought that contribute to the \"content of consciousness.\" We present a\u0000mathematical framework grounded in active inference and dynamical systems\u0000theory. The thoughtseed framework represents an initial but promising step\u0000towards a novel, biologically-grounded model for understanding the organizing\u0000principles and emergence of embodied cognition, offering a unified account of\u0000cognitive phenomena, from basic physiological regulation to higher-order\u0000thought processes, and potentially bridge neuroscience and contemplative\u0000traditions.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}