Sai Koukuntla, Joshua B. Julian, Jesse C. Kaminsky, Manuel Schottdorf, David W. Tank, Carlos D. Brody, Adam S. Charles
Modern applications often leverage multiple views of a subject of study. Within neuroscience, there is growing interest in large-scale simultaneous recordings across multiple brain regions. Understanding the relationship between views (e.g., the neural activity recorded in each region) can reveal fundamental principles about the characteristics of each representation and about the system. However, existing methods to characterize such relationships either lack the expressivity required to capture complex nonlinearities, describe only sources of variance that are shared between views, or discard geometric information that is crucial to interpreting the data. Here, we develop a nonlinear neural network-based method that, given paired samples of high-dimensional views, disentangles low-dimensional shared and private latent variables underlying these views while preserving intrinsic data geometry. Across multiple simulated and real datasets, we demonstrate that our method outperforms competing methods. Using simulated populations of lateral geniculate nucleus (LGN) and V1 neurons, we demonstrate our model's ability to discover interpretable shared and private structure across different noise conditions. On a dataset of unrotated and corresponding but randomly rotated MNIST digits, we recover private latents for the rotated view that encode rotation angle regardless of digit class and place the angle representation on a 1-d manifold, while shared latents encode digit class but not rotation angle. Applying our method to simultaneous Neuropixels recordings of hippocampus and prefrontal cortex while mice run on a linear track, we discover a low-dimensional shared latent space that encodes the animal's position. We propose our approach as a general-purpose method for finding succinct and interpretable descriptions of paired datasets in terms of disentangled shared and private latent variables.
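The shared/private data model described here can be illustrated with a toy generative process (an illustrative assumption, not the paper's architecture): each view nonlinearly mixes one latent common to both views with a latent unique to that view.

```python
import numpy as np

# Toy generative model for paired views: each view nonlinearly mixes a
# shared latent with its own private latent into a high-dimensional
# observation. (Illustrative sketch only; dimensions are arbitrary.)
rng = np.random.default_rng(0)
n, d_shared, d_private, d_obs = 500, 2, 1, 20

z_shared = rng.normal(size=(n, d_shared))     # drives both views
z_private1 = rng.normal(size=(n, d_private))  # affects view 1 only
z_private2 = rng.normal(size=(n, d_private))  # affects view 2 only

W1s = rng.normal(size=(d_shared, d_obs))
W1p = rng.normal(size=(d_private, d_obs))
W2s = rng.normal(size=(d_shared, d_obs))
W2p = rng.normal(size=(d_private, d_obs))

view1 = np.tanh(z_shared @ W1s + z_private1 @ W1p)  # nonlinear observation map
view2 = np.tanh(z_shared @ W2s + z_private2 @ W2p)
```

Given such paired samples (`view1`, `view2`), the task the paper addresses is to recover `z_shared` and the two private latents without supervision, while preserving the data's intrinsic geometry.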
Unsupervised discovery of the shared and private geometry in multi-view data. arXiv:2408.12091 (arXiv - QuanBio - Neurons and Cognition, 2024-08-22).
Roman Serrat, Alexandre Oliveira-Pinto, Giovanni Marsicano, Sandrine Pouvreau (U1215 Inserm - UB)
Mitochondrial calcium handling is a particularly active research area in the neuroscience field, as it plays key roles in the regulation of several functions of the central nervous system, such as synaptic transmission and plasticity, astrocyte calcium signaling, and neuronal activity. In the last few decades, a panel of techniques has been developed to measure mitochondrial calcium dynamics, relying mostly on photonic microscopy and including synthetic sensors, hybrid sensors, and genetically encoded calcium sensors. The goal of this review is to endow the reader with a deep knowledge of the historical and latest tools to monitor mitochondrial calcium events in the brain, as well as a comprehensive overview of the current state of the art in brain mitochondrial calcium signaling. We will discuss the main calcium probes used in the field, their mitochondrial targeting strategies, their key properties, and their major drawbacks. In addition, we will detail the main roles of mitochondrial calcium handling in neuronal tissues through an extended report of recent studies using mitochondria-targeted calcium sensors in neuronal and astroglial cells, in vitro and in vivo.
Imaging mitochondrial calcium dynamics in the central nervous system. arXiv:2408.12202 (arXiv - QuanBio - Neurons and Cognition, 2024-08-22).
Sleep staging is critical for assessing sleep quality and diagnosing disorders. Recent advancements in artificial intelligence have driven the development of automated sleep staging models, which still face two significant challenges: (1) simultaneously extracting prominent temporal and spatial sleep features from multi-channel raw signals, including characteristic sleep waveforms and salient spatial brain networks; and (2) capturing the spatial-temporal coupling patterns essential for accurate sleep staging. To address these challenges, we propose a novel framework named ST-USleepNet, comprising a spatial-temporal graph construction module (ST) and a U-shaped sleep network (USleepNet). The ST module converts raw signals into a spatial-temporal graph to model spatial-temporal couplings. USleepNet utilizes a U-shaped structure originally designed for image segmentation. Just as image segmentation isolates significant targets, when applied to both the raw sleep signals and the ST module's graph data, USleepNet segments these inputs to extract prominent temporal and spatial sleep features simultaneously. Testing on three datasets demonstrates that ST-USleepNet outperforms existing baselines, and model visualizations confirm its efficacy in extracting prominent sleep features and temporal-spatial coupling patterns across various sleep stages. The code is available at: https://github.com/Majy-Yuji/ST-USleepNet.git.
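The spatial-temporal graph idea can be sketched minimally: one node per (channel, time-window) pair, spatial edges linking channels within a window, and temporal edges linking consecutive windows of the same channel. This is an assumption-laden caricature; the actual ST module likely derives edge weights from the signals themselves.

```python
import numpy as np

def spatio_temporal_adjacency(n_channels, n_windows):
    """Build a binary adjacency matrix over (channel, time-window) nodes:
    spatial edges connect distinct channels within the same window;
    temporal edges connect consecutive windows of the same channel.
    (Illustrative sketch only, not ST-USleepNet's exact construction.)"""
    n = n_channels * n_windows
    A = np.zeros((n, n), dtype=int)

    def idx(c, t):  # node index for channel c at window t
        return t * n_channels + c

    for t in range(n_windows):
        for c1 in range(n_channels):
            for c2 in range(n_channels):
                if c1 != c2:
                    A[idx(c1, t), idx(c2, t)] = 1  # spatial edge
            if t + 1 < n_windows:
                A[idx(c1, t), idx(c1, t + 1)] = 1  # temporal edge (both directions)
                A[idx(c1, t + 1), idx(c1, t)] = 1
    return A

A = spatio_temporal_adjacency(2, 2)  # 4 nodes: 2 channels x 2 windows
```

A graph neural network operating on `A` can then mix information along both spatial and temporal edges simultaneously, which is the coupling the framework aims to capture.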
ST-USleepNet: A Spatial-Temporal Coupling Prominence Network for Multi-Channel Sleep Staging. Jingying Ma, Qika Lin, Ziyu Jia, Mengling Feng. arXiv:2408.11884 (arXiv - QuanBio - Neurons and Cognition, 2024-08-21).
Understanding how the brain represents and processes information is crucial for advancing neuroscience and artificial intelligence. Representational similarity analysis (RSA) has been instrumental in characterizing neural representations, but traditional RSA relies solely on geometric properties, overlooking crucial topological information. This thesis introduces Topological RSA (tRSA), a novel framework combining geometric and topological properties of neural representations. tRSA applies nonlinear monotonic transforms to representational dissimilarities, emphasizing local topology while retaining intermediate-scale geometry. The resulting geo-topological matrices enable model comparisons robust to noise and individual idiosyncrasies. The thesis contributes several key methodological advances: (1) Topological RSA (tRSA) for identifying computational signatures and testing topological hypotheses; (2) Adaptive Geo-Topological Dependence Measure (AGTDM) for detecting complex multivariate relationships; (3) Procrustes-aligned Multidimensional Scaling (pMDS) for revealing neural computation stages; (4) Temporal Topological Data Analysis (tTDA) for uncovering developmental trajectories; and (5) Single-cell Topological Simplicial Analysis (scTSA) for characterizing cell population complexity. Through analyses of neural recordings, biological data, and neural network simulations, this thesis demonstrates the power and versatility of these methods in understanding brains, computational models, and complex biological systems. They not only offer robust approaches for adjudicating among competing models but also reveal novel theoretical insights into the nature of neural computation. This work lays the foundation for future investigations at the intersection of topology, neuroscience, and time series analysis, paving the way for a more nuanced understanding of brain function and dysfunction.
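One way to picture a nonlinear monotonic transform of a representational dissimilarity matrix (a hedged sketch; the thesis's actual geo-topological transforms may differ) is a piecewise-linear map that compresses dissimilarities below a lower threshold and saturates those above an upper one, preserving intermediate-scale geometry in between:

```python
import numpy as np

def geo_topological_transform(rdm, lower, upper):
    """Monotonic transform of a representational dissimilarity matrix:
    values <= lower map to 0, values >= upper saturate at 1, and the
    intermediate range is preserved up to an affine rescaling.
    (Illustrative only; not the exact transform used in the thesis.)"""
    rdm = np.asarray(rdm, dtype=float)
    return np.clip((rdm - lower) / (upper - lower), 0.0, 1.0)

# Tiny example RDM (hypothetical dissimilarities between 3 conditions).
rdm = np.array([[0.0, 0.1, 0.5],
                [0.1, 0.0, 0.9],
                [0.5, 0.9, 0.0]])
t = geo_topological_transform(rdm, lower=0.2, upper=0.8)
```

Because the map is monotonic, the rank order of dissimilarities is preserved, which is what lets model comparisons on the transformed matrices remain meaningful while small, noise-dominated dissimilarities are flattened away.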
Topological Representational Similarity Analysis in Brains and Beyond. Baihan Lin. arXiv:2408.11948 (arXiv - QuanBio - Neurons and Cognition, 2024-08-21).
A ubiquitous phenomenon observed throughout the primate hierarchical visual system is the sparsification of the neural representation of visual stimuli as a result of familiarization by repeated exposure, manifested as the sharpening of population tuning curves and the suppression of neural responses at the population level. In this work, we investigated the computational implications and circuit mechanisms underlying these neurophysiological observations in an early visual cortical circuit model. We found that such a recurrent neural circuit, shaped by BCM Hebbian learning, can also reproduce these phenomena. The resulting circuit became more robust against noise in encoding the familiar stimuli. Analysis of the geometry of the neural response manifold revealed that recurrent computation and familiarity learning transform the response manifold and the neural dynamics, resulting in enhanced robustness against noise and better stimulus discrimination. This prediction is supported by preliminary physiological evidence. Familiarity training increases the alignment of the slow modes of network dynamics with the invariant features of the learned images. These findings reveal how such rapid plasticity mechanisms can improve contextual visual processing even in the early visual areas of the hierarchical visual system.
Manifold Transform by Recurrent Cortical Circuit Enhances Robust Encoding of Familiar Stimuli. Weifan Wang, Xueyan Niu, Tai-Sing Lee. arXiv:2408.10873 (arXiv - QuanBio - Neurons and Cognition, 2024-08-20).
Zijian Dong, Yilei Wu, Zijiao Chen, Yichi Zhang, Yueming Jin, Juan Helen Zhou
We introduce Scaffold Prompt Tuning (ScaPT), a novel prompt-based framework for adapting large-scale functional magnetic resonance imaging (fMRI) pre-trained models to downstream tasks, with high parameter efficiency and improved performance compared to fine-tuning and prompt-tuning baselines. Full fine-tuning updates all pre-trained parameters, which may distort the learned feature space and lead to overfitting with limited training data, a common situation in fMRI research. In contrast, we design a hierarchical prompt structure that transfers the knowledge learned from high-resource tasks to low-resource ones. This structure, equipped with a Deeply-conditioned Input-Prompt (DIP) mapping module, allows for efficient adaptation by updating only 2% of the trainable parameters. The framework enhances semantic interpretability through attention mechanisms between inputs and prompts, and it clusters prompts in the latent space in alignment with prior knowledge. Experiments on public resting-state fMRI datasets reveal that ScaPT outperforms fine-tuning and multitask-based prompt tuning in neurodegenerative disease diagnosis/prognosis and personality trait prediction, even with fewer than 20 participants. These results highlight ScaPT's efficiency in adapting pre-trained fMRI models to low-resource tasks.
Prompt Your Brain: Scaffold Prompt Tuning for Efficient Adaptation of fMRI Pre-trained Model. arXiv:2408.10567 (arXiv - QuanBio - Neurons and Cognition, 2024-08-20).
To date, most dendritic studies have predominantly focused on the apical zone of pyramidal two-point neurons (TPNs), which receives only feedback (FB) connections from higher perceptual layers and uses them for learning. Recent cellular neurophysiology and computational neuroscience studies suggest that the apical input (context), coming from feedback and lateral connections, is multifaceted and far more diverse, with greater implications for ongoing learning and processing in the brain than previously realized. In addition to the FB, the apical tuft receives signals from neighboring cells of the same network as proximal (P) context, from other parts of the brain as distal (D) context, and overall coherent information across the network as universal (U) context. The integrated context (C) amplifies and suppresses the transmission of coherent and conflicting feedforward (FF) signals, respectively. Specifically, we show that complex context-sensitive (CS)-TPNs flexibly integrate C moment-by-moment with the FF somatic current such that the somatic current is amplified when FF and C are coherent and attenuated otherwise. A spiking event is generated only when the FF and C currents are coherent, and is then translated into a singlet or a burst based on the FB information. Spiking simulation results show that this flexible integration of somatic and contextual currents enables the propagation of more coherent signals (bursts), making learning faster with fewer neurons. Similar behavior is observed when this functioning is used in conventional artificial networks, where orders of magnitude fewer neurons are required to process vast amounts of heterogeneous real-world audio-visual (AV) data trained using backpropagation (BP). The computational findings presented here demonstrate the universality of CS-TPNs, suggesting a dendritic narrative that was previously overlooked.
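The coherence-gated integration can be caricatured as follows (an assumed transfer function for illustration; the paper's precise formulation may differ): amplify the somatic feedforward current when it agrees in sign with the integrated context, attenuate it when they conflict.

```python
def modulated_somatic_current(ff, c):
    """Gate the feedforward (FF) current by the integrated context C:
    amplified when FF and C agree in sign, attenuated when they conflict.
    (Illustrative assumption, not the paper's exact equation.)"""
    coherent = (ff >= 0) == (c >= 0)
    gain = 1.0 + abs(c) if coherent else 1.0 / (1.0 + abs(c))
    return ff * gain
```

Under this toy rule, a strong context of matching sign boosts the somatic drive (more likely to produce a burst), while a conflicting context damps it, which mirrors the amplify/suppress behavior described in the abstract.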
An Overlooked Role of Context-Sensitive Dendrites. Mohsin Raza, Ahsan Adeel. arXiv:2408.11019 (arXiv - QuanBio - Neurons and Cognition, 2024-08-20).
Random Number Generation Tasks (RNGTs) are used in psychology for examining how humans generate sequences devoid of predictable patterns. By adapting an existing human RNGT for an LLM-compatible environment, this preliminary study tests whether ChatGPT-3.5, a large language model (LLM) trained on human-generated text, exhibits human-like cognitive biases when generating random number sequences. Initial findings indicate that ChatGPT-3.5 more effectively avoids repetitive and sequential patterns compared to humans, with notably lower repeat frequencies and adjacent number frequencies. Continued research into different models, parameters, and prompting methodologies will deepen our understanding of how LLMs can more closely mimic human random generation behaviors, while also broadening their applications in cognitive and behavioral science research.
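The two bias metrics reported here, repeat frequency and adjacent number frequency, can be computed from a generated sequence as follows (a minimal sketch consistent with standard RNGT scoring; the function name and exact normalization are ours):

```python
def rngt_metrics(seq):
    """Compute two simple RNGT bias metrics over a number sequence:
    - repeat frequency: fraction of transitions where the same number
      appears twice in a row;
    - adjacent number frequency: fraction of transitions between numbers
      differing by exactly 1.
    (Illustrative scoring; the study's exact normalization may differ.)"""
    pairs = list(zip(seq, seq[1:]))
    n = len(pairs)
    repeat_freq = sum(a == b for a, b in pairs) / n
    adjacent_freq = sum(abs(a - b) == 1 for a, b in pairs) / n
    return repeat_freq, adjacent_freq

# Example: 6 transitions, one repeat (4,4), two adjacent steps (3->4, 1->2).
r, a = rngt_metrics([3, 4, 4, 9, 1, 2, 7])
```

A truly uniform random generator over 0-9 would show a repeat frequency near 0.1; humans typically score well below that, and the study reports ChatGPT-3.5 lower still.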
A Comparison of Large Language Model and Human Performance on Random Number Generation Tasks. Rachel M. Harrison. arXiv:2408.09656 (arXiv - QuanBio - Neurons and Cognition, 2024-08-19).
An Introduction to Cognidynamics
Marco Gori. arXiv:2408.13112, arXiv - QuanBio - Neurons and Cognition, 2024-08-18.

This paper gives an introduction to Cognidynamics, that is, to the dynamics of cognitive systems driven by optimal objectives imposed over time as they interact with either a virtual or a real-world environment. The proposed theory is developed in the general framework of dynamic programming, which leads to computational laws dictated by classic Hamiltonian equations. Those equations yield a neural propagation scheme, in cognitive agents modeled by dynamic neural networks, that exhibits locality in both space and time, thus contributing to the longstanding debate on the biological plausibility of learning algorithms such as Backpropagation. We interpret the learning process in terms of energy exchange with the environment and show the crucial role of energy dissipation and its links to focus-of-attention mechanisms and conscious behavior.
EEG Right & Left Voluntary Hand Movement-based Virtual Brain-Computer Interfacing Keyboard with Machine Learning and a Hybrid Bi-Directional LSTM-GRU Model
Biplov Paneru, Bishwash Paneru, Sanjog Chhetri Sapkota. arXiv:2409.00035, arXiv - QuanBio - Neurons and Cognition, 2024-08-18.

This study focuses on EEG-based brain-machine interfaces (BMIs) for detecting voluntary keystrokes, aiming to develop a reliable brain-computer interface (BCI) that simulates and anticipates keystrokes, especially for individuals with motor impairments. The methodology includes extensive segmentation, event alignment, ERP plot analysis, and signal analysis. Models are trained to classify EEG data into three categories: 'resting state' (0), 'd' key press (1), and 'l' key press (2). Real-time keypress simulation based on neural activity is enabled through integration with a tkinter-based graphical user interface. Feature engineering used ERP windows, and an SVC model achieved 90.42% accuracy in event classification. Additional models were developed for BCI keyboard simulation: an MLP (89% accuracy), CatBoost (87.39%), KNN (72.59%), Gaussian Naive Bayes (79.21%), logistic regression (90.81%), and a novel bi-directional LSTM-GRU hybrid model (89%). Finally, a GUI was created to predict and simulate keystrokes using the trained MLP model.
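The ERP-window feature engineering described above amounts to cutting fixed-length epochs around event markers. A minimal sketch, where the (sample_index, label) event format, the function and parameter names, and the window bounds `pre`/`post` are all illustrative assumptions rather than the paper's actual pipeline:

```python
from typing import List, Tuple

def segment_epochs(signal: List[float], events: List[Tuple[int, int]],
                   pre: int, post: int):
    """Cut fixed-length ERP windows around event markers.

    `events` holds (sample_index, label) pairs, with labels as in the
    paper's scheme (0 = rest, 1 = 'd' press, 2 = 'l' press). Epochs that
    would run past either end of the recording are skipped.
    """
    X, y = [], []
    for idx, label in events:
        if idx - pre >= 0 and idx + post <= len(signal):
            X.append(signal[idx - pre: idx + post])
            y.append(label)
    return X, y
```

Each row of X is then a fixed-length feature vector of pre + post samples that can be fed to a classifier such as the SVC or MLP reported above.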