Dinor Nagar (School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel), Moritz Zaiss (Institute of Neuroradiology and Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany), Or Perlman (Department of Biomedical Engineering and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel)
Magnetic resonance imaging (MRI) relies on radiofrequency (RF) excitation of proton spins. Clinical diagnosis requires a comprehensive collation of biophysical data via multiple MRI contrasts, acquired using a series of RF sequences that lead to lengthy examinations. Here, we developed a vision transformer-based framework that captures the spatiotemporal magnetic signal evolution and decodes the brain tissue response to RF excitation, constituting an MRI on a chip. Following a rapid per-subject calibration scan (28.2 s), a wide variety of image contrasts, including fully quantitative molecular, water relaxation, and magnetic field maps, can be generated automatically. The method was validated across healthy subjects and a cancer patient at two different imaging sites, and proved to be 94% faster than alternative protocols. The deep MRI on a chip (DeepMonC) framework may reveal the molecular composition of human brain tissue in a wide range of pathologies, while offering clinically attractive scan times.
"Decoding the human brain tissue response to radiofrequency excitation using a biophysical-model-free deep MRI on a chip framework". arXiv-2408.08376, https://doi.org/arxiv-2408.08376. Published 2024-08-15 in arXiv - QuanBio - Neurons and Cognition.
Felipe Yáñez, Xiaoliang Luo, Omar Valerio Minero, Bradley C. Love
Large language models (LLMs) have emerged as powerful tools in various domains. Recent studies have shown that LLMs can surpass humans in certain tasks, such as predicting the outcomes of neuroscience studies. What role does this leave for humans in the overall decision process? One possibility is that humans, despite performing worse than LLMs, can still add value when teamed with them. A human and machine team can surpass each individual teammate when team members' confidence is well-calibrated and team members diverge in which tasks they find difficult (i.e., calibration and diversity are needed). We simplified and extended a Bayesian approach to combining judgments using a logistic regression framework that integrates confidence-weighted judgments for any number of team members. Using this straightforward method, we demonstrated in a neuroscience forecasting task that, even when humans were inferior to LLMs, their combination with one or more LLMs consistently improved team performance. Our hope is that this simple and effective strategy for integrating the judgments of humans and machines will lead to productive collaborations.
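One way to realise such a confidence-weighted combination is to treat each team member's reported probability as a log-odds feature and fit a logistic regression over those features. The numpy sketch below illustrates the idea on simulated forecasts; the judge accuracies, confidence distribution, and variable names are illustrative assumptions rather than the paper's setup, and it evaluates in-sample for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def logit(p):
    return np.log(p / (1 - p))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Simulate 500 binary forecasting questions and two judges: a weaker
# "human" (65% accurate) and a stronger "LLM" (80% accurate), each
# reporting a probability that the true label is 1.
n = 500
y = rng.integers(0, 2, n)

def judge(acc):
    conf = np.clip(rng.normal(0.75, 0.10, n), 0.55, 0.95)  # confidence per trial
    pred = np.where(rng.random(n) < acc, y, 1 - y)          # predicted label
    return np.where(pred == 1, conf, 1 - conf)              # reported P(label = 1)

p_human, p_llm = judge(0.65), judge(0.80)

# Confidence-weighted integration: stack each member's log-odds as a
# feature and fit a logistic regression over them by gradient descent.
X = np.column_stack([logit(p_human), logit(p_llm)])
w, b = np.zeros(2), 0.0
for _ in range(3000):
    g = sigmoid(X @ w + b) - y
    w -= 0.5 * (X.T @ g) / n
    b -= 0.5 * g.mean()

human_acc = ((p_human > 0.5) == (y == 1)).mean()
llm_acc = ((p_llm > 0.5) == (y == 1)).mean()
team_acc = ((sigmoid(X @ w + b) > 0.5) == (y == 1)).mean()
```

The same regression extends to any number of team members by adding one log-odds column per judge, which is the generalisation the abstract describes.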
"Confidence-weighted integration of human and machine judgments for superior decision-making". arXiv-2408.08083, https://doi.org/arxiv-2408.08083. Published 2024-08-15 in arXiv - QuanBio - Neurons and Cognition.
Seeking high-quality neural latent representations that reveal the intrinsic correlation between neural activity and behavior or sensory stimulation has attracted much interest. Currently, some deep latent variable models rely on behavioral information (e.g., movement direction and position) as an aid to build expressive embeddings, while being restricted to fixed time scales. Visual neural activity from passive viewing lacks clearly correlated behavior or task information, and high-dimensional visual stimulation leads to intricate neural dynamics. To cope with these conditions, we propose Time-Dependent SwapVAE (TiDe-SwapVAE), which follows the separation of content and style spaces in Swap-VAE and introduces state variables to construct conditional distributions with temporal dependence for the two spaces. Our model progressively generates latent variables along neural activity sequences, and we apply self-supervised contrastive learning to shape its latent space. In this way, it can effectively analyze complex neural dynamics from sequences of arbitrary length, even without task or behavioral data as auxiliary inputs. We compare TiDe-SwapVAE with alternative models on synthetic data and neural data from mouse visual cortex. The results show that our model not only accurately decodes complex visual stimuli but also extracts explicit temporal neural dynamics, demonstrating that it builds latent representations more relevant to visual stimulation.
"Time-Dependent VAE for Building Latent Factor from Visual Neural Activity with Complex Dynamics". Liwei Huang, ZhengYu Ma, Liutao Yu, Huihui Zhou, Yonghong Tian. arXiv-2408.07908, https://doi.org/arxiv-2408.07908. Published 2024-08-15 in arXiv - QuanBio - Neurons and Cognition.
Kimberly Nestor, Javier Rasero, Richard Betzel, Peter J. Gianaros, Timothy Verstynen
Mammalian functional architecture flexibly adapts, transitioning from integration, where information is distributed across the cortex, to segregation, where information is focal in densely connected communities of brain regions. This flexibility in cortical brain networks is hypothesized to be driven by control signals originating from subcortical pathways, with the basal ganglia shifting the cortex towards integrated processing states and the cerebellum towards segregated states. In a sample of healthy human participants (N=242), we used fMRI to measure temporal variation in global brain networks while participants performed two tasks with similar cognitive demands: the Stroop task and the Multi-Source Interference Task (MSIT). Using the modularity index, we determined that cortical networks shifted from integration (low modularity) at rest to segregation (high modularity) during the easier, congruent task conditions. Increased task difficulty (incongruent conditions) lowered modularity relative to the easier counterpart, indicating greater integration of the cortical network. The influence of the basal ganglia and cerebellum was measured using eigenvector centrality; these influences correlated with decreases and increases in cortical modularity, respectively, with only the basal ganglia influence preceding cortical integration. Our results support the theory that the basal ganglia shift cortical networks to integrated states in response to environmental demand. Cerebellar influence correlates with shifts to segregated cortical states, though it may not play a causal role.
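The modularity index used above can be computed directly from Newman's definition, Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j). A minimal numpy sketch on a toy two-community graph (an illustration of the metric, not the study's analysis pipeline):

```python
import numpy as np

def modularity(A, communities):
    """Newman modularity Q for an undirected graph given as a dense
    adjacency matrix A and one community label per node."""
    k = A.sum(axis=1)           # node degrees
    two_m = A.sum()             # 2m: each undirected edge counted twice
    c = np.asarray(communities)
    same = c[:, None] == c[None, :]          # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles joined by a single bridge edge: a clearly segregated network.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1

Q_seg = modularity(A, [0, 0, 0, 1, 1, 1])  # correct partition -> Q = 5/14
Q_int = modularity(A, [0, 1, 0, 1, 0, 1])  # scrambled partition -> Q < 0
```

High Q corresponds to the segregated (task) states in the abstract, low Q to the integrated (rest or high-difficulty) states.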
"Cortical network reconfiguration aligns with shifts of basal ganglia and cerebellar influence". arXiv-2408.07977, https://doi.org/arxiv-2408.07977. Published 2024-08-15 in arXiv - QuanBio - Neurons and Cognition.
The extremely limited working memory span, typically around four items, contrasts sharply with our everyday experience of processing much larger streams of sensory information concurrently. This disparity suggests that working memory can organize information into compact representations such as chunks, yet the underlying neural mechanisms remain largely unknown. Here, we propose a recurrent neural network model for chunking within the framework of the synaptic theory of working memory. We showed that by selectively suppressing groups of stimuli, the network can maintain and retrieve the stimuli in chunks, hence exceeding the basic capacity. Moreover, we show that our model can dynamically construct hierarchical representations within working memory through hierarchical chunking. A consequence of this proposed mechanism is a new limit on the number of items that can be stored and subsequently retrieved from working memory, depending only on the basic working memory capacity when chunking is not invoked. Predictions from our model were confirmed by analyzing single-unit responses in epileptic patients and memory experiments with verbal material. Our work provides a novel conceptual and analytical framework for understanding the on-the-fly organization of information in the brain that is crucial for cognition.
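The capacity arithmetic behind chunking can be illustrated with a toy capacity-limited store: with basic span C, a flat list keeps only the last C items, whereas grouping into at most C chunks of at most C items lets up to C² items survive retrieval. The pure-Python sketch below is a deliberate simplification for illustration, not the paper's recurrent network model.

```python
C = 4  # basic working-memory capacity (the classic "magic number")

def retrieve(buffer, capacity=C):
    """A toy capacity-limited store: only the `capacity` most recent
    items survive; earlier ones are lost."""
    return buffer[-capacity:]

items = list(range(12))  # 12 items to remember, three times the basic span

# Flat rehearsal: only the last C items can be retrieved.
flat = retrieve(items)

# Chunked rehearsal: group items into chunks of size <= C, keep the chunks
# themselves in the capacity-limited store, then unpack at retrieval.
# With at most C chunks of C items, up to C * C = 16 items are recoverable.
chunks = [items[i:i + C] for i in range(0, len(items), C)]  # 3 chunks of 4
stored = retrieve(chunks)
unpacked = [x for chunk in stored for x in chunk]
```

Nesting the same trick (chunks of chunks) gives the hierarchical representations the abstract describes, with the retrievable total still governed only by the basic capacity C.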
"Hierarchical Working Memory and a New Magic Number". Weishun Zhong, Mikhail Katkov, Misha Tsodyks. arXiv-2408.07637, https://doi.org/arxiv-2408.07637. Published 2024-08-14 in arXiv - QuanBio - Neurons and Cognition.
The action potential is widely considered a purely electrical phenomenon. However, one also finds mechanical and thermal changes that can be observed experimentally. In particular, nerve membranes become thicker and axons contract. The spatial length of the action potential can be quite large, ranging from millimeters to many centimeters. This suggests employing macroscopic thermodynamic methods to understand its properties. The pulse length is several orders of magnitude larger than the synaptic gap, larger than the distance between the nodes of Ranvier, and even larger than the size of many neurons such as pyramidal cells or brain stem motor neurons. Here, we review the mechanical changes in nerves, theoretical possibilities to explain them, and the implications of a mechanical nerve pulse for the neuron and for the brain. In particular, the contraction of nerves gives rise to the possibility of fast mechanical synapses.
"The mechanical properties of nerves, the size of the action potential, and consequences for the brain". T. Heimburg. arXiv-2408.07615, https://doi.org/arxiv-2408.07615. Published 2024-08-14 in arXiv - QuanBio - Neurons and Cognition.
Murat Kucukosmanoglu, Javier O. Garcia, Justin Brooks, Kanika Bansal
Deep neural network (DNN) models have demonstrated impressive performance in various domains, yet their application in cognitive neuroscience is limited due to their lack of interpretability. In this study we employ two structurally different and complementary DNN-based models, a one-dimensional convolutional neural network (1D-CNN) and a bidirectional long short-term memory network (BiLSTM), to classify individual cognitive states from fMRI BOLD data, with a focus on understanding the cognitive underpinnings of the classification decisions. We show that despite the architectural differences, both models consistently produce a robust relationship between prediction accuracy and individual cognitive performance, such that low performance leads to poor prediction accuracy. To achieve model explainability, we used permutation techniques to calculate feature importance, allowing us to identify the most critical brain regions influencing model predictions. Across models, we found the dominance of visual networks, suggesting that task-driven state differences are primarily encoded in visual processing. Attention and control networks also showed relatively high importance, whereas default-mode and temporal-parietal networks contributed negligibly to differentiating cognitive states. Additionally, we observed individual trait-based effects and subtle model-specific differences, such that 1D-CNN showed slightly better overall performance, while BiLSTM showed better sensitivity for individual behavior; these initial findings require further research and robustness testing to be fully established. Our work underscores the importance of explainable DNN models in uncovering the neural mechanisms underlying cognitive state transitions, providing a foundation for future work in this domain.
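Permutation feature importance itself is straightforward to sketch: shuffle one input feature and measure the resulting drop in accuracy. The numpy example below does this with a tiny logistic-regression classifier on synthetic data; the dataset and classifier are illustrative stand-ins for the study's parcel-level BOLD features and 1D-CNN/BiLSTM models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "regions": feature 0 carries the state label, features 1-3
# are noise (a stand-in for region-averaged BOLD features).
n, d = 400, 4
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

# Tiny logistic-regression classifier trained by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

def accuracy(Xe):
    return (((Xe @ w + b) > 0).astype(int) == y).mean()

base = accuracy(X)

# Permutation importance: accuracy drop when one feature is shuffled,
# breaking its association with the label while keeping its marginals.
importance = []
for j in range(d):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(base - accuracy(Xp))
```

The informative feature shows a large accuracy drop while the noise features hover near zero, which is the logic used above to rank brain networks by their contribution to state classification.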
"Cognitive Networks and Performance Drive fMRI-Based State Classification Using DNN Models". arXiv-2409.00003, https://doi.org/arxiv-2409.00003. Published 2024-08-14 in arXiv - QuanBio - Neurons and Cognition.
Giulio Basso, Reinhold Scherer, Michael Taynnan Barros
With conventional silicon-based computing approaching its physical and efficiency limits, biocomputing emerges as a promising alternative. This approach utilises biomaterials such as DNA and neurons as an alternative medium for data processing and storage. This study explores the potential of neuronal biocomputing to rival silicon-based systems. We explore neuronal logic gates and sequential circuits that mimic conventional computer architectures. Through mathematical modelling, optimisation, and computer simulation, we demonstrate the operational capabilities of neuronal sequential circuits. These circuits include a neuronal NAND gate, an SR latch flip-flop, and D flip-flop memory units. Our approach involves manipulating neuron communication, synaptic conductance, spike buffers, neuron types, and specific neuronal network topology designs. The experiments demonstrate the practicality of encoding binary information using patterns of neuronal activity and of overcoming synchronization difficulties with neuronal buffers and inhibition strategies. Our results confirm the effectiveness and scalability of neuronal logic circuits, showing that they maintain a stable metabolic burden even in complex data storage configurations. Our study not only demonstrates the concept of embodied biocomputing by manipulating neuronal properties for digital signal processing but also establishes the foundation for cutting-edge biocomputing technologies. Our designs open up possibilities for using neurons as energy-efficient computing solutions. These solutions have the potential to become an alternative to silicon-based systems by providing a carbon-neutral, biologically feasible option.
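As a rough illustration of inhibition-based gating (a toy model, not the authors' circuit design), a leaky integrate-and-fire unit with tonic excitatory drive fires unless both input lines inhibit it together, which yields NAND behaviour; the time constants and currents below are arbitrary choices that merely make the arithmetic work out.

```python
def lif_step(v, i_in, tau=10.0, v_th=1.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire unit.
    Returns (new_membrane_potential, spiked)."""
    v = v + dt * (-v / tau + i_in)
    if v >= v_th:
        return 0.0, True   # fire and reset
    return v, False

def nand(a, b, steps=50):
    """NAND from inhibition: the output unit receives constant drive
    (steady state 2.0, above threshold) and each active input line
    subtracts 0.06. One active input leaves the steady state at 1.4
    (still fires); two active inputs drop it to 0.8 (silent)."""
    drive = 0.2
    inhibition = 0.06 * (a + b)
    v, fired = 0.0, False
    for _ in range(steps):
        v, spiked = lif_step(v, drive - inhibition)
        fired = fired or spiked
    return int(fired)

truth_table = {(a, b): nand(a, b) for a in (0, 1) for b in (0, 1)}
```

Since NAND is functionally complete, a population of such units could in principle be composed into the latches and flip-flops the abstract describes, though real neuronal circuits face the synchronization issues the authors address with buffers.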
"Embodied Biocomputing Sequential Circuits with Data Processing and Storage for Neurons-on-a-chip". arXiv-2408.07628, https://doi.org/arxiv-2408.07628. Published 2024-08-14 in arXiv - QuanBio - Neurons and Cognition.
Major depressive disorder (MDD) is a debilitating health condition affecting a substantial part of the world's population. At present, there is no biological theory of MDD, and treatment is partial at best. Here I present a theory of MDD that explains its etiology, symptoms, pathophysiology, and treatment. MDD involves stressful life events that the person does not manage to resolve. In this situation animals normally execute a 'disengage' survival response. In MDD, this response is chronically executed, leading to depressed mood and the somatic MDD symptoms. To explain the biological mechanisms involved, I present a novel theory of opioids, where each opioid mediates one of the basic survival responses. The opioid mediating 'disengage' is dynorphin. The paper presents strong evidence for chronic dynorphin signaling in MDD and for its causal role in the disorder. The theory also explains bipolar disorder, and the mechanisms behind the treatment of both disorders.
{"title":"A Dynorphin Theory of Depression and Bipolar Disorder","authors":"Ari Rappoport","doi":"arxiv-2408.06763","DOIUrl":"https://doi.org/arxiv-2408.06763","url":null,"abstract":"Major depressive disorder (MDD) is a debilitating health condition affecting a substantial part of the world's population. At present, there is no biological theory of MDD, and treatment is partial at best. Here I present a theory of MDD that explains its etiology, symptoms, pathophysiology, and treatment. MDD involves stressful life events that the person does not manage to resolve. In this situation animals normally execute a 'disengage' survival response. In MDD, this response is chronically executed, leading to depressed mood and the somatic MDD symptoms. To explain the biological mechanisms involved, I present a novel theory of opioids, where each opioid mediates one of the basic survival responses. The opioid mediating 'disengage' is dynorphin. The paper presents strong evidence for chronic dynorphin signaling in MDD and for its causal role in the disorder. The theory also explains bipolar disorder, and the mechanisms behind the treatment of both disorders.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"177 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a complete theory of autism spectrum disorder (ASD), explaining its etiology, symptoms, and pathology. The core cause of ASD is excessive stress-induced postnatal release of corticotropin-releasing hormone (CRH). CRH competes with urocortins for binding to the CRH2 receptor, impairing their essential function in the utilization of glucose for growth. This results in impaired development of all brain areas depending on CRH2, including areas that are central in social development and eye gaze learning, and low-level sensory areas. Excessive CRH also induces excessive release of adrenal androgens (mainly DHEA), which impairs the long-term plasticity function of gonadal steroids. I show that these two effects can explain all of the known symptoms and properties of ASD. The theory is supported by strong diverse evidence, and points to very early detection biomarkers and preventive pharmaceutical treatments, one of which seems to be very promising.
{"title":"A CRH Theory of Autism Spectrum Disorder","authors":"Ari Rappoport","doi":"arxiv-2408.06750","DOIUrl":"https://doi.org/arxiv-2408.06750","url":null,"abstract":"This paper presents a complete theory of autism spectrum disorder (ASD), explaining its etiology, symptoms, and pathology. The core cause of ASD is excessive stress-induced postnatal release of corticotropin-releasing hormone (CRH). CRH competes with urocortins for binding to the CRH2 receptor, impairing their essential function in the utilization of glucose for growth. This results in impaired development of all brain areas depending on CRH2, including areas that are central in social development and eye gaze learning, and low-level sensory areas. Excessive CRH also induces excessive release of adrenal androgens (mainly DHEA), which impairs the long-term plasticity function of gonadal steroids. I show that these two effects can explain all of the known symptoms and properties of ASD. The theory is supported by strong diverse evidence, and points to very early detection biomarkers and preventive pharmaceutical treatments, one of which seems to be very promising.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}