
Latest Publications in Neural Computation

The Leaky Integrate-and-Fire Neuron Is a Change-Point Detector for Compound Poisson Processes
IF 2.7 · CAS Zone 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-04-17 · DOI: 10.1162/neco_a_01750
Shivaram Mani;Paul Hurley;André van Schaik;Travis Monk
Animal nervous systems can detect changes in their environments within hundredths of a second. They do so by discerning abrupt shifts in sensory neural activity. Many neuroscience studies have employed change-point detection (CPD) algorithms to estimate such abrupt shifts in neural activity. But very few studies have suggested that spiking neurons themselves are online change-point detectors. We show that a leaky integrate-and-fire (LIF) neuron implements an online CPD algorithm for a compound Poisson process. We quantify the CPD performance of an LIF neuron under various regions of its parameter space. We show that CPD can be a recursive algorithm where the output of one algorithm can be input to another. Then we show that a simple feedforward network of LIF neurons can quickly and reliably detect very small changes in input spiking rates. For example, our network detects a 5% change in input rates within 20 ms on average, and false-positive detections are extremely rare. In a rigorous statistical context, we interpret the salient features of the LIF neuron: its membrane potential, synaptic weight, time constant, resting potential, action potentials, and threshold. Our results potentially generalize beyond the LIF neuron model and its associated CPD problem. If spiking neurons perform change-point detection on their inputs, then the electrophysiological properties of their membranes must be related to the spiking statistics of their inputs. We demonstrate one example of this relationship for the LIF neuron and compound Poisson processes and suggest how to test this hypothesis more broadly. Maybe neurons are not noisy devices whose action potentials must be averaged over time or populations. Instead, neurons might implement sophisticated, optimal, and online statistical algorithms on their inputs.
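A minimal sketch of the idea, assuming a discrete-time LIF neuron and an aggregate Poisson input whose rate doubles at an unknown time; all parameters (rates, tau, weight, threshold) are illustrative choices, not the paper's. The neuron's first output spike acts as the change-point report:

```python
import numpy as np

rng = np.random.default_rng(0)

# Aggregate Poisson input whose rate doubles at an unknown change point.
dt, T, t_change = 1e-3, 2.0, 1.0                 # seconds
t = np.arange(0.0, T, dt)
rate = np.where(t < t_change, 200.0, 400.0)      # Hz
n_spikes = rng.poisson(rate * dt)                # input spikes per time bin

# Discrete-time LIF neuron; its first output spike reports the change.
tau, v_thresh, w = 0.05, 1.7, 0.1                # steady state ~ w * rate * tau
v, detection = 0.0, None
for i, s in enumerate(n_spikes):
    v += -v * dt / tau + w * s                   # leak toward 0 + weighted input
    if v >= v_thresh:                            # threshold crossing = report
        detection = t[i]
        break

print(f"true change at {t_change:.3f} s, reported at {detection} s")
```

With these assumed numbers, the subthreshold steady state sits below threshold before the change and above it afterward, so the first threshold crossing typically lands shortly after t_change.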
Citations: 0
Knowledge as a Breaking of Ergodicity
IF 2.7 · CAS Zone 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-03-18 · DOI: 10.1162/neco_a_01741
Yang He;Vassiliy Lubchenko
We construct a thermodynamic potential that can guide training of a generative model defined on a set of binary degrees of freedom. We argue that upon reduction in description, so as to make the generative model computationally manageable, the potential develops multiple minima. This is mirrored by the emergence of multiple minima in the free energy proper of the generative model itself. The variety of training samples that employ N binary degrees of freedom is ordinarily much lower than the size 2^N of the full phase space. The nonrepresented configurations, we argue, should be thought of as comprising a high-temperature phase separated by an extensive energy gap from the configurations composing the training set. Thus, training amounts to sampling a free energy surface in the form of a library of distinct bound states, each of which breaks ergodicity. The ergodicity breaking prevents escape into the near continuum of states comprising the high-temperature phase; thus, it is necessary for proper functionality. It may, however, have the side effect of limiting access to patterns that were underrepresented in the training set. At the same time, the ergodicity breaking within the library complicates both learning and retrieval. As a remedy, one may concurrently employ multiple generative models—up to one model per free energy minimum.
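The paper's construction is thermodynamic, but the core notion of broken ergodicity can be illustrated with a toy sampler: at low temperature, a Metropolis chain on a double-well energy never crosses the barrier, so its time average depends on the basin it starts in. Everything here (the energy function, beta, step size) is an illustrative assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(x):
    return (x**2 - 1.0)**2 / 0.05        # double well, minima at x = -1 and +1

def metropolis(x0, beta, n_steps=20000, step=0.1):
    x, samples = x0, []
    for _ in range(n_steps):
        prop = x + step * rng.normal()
        d_e = energy(prop) - energy(x)
        if d_e <= 0 or rng.random() < np.exp(-beta * d_e):
            x = prop
        samples.append(x)
    return np.array(samples)

# At low temperature the chain never crosses the barrier, so the time
# average depends on the starting basin: ergodicity is broken.
for x0 in (-1.0, +1.0):
    avg = metropolis(x0, beta=5.0).mean()
    print(f"start at {x0:+.0f}: time-averaged x = {avg:+.2f}")
```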
Citations: 0
Active Inference and Intentional Behavior
IF 2.7 · CAS Zone 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-03-18 · DOI: 10.1162/neco_a_01738
Karl J. Friston;Tommaso Salvatori;Takuya Isomura;Alexander Tschantz;Alex Kiefer;Tim Verbelen;Magnus Koudahl;Aswin Paul;Thomas Parr;Adeel Razi;Brett J. Kagan;Christopher L. Buckley;Maxwell J. D. Ramstead
Recent advances in theoretical biology suggest that key definitions of basal cognition and sentient behavior may arise as emergent properties of in vitro cell cultures and neuronal networks. Such neuronal networks reorganize activity to demonstrate structured behaviors when embodied in structured information landscapes. In this article, we characterize this kind of self-organization through the lens of the free energy principle, that is, as self-evidencing. We do this by first discussing the definitions of reactive and sentient behavior in the setting of active inference, which describes the behavior of agents that model the consequences of their actions. We then introduce a formal account of intentional behavior that describes agents as driven by a preferred end point or goal in latent state-spaces. We then investigate these forms of (reactive, sentient, and intentional) behavior using simulations. First, we simulate the in vitro experiments, in which neuronal cultures modulated activity to improve gameplay in a simplified version of Pong by implementing nested, free energy minimizing processes. The simulations are then used to deconstruct the ensuing predictive behavior, leading to the distinction between merely reactive, sentient, and intentional behavior with the latter formalized in terms of inductive inference. This distinction is further studied using simple machine learning benchmarks (navigation in a grid world and the Tower of Hanoi problem) that show how quickly and efficiently adaptive behavior emerges under an inductive form of active inference.
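As a hedged caricature of the action-selection principle described here (not the paper's Pong or Tower of Hanoi simulations), the sketch below scores two discrete actions by a one-step expected free energy that combines risk, the KL divergence between predicted and preferred outcomes, with an ambiguity term simplified to predicted-outcome entropy; the model and preference distributions are invented for illustration:

```python
import numpy as np

# Toy one-step agent: two actions, three outcomes; distributions are invented.
p_o_given_a = np.array([[0.7, 0.2, 0.1],    # predicted outcomes, action 0
                        [0.1, 0.3, 0.6]])   # predicted outcomes, action 1
preferred = np.array([0.05, 0.05, 0.9])     # the agent's preferred end point

def expected_free_energy(p_o, c):
    risk = np.sum(p_o * (np.log(p_o) - np.log(c)))   # KL(predicted || preferred)
    ambiguity = -np.sum(p_o * np.log(p_o))           # here: outcome entropy
    return risk + ambiguity

G = [expected_free_energy(p, preferred) for p in p_o_given_a]
print("expected free energy per action:", np.round(G, 3))
print("selected action:", int(np.argmin(G)))         # goal-directed choice
```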
Citations: 0
Learning in Wilson-Cowan Model for Metapopulation
IF 2.7 · CAS Zone 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-03-18 · DOI: 10.1162/neco_a_01744
Raffaele Marino;Lorenzo Buffoni;Lorenzo Chicchi;Francesca Di Patti;Diego Febbe;Lorenzo Giambagli;Duccio Fanelli
The Wilson-Cowan model for metapopulation, a neural mass network model, treats different subcortical regions of the brain as connected nodes, with connections representing various types of structural, functional, or effective neuronal connectivity between these regions. Each region comprises interacting populations of excitatory and inhibitory cells, consistent with the standard Wilson-Cowan model. In this article, we show how to incorporate stable attractors into such a metapopulation model’s dynamics. By doing so, we transform the neural mass network model into a biologically inspired learning algorithm capable of solving different classification tasks. We test it on MNIST and Fashion MNIST in combination with convolutional neural networks, as well as on CIFAR-10 and TF-FLOWERS, and in combination with a transformer architecture (BERT) on IMDB, consistently achieving high classification accuracy.
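A minimal two-node sketch of the kind of model described, with one excitatory and one inhibitory population per node and excitatory inter-node coupling, integrated with Euler steps; the weights, time constants, and coupling matrix are assumed values, and the paper's attractor-based learning machinery is omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two Wilson-Cowan nodes, each with an excitatory (E) and inhibitory (I)
# population, coupled node-to-node through excitatory activity.
n, dt, steps = 2, 0.01, 5000
tau_e, tau_i = 1.0, 2.0
w_ee, w_ei, w_ie, w_ii = 10.0, 8.0, 10.0, 3.0
coupling = np.array([[0.0, 1.5],        # node 0 <- node 1
                     [1.5, 0.0]])       # node 1 <- node 0
E, I = np.full(n, 0.1), np.full(n, 0.05)

for _ in range(steps):
    net = coupling @ E                  # inter-node excitatory drive
    dE = (-E + sigmoid(w_ee * E - w_ei * I + net)) / tau_e
    dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau_i
    E, I = E + dt * dE, I + dt * dI

print("steady-state E:", np.round(E, 3), " I:", np.round(I, 3))
```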
Citations: 0
Nearly Optimal Learning Using Sparse Deep ReLU Networks in Regularized Empirical Risk Minimization With Lipschitz Loss
IF 2.7 · CAS Zone 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-03-18 · DOI: 10.1162/neco_a_01742
Ke Huang;Mingming Liu;Shujie Ma
We propose a sparse deep ReLU network (SDRN) estimator of the regression function obtained from regularized empirical risk minimization with a Lipschitz loss function. Our framework can be applied to a variety of regression and classification problems. We establish novel nonasymptotic excess risk bounds for our SDRN estimator when the regression function belongs to a Sobolev space with mixed derivatives. We obtain a new, nearly optimal, risk rate in the sense that the SDRN estimator can achieve nearly the same optimal minimax convergence rate as one-dimensional nonparametric regression with the dimension involved in a logarithm term only when the feature dimension is fixed. The estimator has a slightly slower rate when the dimension grows with the sample size. We show that the depth of the SDRN estimator grows with the sample size in logarithmic order, and the total number of nodes and weights grows in polynomial order of the sample size to have the nearly optimal risk rate. The proposed SDRN can go deeper with fewer parameters to well estimate the regression and overcome the overfitting problem encountered by conventional feedforward neural networks.
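The estimator itself is theoretical, but a minimal instance of the underlying objective, regularized empirical risk minimization with a Lipschitz (absolute-error) loss plus an L1 sparsity penalty on a small ReLU network, can be sketched with manual subgradient descent. The architecture, target function, and hyperparameters below are assumptions and make no claim to the paper's SDRN construction or its risk rates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Regression data: y = f(x) + noise for a smooth two-dimensional target.
X = rng.uniform(-1, 1, size=(256, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1]) + 0.1 * rng.normal(size=256)

h, lam, lr = 32, 1e-3, 0.05                      # width, L1 weight, step size
W1, b1 = 0.5 * rng.normal(size=(2, h)), np.zeros(h)
W2, b2 = 0.5 * rng.normal(size=(h, 1)), np.zeros(1)

for _ in range(2000):
    Z = X @ W1 + b1
    H = np.maximum(Z, 0.0)                       # ReLU features
    pred = (H @ W2 + b2).ravel()
    # Subgradients of mean absolute error (Lipschitz loss) + L1 penalty.
    g = np.sign(pred - y)[:, None] / len(y)
    dW2 = H.T @ g + lam * np.sign(W2)
    db2 = g.sum(0)
    dZ = (g @ W2.T) * (Z > 0)
    dW1 = X.T @ dZ + lam * np.sign(W1)
    db1 = dZ.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("final MAE:", round(float(np.abs(pred - y).mean()), 3),
      "| near-zero weights in W1:", round(float((np.abs(W1) < 1e-3).mean()), 2))
```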
Citations: 0
Context-Sensitive Processing in a Model Neocortical Pyramidal Cell With Two Sites of Input Integration
IF 2.7 · CAS Zone 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-03-18 · DOI: 10.1162/neco_a_01739
Bruce P. Graham;Jim W. Kay;William A. Phillips
Neocortical layer 5 thick-tufted pyramidal cells are prone to exhibiting burst firing on receipt of coincident basal and apical dendritic inputs. These inputs carry different information, with basal inputs coming from feedforward sensory pathways and apical inputs coming from diverse sources that provide context in the cortical hierarchy. We explore the information processing possibilities of this burst firing using computer simulations of a noisy compartmental cell model. Simulated data on stochastic burst firing due to brief, simultaneously injected basal and apical currents allow estimation of burst firing probability for different stimulus current amplitudes. Information-theory-based partial information decomposition (PID) is used to quantify the contributions of the apical and basal input streams to the information in the cell output bursting probability. Four different operating regimes are apparent, depending on the relative strengths of the input streams, with output burst probability carrying more or less information that is uniquely contributed by either the basal or apical input, or shared and synergistic information due to the combined streams. We derive and fit transfer functions for these different regimes that describe burst probability over the different ranges of basal and apical input amplitudes. The operating regimes can be classified into distinct modes of information processing, depending on the contribution of apical input to output bursting: apical cooperation, in which both basal and apical inputs are required to generate a burst; apical amplification, in which basal input alone can generate a burst but the burst probability is modulated by apical input; apical drive, in which apical input alone can produce a burst; and apical integration, in which strong apical or basal inputs alone, as well as their combination, can generate bursting. In particular, PID and the transfer function clarify that the apical amplification mode has the features required for contextually modulated information processing.
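A heavily simplified Monte Carlo caricature of the two-site picture, assuming noisy basal drive gates whether the cell spikes at all while apical drive controls the chance that a spike becomes a burst; the thresholds, noise level, and logistic apical gain are invented, and the sketch mimics only the "apical amplification" regime, not the paper's compartmental model or PID analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

def burst_probability(i_basal, i_apical, trials=2000, noise=0.3):
    # Basal site: noisy somatic drive must cross threshold for any spike.
    spikes = (i_basal + noise * rng.normal(size=trials)) > 1.0
    # Apical site: a logistic gain sets how often a spike becomes a burst.
    p_burst_given_spike = 1.0 / (1.0 + np.exp(-(i_apical - 1.0) / 0.2))
    bursts = spikes & (rng.random(trials) < p_burst_given_spike)
    return bursts.mean()

# Weak basal drive yields few bursts regardless of apical input;
# with strong basal drive, apical input modulates burst probability.
for i_b in (0.8, 1.2):
    for i_a in (0.5, 1.5):
        print(f"basal={i_b:.1f}, apical={i_a:.1f}: "
              f"P(burst) ~ {burst_probability(i_b, i_a):.2f}")
```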
Citations: 0
Enhanced EEG Forecasting: A Probabilistic Deep Learning Approach
IF 2.7 · CAS Zone 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-03-18 · DOI: 10.1162/neco_a_01743
Hanna Pankka;Jaakko Lehtinen;Risto J. Ilmoniemi;Timo Roine
Forecasting electroencephalography (EEG) signals, that is, estimating future values of the time series based on the past ones, is essential in many real-time EEG-based applications, such as brain–computer interfaces and closed-loop brain stimulation. As these applications are becoming more and more common, the importance of a good prediction model has increased. Previously, the autoregressive model (AR) has been employed for this task; however, its prediction accuracy tends to fade quickly as multiple steps are predicted. We aim to improve on this by applying probabilistic deep learning to make robust longer-range forecasts. For this, we applied the probabilistic deep neural network model WaveNet to forecast resting-state EEG in theta- (4–7.5 Hz) and alpha-frequency (8–13 Hz) bands and compared it to the AR model. WaveNet reliably predicted EEG signals in both theta and alpha frequencies 150 ms ahead, with mean absolute errors of 1.0 ± 1.1 µV (theta) and 0.9 ± 1.1 µV (alpha), and outperformed the AR model in estimating the signal amplitude and phase. Furthermore, we found that the probabilistic approach offers a way of forecasting even more accurately while effectively discarding uncertain predictions. We demonstrate for the first time that probabilistic deep learning can be used to forecast resting-state EEG time series. In the future, the developed model can enhance the real-time estimation of brain states in brain–computer interfaces and brain stimulation protocols. It may also be useful for answering neuroscientific questions and for diagnostic purposes.
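For reference, the AR baseline the authors compare against can be sketched as a least-squares AR(p) fit with recursive multi-step forecasting. The surrogate 10 Hz signal, model order, and horizon below are assumptions for illustration, not the paper's data or implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Surrogate alpha-band signal: 10 Hz oscillation + noise at 1 kHz sampling.
fs, n = 1000, 2000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=n)

def fit_ar(x, p):
    # Least-squares AR(p): x[k] ~= sum_i a[i] * x[k - 1 - i].
    m = len(x)
    lags = np.column_stack([x[p - i - 1:m - i - 1] for i in range(p)])
    return np.linalg.lstsq(lags, x[p:], rcond=None)[0]

def forecast(history, a, steps):
    h = list(history[-len(a):])                  # last p samples, oldest first
    out = []
    for _ in range(steps):
        nxt = float(np.dot(a, h[::-1]))          # a[0] weights the newest sample
        out.append(nxt)
        h = h[1:] + [nxt]                        # feed prediction back in
    return np.array(out)

a = fit_ar(x[:1500], p=20)
pred = forecast(x[:1500], a, steps=150)          # 150 ms ahead at 1 kHz
print(f"AR(20) 150-step MAE: {np.abs(pred - x[1500:1650]).mean():.3f}")
```

The recursive loop is where the abstract's observation bites: each predicted sample is fed back as input, so errors compound as the horizon grows.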
Citations: 0
Spiking Neuron-Astrocyte Networks for Image Recognition
IF 2.7 · CAS Zone 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-03-18 · DOI: 10.1162/neco_a_01740
Jhunlyn Lorenzo;Juan-Antonio Rico-Gallego;Stéphane Binczak;Sabir Jacquir
From biological and artificial network perspectives, researchers have started acknowledging astrocytes as computational units mediating neural processes. Here, we propose a novel biologically inspired neuron-astrocyte network model for image recognition, one of the first attempts at implementing astrocytes in spiking neuron networks (SNNs) using a standard data set. The architecture for image recognition has three primary units: the preprocessing unit for converting the image pixels into spiking patterns, the neuron-astrocyte network forming bipartite (neural connections) and tripartite synapses (neural and astrocytic connections), and the classifier unit. In the astrocyte-mediated SNNs, an astrocyte integrates neural signals following the simplified Postnov model. It then modulates the integrate-and-fire (IF) neurons via gliotransmission, thereby strengthening the synaptic connections of the neurons within the astrocytic territory. We develop an architecture derived from a baseline SNN model for unsupervised digit classification. The spiking neuron-astrocyte networks (SNANs) display better network performance with an optimal variance-bias trade-off than SNN alone. We demonstrate that astrocytes promote faster learning, support memory formation and recognition, and provide a simplified network architecture. Our proposed SNAN can serve as a benchmark for future researchers on astrocyte implementation in artificial networks, particularly in neuromorphic systems, for its simplified design.
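A toy stand-in for the astrocyte-mediated mechanism, assuming a single integrate-and-fire neuron whose synaptic weight is slowly potentiated by an astrocyte variable that integrates presynaptic spikes; this is loosely in the spirit of the simplified Postnov coupling the paper uses, but all constants and the coupling form are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

dt, T = 1e-3, 3.0
pre_rate = 60.0                          # Hz, presynaptic Poisson input

v, v_thresh = 0.0, 1.0                   # IF neuron: integrate, reset on spike
w_base, astro, tau_a = 0.05, 0.0, 0.5    # synapse and slow astrocyte variable
out_spikes = []

for i in range(int(T / dt)):
    s = rng.random() < pre_rate * dt     # presynaptic spike this bin?
    # The astrocyte slowly integrates presynaptic activity ...
    astro += dt / tau_a * (-astro) + (0.02 if s else 0.0)
    # ... and potentiates the synapse (toy gliotransmission).
    w = w_base * (1.0 + 2.0 * astro)
    v += w * s
    if v >= v_thresh:
        out_spikes.append(i * dt)
        v = 0.0

print(f"output spikes: {len(out_spikes)}, final astrocyte level: {astro:.3f}")
```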
Citations: 0
Dynamics of Continuous Attractor Neural Networks With Spike Frequency Adaptation
IF 2.7 · CAS Zone 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-03-14 · DOI: 10.1162/neco_a_01757
Yujun Li;Tianhao Chu;Si Wu
Attractor neural networks consider that neural information is stored as stationary states of a dynamical system formed by a large number of interconnected neurons. The attractor property empowers a neural system to encode information robustly, but it also incurs the difficulty of rapid update of network states, which can impair information update and search in the brain. To overcome this difficulty, a solution is to include adaptation in the attractor network dynamics, whereby the adaptation serves as a slow negative feedback mechanism to destabilize what are otherwise permanently stable states. In such a way, the neural system can, on one hand, represent information reliably using attractor states, and on the other hand, perform computations wherever rapid state updating is involved. Previous studies have shown that continuous attractor neural networks with adaptation (A-CANNs) exhibit rich dynamical behaviors accounting for various brain functions. In this review, we present a comprehensive view of the rich diverse dynamics of A-CANNs. Moreover, we provide a unified mathematical framework to understand these different dynamical behaviors and briefly discuss their biological implications.
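A discretized sketch of the model family reviewed here: a 1D ring CANN with divisive normalization plus spike frequency adaptation, where adaptation enters as a slow negative-feedback current that can destabilize a stationary bump into a traveling one. The kernel width, normalization constant, and adaptation strength are assumed values:

```python
import numpy as np

# 1D ring CANN with divisive normalization and spike frequency adaptation.
N = 128
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, 2 * np.pi - d)                 # distance on the ring
J = np.exp(-d**2 / (2 * 0.5**2))                 # Gaussian recurrent kernel
J *= 2.0 / J.sum(axis=1, keepdims=True)

dt, tau_u, tau_a, m = 0.1, 1.0, 10.0, 0.3        # m = adaptation strength
u = np.exp(-theta**2 / 0.1)                      # bump initialized at 0 rad
a = np.zeros(N)

for _ in range(3000):
    q = np.maximum(u, 0.0)**2
    r = q / (1.0 + 0.1 * q.sum())                # divisive normalization
    du = (-u + J @ r - a) / tau_u                # adaptation as negative current
    da = (-a + m * u) / tau_a                    # slow negative feedback (SFA)
    u, a = u + dt * du, a + dt * da

print("bump peak now at:", round(float(theta[np.argmax(u)]), 2), "rad")
```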
Citations: 0
Neural Code Translation With LIF Neuron Microcircuits
IF 2.7 · CAS Zone 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-03-14 · DOI: 10.1162/neco_a_01754
Ville Karlsson;Joni Kämäräinen
Spiking neural networks (SNNs) provide an energy-efficient alternative to traditional artificial neural networks, leveraging diverse neural encoding schemes such as rate, time-to-first-spike (TTFS), and population-based binary codes. Each encoding method offers distinct advantages: TTFS enables rapid and precise transmission with minimal energy use, rate encoding provides robust signal representation, and binary population encoding aligns well with digital hardware implementations. This letter introduces a set of neural microcircuits based on leaky integrate-and-fire neurons that enable translation between these encoding schemes. We propose two applications showcasing the utility of these microcircuits. First, we demonstrate a number comparison operation that significantly reduces spike transmission by switching from rate to TTFS encoding. Second, we present a high-bandwidth neural transmitter capable of encoding and transmitting binary population-encoded data through a single axon and reconstructing it at the target site. Additionally, we conduct a detailed analysis of these microcircuits, providing quantitative metrics to assess their efficiency in terms of neuron count, synaptic complexity, spike overhead, and runtime. Our findings highlight the potential of LIF neuron microcircuits in computational neuroscience and neuromorphic computing, offering a pathway to more interpretable and efficient SNN designs.
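One translation such microcircuits exploit can be written in closed form: a LIF neuron driven by a constant current I > theta first spikes at t* = tau * ln(I / (I - theta)), so stronger input (for example, a higher encoded rate) maps to an earlier first spike, converting an intensity code into TTFS. A small sketch with illustrative theta and tau (not the letter's circuit construction):

```python
import numpy as np

def ttfs(i_const, theta=1.0, tau=0.02):
    """Time to first spike of a LIF neuron under constant input current.

    With unit resistance and V(0) = 0, V(t) = i_const * (1 - exp(-t / tau)),
    so threshold theta is crossed at t* = tau * ln(i_const / (i_const - theta)),
    provided i_const > theta.
    """
    if i_const <= theta:
        return np.inf                    # subthreshold input never fires
    return tau * np.log(i_const / (i_const - theta))

# Stronger input (e.g., a higher encoded rate) -> earlier first spike:
for i in (1.1, 1.5, 2.0, 4.0):
    print(f"I = {i:.1f} -> first spike at {1e3 * ttfs(i):.2f} ms")
```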
Citations: 0