
Latest Publications in Neural Computation

Bioplausible Unsupervised Delay Learning for Extracting Spatiotemporal Features in Spiking Neural Networks
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-07 | DOI: 10.1162/neco_a_01674
Alireza Nadafian;Mohammad Ganjtabesh
The plasticity of the conduction delay between neurons plays a fundamental role in learning temporal features that are essential for processing videos, speech, and many high-level functions. However, the exact mechanisms underlying this modulation in the brain are still under investigation. Devising a rule for precisely adjusting the synaptic delays could eventually help in developing more efficient and powerful brain-inspired computational models. In this article, we propose an unsupervised bioplausible learning rule for adjusting the synaptic delays in spiking neural networks. We also provide mathematical proofs of the convergence of our rule in learning spatiotemporal patterns. Furthermore, to show the effectiveness of our learning rule, we conducted several experiments on random dot kinematograms and a subset of the DVS128 Gesture data set. The experimental results demonstrate the effectiveness of applying our proposed delay learning rule for extracting spatiotemporal features in an STDP-based spiking neural network.
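The abstract does not spell out the rule itself, but the core idea of unsupervised delay plasticity can be sketched as nudging each synaptic delay so that the delayed presynaptic spike arrives closer to the postsynaptic spike time. A minimal, hypothetical illustration (the function name, learning rate, and spike times are invented for illustration, not taken from the paper):

```python
import numpy as np

def update_delays(pre_spike_times, post_spike_time, delays, lr=0.1):
    """Hypothetical unsupervised delay update: nudge each synaptic delay so
    the delayed presynaptic spike moves toward the postsynaptic spike time
    (illustrative only; not the paper's exact rule)."""
    arrival = pre_spike_times + delays           # when each spike reaches the neuron
    delays = delays + lr * (post_spike_time - arrival)
    return np.clip(delays, 0.0, None)            # conduction delays cannot be negative

pre = np.array([1.0, 3.0, 5.0])                  # presynaptic spike times (ms)
d = np.array([4.0, 2.0, 0.5])                    # initial delays (ms)
d_new = update_delays(pre, 6.0, d)               # arrivals move toward t_post = 6.0
```

Repeated application of such an update aligns the delayed arrivals with the postsynaptic spike, which is one way a delay-based rule can come to encode a spatiotemporal pattern.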
Citations: 0
Sparse Generalized Canonical Correlation Analysis: Distributed Alternating Iteration-Based Approach
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-07 | DOI: 10.1162/neco_a_01673
Kexin Lv;Jia Cai;Junyi Huo;Chao Shang;Xiaolin Huang;Jie Yang
Sparse canonical correlation analysis (CCA) is a useful statistical tool for detecting latent information with sparse structures. However, sparse CCA, where the sparsity can be viewed as a Laplace prior on the canonical variates, works only for two data sets, that is, when there are only two views or two distinct objects. To overcome this limitation, we propose a sparse generalized canonical correlation analysis (GCCA), which can detect the latent relations of multiview data with sparse structures. Specifically, we convert the GCCA into a linear system of equations and impose an ℓ1 penalty to pursue sparsity. This results in a nonconvex problem on the Stiefel manifold. Based on consensus optimization, a distributed alternating iteration approach is developed, and its consistency is analyzed in detail under mild conditions. Experiments on several synthetic and real-world data sets demonstrate the effectiveness of the proposed algorithm.
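The step of converting the problem into a linear system with an ℓ1 penalty can be illustrated with a plain (non-distributed) proximal gradient iteration, where soft-thresholding is the proximal operator of the ℓ1 term. This is a generic ISTA sketch on a least-squares system, not the paper's distributed alternating algorithm on the Stiefel manifold:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_ls(A, b, lam=0.1, lr=None, iters=500):
    """Minimal ISTA sketch for min_w 0.5 ||A w - b||^2 + lam ||w||_1,
    the kind of l1-penalized linear system the abstract refers to
    (the paper's distributed alternating scheme is more involved)."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2      # step from spectral norm (Lipschitz const.)
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ w - b)                  # gradient of the smooth part
        w = soft_threshold(w - lr * grad, lr * lam)
    return w

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]                     # sparse ground truth
w_hat = sparse_ls(A, A @ w_true, lam=0.5)         # recovers a sparse solution
```

The soft-thresholding step is what zeroes out small coefficients, which is the mechanism by which the ℓ1 penalty "pursues sparsity."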
Citations: 0
Sparse Firing in a Hybrid Central Pattern Generator for Spinal Motor Circuits
IF 2.9 | CAS Tier 4, Computer Science | Q1 Arts and Humanities | Pub Date: 2024-04-23 | DOI: 10.1162/neco_a_01660
Beck Strohmer;Elias Najarro;Jessica Ausborn;Rune W. Berg;Silvia Tolu
Central pattern generators are circuits generating rhythmic movements, such as walking. The majority of existing computational models of these circuits produce antagonistic output in which all neurons within a population spike with a broad burst at about the same neuronal phase with respect to network output. However, experimental recordings reveal that many neurons within these circuits fire sparsely, sometimes as rarely as once within a cycle. Here we address this sparse neuronal firing and develop a model to replicate the behavior of individual neurons within rhythm-generating populations, in order to increase biological plausibility and facilitate new insights into the underlying mechanisms of rhythm generation. The developed network architecture is able to produce sparse firing of individual neurons, creating a novel implementation for exploring the contribution of network architecture to rhythmic output. Furthermore, the introduction of sparse firing of individual neurons within the rhythm-generating circuits is one of the factors that allows for a broad neuronal phase representation of firing at the population level. This moves the model toward recent experimental findings of neuronal firing evenly distributed across phases among individual spinal neurons. The network is tested by methodically iterating over select parameters to gain an understanding of how connectivity and the interplay of excitation and inhibition influence the output. This knowledge can be applied in future studies to implement a biologically plausible rhythm-generating circuit for testing biological hypotheses.
Citations: 0
Obtaining Lower Query Complexities Through Lightweight Zeroth-Order Proximal Gradient Algorithms
IF 2.9 | CAS Tier 4, Computer Science | Q1 Arts and Humanities | Pub Date: 2024-04-23 | DOI: 10.1162/neco_a_01636
Bin Gu;Xiyuan Wei;Hualin Zhang;Yi Chang;Heng Huang
Zeroth-order (ZO) optimization is one key technique for machine learning problems where gradient calculation is expensive or impossible. Several variance-reduced ZO proximal algorithms have been proposed to speed up ZO optimization for nonsmooth problems, and all of them opted for the coordinated ZO estimator over the random ZO estimator when approximating the true gradient, since the former is more accurate. While the random ZO estimator introduces a larger error and makes convergence analysis more challenging than the coordinated ZO estimator, it requires only $O(1)$ computation, which is significantly less than the $O(d)$ computation of the coordinated ZO estimator, with $d$ being the dimension of the problem space. To take advantage of the computationally efficient nature of the random ZO estimator, we first propose a ZO objective decrease (ZOOD) property that can incorporate two different types of errors in the upper bound of the convergence rate. Next, we propose two generic reduction frameworks for ZO optimization, which can automatically derive the convergence results for convex and nonconvex problems, respectively, as long as the convergence rate of the inner solver satisfies the ZOOD property. By applying the two reduction frameworks to our proposed ZOR-ProxSVRG and ZOR-ProxSAGA, two variance-reduced ZO proximal algorithms with fully random ZO estimators, we improve the state-of-the-art function query complexities from $O\left(\min\left\{\frac{dn^{1/2}}{\varepsilon^2}, \frac{d}{\varepsilon^3}\right\}\right)$ to $\tilde{O}\left(\frac{n+d}{\varepsilon^2}\right)$ under $d > n^{1/2}$ for nonconvex problems, and from $O\left(\frac{d}{\varepsilon^2}\right)$ to $\tilde{O}\left(n\log\frac{1}{\varepsilon} + \frac{d}{\varepsilon}\right)$ for convex problems. Finally, we conduct experiments to verify the superiority of our proposed methods.
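The O(1)-versus-O(d) query trade-off between the two estimators is easy to see in code. The two-point forms below are the standard textbook versions of these estimators (the paper's algorithms build variance reduction on top of them):

```python
import numpy as np

def random_zo_grad(f, x, mu=1e-4, rng=None):
    """Random (two-point) zeroth-order gradient estimator:
    O(1) function queries per estimate, unbiased in expectation as mu -> 0."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.normal(size=x.shape)                 # random Gaussian direction
    return (f(x + mu * u) - f(x)) / mu * u       # directional finite difference

def coord_zo_grad(f, x, mu=1e-4):
    """Coordinate-wise estimator: one finite difference per dimension,
    O(d) queries but much lower error per estimate."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = 1.0
        g[i] = (f(x + mu * e) - f(x)) / mu
    return g

f = lambda x: 0.5 * np.dot(x, x)                 # true gradient is x itself
x = np.array([1.0, -2.0, 3.0])
g_coord = coord_zo_grad(f, x)                    # close to x after 1 + d queries
```

A single `random_zo_grad` call is noisy, but averaging many calls converges to the true gradient, which is exactly the error the paper's variance-reduction machinery is designed to control.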
Citations: 0
An Overview of the Free Energy Principle and Related Research
IF 2.9 | CAS Tier 4, Computer Science | Q1 Arts and Humanities | Pub Date: 2024-04-23 | DOI: 10.1162/neco_a_01642
Zhengquan Zhang;Feng Xu
The free energy principle (FEP) and its corollary, the active inference framework, serve as theoretical foundations in the domain of neuroscience, explaining the genesis of intelligent behavior. This principle states that the processes of perception, learning, and decision making within an agent are all driven by the objective of "minimizing free energy," which manifests in the following behaviors: learning and employing a generative model of the environment to interpret observations, thereby achieving perception, and selecting actions to maintain a stable preferred state and minimize uncertainty about the environment, thereby achieving decision making. This fundamental principle can be used to explain how the brain processes perceptual information, learns about the environment, and selects actions. Two pivotal tenets are that the agent employs a generative model for perception and planning and that interaction with the world (and other agents) enhances the performance of the generative model and augments perception. With the evolution of control theory and deep learning tools, agents based on the FEP have been instantiated in various ways across different domains, guiding the design of a multitude of generative models and decision-making algorithms. This letter first introduces the basic concepts of the FEP, followed by its historical development and connections with other theories of intelligence, and then delves into the specific application of the FEP to perception and decision making, encompassing both low-dimensional simple situations and high-dimensional complex situations. It compares the FEP with model-based reinforcement learning to show that the FEP provides a better objective function. We illustrate this using numerical studies of Dreamer3 by adding expected information gain into the standard objective function. In a complementary fashion, existing reinforcement learning and deep learning algorithms can also help implement FEP-based agents. Finally, we discuss the various capabilities that agents need to possess in complex environments and state that the FEP can aid agents in acquiring these capabilities.
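For a discrete hidden state, the "minimizing free energy" objective has a compact standard form, F = KL(q ‖ prior) − E_q[log p(o|s)] (complexity minus accuracy), which is minimized exactly by the Bayesian posterior. A toy two-state sketch (the probabilities below are invented for illustration):

```python
import numpy as np

def variational_free_energy(q, prior, likelihood, obs):
    """Variational free energy for a discrete hidden state:
    F = KL(q || prior) - E_q[log p(o|s)], i.e. complexity minus accuracy.
    Minimizing F over q implements perception in the active inference account."""
    complexity = np.sum(q * np.log(q / prior))
    accuracy = np.sum(q * np.log(likelihood[obs]))  # likelihood[o, s] = p(o | s)
    return complexity - accuracy

prior = np.array([0.5, 0.5])                  # p(s): two hidden states
likelihood = np.array([[0.9, 0.2],            # p(o = 0 | s)
                       [0.1, 0.8]])           # p(o = 1 | s)

# The exact posterior for o = 0 minimizes F, and F then equals -log p(o = 0):
post = prior * likelihood[0]
post /= post.sum()
F_post = variational_free_energy(post, prior, likelihood, 0)
F_prior = variational_free_energy(prior, prior, likelihood, 0)   # a worse q
```

At the exact posterior, the free energy collapses to the negative log evidence (here, −log 0.55), which is why minimizing F is often described as approximate Bayesian inference.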
Citations: 0
Approximating Nonlinear Functions With Latent Boundaries in Low-Rank Excitatory-Inhibitory Spiking Networks
IF 2.9 | CAS Tier 4, Computer Science | Q1 Arts and Humanities | Pub Date: 2024-04-23 | DOI: 10.1162/neco_a_01658
William F. Podlaski;Christian K. Machens
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
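The claim that the resulting computation is a difference of two convex functions can be illustrated with a one-dimensional toy: the nonconvex tent map max(0, 1 − |x|) is exactly one convex piecewise-linear function minus another. The excitatory/inhibitory labels in the comments are only loose analogies to the paper's boundaries, not its construction:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# The tent map max(0, 1 - |x|) is nonconvex, yet it equals the difference
# of two convex piecewise-linear functions:
#   g(x) = relu(x + 1) + relu(x - 1)   (convex; loosely, one latent boundary)
#   h(x) = 2 * relu(x)                 (convex; loosely, the other boundary)
x = np.linspace(-2.0, 2.0, 401)
g = relu(x + 1.0) + relu(x - 1.0)
h = 2.0 * relu(x)
tent = g - h                            # equals max(0, 1 - |x|) everywhere
```

Since any continuous function can be approximated by such piecewise-linear differences, this small identity conveys why a difference-of-convex computation can approximate arbitrary nonlinear input-output mappings.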
Citations: 0
Toward Improving the Generation Quality of Autoregressive Slot VAEs
IF 2.9 | CAS Tier 4, Computer Science | Q1 Arts and Humanities | Pub Date: 2024-04-23 | DOI: 10.1162/neco_a_01635
Patrick Emami;Pan He;Sanjay Ranka;Anand Rangarajan
Unconditional scene inference and generation are challenging to learn jointly with a single compositional model. Despite encouraging progress on models that extract object-centric representations ("slots") from images, unconditional generation of scenes from slots has received less attention. This is primarily because learning the multiobject relations necessary to imagine coherent scenes is difficult. We hypothesize that most existing slot-based models have a limited ability to learn object correlations. We propose two improvements that strengthen object correlation learning. The first is to condition the slots on a global, scene-level variable that captures higher-order correlations between slots. Second, we address the fundamental lack of a canonical order for objects in images by proposing to learn a consistent order to use for the autoregressive generation of scene objects. Specifically, we train an autoregressive slot prior that sequentially generates scene objects following a learned order. Ordered slot inference entails first estimating a randomly ordered set of slots using existing approaches for extracting slots from images, then aligning those slots to the ordered slots generated autoregressively with the slot prior. Our experiments across three multiobject environments demonstrate clear gains in unconditional scene generation quality. Detailed ablation studies are also provided that validate the two proposed improvements.
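The alignment step, matching a randomly ordered slot set to the autoregressively generated ordered slots, amounts to a minimum-cost assignment problem. A brute-force sketch for small slot counts follows; the shapes and the squared-distance cost are assumptions for illustration, not the paper's exact matching criterion:

```python
import itertools
import numpy as np

def align_slots(random_slots, ordered_slots):
    """Re-index a randomly ordered slot set into the learned order by
    minimizing total squared distance over all permutations (brute force;
    fine for small K, where the Hungarian algorithm would be the scalable choice)."""
    K = ordered_slots.shape[0]
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(K)):
        cost = float(np.sum((ordered_slots - random_slots[list(perm)]) ** 2))
        if cost < best_cost:
            best_perm, best_cost = list(perm), cost
    return random_slots[best_perm]

rng = np.random.default_rng(0)
ordered = rng.normal(size=(4, 8))              # K = 4 slots, 8 dims each (assumed shapes)
shuffled = ordered[rng.permutation(4)]         # randomly ordered slot estimates
aligned = align_slots(shuffled, ordered)       # recovers the original ordering
```

Once aligned, the randomly ordered inference outputs can supervise (or be supervised by) the ordered autoregressive prior, which is the mechanism the abstract describes.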
Citations: 0
Synaptic Information Storage Capacity Measured With Information Theory
IF 2.9 | CAS Tier 4, Computer Science | Q1 Arts and Humanities | Pub Date: 2024-04-23 | DOI: 10.1162/neco_a_01659
Mohammad Samavat;Thomas M. Bartol;Kristen M. Harris;Terrence J. Sejnowski
Variation in the strength of synapses can be quantified by measuring the anatomical properties of synapses. Quantifying precision of synaptic plasticity is fundamental to understanding information storage and retrieval in neural circuits. Synapses from the same axon onto the same dendrite have a common history of coactivation, making them ideal candidates for determining the precision of synaptic plasticity based on the similarity of their physical dimensions. Here, the precision and amount of information stored in synapse dimensions were quantified with Shannon information theory, expanding prior analysis that used signal detection theory (Bartol et al., 2015). The two methods were compared using dendritic spine head volumes in the middle of the stratum radiatum of hippocampal area CA1 as well-defined measures of synaptic strength. Information theory delineated the number of distinguishable synaptic strengths based on nonoverlapping bins of dendritic spine head volumes. Shannon entropy was applied to measure synaptic information storage capacity (SISC) and resulted in a lower bound of 4.1 bits and upper bound of 4.59 bits of information based on 24 distinguishable sizes. We further compared the distribution of distinguishable sizes and a uniform distribution using Kullback-Leibler divergence and discovered that there was a nearly uniform distribution of spine head volumes across the sizes, suggesting optimal use of the distinguishable values. Thus, SISC provides a new analytical measure that can be generalized to probe synaptic strengths and capacity for plasticity in different brain regions of different species and among animals raised in different conditions or during learning. How brain diseases and disorders affect the precision of synaptic plasticity can also be probed.
{"title":"Synaptic Information Storage Capacity Measured With Information Theory","authors":"Mohammad Samavat;Thomas M. Bartol;Kristen M. Harris;Terrence J. Sejnowski","doi":"10.1162/neco_a_01659","DOIUrl":"10.1162/neco_a_01659","url":null,"abstract":"Variation in the strength of synapses can be quantified by measuring the anatomical properties of synapses. Quantifying precision of synaptic plasticity is fundamental to understanding information storage and retrieval in neural circuits. Synapses from the same axon onto the same dendrite have a common history of coactivation, making them ideal candidates for determining the precision of synaptic plasticity based on the similarity of their physical dimensions. Here, the precision and amount of information stored in synapse dimensions were quantified with Shannon information theory, expanding prior analysis that used signal detection theory (Bartol et al., 2015). The two methods were compared using dendritic spine head volumes in the middle of the stratum radiatum of hippocampal area CA1 as well-defined measures of synaptic strength. Information theory delineated the number of distinguishable synaptic strengths based on nonoverlapping bins of dendritic spine head volumes. Shannon entropy was applied to measure synaptic information storage capacity (SISC) and resulted in a lower bound of 4.1 bits and upper bound of 4.59 bits of information based on 24 distinguishable sizes. We further compared the distribution of distinguishable sizes and a uniform distribution using Kullback-Leibler divergence and discovered that there was a nearly uniform distribution of spine head volumes across the sizes, suggesting optimal use of the distinguishable values. Thus, SISC provides a new analytical measure that can be generalized to probe synaptic strengths and capacity for plasticity in different brain regions of different species and among animals raised in different conditions or during learning. 
How brain diseases and disorders affect the precision of synaptic plasticity can also be probed.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140779632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
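The entropy calculation behind SISC can be sketched directly: assign spine head volumes to nonoverlapping bins, take the Shannon entropy of the occupied-bin frequencies, and compare that distribution to a uniform one with Kullback-Leibler divergence. The data and bin edges below are synthetic placeholders; 24 bins gives an upper bound of log2(24) ≈ 4.59 bits, matching the abstract's figure:

```python
import numpy as np

def sisc_bits(volumes, n_bins):
    """Shannon entropy (bits) of volumes assigned to nonoverlapping
    bins -- a toy stand-in for the paper's SISC measure."""
    counts, _ = np.histogram(volumes, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def kl_to_uniform(volumes, n_bins):
    """KL divergence (bits) between the binned distribution and a
    uniform distribution over the occupied bins; 0 iff the occupied
    bins are used equally often."""
    counts, _ = np.histogram(volumes, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    q = np.full(p.size, 1.0 / p.size)
    return float((p * np.log2(p / q)).sum())

rng = np.random.default_rng(1)
vols = rng.lognormal(mean=-1.0, sigma=0.5, size=500)  # synthetic "spine head volumes"
H = sisc_bits(vols, n_bins=24)       # bounded above by log2(24) ≈ 4.59 bits
D = kl_to_uniform(vols, n_bins=24)   # near 0 for a near-uniform spread across bins
```

The identity H = log2(m) − D over m occupied bins is why a near-uniform spread of spine sizes pushes the stored information toward the upper bound.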
Instance-Specific Model Perturbation Improves Generalized Zero-Shot Learning
IF 2.9 CAS Tier 4, Computer Science Q1 Arts and Humanities Pub Date : 2024-04-23 DOI: 10.1162/neco_a_01639
Guanyu Yang;Kaizhu Huang;Rui Zhang;Xi Yang
Zero-shot learning (ZSL) refers to the design of predictive functions on new classes (unseen classes) of data that have never been seen during training. In a more practical scenario, generalized zero-shot learning (GZSL) requires predicting both seen and unseen classes accurately. In the absence of target samples, many GZSL models may overfit training data and are inclined to predict individuals as categories that have been seen in training. To alleviate this problem, we develop a parameter-wise adversarial training process that promotes robust recognition of seen classes while designing during the test a novel model perturbation mechanism to ensure sufficient sensitivity to unseen classes. Concretely, adversarial perturbation is conducted on the model to obtain instance-specific parameters so that predictions can be biased to unseen classes in the test. Meanwhile, the robust training encourages the model robustness, leading to nearly unaffected prediction for seen classes. Moreover, perturbations in the parameter space, computed from multiple individuals simultaneously, can be used to avoid the effect of perturbations that are too extreme and ruin the predictions. Comparison results on four benchmark ZSL data sets show the effective improvement that the proposed framework made on zero-shot methods with learned metrics.
{"title":"Instance-Specific Model Perturbation Improves Generalized Zero-Shot Learning","authors":"Guanyu Yang;Kaizhu Huang;Rui Zhang;Xi Yang","doi":"10.1162/neco_a_01639","DOIUrl":"10.1162/neco_a_01639","url":null,"abstract":"Zero-shot learning (ZSL) refers to the design of predictive functions on new classes (unseen classes) of data that have never been seen during training. In a more practical scenario, generalized zero-shot learning (GZSL) requires predicting both seen and unseen classes accurately. In the absence of target samples, many GZSL models may overfit training data and are inclined to predict individuals as categories that have been seen in training. To alleviate this problem, we develop a parameter-wise adversarial training process that promotes robust recognition of seen classes while designing during the test a novel model perturbation mechanism to ensure sufficient sensitivity to unseen classes. Concretely, adversarial perturbation is conducted on the model to obtain instance-specific parameters so that predictions can be biased to unseen classes in the test. Meanwhile, the robust training encourages the model robustness, leading to nearly unaffected prediction for seen classes. Moreover, perturbations in the parameter space, computed from multiple individuals simultaneously, can be used to avoid the effect of perturbations that are too extreme and ruin the predictions. 
Comparison results on four benchmark ZSL data sets show the effective improvement that the proposed framework made on zero-shot methods with learned metrics.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140066265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
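A minimal sketch of the test-time idea — an instance-specific, sign-of-gradient perturbation of the model parameters that biases the logits toward unseen classes — assuming a plain linear classifier. This simplifies away the paper's robust training and is not the authors' implementation; all shapes and the perturbation budget are illustrative:

```python
import numpy as np

def perturbed_logits(W, x, unseen, eps=0.1):
    """FGSM-style, per-instance parameter perturbation: nudge the weight
    rows of the unseen classes in the direction that raises their logits
    for this particular input x, then score with the perturbed weights."""
    grad = np.zeros_like(W)
    grad[unseen] = x                      # d(sum of unseen logits)/dW
    W_inst = W + eps * np.sign(grad)      # instance-specific parameters
    return W_inst @ x

rng = np.random.default_rng(2)
W = rng.normal(size=(5, 16))   # rows 0-2: seen classes, rows 3-4: unseen
x = rng.normal(size=16)        # one test instance
base = W @ x
pert = perturbed_logits(W, x, unseen=[3, 4], eps=0.1)
# seen-class logits are untouched; each unseen logit rises by eps * ||x||_1
```

Because the gradient is zero on seen-class rows, their predictions are unaffected, mirroring the paper's claim of "nearly unaffected prediction for seen classes."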
Heterogeneous Forgetting Rates and Greedy Allocation in Slot-Based Memory Networks Promotes Signal Retention
IF 2.9 CAS Tier 4, Computer Science Q1 Arts and Humanities Pub Date : 2024-04-23 DOI: 10.1162/neco_a_01655
BethAnna Jones;Lawrence Snyder;ShiNung Ching
A key question in the neuroscience of memory encoding pertains to the mechanisms by which afferent stimuli are allocated within memory networks. This issue is especially pronounced in the domain of working memory, where capacity is finite. Presumably the brain must embed some “policy” by which to allocate these mnemonic resources in an online manner in order to maximally represent and store afferent information for as long as possible and without interference from subsequent stimuli. Here, we engage this question through a top-down theoretical modeling framework. We formally optimize a gating mechanism that projects afferent stimuli onto a finite number of memory slots within a recurrent network architecture. In the absence of external input, the activity in each slot attenuates over time (i.e., a process of gradual forgetting). It turns out that the optimal gating policy consists of a direct projection from sensory activity to memory slots, alongside an activity-dependent lateral inhibition. Interestingly, allocating resources myopically (greedily with respect to the current stimulus) leads to efficient utilization of slots over time. In other words, later-arriving stimuli are distributed across slots in such a way that the network state is minimally shifted and so prior signals are minimally “overwritten.” Further, networks with heterogeneity in the timescales of their forgetting rates retain stimuli better than those that are more homogeneous. Our results suggest how online, recurrent networks working on temporally localized objectives without high-level supervision can nonetheless implement efficient allocation of memory resources over time.
{"title":"Heterogeneous Forgetting Rates and Greedy Allocation in Slot-Based Memory Networks Promotes Signal Retention","authors":"BethAnna Jones;Lawrence Snyder;ShiNung Ching","doi":"10.1162/neco_a_01655","DOIUrl":"10.1162/neco_a_01655","url":null,"abstract":"A key question in the neuroscience of memory encoding pertains to the mechanisms by which afferent stimuli are allocated within memory networks. This issue is especially pronounced in the domain of working memory, where capacity is finite. Presumably the brain must embed some “policy” by which to allocate these mnemonic resources in an online manner in order to maximally represent and store afferent information for as long as possible and without interference from subsequent stimuli. Here, we engage this question through a top-down theoretical modeling framework. We formally optimize a gating mechanism that projects afferent stimuli onto a finite number of memory slots within a recurrent network architecture. In the absence of external input, the activity in each slot attenuates over time (i.e., a process of gradual forgetting). It turns out that the optimal gating policy consists of a direct projection from sensory activity to memory slots, alongside an activity-dependent lateral inhibition. Interestingly, allocating resources myopically (greedily with respect to the current stimulus) leads to efficient utilization of slots over time. In other words, later-arriving stimuli are distributed across slots in such a way that the network state is minimally shifted and so prior signals are minimally “overwritten.” Further, networks with heterogeneity in the timescales of their forgetting rates retain stimuli better than those that are more homogeneous. 
Our results suggest how online, recurrent networks working on temporally localized objectives without high-level supervision can nonetheless implement efficient allocation of memory resources over time.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140772905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
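The allocation policy described above can be caricatured in a few lines: each slot attenuates at its own rate (heterogeneous forgetting), and each arriving stimulus is greedily written to the weakest, most-forgotten slot so that stronger prior signals are minimally overwritten. A toy sketch under these assumptions, not the paper's formally optimized gating mechanism:

```python
import numpy as np

def run_memory(stimuli, n_slots=3, decays=(0.99, 0.9, 0.7)):
    """Toy slot memory: decay every slot each step, then write the new
    stimulus into the slot with the smallest remaining activity norm
    (greedy allocation with respect to the current state)."""
    slots = np.zeros((n_slots, stimuli.shape[1]))
    rates = np.asarray(decays)[:, None]
    for s in stimuli:
        slots *= rates                                       # gradual forgetting
        k = int(np.argmin(np.linalg.norm(slots, axis=1)))    # weakest slot
        slots[k] = s                                         # minimal overwrite
    return slots

rng = np.random.default_rng(3)
stims = rng.normal(size=(3, 4))
slots = run_memory(stims)
# starting from empty memory, the three stimuli land in distinct slots,
# each then decayed by its slot's rate for the remaining steps
```

Starting from an empty memory, the greedy rule spreads successive stimuli across distinct slots, which is the efficient-utilization behavior the abstract describes.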