
Neural Computation: Latest Articles

Excitation–Inhibition Balance Controls Synchronization in a Simple Model of Coupled Phase Oscillators
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Computation 37(7): 1353-1372 | Pub Date: 2025-06-17 | DOI: 10.1162/neco_a_01763
Satoshi Kuroki;Kenji Mizuseki
Collective neuronal activity in the brain synchronizes during rest and desynchronizes during active behaviors, influencing cognitive processes such as memory consolidation, knowledge abstraction, and creative thinking. These states involve significant modulation of inhibition, which alters the excitation–inhibition (EI) balance of synaptic inputs. However, the influence of the EI balance on collective neuronal oscillation remains only partially understood. In this study, we introduce the EI-Kuramoto model, a modified version of the Kuramoto model, in which oscillators are categorized into excitatory and inhibitory groups with four distinct interaction types: excitatory–excitatory, excitatory–inhibitory, inhibitory–excitatory, and inhibitory–inhibitory. Numerical simulations identify three dynamic states—synchronized, bistable, and desynchronized—that can be controlled by adjusting the strength of the four interaction types. Theoretical analysis further demonstrates that the balance among these interactions plays a critical role in determining the dynamic states. This study provides valuable insights into the role of EI balance in synchronizing coupled oscillators and neurons.
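As a toy illustration of the two-population setup, one can integrate a grouped Kuramoto model and compare order parameters under excitation-dominated versus inhibition-dominated coupling. The concrete coupling form, normalization, and frequency distribution below are our own assumptions, not the paper's exact EI-Kuramoto definition:

```python
import numpy as np

def simulate_ei_kuramoto(K, n_e=80, n_i=20, dt=0.01, steps=5000, seed=0):
    """Euler-integrate a two-population Kuramoto model.

    K is a 2x2 coupling matrix ordered (E, I) -> (E, I); K[a, b] scales
    the pull of population b on oscillators in population a.  This is an
    illustrative sketch, not the article's exact model.
    """
    rng = np.random.default_rng(seed)
    n = n_e + n_i
    group = np.array([0] * n_e + [1] * n_i)      # 0 = excitatory, 1 = inhibitory
    omega = rng.normal(0.0, 0.5, n)              # natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(steps):
        # per-population complex order parameters
        z = np.array([np.exp(1j * theta[group == b]).mean() for b in (0, 1)])
        # mean-field drive: sum_b K[a, b] * r_b * sin(psi_b - theta_i)
        drive = sum(K[group, b] * np.abs(z[b]) * np.sin(np.angle(z[b]) - theta)
                    for b in (0, 1))
        theta += dt * (omega + drive)
    return np.abs(np.exp(1j * theta).mean())     # global order parameter in [0, 1]

# excitation-dominated coupling should synchronize more than purely inhibitory coupling
r_sync = simulate_ei_kuramoto(np.array([[4.0, -1.0], [4.0, -1.0]]))
r_desync = simulate_ei_kuramoto(np.array([[0.0, -4.0], [0.0, -4.0]]))
```

Sweeping the four entries of `K` is the analogue of the interaction-strength adjustments that move the model between synchronized, bistable, and desynchronized states.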
Citations: 0
Reformulation of RBM to Unify Linear and Nonlinear Dimensionality Reduction
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Computation 37(5): 1034-1055 | Pub Date: 2025-04-17 | DOI: 10.1162/neco_a_01751
Jiangsheng You;Chun-Yen Liu
A restricted Boltzmann machine (RBM) is a two-layer neural network with shared weights and has been extensively studied for dimensionality reduction, data representation, and recommendation systems in the literature. The traditional RBM requires a probabilistic interpretation of the values on both layers and a Markov chain Monte Carlo (MCMC) procedure to generate samples during training. Contrastive divergence (CD) trains the RBM efficiently, but its convergence has not been proved mathematically. In this letter, we investigate the RBM by using a maximum a posteriori (MAP) estimate and the expectation–maximization (EM) algorithm. We show that the CD algorithm without MCMC is convergent for the conditional likelihood objective function. Another key contribution in this letter is the reformulation of the RBM into a deterministic model. Within the reformulated RBM, the CD algorithm without MCMC approximates the gradient descent (GD) method. This reformulated RBM can take continuous scalar and vector variables on the nodes, with flexibility in choosing the activation functions. Numerical experiments show its capability in both linear and nonlinear dimensionality reduction, and for nonlinear dimensionality reduction the reformulated RBM can outperform principal component analysis (PCA) when the activation functions are chosen properly. Finally, we demonstrate its application to vector-valued nodes for the CIFAR-10 data set (color images) and multivariate sequence data, which cannot be configured naturally with the traditional RBM. This work not only provides theoretical insights regarding the traditional RBM but also unifies linear and nonlinear dimensionality reduction for scalar and vector variables.
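A minimal sketch of a sampling-free, mean-field CD update in the spirit of the deterministic reformulation; the exact form in the letter may differ, and all names and parameters here are ours:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_step_deterministic(W, a, b, v0, lr=0.1):
    """One CD-style update with no MCMC sampling.

    Mean-field probabilities stand in for the stochastic hidden/visible
    samples of classical CD-1.  W, a, b are updated in place.
    """
    h0 = sigmoid(v0 @ W + b)           # hidden activations given the data
    v1 = sigmoid(h0 @ W.T + a)         # deterministic "reconstruction"
    h1 = sigmoid(v1 @ W + b)
    W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (h0 - h1).mean(axis=0)
    return ((v0 - v1) ** 2).mean()     # reconstruction error, should fall

rng = np.random.default_rng(1)
v = (rng.random((64, 16)) < 0.3).astype(float)    # toy binary data
W = 0.01 * rng.standard_normal((16, 8))
a, b = np.zeros(16), np.zeros(8)
errs = [cd_step_deterministic(W, a, b, v) for _ in range(200)]
```

Swapping `sigmoid` for another activation is what gives the reformulated model its flexibility on continuous-valued nodes.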
Citations: 0
A Generalized Time Rescaling Theorem for Temporal Point Processes
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Computation 37(5): 871-885 | Pub Date: 2025-04-17 | DOI: 10.1162/neco_a_01745
Xi Zhang;Akshay Aravamudan;Georgios C. Anagnostopoulos
Temporal point processes are essential for modeling event dynamics in fields such as neuroscience and social media. The time rescaling theorem is commonly used to assess model fit by transforming a point process into a homogeneous Poisson process. However, this approach requires that the process be nonterminating and that complete (hence, unbounded) realizations are observed—conditions that are often unmet in practice. This article introduces a generalized time-rescaling theorem to address these limitations and, as such, facilitates a more widely applicable evaluation framework for point process models in diverse real-world scenarios.
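The classical theorem being generalized is easy to check numerically: mapping event times through the true compensator Λ(t) = ∫₀ᵗ λ(s) ds yields unit-rate exponential interevent times. A sketch with a toy intensity of our choosing:

```python
import numpy as np

def thinning_sample(rate_fn, rate_max, t_end, rng):
    """Sample an inhomogeneous Poisson process on [0, t_end] by thinning."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_end:
            return np.array(events)
        if rng.random() < rate_fn(t) / rate_max:
            events.append(t)

def rescale(events, cum_intensity):
    """Classical time rescaling: map event times through the compensator.

    Under the theorem, successive differences of the rescaled times are
    i.i.d. Exp(1) when the model intensity is the true one.
    """
    return np.diff(np.concatenate(([0.0], cum_intensity(events))))

rng = np.random.default_rng(0)
rate = lambda t: 2.0 + np.sin(t)              # true intensity (bounded by 3)
cum = lambda t: 2.0 * t - np.cos(t) + 1.0     # its integral from 0
spikes = thinning_sample(rate, 3.0, 2000.0, rng)
taus = rescale(spikes, cum)                   # mean should be close to 1
```

The article's generalization targets exactly the cases this sketch assumes away: terminating processes and realizations observed only on a bounded window.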
Citations: 0
Adding Space to Random Networks of Spiking Neurons: A Method Based on Scaling the Network Size
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Computation 37(5): 957-986 | Pub Date: 2025-04-17 | DOI: 10.1162/neco_a_01747
Cecilia Romaro;Jose Roberto Castilho Piqueira;A. C. Roque
Many spiking neural network models are based on random graphs that do not include topological and structural properties featured in real brain networks. To turn these models into spatial networks that describe the topographic arrangement of connections is a challenging task because one has to deal with neurons at the spatial network boundary. Addition of space may generate spurious network behavior like oscillations introduced by periodic boundary conditions or unbalanced neuronal spiking due to lack or excess of connections. Here, we introduce a boundary solution method for networks with added spatial extension that prevents the occurrence of spurious spiking behavior. The method is based on a recently proposed technique for scaling the network size that preserves first- and second-order statistics.
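The kind of scaling rule referenced (preserving first- and second-order input statistics when in-degrees change) can be sketched in a few lines. What follows is the standard variance-preserving weight rescaling with a compensating DC drive, our illustration of the underlying idea rather than the paper's boundary construction:

```python
import numpy as np

def scale_network(K, J, nu, K_new):
    """Rescale synaptic weights when changing the in-degree K -> K_new.

    The rule J' = J * sqrt(K / K_new) keeps the input variance K*J^2*nu
    exact, and a compensating constant drive restores the mean input
    K*J*nu.  Illustrative sketch; K = in-degree, J = weight, nu = rate.
    """
    J_new = J * np.sqrt(K / K_new)
    dc = (K * J - K_new * J_new) * nu     # extra constant drive per neuron
    return J_new, dc

J_new, dc = scale_network(K=1000, J=0.1, nu=8.0, K_new=250)
# second-order statistic (input variance) is preserved exactly:
var_old = 1000 * 0.1**2 * 8.0
var_new = 250 * J_new**2 * 8.0
# first-order statistic (mean input) is preserved via the DC term:
mean_old = 1000 * 0.1 * 8.0
mean_new = 250 * J_new * 8.0 + dc
```

Neurons at a spatial boundary lose in-degree for geometric reasons; applying a correction of this type per neuron is the flavor of compensation the boundary method requires.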
Citations: 0
Elucidating the Theoretical Underpinnings of Surrogate Gradient Learning in Spiking Neural Networks
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Computation 37(5): 886-925 | Pub Date: 2025-04-17 | DOI: 10.1162/neco_a_01752
Julia Gygax;Friedemann Zenke
Training spiking neural networks to approximate universal functions is essential for studying information processing in the brain and for neuromorphic computing. Yet the binary nature of spikes poses a challenge for direct gradient-based training. Surrogate gradients have been empirically successful in circumventing this problem, but their theoretical foundation remains elusive. Here, we investigate the relation of surrogate gradients to two theoretically well-founded approaches. On the one hand, we consider smoothed probabilistic models, which, due to the lack of support for automatic differentiation, are impractical for training multilayer spiking neural networks but provide derivatives equivalent to surrogate gradients for single neurons. On the other hand, we investigate stochastic automatic differentiation, which is compatible with discrete randomness but has not yet been used to train spiking neural networks. We find that the latter gives surrogate gradients a theoretical basis in stochastic spiking neural networks, where the surrogate derivative matches the derivative of the neuronal escape noise function. This finding supports the effectiveness of surrogate gradients in practice and suggests their suitability for stochastic spiking neural networks. However, surrogate gradients are generally not gradients of a surrogate loss despite their relation to stochastic automatic differentiation. Nevertheless, we empirically confirm the effectiveness of surrogate gradients in stochastic multilayer spiking neural networks and discuss their relation to deterministic networks as a special case. Our work gives theoretical support to surrogate gradients and the choice of a suitable surrogate derivative in stochastic spiking neural networks.
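The surrogate-gradient idea itself fits in a few lines: keep the hard threshold in the forward pass but substitute a smooth pseudo-derivative in the backward pass. A toy, hand-written single-unit example (SuperSpike-style fast-sigmoid surrogate; the setup is ours, not the authors'):

```python
def spike(v):
    """Heaviside step: the non-differentiable forward pass."""
    return 1.0 if v >= 0.0 else 0.0

def surrogate_grad(v, beta=10.0):
    """Fast-sigmoid surrogate derivative, standing in for the Heaviside's
    zero-almost-everywhere true derivative."""
    return 1.0 / (beta * abs(v) + 1.0) ** 2

# Train weight w so that spike(w*x - theta) matches a target spike,
# descending the surrogate gradient of the squared error by hand.
x, theta, target, lr = 1.0, 1.0, 1.0, 0.5
w = 0.5                                   # starts sub-threshold: no spike
for _ in range(200):
    v = w * x - theta
    err = spike(v) - target
    w -= lr * 2.0 * err * surrogate_grad(v) * x
```

With the true derivative the weight would never move; the surrogate supplies a learning signal whenever the membrane is near threshold, which is the behavior the article grounds theoretically via escape-noise models.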
Citations: 0
Distributed Synaptic Connection Strength Changes Dynamics in a Population Firing Rate Model in Response to Continuous External Stimuli
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Computation 37(5): 987-1009 | Pub Date: 2025-04-17 | DOI: 10.1162/neco_a_01749
Masato Sugino;Mai Tanaka;Kenta Shimba;Kiyoshi Kotani;Yasuhiko Jimbo
Neural network complexity allows for diverse neuronal population dynamics and realizes higher-order brain functions such as cognition and memory. Complexity is enhanced through chemical synapses with exponentially decaying conductance and greater variation in neuronal connection strength due to synaptic plasticity. However, in macroscopic neuronal population models, synaptic connections are often described by spike connections, and connection strengths within the population are assumed to be uniform. Thus, the effects of synaptic connection variation on network synchronization remain unclear. Based on recent advances in mean field theory for the quadratic integrate-and-fire neuronal network model, we introduce synaptic conductance and variation of connection strength into the excitatory and inhibitory neuronal population model and derive the macroscopic firing rate equations for faithful modeling. We then introduce a heuristic switching rule of the dynamic system with respect to the mean membrane potentials to avoid divergences in the computation caused by variations in the neuronal connection strength. We show that the switching rule agrees with the numerical computation of the microscopic-level model. In the derived model, variations in synaptic conductance and connection strength strongly alter the stability of the solutions to the equations, which is related to the mechanism of synchronous firing. When we apply physiologically plausible values from layer 4 of the mammalian primary visual cortex to the derived model, we observe event-related desynchronization at the alpha and beta frequencies and event-related synchronization at the gamma frequency over a wide range of balanced external currents. Our results show that introducing complex synaptic connections and physiologically valid numerical values into the low-dimensional mean field equations reproduces dynamic changes such as event-related (de)synchronization, and provides a unique mathematical insight into the relationship between synaptic strength variation and the oscillatory mechanism.
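For context, the exact low-dimensional firing-rate reduction for a QIF population with Lorentzian-distributed excitabilities (the mean-field advance this line of work builds on; the conductance and distributed-weight extensions are the article's contribution and are not shown here) takes the form:

```latex
\tau \dot{r} = \frac{\Delta}{\pi \tau} + 2 r v, \qquad
\tau \dot{v} = v^{2} + \bar{\eta} + J \tau r + I(t) - (\pi \tau r)^{2},
```

where $r$ is the population firing rate, $v$ the mean membrane potential, $\bar{\eta}$ and $\Delta$ the center and half-width of the excitability distribution, $J$ the recurrent coupling strength, and $I(t)$ the external input.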
Citations: 0
Multilevel Data Representation for Training Deep Helmholtz Machines
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Computation 37(5): 1010-1033 | Pub Date: 2025-04-17 | DOI: 10.1162/neco_a_01748
Jose Miguel Ramos;Luis Sa-Couto;Andreas Wichert
A vast majority of current research in machine learning is done using algorithms, such as backpropagation, with strong arguments pointing to their biological implausibility, deviating the field’s focus from understanding its original organic inspiration to a compulsive search for optimal performance. Yet a few proposed models respect most of the biological constraints present in the human brain and are valid candidates for mimicking some of its properties and mechanisms. In this letter, we focus on guiding the learning of a biologically plausible generative model, the Helmholtz machine, in complex search spaces using a heuristic based on the human image perception mechanism. We hypothesize that this model’s learning algorithm is not fit for deep networks due to its Hebbian-like local update rule, which renders it incapable of taking full advantage of the compositional properties that multilayer networks provide. We propose to overcome this problem by providing the network’s hidden layers with visual cues at different resolutions using multilevel data representation. Results on several image data sets showed that the model was able to obtain not only better overall quality but also wider diversity in the generated images, corroborating our intuition that the proposed heuristic allows the model to take more advantage of the network’s depth. More important, they show the unexplored possibilities underlying brain-inspired models and techniques.
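One plausible way to produce the multilevel (multiresolution) representations fed to the hidden layers is a simple average-pooling image pyramid; this sketch is our guess at the mechanics, not the authors' code:

```python
import numpy as np

def multilevel_pyramid(img, levels=3):
    """Build coarse-to-fine views of a square image by 2x2 average pooling.

    Returns the views coarsest first, one per layer of a deep network;
    the pairing of resolutions with layers is an illustrative assumption.
    """
    views = [img]
    for _ in range(levels - 1):
        h, w = views[-1].shape
        coarse = views[-1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        views.append(coarse)
    return views[::-1]            # coarsest first

img = np.arange(64, dtype=float).reshape(8, 8)    # toy 8x8 "image"
views = multilevel_pyramid(img)                   # shapes: 2x2, 4x4, 8x8
```

Average pooling preserves the global mean at every level, so each layer sees a consistent, progressively blurrier summary of the same input.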
Citations: 0
The Leaky Integrate-and-Fire Neuron Is a Change-Point Detector for Compound Poisson Processes
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-17 | DOI: 10.1162/neco_a_01750
Shivaram Mani;Paul Hurley;André van Schaik;Travis Monk
Animal nervous systems can detect changes in their environments within hundredths of a second. They do so by discerning abrupt shifts in sensory neural activity. Many neuroscience studies have employed change-point detection (CPD) algorithms to estimate such abrupt shifts in neural activity. But very few studies have suggested that spiking neurons themselves are online change-point detectors. We show that a leaky integrate-and-fire (LIF) neuron implements an online CPD algorithm for a compound Poisson process. We quantify the CPD performance of an LIF neuron under various regions of its parameter space. We show that CPD can be a recursive algorithm where the output of one algorithm can be input to another. Then we show that a simple feedforward network of LIF neurons can quickly and reliably detect very small changes in input spiking rates. For example, our network detects a 5% change in input rates within 20 ms on average, and false-positive detections are extremely rare. In a rigorous statistical context, we interpret the salient features of the LIF neuron: its membrane potential, synaptic weight, time constant, resting potential, action potentials, and threshold. Our results potentially generalize beyond the LIF neuron model and its associated CPD problem. If spiking neurons perform change-point detection on their inputs, then the electrophysiological properties of their membranes must be related to the spiking statistics of their inputs. We demonstrate one example of this relationship for the LIF neuron and compound Poisson processes and suggest how to test this hypothesis more broadly. Maybe neurons are not noisy devices whose action potentials must be averaged over time or populations. Instead, neurons might implement sophisticated, optimal, and online statistical algorithms on their inputs.
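The claim that an LIF neuron acts as an online change-point detector can be illustrated with a toy simulation in which an output spike is read as "change detected", following the article's framing; the parameter values are arbitrary choices of ours, not the paper's:

```python
import numpy as np

def lif_detect(counts, dt=1e-3, tau=0.02, w=0.2, v_th=1.0):
    """Leaky integrate-and-fire membrane driven by binned input spike counts.

    Returns the times (in seconds) of output spikes; the membrane resets
    to zero after each one.  Toy Euler discretization.
    """
    v, detections = 0.0, []
    for i, s in enumerate(counts):
        v += -v * (dt / tau) + w * s      # leak plus weighted input
        if v >= v_th:
            detections.append(i * dt)
            v = 0.0                       # reset after an output spike
    return detections

rng = np.random.default_rng(2)
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
rates = np.where(t < 1.0, 20.0, 200.0)    # input rate jumps at t = 1 s
counts = rng.poisson(rates * dt)          # binned Poisson input spikes
detections = lif_detect(counts)
```

At the low pre-change rate the membrane hovers far below threshold, so output spikes cluster after the rate jump: the leak sets the integration window and the threshold sets the detection criterion, mirroring the statistical reading the article gives these parameters.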
Neural Computation, vol. 37, no. 5, pp. 926–956. Published online 2025-04-17.
Citations: 0
Knowledge as a Breaking of Ergodicity
IF 2.7 · Tier 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-03-18 · DOI: 10.1162/neco_a_01741
Yang He;Vassiliy Lubchenko
We construct a thermodynamic potential that can guide training of a generative model defined on a set of binary degrees of freedom. We argue that upon reduction in description, so as to make the generative model computationally manageable, the potential develops multiple minima. This is mirrored by the emergence of multiple minima in the free energy proper of the generative model itself. The variety of training samples that employ N binary degrees of freedom is ordinarily much lower than the size 2^N of the full phase space. The nonrepresented configurations, we argue, should be thought of as comprising a high-temperature phase separated by an extensive energy gap from the configurations composing the training set. Thus, training amounts to sampling a free energy surface in the form of a library of distinct bound states, each of which breaks ergodicity. The ergodicity breaking prevents escape into the near continuum of states comprising the high-temperature phase; thus, it is necessary for proper functionality. It may, however, have the side effect of limiting access to patterns that were underrepresented in the training set. At the same time, the ergodicity breaking within the library complicates both learning and retrieval. As a remedy, one may concurrently employ multiple generative models—up to one model per free energy minimum.
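The idea of a "library of distinct bound states, each of which breaks ergodicity" can be loosely illustrated — this is a generic Hopfield-style toy, not the authors' construction — by a Hebbian network storing two patterns: the energy E(s) = -½ sᵀWs has a separate minimum near each pattern, and zero-temperature dynamics started inside one basin stays trapped there:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
patterns = rng.choice([-1, 1], size=(2, N))  # two stored "bound states"
W = (patterns.T @ patterns) / N              # Hebbian couplings
np.fill_diagonal(W, 0.0)

def descend(s, steps=500):
    """Zero-temperature asynchronous dynamics: each single-spin update
    lowers E(s) = -1/2 s^T W s, so the state falls into a nearby minimum
    and cannot cross the barrier to the other stored pattern."""
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt pattern 0 by flipping 10% of its spins, then relax.
start = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
start[flip] *= -1
end = descend(start)
overlap = (end @ patterns[0]) / N  # stays close to 1: trapped in basin 0
```

The high overlap after relaxation is the point: the dynamics "retrieves" the stored state rather than wandering over the full 2^N configurations, which is ergodicity breaking in miniature.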
Neural Computation, vol. 37, no. 4, pp. 742–792. Published online 2025-03-18.
Citations: 0
Active Inference and Intentional Behavior
IF 2.7 · Tier 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-03-18 · DOI: 10.1162/neco_a_01738
Karl J. Friston;Tommaso Salvatori;Takuya Isomura;Alexander Tschantz;Alex Kiefer;Tim Verbelen;Magnus Koudahl;Aswin Paul;Thomas Parr;Adeel Razi;Brett J. Kagan;Christopher L. Buckley;Maxwell J. D. Ramstead
Recent advances in theoretical biology suggest that key definitions of basal cognition and sentient behavior may arise as emergent properties of in vitro cell cultures and neuronal networks. Such neuronal networks reorganize activity to demonstrate structured behaviors when embodied in structured information landscapes. In this article, we characterize this kind of self-organization through the lens of the free energy principle, that is, as self-evidencing. We do this by first discussing the definitions of reactive and sentient behavior in the setting of active inference, which describes the behavior of agents that model the consequences of their actions. We then introduce a formal account of intentional behavior that describes agents as driven by a preferred end point or goal in latent state-spaces. We then investigate these forms of (reactive, sentient, and intentional) behavior using simulations. First, we simulate the in vitro experiments, in which neuronal cultures modulated activity to improve gameplay in a simplified version of Pong by implementing nested, free energy minimizing processes. The simulations are then used to deconstruct the ensuing predictive behavior, leading to the distinction between merely reactive, sentient, and intentional behavior with the latter formalized in terms of inductive inference. This distinction is further studied using simple machine learning benchmarks (navigation in a grid world and the Tower of Hanoi problem) that show how quickly and efficiently adaptive behavior emerges under an inductive form of active inference.
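The notion of an agent "driven by a preferred end point or goal in latent state-spaces" can be caricatured in a grid world. The sketch below is plain goal-conditioned planning, not active inference proper: it scores every latent state by its distance to the preferred end point and acts greedily on that score:

```python
from collections import deque

def goal_distances(walls, goal, n=5):
    """Breadth-first search outward from the goal: dist[s] is the number
    of steps needed to reach the goal from state s (walls are blocked)."""
    dist = {goal: 0}
    q = deque([goal])
    while q:
        x, y = q.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if 0 <= nxt[0] < n and 0 <= nxt[1] < n and nxt not in walls and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                q.append(nxt)
    return dist

def act(start, goal, walls=frozenset(), n=5):
    """Greedy descent on distance-to-goal: every step moves to the
    reachable neighbor closest to the preferred end point."""
    dist = goal_distances(walls, goal, n)
    path = [start]
    while path[-1] != goal:
        x, y = path[-1]
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        path.append(min((s for s in nbrs if s in dist), key=dist.get))
    return path

route = act((0, 0), (4, 4))
# an 8-move shortest path on the empty 5x5 grid: len(route) == 9
```

Backward induction from the goal followed by greedy action is the simplest instance of behavior organized around an end point; the paper's inductive inference is far richer, but this captures the directionality being contrasted with merely reactive behavior.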
Neural Computation, vol. 37, no. 4, pp. 666–700. Published online 2025-03-18.
Citations: 0