
Latest Publications in Neural Computation

Deep Nonnegative Matrix Factorization with Beta Divergences.
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-23 | DOI: 10.1162/neco_a_01679
Valentin Leplat, Le T K Hien, Akwum Onwunta, Nicolas Gillis

Deep nonnegative matrix factorization (deep NMF) has recently emerged as a valuable technique for extracting multiple layers of features across different scales. However, all existing deep NMF models and algorithms have primarily centered their evaluation on the least squares error, which may not be the most appropriate metric for assessing the quality of approximations on diverse data sets. For instance, when dealing with data types such as audio signals and documents, it is widely acknowledged that β-divergences offer a more suitable alternative. In this article, we develop new models and algorithms for deep NMF using some β-divergences, with a focus on the Kullback-Leibler divergence. Subsequently, we apply these techniques to the extraction of facial features, the identification of topics within document collections, and the identification of materials within hyperspectral images.
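
For context, here is a minimal NumPy sketch of the classical multiplicative updates for single-layer NMF under the KL divergence (the β = 1 case), the building block that the paper generalizes to deep, multilayer factorizations. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def kl_nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates for V ~ W @ H under the KL divergence (beta = 1)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        W *= ((V / (W @ H + eps)) @ H.T) / (ones @ H.T + eps)
        H *= (W.T @ (V / (W @ H + eps))) / (W.T @ ones + eps)
    return W, H

# Deep NMF stacks such factorizations, V ~ W1 @ W2 @ ... @ H, refined layer by layer.
```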

Citations: 0
Orthogonal Gated Recurrent Unit With Neumann-Cayley Transformation.
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-23 | DOI: 10.1162/neco_a_01710
Vasily Zadorozhnyy, Edison Mucllari, Cole Pospisil, Duc Nguyen, Qiang Ye

In recent years, using orthogonal matrices has been shown to be a promising approach to improving the training, stability, and convergence of recurrent neural networks (RNNs), particularly for controlling gradients. While gated recurrent unit (GRU) and long short-term memory (LSTM) architectures address the vanishing gradient problem by using a variety of gates and memory cells, they are still prone to the exploding gradient problem. In this work, we analyze the gradients in GRU and propose the use of orthogonal matrices to prevent exploding gradient problems and enhance long-term memory. We study where to use orthogonal matrices and propose a Neumann series-based scaled Cayley transformation for training orthogonal matrices in GRU, which we call Neumann-Cayley orthogonal GRU (NC-GRU). We present detailed experiments of our model on several synthetic and real-world tasks, which show that NC-GRU significantly outperforms GRU and several other RNNs.
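
As background, a small NumPy sketch of the identity behind the approach: the Cayley transform of a skew-symmetric matrix is orthogonal, and the matrix inverse it requires can be approximated with a truncated Neumann series. This illustrates only the core computation, not the paper's scaled transform or its use inside GRU training.

```python
import numpy as np

def neumann_cayley(A, k=20):
    """Cayley transform W = (I + A)^(-1) (I - A), with the inverse replaced by
    the truncated Neumann series sum_{j<=k} (-A)^j (valid when ||A|| < 1).
    For skew-symmetric A, the exact W is orthogonal."""
    n = A.shape[0]
    inv_approx = np.eye(n)
    term = np.eye(n)
    for _ in range(k):
        term = term @ (-A)
        inv_approx += term
    return inv_approx @ (np.eye(n) - A)

rng = np.random.default_rng(0)
B = 0.1 * rng.standard_normal((8, 8))
A = B - B.T                                 # skew-symmetric
W = neumann_cayley(A)
print(np.linalg.norm(W.T @ W - np.eye(8)))  # close to zero: W is nearly orthogonal
```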

Citations: 0
Latent Space Bayesian Optimization With Latent Data Augmentation for Enhanced Exploration.
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-23 | DOI: 10.1162/neco_a_01708
Onur Boyar, Ichiro Takeuchi

Latent space Bayesian optimization (LSBO) combines generative models, typically variational autoencoders (VAEs), with Bayesian optimization (BO) to generate de novo objects of interest. However, LSBO faces challenges due to the mismatch between the objectives of BO and the VAE, resulting in poor exploration capabilities. In this article, we propose novel contributions to enhance LSBO efficiency and overcome this challenge. We first introduce the concept of latent consistency/inconsistency as a crucial problem in LSBO, arising from the VAE-BO mismatch. To address this, we propose the latent consistent aware-acquisition function (LCA-AF), which leverages consistent points in LSBO. Additionally, we present LCA-VAE, a novel VAE method that creates a latent space with more consistent points through data augmentation in latent space and penalization of latent inconsistencies. Combining LCA-VAE and LCA-AF, we develop LCA-LSBO. Our approach achieves high sample efficiency and effective exploration, emphasizing the significance of addressing latent consistency through the novel incorporation of data augmentation in latent space within LCA-VAE. We showcase the performance of our proposal via de novo image generation and de novo chemical design tasks.
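
To make the setting concrete, here is a generic latent-space BO loop using scikit-learn's Gaussian process with an expected-improvement acquisition. The `decode` and `objective` functions are placeholders standing in for a trained VAE decoder and a black-box score; the paper's LCA-VAE and LCA-AF components are not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(mu, sigma, best, xi=0.01):
    imp = mu - best - xi
    z = imp / np.maximum(sigma, 1e-12)
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

def decode(z):                    # placeholder for a trained VAE decoder
    return z

def objective(x):                 # placeholder black-box score to maximize
    return -np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(0)
Z = rng.uniform(-1, 1, (10, 2))   # initial latent designs
y = objective(decode(Z))
for _ in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(Z, y)
    cand = rng.uniform(-1, 1, (256, 2))          # candidate latent points
    mu, sigma = gp.predict(cand, return_std=True)
    z_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
    Z = np.vstack([Z, z_next])
    y = np.append(y, objective(decode(z_next[None, :]))[0])
print("best score found:", y.max())
```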

Citations: 0
ℓ1-Regularized ICA: A Novel Method for Analysis of Task-Related fMRI Data.
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-23 | DOI: 10.1162/neco_a_01709
Yusuke Endo, Koujin Takeda

We propose a new method of independent component analysis (ICA) in order to extract appropriate features from high-dimensional data. In general, matrix factorization methods, including ICA, suffer from limited interpretability of the extracted features. A sparsity constraint on the factorized matrix helps improve interpretability. With this background, we construct a new ICA method with sparsity. In our method, an ℓ1-regularized IC term is added to the cost function of ICA, and the cost function is minimized by a difference of convex functions algorithm. To validate the proposed method, we apply it to synthetic data and real functional magnetic resonance imaging data.
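
The paper minimizes its cost with a difference of convex functions algorithm; as a simpler stand-in that shows how an ℓ1 term is typically handled in such objectives, here is proximal gradient descent (soft-thresholding) on a toy smooth cost. All names are illustrative and the cost is not the paper's ICA objective.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: shrink entries toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(grad_f, W0, lam, step, n_iter=500):
    """Proximal gradient for min_W f(W) + lam * ||W||_1."""
    W = W0.copy()
    for _ in range(n_iter):
        W = soft_threshold(W - step * grad_f(W), step * lam)
    return W

# toy smooth cost f(W) = 0.5 * ||W - M||_F^2, whose gradient is W - M
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
W = ista(lambda W: W - M, np.zeros((4, 4)), lam=0.5, step=1.0)
print(np.round(W, 2))  # entries of M shrunk toward zero; small entries exactly zero
```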

Citations: 0
KLIF: An Optimized Spiking Neuron Unit for Tuning Surrogate Gradient Function.
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-23 | DOI: 10.1162/neco_a_01712
Chunming Jiang, Yilei Zhang

Spiking neural networks (SNNs) have garnered significant attention owing to their adeptness in processing temporal information, low power consumption, and enhanced biological plausibility. Despite these advantages, the development of efficient and high-performing learning algorithms for SNNs remains a formidable challenge. Techniques such as artificial neural network (ANN)-to-SNN conversion can convert ANNs to SNNs with minimal performance loss, but they necessitate prolonged simulations to approximate rate coding accurately. Conversely, the direct training of SNNs using spike-based backpropagation (BP), such as surrogate gradient approximation, is more flexible and widely adopted. Nevertheless, our research revealed that the shape of the surrogate gradient function profoundly influences the training and inference accuracy of SNNs. The shape of the surrogate gradient function is typically selected manually before training and remains static throughout the training process. In this article, we introduce a novel k-based leaky integrate-and-fire (KLIF) spiking neural model. KLIF, featuring a learnable parameter, enables the dynamic adjustment of the height and width of the effective surrogate gradient near the threshold during training. Our proposed model is evaluated on the static CIFAR-10 and CIFAR-100 data sets, as well as the neuromorphic CIFAR10-DVS and DVS128-Gesture data sets. Experimental results demonstrate that KLIF outperforms the leaky integrate-and-fire (LIF) model across multiple data sets and network architectures. This superior performance positions KLIF as a viable replacement for the essential role of LIF in SNNs across diverse tasks.
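
A hedged PyTorch sketch of the idea: a leaky integrate-and-fire step whose pre-threshold potential is scaled by a learnable parameter k, so that training reshapes the effective surrogate gradient. This is a schematic reading of KLIF with illustrative names, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike on the forward pass; sigmoid-derivative surrogate
    gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        s = torch.sigmoid(x)
        return grad_out * s * (1 - s)

class KLIFNeuron(nn.Module):
    """LIF step with a learnable scale k applied before thresholding; because k
    multiplies the surrogate's argument, learning k adjusts the height and
    width of the effective surrogate gradient near threshold."""
    def __init__(self, tau=2.0, v_th=1.0):
        super().__init__()
        self.k = nn.Parameter(torch.tensor(1.0))
        self.tau, self.v_th = tau, v_th

    def forward(self, inputs):  # inputs: (time, batch, features)
        v = torch.zeros_like(inputs[0])
        spikes = []
        for x in inputs:
            v = v + (x - v) / self.tau          # leaky integration
            s = SpikeFn.apply(self.k * (v - self.v_th))
            v = v * (1 - s.detach())            # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)
```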

Citations: 0
Associative Learning and Active Inference.
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-23 | DOI: 10.1162/neco_a_01711
Petr Anokhin, Artyom Sorokin, Mikhail Burtsev, Karl Friston

Associative learning is a behavioral phenomenon in which individuals develop connections between stimuli or events based on their co-occurrence. First studied by Pavlov in his conditioning experiments, the fundamental principles of learning have since been extended through the discovery of a wide range of learning phenomena. Computational models have been developed based on the concept of minimizing reward prediction errors. The Rescorla-Wagner model, in particular, is a well-known model that has greatly influenced the field of reinforcement learning. However, the simplicity of these models restricts their ability to fully explain the diverse range of behavioral phenomena associated with learning. In this study, we adopt the free energy principle, which suggests that living systems strive to minimize surprise or uncertainty under their internal models of the world. We consider the learning process as the minimization of free energy and investigate its relationship with the Rescorla-Wagner model, focusing on the informational aspects of learning, different types of surprise, and prediction errors based on beliefs and values. Furthermore, we explore how well-known behavioral phenomena such as blocking, overshadowing, and latent inhibition can be modeled within the active inference framework. We accomplish this by using the informational and novelty aspects of attention, which share ideas proposed by seemingly contradictory models such as the Mackintosh and Pearce-Hall models. Thus, we demonstrate that the free energy principle, as a theoretical framework derived from first principles, can integrate the ideas and models of associative learning proposed on the basis of empirical experiments and serve as a framework for a better understanding of the computational processes behind associative learning in the brain.
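
Since the article builds on the Rescorla-Wagner model, a minimal simulation of its update rule is useful context: associative strengths change in proportion to a shared prediction error, which reproduces blocking, one of the phenomena discussed. Parameter values are arbitrary.

```python
import numpy as np

def rescorla_wagner(trials, n_stimuli, alpha=0.1, lam=1.0):
    """V[i] <- V[i] + alpha * (lam * reward - sum of V over present stimuli)."""
    V = np.zeros(n_stimuli)
    for present, reward in trials:
        pred_error = lam * reward - V[present].sum()
        V[present] += alpha * pred_error
    return V

# blocking: stimulus A alone is paired with reward, then the compound A+B is;
# B acquires little strength because A already predicts the reward.
phase1 = [([0], 1.0)] * 100        # A -> reward
phase2 = [([0, 1], 1.0)] * 100     # A+B -> reward
V = rescorla_wagner(phase1 + phase2, n_stimuli=2)
print(V)  # V[0] near 1.0, V[1] near 0.0 (blocked)
```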

Citations: 0
Realizing Synthetic Active Inference Agents, Part II: Variational Message Updates.
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-23 | DOI: 10.1162/neco_a_01713
Thijs van de Laar, Magnus Koudahl, Bert de Vries

The free energy principle (FEP) describes (biological) agents as minimizing a variational free energy (FE) with respect to a generative model of their environment. Active inference (AIF) is a corollary of the FEP that describes how agents explore and exploit their environment by minimizing an expected FE objective. In two related papers, we describe a scalable, epistemic approach to synthetic AIF by message passing on free-form Forney-style factor graphs (FFGs). A companion paper (part I of this article; Koudahl et al., 2023) introduces a constrained FFG (CFFG) notation that visually represents (generalized) FE objectives for AIF. This article (part II) derives message-passing algorithms that minimize (generalized) FE objectives on a CFFG by variational calculus. A comparison between simulated Bethe and generalized FE agents illustrates how the message-passing approach to synthetic AIF induces epistemic behavior on a T-maze navigation task. Extending the T-maze simulation to learning goal statistics and a multiagent bargaining setting illustrates how this approach encourages reuse of nodes and updates in alternative settings. With a full message-passing account of synthetic AIF agents, it becomes possible to derive and reuse message updates across models and move closer to industrial applications of synthetic AIF.
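
To ground the free energy objective in its simplest discrete form (not the paper's CFFG message passing), the following sketch evaluates the variational free energy for a one-factor generative model and confirms that the exact Bayesian posterior attains F = -ln p(o), the surprise.

```python
import numpy as np

def variational_free_energy(q, A, prior, obs):
    """F = E_q[ln q(s) - ln p(o|s) - ln p(s)] for a single discrete state factor."""
    eps = 1e-16
    return (q * (np.log(q + eps) - np.log(A[obs] + eps) - np.log(prior + eps))).sum()

A = np.array([[0.9, 0.2],     # likelihood p(o|s): rows observations, columns states
              [0.1, 0.8]])
prior = np.array([0.5, 0.5])  # p(s)
obs = 0

# the exact posterior minimizes F, and at the minimum F equals the surprise -ln p(o)
post = A[obs] * prior / (A[obs] * prior).sum()
print(variational_free_energy(post, A, prior, obs), -np.log((A[obs] * prior).sum()))
```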

Citations: 0
Electrical Signaling Beyond Neurons.
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-17 | DOI: 10.1162/neco_a_01696
Travis Monk, Nik Dennler, Nicholas Ralph, Shavika Rastogi, Saeed Afshar, Pablo Urbizagastegui, Russell Jarvis, André van Schaik, Andrew Adamatzky

Neural action potentials (APs) are difficult to interpret as signal encoders and/or computational primitives. Their relationships with stimuli and behaviors are obscured by the staggering complexity of nervous systems themselves. We can reduce this complexity by observing that "simpler" neuron-less organisms also transduce stimuli into transient electrical pulses that affect their behaviors. Without a complicated nervous system, APs are often easier to understand as signal/response mechanisms. We review examples of nonneural stimulus transductions in domains of life largely neglected by theoretical neuroscience: bacteria, protozoans, plants, fungi, and neuron-less animals. We report properties of those electrical signals, for example, amplitudes, durations, ionic bases, refractory periods, and particularly their ecological purposes. We compare those properties with those of neurons to infer the tasks and selection pressures that neurons satisfy. Throughout the tree of life, nonneural stimulus transductions time behavioral responses to environmental changes. Nonneural organisms represent the presence or absence of a stimulus with the presence or absence of an electrical signal. Their transductions usually exhibit high sensitivity and specificity to a stimulus, but are often slow compared to neurons. Neurons appear to be sacrificing the specificity of their stimulus transductions for sensitivity and speed. We interpret cellular stimulus transductions as a cell's assertion that it detected something important at that moment in time. In particular, we consider neural APs as fast but noisy detection assertions. We infer that a principal goal of nervous systems is to detect extremely weak signals from noisy sensory spikes under enormous time pressure. We discuss neural computation proposals that address this goal by casting neurons as devices that implement online, analog, probabilistic computations with their membrane potentials. Those proposals imply a measurable relationship between afferent neural spiking statistics and efferent neural membrane electrophysiology.

Citations: 0
Trainable Reference Spikes Improve Temporal Information Processing of SNNs With Supervised Learning.
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-17 | DOI: 10.1162/neco_a_01702
Zeyuan Wang, Luis Cruz

Spiking neural networks (SNNs) are the next-generation neural networks composed of biologically plausible neurons that communicate through trains of spikes. By modifying the plastic parameters of SNNs, including weights and time delays, SNNs can be trained to perform various AI tasks, although in general not at the same level of performance as typical artificial neural networks (ANNs). One possible solution to improve the performance of SNNs is to consider plastic parameters beyond weights and time delays, drawn from the inherent complexity of the neural system of the brain, which may help SNNs improve their information processing ability and achieve brainlike functions. Here, we propose reference spikes as a new type of plastic parameter in a supervised learning scheme for SNNs. A neuron receives reference spikes through synapses that provide reference information independent of the input to help during learning; the number and timing of these reference spikes are trainable by error backpropagation. Theoretically, reference spikes improve the temporal information processing of SNNs by modulating the integration of incoming spikes at a detailed level. Through comparative computational experiments with supervised learning, we demonstrate that reference spikes improve the memory capacity of SNNs to map input spike patterns to target output spike patterns and increase classification accuracy on the MNIST, Fashion-MNIST, and SHD data sets, where both input and target output are temporally encoded. Our results demonstrate that applying reference spikes improves the performance of SNNs by enhancing their temporal information processing ability.
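
One way to picture the mechanism, as a rough PyTorch sketch: append a learnable, time-indexed "reference" activity channel to every time step's input so that backpropagation can shape it. The paper trains discrete spike counts and timings; here the reference activity is relaxed to real values, and all names are illustrative.

```python
import torch
import torch.nn as nn

class ReferenceInputs(nn.Module):
    """Append learnable time-indexed reference activity to each time step,
    giving downstream neurons an input-independent signal shaped by training."""
    def __init__(self, T, n_ref):
        super().__init__()
        self.ref = nn.Parameter(torch.zeros(T, n_ref))  # trained by backprop
    def forward(self, x):  # x: (T, batch, features)
        ref = torch.sigmoid(self.ref).unsqueeze(1).expand(-1, x.shape[1], -1)
        return torch.cat([x, ref], dim=-1)

# usage: layer = ReferenceInputs(T=100, n_ref=8); augmented = layer(spike_input)
```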

Citations: 0
Inference on the Macroscopic Dynamics of Spiking Neurons.
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-17 | DOI: 10.1162/neco_a_01701
Nina Baldy, Martin Breyton, Marmaduke M Woodman, Viktor K Jirsa, Meysam Hashemi

The process of inference on networks of spiking neurons is essential to decipher the underlying mechanisms of brain computation and function. In this study, we conduct inference on parameters and dynamics of a mean-field approximation, simplifying the interactions of neurons. Estimating parameters of this class of generative model allows one to predict the system's dynamics and responses under changing inputs and, indeed, changing parameters. We first assume a set of known state-space equations and address the problem of inferring the lumped parameters from observed time series. Crucially, we consider this problem in the setting of bistability, random fluctuations in system dynamics, and partial observations, in which some states are hidden. To identify the most efficient estimation or inversion scheme in this particular system identification, we benchmark against state-of-the-art optimization and Bayesian estimation algorithms, highlighting their strengths and weaknesses. Additionally, we explore how well the statistical relationships between parameters are maintained across different scales. We found that deep neural density estimators outperform other algorithms in the inversion scheme, despite potentially resulting in overestimated uncertainty and correlation between parameters. Nevertheless, this issue can be improved by incorporating time-delay embedding. We then eschew the mean-field approximation and employ deep neural ODEs on spiking neurons, illustrating prediction of system dynamics and vector fields from microscopic states. Overall, this study affords an opportunity to predict brain dynamics and responses to various perturbations or pharmacological interventions using deep neural networks.
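
As a toy version of the inversion problem the abstract describes, the sketch below fits the parameters of a one-dimensional rate model to noisy observations by least squares. This stands in for, and is far simpler than, the Bayesian and deep density estimators the paper benchmarks; the model and names are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(theta, t):
    """Toy mean-field rate model dr/dt = theta[0] - theta[1] * r, from r(0) = 0."""
    sol = solve_ivp(lambda t, r: theta[0] - theta[1] * r,
                    (t[0], t[-1]), [0.0], t_eval=t)
    return sol.y[0]

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 50)
true_theta = np.array([2.0, 1.5])
obs = simulate(true_theta, t) + 0.05 * rng.standard_normal(t.size)  # noisy data

fit = least_squares(lambda th: simulate(th, t) - obs, x0=[1.0, 1.0])
print(fit.x)  # recovered parameters, close to [2.0, 1.5]
```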

Citations: 0