
Latest Publications in Neural Computation

Attractor-Based Models for Sequences and Pattern Generation in Neural Circuits.
IF 2.1 | CAS Region 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-02 | DOI: 10.1162/NECO.a.1492
Juliana Londono Alvarez, Katherine Morrison, Carina Curto

Neural circuits in the brain perform a variety of essential functions, including input classification, pattern completion, and the generation of rhythms and oscillations that support functions such as breathing and locomotion. There is also substantial evidence that the brain encodes memories and processes information via sequences of neural activity. Traditionally, rhythmic activity and pattern generation have been modeled using coupled oscillators, whereas input classification and pattern completion have been modeled using attractor neural networks. Here, we present a theoretical framework that demonstrates how attractor-based networks can also generate diverse rhythmic patterns, such as those of central pattern generator (CPG) circuits. Additionally, we propose a mechanism for transitioning between patterns. Specifically, we construct a network that can step through a sequence of five different quadruped gaits. It is composed of two dynamically distinct modules: a counter network that can count the number of external inputs it receives via a sequence of fixed points, and a locomotion network that encodes five different quadruped gaits as limit cycles. A sequence of locomotor gaits is obtained by connecting the counter network with the locomotion network. In particular, we introduce a new architecture for layering networks that produces fusion attractors, binding pairs of attractors from individual layers. All of this is accomplished within a unified framework of attractor-based models using threshold-linear networks.
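For readers unfamiliar with the model class, the sketch below shows basic threshold-linear network (TLN) dynamics with a hypothetical three-node cyclic motif; the graph and parameter values follow common conventions for such models and are not taken from the paper's counter/locomotion construction.

```python
import numpy as np

def tln_step(x, W, b, dt=0.01):
    # Threshold-linear network dynamics: dx/dt = -x + [W x + b]_+
    return x + dt * (-x + np.maximum(0.0, W @ x + b))

# Hypothetical 3-node cycle (1 -> 2 -> 3 -> 1): weak inhibition along edges,
# strong inhibition elsewhere; such motifs support limit-cycle attractors.
eps, delta = 0.25, 0.5
W = np.full((3, 3), -1.0 - delta)
np.fill_diagonal(W, 0.0)
for (i, j) in [(1, 0), (2, 1), (0, 2)]:   # W[i, j] implements the edge j -> i
    W[i, j] = -1.0 + eps
b = np.ones(3)

x = np.array([0.2, 0.0, 0.0])
traj = []
for _ in range(20000):
    x = tln_step(x, W, b)
    traj.append(x.copy())
# traj settles into a periodic pattern of sequential activation (a limit cycle)
```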

Citations: 0
Local Glutamate-Glutamine Cycling Underlies Presynaptic ATP Homeostasis.
IF 2.1 | CAS Region 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-02 | DOI: 10.1162/NECO.a.1490
Reinoud Maex

Presynaptic axon terminals maintain in their cytosol an almost constant level of adenosine triphosphate (ATP) to safeguard neurotransmission during varying workloads. In the study reported in this letter, it is argued that the vesicular release of neurotransmitter and the recycling of transmitter via astrocytes may themselves constitute a mechanism of ATP homeostasis. In a minimal metabolic model of a presynaptic axon bouton, the accumulation of glutamate into vesicles and the activity-dependent supply of its precursor glutamine by astrocytes generated a steady-state level of ATP that was independent of the workload. When the workload increased, an enhanced supply of glutamine raised the rate of ATP production through the conversion of glutamate to the Krebs cycle intermediate α-ketoglutarate. The accumulation and release of glutamate, on the other hand, acted as a leak that diminished ATP production when the workload decreased. The fraction of ATP that the axon spent on the release and recycling of glutamate was small (4.7%), irrespective of the workload. Increasing this fraction enhanced the speed of ATP homeostasis and reduced the futile production of ATP. The model can be extended to axons releasing other, or coreleasing multiple, transmitters. Hence, the activity-dependent formation and release of neurotransmitter may be a universal mechanism of ATP homeostasis.
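As a purely illustrative aid, the toy flux model below mimics the bookkeeping described in the abstract (workload-scaled glutamine supply, ATP-producing oxidation of glutamate, ATP-consuming vesicular loading). All rate constants are made up; this is not the paper's quantitative model and does not reproduce its homeostasis result.

```python
import numpy as np

def step(state, workload, dt=1e-3):
    # Toy fluxes with hypothetical rate constants (not the paper's model)
    gln, glu, atp = state
    supply  = 0.5 * workload        # astrocytic glutamine delivery scales with activity
    to_glu  = 1.0 * gln             # glutaminase: glutamine -> glutamate
    oxidize = 0.8 * glu             # glutamate -> alpha-ketoglutarate -> Krebs cycle
    load    = 0.2 * glu * workload  # vesicular loading and release
    d_gln = supply - to_glu
    d_glu = to_glu - oxidize - load
    d_atp = 3.0 * oxidize - 1.0 * load - 0.5 * atp  # production - cost - consumption
    return np.array([gln + dt * d_gln, glu + dt * d_glu, atp + dt * d_atp])

state = np.array([1.0, 1.0, 1.0])   # [glutamine, glutamate, ATP]
for _ in range(100_000):
    state = step(state, workload=2.0)
```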

Citations: 0
Reframing the Expected Free Energy: Four Formulations and a Unification.
IF 2.1 | CAS Region 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-02 | DOI: 10.1162/NECO.a.1491
Théophile Champion, Howard Bowman, Dimitrije Marković, Marek Grześ

Active inference is a process theory of perception, learning, and decision making that is applied to a range of research fields, including neuroscience, robotics, psychology, and machine learning. Active inference rests on an objective function called the expected free energy, which can be justified by the intuitive plausibility of its formulations, for example, the risk plus ambiguity and the information gain/pragmatic value formulations. This letter seeks to formalize the problem of deriving these formulations from a single root expected free energy definition: the unification problem. We then analyze two approaches to defining the expected free energy. More precisely, the expected free energy is defined either as (1) the risk over observations plus ambiguity or as (2) the risk over states plus ambiguity. In the first setting, no rigorous mathematical justification for the expected free energy has been proposed to date, but all the formulations can be recovered from it by assuming that the likelihood of the target distribution T(o|s) equals the likelihood of the generative model P(o|s). Importantly, under this likelihood constraint, if the likelihood is lossless, then prior preferences over observations can be defined arbitrarily. However, in the more general case of partially observable Markov decision processes (POMDPs), we demonstrate that the likelihood constraint effectively restricts the set of valid prior preferences over observations. Indeed, only a limited class of prior preferences over observations is compatible with the likelihood mapping of the generative model. In the second setting, a justification of the root expected free energy definition exists, but this setting accounts for only two formulations: the risk over states plus ambiguity and the entropy plus expected energy formulations. We conclude with a discussion of the conditions under which a unification of expected free energy formulations has been proposed in the literature by appeal to the free energy principle, in the specific context of systems without random fluctuations.
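For concreteness, here is a small numerical sketch of the risk-over-observations-plus-ambiguity formulation for discrete distributions; the matrices and vectors are hypothetical, and the function implements the standard textbook expression rather than anything specific to this letter.

```python
import numpy as np

def kl(p, q):
    # Discrete KL divergence D_KL[p || q] (assumes q > 0 wherever p > 0)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def efe_risk_ambiguity(Q_s, A, C):
    # Q_s: Q(s|pi); A: likelihood P(o|s), columns indexed by s; C: prior P(o)
    Q_o = A @ Q_s                                  # predicted observation distribution
    risk = kl(Q_o, C)                              # D_KL[Q(o|pi) || P(o)]
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)   # conditional entropy H[P(o|s)]
    return risk + float(H_A @ Q_s)                 # risk + expected ambiguity

# Hypothetical two-state, two-outcome example
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
Q_s = np.array([0.6, 0.4])
C = np.array([0.7, 0.3])
G = efe_risk_ambiguity(Q_s, A, C)
```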

Citations: 0
Exploring the Interplay between BOLD Signal Variability, Complexity, Static and Dynamic Functional Brain Network Features During Movie Viewing.
IF 2.1 | CAS Region 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-02 | DOI: 10.1162/NECO.a.1488
Amir Hossein Ghaderi, Hongye Wang, Andrea B Protzner

Exploring the dynamics and complexity of brain signals is critical to advancing our understanding of brain function. Recent fMRI studies have revealed links between BOLD signal variability or complexity and static/dynamic features of functional brain networks (FBN). However, the association between variability/complexity and regional centrality is still understudied. Here, we investigate the association between variability/complexity and static/dynamic nodal features of the FBN using graph theory analysis of fMRI BOLD data acquired during naturalistic movie watching. We found that variability positively correlated with fine-scale complexity but negatively correlated with coarse-scale complexity. Specifically, regions with high centrality and clustering coefficient were associated with less variable but more complex signals. Similar relationships persisted for the dynamic FBN, but the associations with certain aspects of regional centrality dynamics (e.g., eigenvector centrality) became nonsignificant. Our findings demonstrate that the relationship of BOLD signal variability and static/dynamic FBN features with BOLD signal complexity depends on the temporal scale at which complexity is measured, and that the time-varying features of the FBN reflect how BOLD signal variability/complexity coevolves with the dynamic FBN.
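As a rough illustration of the kind of nodal analysis described, the sketch below computes eigenvector centrality of a correlation-based network and correlates it with per-region BOLD standard deviation; the data are random placeholders, and the pipeline omits the preprocessing, complexity measures, and dynamic-network steps the study actually uses.

```python
import numpy as np

def eigenvector_centrality(W, iters=1000, tol=1e-10):
    # Power iteration on a nonnegative connectivity matrix
    v = np.ones(W.shape[0]) / np.sqrt(W.shape[0])
    for _ in range(iters):
        v_new = W @ v
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < tol:
            break
        v = v_new
    return v

bold = np.random.randn(100, 400)     # regions x time points (placeholder data)
fc = np.abs(np.corrcoef(bold))       # static functional network from correlations
np.fill_diagonal(fc, 0.0)
centrality = eigenvector_centrality(fc)
variability = bold.std(axis=1)       # per-region BOLD variability
r = np.corrcoef(centrality, variability)[0, 1]
```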

Citations: 0
Neuromodulators Generate Multiple Context-Relevant Behaviors in Recurrent Neural Networks.
IF 2.1 | CAS Region 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-02 | DOI: 10.1162/NECO.a.1489
Ben Tsuda, Stefan C Pate, Kay M Tye, Hava T Siegelmann, Terrence J Sejnowski

Neuromodulators are critical controllers of neural states, with dysfunctions linked to various neuropsychiatric disorders. Although many biological aspects of neuromodulation have been studied, the computational principles underlying how neuromodulation of distributed neural populations controls brain states remain unclear. In contrast to external contextual inputs, neuromodulation can act as a single scalar signal that is broadcast to a vast population of neurons. We model the modulation of synaptic weights in a recurrent neural network and show that neuromodulators can dramatically alter the function of a network, even when highly simplified. We find that under structural constraints like those in brains, this provides a fundamental mechanism that can increase the computational capability and flexibility of a neural network. Diffuse modulation of synaptic weights enables the storage of multiple memories using a common set of synapses that is able to generate diverse, even diametrically opposed, behaviors. Our findings help explain how neuromodulators unlock specific behaviors by creating task-specific hyperchannels in neural activity space, and they motivate more flexible, compact, and capable machine learning architectures.
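The minimal sketch below illustrates the core idea of a single scalar neuromodulatory signal rescaling all recurrent weights and thereby switching the network's dynamical regime; it uses a generic random network rather than the trained networks studied in the paper.

```python
import numpy as np

def rnn_step(h, x, W, W_in, g, dt=0.1):
    # One scalar neuromodulatory gain g rescales every recurrent weight
    return (1 - dt) * h + dt * np.tanh((g * W) @ h + W_in @ x)

rng = np.random.default_rng(0)
N = 200
W = rng.normal(0, 1 / np.sqrt(N), (N, N))   # random recurrent weights
W_in = rng.normal(0, 1.0, (N, 3))
x = np.zeros(3)                             # no external input

for g in (0.5, 1.5):                        # low vs. high neuromodulatory tone
    h = rng.normal(0, 0.5, N)
    for _ in range(2000):
        h = rnn_step(h, x, W, W_in, g)
    # g = 0.5: activity decays to a fixed point; g = 1.5: sustained, irregular activity
    print(g, np.abs(h).mean())
```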

Citations: 0
Object Detection, Recognition, Deep Learning, and the Universal Law of Generalization.
IF 2.1 | CAS Region 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-02 | DOI: 10.1162/NECO.a.1483
Faris B Rustom, Rohan Sharma, Haluk Öğmen, Arash Yazdanbakhsh

Object detection and recognition are fundamental functions that play a significant role in the success of species. Because the appearance of an object exhibits large variability, the brain has to group these different stimuli under the same object identity, a process of generalization. Does the process of generalization follow some general principles, or is it an ad hoc bag of tricks? The universal law of generalization (ULoG) provides evidence that generalization follows similar properties across a variety of species and tasks. Here, we tested the hypothesis derived from ULoG that the internal representations underlying generalization reflect the natural properties of object detection and recognition in our environment rather than the specifics of the system solving these problems. Neural networks with universal-approximation capability have been successful in many object detection and recognition tasks; however, how these networks reach their decisions remains opaque. To provide a strong test for ecological validity, we used natural camouflage, which is nature's test bed for object detection and recognition. We trained a deep neural network with natural images of "clear" and "camouflaged" animals and examined the emerging internal representations. We extended ULoG to a realistic learning regime, with multiple consequential stimuli, and developed two methods to determine category prototypes. Our results show that with a proper choice of category prototypes, the generalization functions are monotone decreasing, similar to the generalization functions of biological systems. Critically, we show that camouflaged inputs are not represented randomly but rather systematically appear at the tail of the monotone decreasing functions. Our results support the hypothesis that the internal representations underlying generalization in object detection and recognition are shaped mainly by the properties of the ecological environment, even though different biological and artificial systems may generate these internal representations through drastically different learning and adaptation processes. Furthermore, the extended version of ULoG provides a tool to analyze how the system organizes its internal representations during learning as well as how it makes its decisions.
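As a toy illustration of the analysis style described (not the paper's method), one can take a class-mean prototype in some representation space and check that generalization decays monotonically with distance, with camouflaged items expected in the tail; all names and data here are placeholders.

```python
import numpy as np

reps = np.random.randn(500, 128)            # stimuli x features (placeholder activations)
labels = np.random.randint(0, 2, 500)
prototype = reps[labels == 0].mean(axis=0)  # one simple prototype choice: the class mean

d = np.linalg.norm(reps[labels == 0] - prototype, axis=1)  # distance to prototype
g = np.exp(-d)                              # Shepard-style monotone decreasing generalization
# Camouflaged inputs would be expected at large d, i.e., in the tail of g.
```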

Citations: 0
Simulated Complex Cells Contribute to Object Recognition Through Representational Untangling.
IF 2.1 | CAS Region 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-20 | DOI: 10.1162/NECO.a.1480
Mitchell B Slapik, Harel Z Shouval

The visual system performs a remarkable feat: it takes complex retinal activation patterns and decodes them for object recognition. This operation, termed "representational untangling," organizes neural representations by clustering similar objects together while separating different categories of objects. While representational untangling is usually associated with higher-order visual areas like the inferior temporal cortex, it remains unclear how the early visual system contributes to this process-whether through highly selective neurons or high-dimensional population codes. This article investigates how a computational model of early vision contributes to representational untangling. Using a computational visual hierarchy and two different data sets consisting of numerals and objects, we demonstrate that simulated complex cells significantly contribute to representational untangling for object recognition. Our findings challenge prior theories by showing that untangling does not depend on skewed, sparse, or high-dimensional representations. Instead, simulated complex cells reformat visual information into a low-dimensional, yet more separable, neural code, striking a balance between representational untangling and computational efficiency.
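For background, complex cells are commonly simulated with the classic energy model: squaring and summing the responses of a quadrature pair of Gabor filters yields a phase-invariant response. The sketch below shows that standard construction, which may differ in detail from the hierarchy used in the article.

```python
import numpy as np

def gabor(size, theta, phase, freq=0.2, sigma=3.0):
    # Oriented Gabor filter; a quadrature pair differs only in phase
    ax = np.arange(size) - size // 2
    X, Y = np.meshgrid(ax, ax)
    xr = X * np.cos(theta) + Y * np.sin(theta)
    env = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr + phase)

def complex_cell(patch, theta):
    # Energy model: squared responses of a quadrature pair sum to a
    # phase-invariant (position-tolerant) response
    s1 = np.sum(patch * gabor(patch.shape[0], theta, 0.0))
    s2 = np.sum(patch * gabor(patch.shape[0], theta, np.pi / 2))
    return s1**2 + s2**2

patch = np.random.randn(21, 21)             # placeholder image patch
response = complex_cell(patch, theta=0.0)
```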

Citations: 0
Unsupervised Learning in Echo State Networks for Input Reconstruction.
IF 2.1 | CAS Region 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-20 | DOI: 10.1162/NECO.a.38
Taiki Yamada, Yuichi Katori, Kantaro Fujiwara

Echo state networks (ESNs) are a class of recurrent neural networks in which only the readout layer is trainable, while the recurrent and input layers are fixed. This architectural constraint enables computationally efficient processing of time-series data. Traditionally, the readout layer in ESNs is trained using supervised learning with target outputs. In this study, we focus on input reconstruction (IR), where the readout layer is trained to reconstruct the input time series fed into the ESN. We show that IR can be achieved through unsupervised learning (UL), without access to supervised targets, provided that the ESN parameters are known a priori and satisfy invertibility conditions. This formulation allows applications relying on IR, such as dynamical system replication and noise filtering, to be reformulated within the UL framework via straightforward integration with existing algorithms. Our results suggest that prior knowledge of ESN parameters can reduce reliance on supervision, thereby establishing a new principle—not only by fixing part of the network parameters but also by exploiting their specific values. Furthermore, our UL-based algorithms for input reconstruction and related tasks are suitable for autonomous processing, offering insights into how analogous computational mechanisms might operate in the brain in principle. These findings contribute to a deeper understanding of the mathematical foundations of ESNs and their relevance to models in computational neuroscience.
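To make the invertibility idea concrete, the sketch below recovers the inputs of a simple tanh ESN from its state trajectory alone, given known (W, W_in) with W_in of full column rank. This is a minimal illustration of the reconstruction equation under those assumptions, not the unsupervised algorithm developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 100, 2
W = rng.normal(0, 0.9 / np.sqrt(N), (N, N))  # roughly echo-state scaling
W_in = rng.normal(0, 1.0, (N, d))

T = 200
U = rng.uniform(-0.5, 0.5, (T, d))           # inputs (to be reconstructed)
H = np.zeros((T + 1, N))
for t in range(T):
    H[t + 1] = np.tanh(W @ H[t] + W_in @ U[t])

# h(t+1) = tanh(W h(t) + W_in u(t))  =>  u(t) = W_in^+ (artanh h(t+1) - W h(t))
W_in_pinv = np.linalg.pinv(W_in)             # exists since W_in has full column rank
U_hat = np.array([W_in_pinv @ (np.arctanh(H[t + 1]) - W @ H[t]) for t in range(T)])
assert np.allclose(U_hat, U, atol=1e-6)      # exact up to floating-point error
```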
Citations: 0
Sum-of-Norms Regularized Nonnegative Matrix Factorization.
IF 2.1 | CAS Region 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-20 | DOI: 10.1162/NECO.a.1482
Andersen Ang, Waqas Bin Hamed, Hans De Sterck

When applying nonnegative matrix factorization (NMF), the rank parameter is generally unknown. This rank, called the nonnegative rank, is usually estimated heuristically, since computing its exact value is NP-hard. In this work, we propose an approximation method to estimate the rank on the fly while solving NMF. We use the sum-of-norms (SON) penalty, a group-lasso structure that encourages pairwise similarity, to reduce the rank of a factor matrix when the initial rank is overestimated. On various data sets, SON-NMF can reveal the correct nonnegative rank of the data without prior knowledge or parameter tuning. SON-NMF is a nonconvex, nonsmooth, nonseparable, and nonproximable problem, making it nontrivial to solve. First, since rank estimation in NMF is NP-hard, the proposed approach does not benefit from lower computational complexity; using a graph-theoretic argument, we prove that the complexity of SON-NMF is essentially irreducible. Second, the per-iteration cost of algorithms for SON-NMF can be high. This motivates us to propose a first-order block coordinate descent (BCD) algorithm that approximately solves SON-NMF with low per-iteration cost via the proximal average operator. SON-NMF exhibits favorable features for applications. Besides the ability to automatically estimate the rank from data, SON-NMF can handle rank-deficient data matrices and detect weak components with little energy. Furthermore, in hyperspectral imaging, SON-NMF naturally addresses the issue of spectral variability.
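For reference, here is a direct transcription of the regularized objective as stated; the paper's proximal-average BCD solver is considerably more involved and is not reproduced here.

```python
import numpy as np

def son_penalty(W):
    # Sum of norms over all column pairs; pulling columns together lets
    # redundant components merge, reducing the effective rank
    r = W.shape[1]
    return sum(np.linalg.norm(W[:, i] - W[:, j])
               for i in range(r) for j in range(i + 1, r))

def son_nmf_objective(X, W, H, lam):
    # (1/2) * ||X - W H||_F^2 + lambda * SON(W), with W >= 0 and H >= 0
    return 0.5 * np.linalg.norm(X - W @ H, "fro")**2 + lam * son_penalty(W)
```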
Citations: 0
Approximation Rates in Fréchet Metrics: Barron Spaces, Paley-Wiener Spaces, and Fourier Multipliers.
IF 2.1 | CAS Region 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-20 | DOI: 10.1162/NECO.a.1481
Ahmed Abdeljawad, Thomas Dittrich

Operator learning is a recent development in the simulation of partial differential equations by means of neural networks. The idea behind this approach is to learn the behavior of an operator, such that the resulting neural network is an approximate mapping in infinite-dimensional spaces that is capable of (approximately) simulating the solution operator governed by the partial differential equation. In our work, we study some general approximation capabilities for linear differential operators by approximating the corresponding symbol in the Fourier domain. Analogous to the structure of the class of Hörmander symbols, we consider the approximation with respect to a topology that is induced by a sequence of seminorms. In that sense, we measure the approximation error in terms of a Fréchet metric, and our main result identifies sufficient conditions for achieving a predefined approximation error. We then focus on a natural extension of our main theorem, in which we reduce the assumptions on the sequence of seminorms. Based on existing approximation results for the exponential spectral Barron space, we then present a concrete example of symbols that can be approximated well.
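For orientation, a standard construction (not specific to this paper) turns a countable family of seminorms into the kind of Fréchet metric in which such approximation errors are measured:

```latex
% Translation-invariant Fréchet metric induced by a family of seminorms (p_k):
d(f, g) = \sum_{k=0}^{\infty} 2^{-k} \, \frac{p_k(f - g)}{1 + p_k(f - g)}
```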
Citations: 0