
Neural Computation: Latest Publications

Deconstructing Deep Active Inference: A Contrarian Information Gatherer
IF 2.7 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-11 | DOI: 10.1162/neco_a_01697
Théophile Champion;Marek Grześ;Lisa Bonheme;Howard Bowman
Active inference is a theory of perception, learning, and decision making that can be applied to neuroscience, robotics, psychology, and machine learning. Recently, intensive research has aimed to scale up this framework using Monte Carlo tree search and deep learning, with the goal of solving more complicated tasks using deep active inference. First, we review the existing literature and then progressively build a deep active inference agent: we (1) implement a variational autoencoder (VAE), (2) implement a deep hidden Markov model (HMM), and (3) implement a deep critical hidden Markov model (CHMM). For the CHMM, we implemented two versions: one minimizing expected free energy, CHMM[EFE], and one maximizing rewards, CHMM[reward]. We then experimented with three different action-selection strategies: the ε-greedy algorithm, softmax selection, and best-action selection. According to our experiments, the models able to solve the dSprites environment are the ones that maximize rewards. On further inspection, we found that the CHMM minimizing expected free energy almost always picks the same action, which makes it unable to solve the dSprites environment. In contrast, the CHMM maximizing reward keeps selecting all the actions, enabling it to successfully solve the task. The only difference between these two CHMMs is the epistemic value, which aims to make the outputs of the transition and encoder networks as close as possible. Thus, the CHMM minimizing expected free energy repeatedly picks a single action and becomes an expert at predicting the future when selecting this action, which effectively keeps the KL divergence between the outputs of the transition and encoder networks small. Additionally, when selecting the action down, the average reward is zero, while for all the other actions the expected reward is negative. Therefore, if the CHMM has to stick to a single action to keep the KL divergence small, then the action down is the most rewarding. We also show in simulation that the epistemic value used in deep active inference can behave degenerately and, in certain circumstances, effectively lose rather than gain information. As the agent minimizing EFE is not able to explore its environment, the appropriate formulation of the epistemic value in deep active inference remains an open question.
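The three action-selection strategies compared in the abstract (ε-greedy, softmax, and best-action selection) can be sketched generically as follows. This is an illustrative sketch over a list of action values, not the authors' implementation; the function names and the Q-value interface are assumptions.

```python
import math
import random

def best_action(q_values):
    """Best (greedy) selection: always pick the highest-valued action."""
    return max(range(len(q_values)), key=lambda a: q_values[a])

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """With probability epsilon explore uniformly at random, otherwise exploit."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return best_action(q_values)

def softmax_action(q_values, temperature=1.0, rng=random):
    """Sample an action with probability proportional to exp(Q / temperature)."""
    m = max(q_values)  # subtract the max for numerical stability
    exps = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(exps)
    r, cum = rng.random(), 0.0
    for a, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return a
    return len(q_values) - 1
```

Best-action selection is fully deterministic, ε-greedy explores uniformly a fixed fraction of the time, and softmax grades exploration by the value gap, which is why the choice of strategy interacts with whether the agent ever escapes a single repeated action.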
Neural Computation, vol. 36, no. 11, pp. 2403-2445.
Citations: 0
Predictive Representations: Building Blocks of Intelligence
IF 2.7 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-11 | DOI: 10.1162/neco_a_01705
Wilka Carvalho;Momchil S. Tomov;William de Cothi;Caswell Barry;Samuel J. Gershman
Adaptive behavior often requires predicting future events. The theory of reinforcement learning prescribes what kinds of predictive representations are useful and how to compute them. This review integrates these theoretical ideas with work on cognition and neuroscience. We pay special attention to the successor representation and its generalizations, which have been widely applied as both engineering tools and models of brain function. This convergence suggests that particular kinds of predictive representations may function as versatile building blocks of intelligence.
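The successor representation highlighted in this review predicts discounted future state occupancies under a fixed policy, so that values factor into occupancy times reward. A minimal plain-Python sketch (the fixed-point iteration and the toy two-state chain are illustrative choices, not taken from the review):

```python
def mat_mul(A, B):
    """Plain-Python matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def successor_representation(T, gamma=0.9, iters=500):
    """Fixed-point iteration M <- I + gamma * T @ M; its limit is
    (I - gamma*T)^{-1}, the expected discounted future state occupancies."""
    n = len(T)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    M = [row[:] for row in I]
    for _ in range(iters):
        TM = mat_mul(T, M)
        M = [[I[i][j] + gamma * TM[i][j] for j in range(n)] for i in range(n)]
    return M

def value_from_sr(M, r):
    """Values factor through the SR: V(s) = sum_s' M[s][s'] * r(s')."""
    return [sum(M[i][j] * r[j] for j in range(len(r))) for i in range(len(M))]
```

Because the reward vector enters only in the final product, a change of reward re-prices all states immediately without relearning the occupancy matrix, which is the flexibility the review emphasizes.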
Neural Computation, vol. 36, no. 11, pp. 2225-2298.
Citations: 0
Electrical Signaling Beyond Neurons
IF 2.7 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-17 | DOI: 10.1162/neco_a_01696
Travis Monk;Nik Dennler;Nicholas Ralph;Shavika Rastogi;Saeed Afshar;Pablo Urbizagastegui;Russell Jarvis;André van Schaik;Andrew Adamatzky
Neural action potentials (APs) are difficult to interpret as signal encoders and/or computational primitives. Their relationships with stimuli and behaviors are obscured by the staggering complexity of nervous systems themselves. We can reduce this complexity by observing that “simpler” neuron-less organisms also transduce stimuli into transient electrical pulses that affect their behaviors. Without a complicated nervous system, APs are often easier to understand as signal/response mechanisms. We review examples of nonneural stimulus transductions in domains of life largely neglected by theoretical neuroscience: bacteria, protozoans, plants, fungi, and neuron-less animals. We report properties of those electrical signals—for example, amplitudes, durations, ionic bases, refractory periods, and particularly their ecological purposes. We compare those properties with those of neurons to infer the tasks and selection pressures that neurons satisfy. Throughout the tree of life, nonneural stimulus transductions time behavioral responses to environmental changes. Nonneural organisms represent the presence or absence of a stimulus with the presence or absence of an electrical signal. Their transductions usually exhibit high sensitivity and specificity to a stimulus, but are often slow compared to neurons. Neurons appear to be sacrificing the specificity of their stimulus transductions for sensitivity and speed. We interpret cellular stimulus transductions as a cell’s assertion that it detected something important at that moment in time. In particular, we consider neural APs as fast but noisy detection assertions. We infer that a principal goal of nervous systems is to detect extremely weak signals from noisy sensory spikes under enormous time pressure. We discuss neural computation proposals that address this goal by casting neurons as devices that implement online, analog, probabilistic computations with their membrane potentials. Those proposals imply a measurable relationship between afferent neural spiking statistics and efferent neural membrane electrophysiology.
Neural Computation, vol. 36, no. 10, pp. 1939-2029. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713896
Citations: 0
Trainable Reference Spikes Improve Temporal Information Processing of SNNs With Supervised Learning
IF 2.7 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-17 | DOI: 10.1162/neco_a_01702
Zeyuan Wang;Luis Cruz
Spiking neural networks (SNNs) are next-generation neural networks composed of biologically plausible neurons that communicate through trains of spikes. By modifying the plastic parameters of SNNs, including weights and time delays, SNNs can be trained to perform various AI tasks, although in general not at the same level of performance as typical artificial neural networks (ANNs). One possible way to improve the performance of SNNs is to consider plastic parameters beyond weights and time delays, drawn from the inherent complexity of the brain's neural system, which may help SNNs improve their information processing ability and achieve brainlike functions. Here, we propose reference spikes as a new type of plastic parameter in a supervised learning scheme for SNNs. A neuron receives reference spikes through synapses that provide reference information independent of the input to help during learning; the number and timing of these spikes are trainable by error backpropagation. Theoretically, reference spikes improve the temporal information processing of SNNs by modulating the integration of incoming spikes at a detailed level. Through comparative computational experiments using supervised learning, we demonstrate that reference spikes improve the memory capacity of SNNs to map input spike patterns to target output spike patterns and increase classification accuracy on the MNIST, Fashion-MNIST, and SHD data sets, where both input and target output are temporally encoded. Our results demonstrate that applying reference spikes improves the performance of SNNs by enhancing their temporal information processing ability.
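As a rough illustration of the core idea, an extra, independently timed spike added to a neuron's input stream shifts how incoming spikes integrate on the membrane. The single-exponential synaptic kernel, the specific weights, and the spike times below are illustrative assumptions, and no backpropagation training of the reference time is shown:

```python
import math

def membrane_potential(t, spike_times, weights, tau=5.0):
    """Sum of exponentially decaying postsynaptic kernels at time t.
    A single-exponential kernel is an illustrative simplification."""
    v = 0.0
    for s, w in zip(spike_times, weights):
        if t >= s:
            v += w * math.exp(-(t - s) / tau)
    return v

# Input spikes alone vs. input spikes plus a "reference" spike whose
# timing and weight would, in the paper's scheme, be learned.
inputs = [2.0, 4.0]
ref_time = 3.0
v_plain = membrane_potential(6.0, inputs, [1.0, 1.0])
v_with_ref = membrane_potential(6.0, inputs + [ref_time], [1.0, 1.0, 0.5])
```

Because the reference spike's contribution depends on its timing relative to the readout time, gradients with respect to that timing can reshape the membrane trajectory, which is the lever the paper's trainable reference spikes exploit.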
Neural Computation, vol. 36, no. 10, pp. 2136-2169.
Citations: 0
Inference on the Macroscopic Dynamics of Spiking Neurons
IF 2.7 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-17 | DOI: 10.1162/neco_a_01701
Nina Baldy;Martin Breyton;Marmaduke M. Woodman;Viktor K. Jirsa;Meysam Hashemi
The process of inference on networks of spiking neurons is essential to decipher the underlying mechanisms of brain computation and function. In this study, we conduct inference on parameters and dynamics of a mean-field approximation, simplifying the interactions of neurons. Estimating parameters of this class of generative model allows one to predict the system’s dynamics and responses under changing inputs and, indeed, changing parameters. We first assume a set of known state-space equations and address the problem of inferring the lumped parameters from observed time series. Crucially, we consider this problem in the setting of bistability, random fluctuations in system dynamics, and partial observations, in which some states are hidden. To identify the most efficient estimation or inversion scheme in this particular system identification, we benchmark against state-of-the-art optimization and Bayesian estimation algorithms, highlighting their strengths and weaknesses. Additionally, we explore how well the statistical relationships between parameters are maintained across different scales. We found that deep neural density estimators outperform other algorithms in the inversion scheme, despite potentially resulting in overestimated uncertainty and correlation between parameters. Nevertheless, this issue can be improved by incorporating time-delay embedding. We then eschew the mean-field approximation and employ deep neural ODEs on spiking neurons, illustrating prediction of system dynamics and vector fields from microscopic states. Overall, this study affords an opportunity to predict brain dynamics and responses to various perturbations or pharmacological interventions using deep neural networks.
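The system-identification step described above, inferring a lumped parameter from an observed time series, can be illustrated with a deliberately simple stand-in: a one-parameter AR(1) process fit by least squares. This toy model and the function names are assumptions for illustration, not the paper's mean-field equations or its Bayesian estimators:

```python
import random

def simulate_ar1(a, n=2000, noise=0.1, seed=0):
    """x_{t+1} = a * x_t + Gaussian noise; 'a' stands in for a lumped
    dynamical parameter to be recovered from observations."""
    rng = random.Random(seed)
    x = [1.0]
    for _ in range(n - 1):
        x.append(a * x[-1] + rng.gauss(0.0, noise))
    return x

def fit_a(x):
    """Least-squares estimate of 'a' from consecutive pairs
    (the one-dimensional normal equation)."""
    num = sum(x[t] * x[t + 1] for t in range(len(x) - 1))
    den = sum(x[t] ** 2 for t in range(len(x) - 1))
    return num / den

x = simulate_ar1(0.8)
a_hat = fit_a(x)
```

The paper's setting is far harder (bistability, hidden states, and partial observations break this closed-form trick), which is why it benchmarks full Bayesian and deep density-estimation schemes instead.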
Neural Computation, vol. 36, no. 10, pp. 2030-2072. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713873
Citations: 0
Top-Down Priors Disambiguate Target and Distractor Features in Simulated Covert Visual Search
IF 2.7 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-17 | DOI: 10.1162/neco_a_01700
Justin D. Theiss;Michael A. Silver
Several models of visual search consider visual attention as part of a perceptual inference process, in which top-down priors disambiguate bottom-up sensory information. Many of these models have focused on gaze behavior, but there are relatively fewer models of covert spatial attention, in which attention is directed to a peripheral location in visual space without a shift in gaze direction. Here, we propose a biologically plausible model of covert attention during visual search that helps to bridge the gap between Bayesian modeling and neurophysiological modeling by using (1) top-down priors over target features that are acquired through Hebbian learning, and (2) spatial resampling of modeled cortical receptive fields to enhance local spatial resolution of image representations for downstream target classification. By training a simple generative model using a Hebbian update rule, top-down priors for target features naturally emerge without the need for hand-tuned or predetermined priors. Furthermore, the implementation of covert spatial attention in our model is based on a known neurobiological mechanism, providing a plausible process through which Bayesian priors could locally enhance the spatial resolution of image representations. We validate this model during simulated visual search for handwritten digits among nondigit distractors, demonstrating that top-down priors improve accuracy for estimation of target location and classification, relative to bottom-up signals alone. Our results support previous reports in the literature that demonstrated beneficial effects of top-down priors on visual search performance, while extending this literature to incorporate known neural mechanisms of covert spatial attention.
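The Hebbian update rule by which such top-down priors are acquired can be sketched generically: each weight grows in proportion to the coactivity of its presynaptic and postsynaptic units. The learning rate, array shapes, and function name are illustrative, not the authors' exact model:

```python
def hebbian_update(W, pre, post, lr=0.1):
    """W[i][j] += lr * post[i] * pre[j]: units that fire together wire
    together, so repeated target features accumulate into the weights."""
    for i in range(len(post)):
        for j in range(len(pre)):
            W[i][j] += lr * post[i] * pre[j]
    return W

# One update: only the co-active (post=1, pre=1) entry changes.
W = [[0.0, 0.0], [0.0, 0.0]]
hebbian_update(W, pre=[1.0, 0.0], post=[0.0, 1.0])
```

Because the rule is purely local and driven by observed coactivity, priors over target features emerge from exposure alone, with no hand-tuned values, which is the point the abstract emphasizes.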
Neural Computation, vol. 36, no. 10, pp. 2201-2224.
Citations: 0
Mechanism of Duration Perception in Artificial Brains Suggests New Model of Attentional Entrainment
IF 2.7 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-17 | DOI: 10.1162/neco_a_01699
Ali Tehrani-Saleh;J. Devin McAuley;Christoph Adami
While cognitive theory has advanced several candidate frameworks to explain attentional entrainment, the neural basis for the temporal allocation of attention is unknown. Here we present a new model of attentional entrainment guided by empirical evidence obtained using a cohort of 50 artificial brains. These brains were evolved in silico to perform a duration judgment task similar to one where human subjects perform duration judgments in auditory oddball paradigms. We found that the artificial brains display psychometric characteristics remarkably similar to those of human listeners and exhibit similar patterns of distortions of perception when presented with out-of-rhythm oddballs. A detailed analysis of mechanisms behind the duration distortion suggests that attention peaks at the end of the tone, which is inconsistent with previous attentional entrainment models. Instead, the new model of entrainment emphasizes increased attention to those aspects of the stimulus that the brain expects to be highly informative.
Neural Computation, vol. 36, no. 10, pp. 2170-2200.
Citations: 0
Active Inference and Reinforcement Learning: A Unified Inference on Continuous State and Action Spaces Under Partial Observability
IF 2.7 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-17 DOI: 10.1162/neco_a_01698
Parvin Malekzadeh;Konstantinos N. Plataniotis
Reinforcement learning (RL) has garnered significant attention for developing decision-making agents that aim to maximize rewards, specified by an external supervisor, within fully observable environments. However, many real-world problems involve partial or noisy observations, where agents cannot access complete and accurate information about the environment. These problems are commonly formulated as partially observable Markov decision processes (POMDPs). Previous studies have tackled RL in POMDPs by either incorporating the memory of past actions and observations or by inferring the true state of the environment from observed data. Nevertheless, aggregating observations and actions over time becomes impractical in problems with large decision-making time horizons and high-dimensional spaces. Furthermore, inference-based RL approaches often require many environmental samples to perform well, as they focus solely on reward maximization and neglect uncertainty in the inferred state. Active inference (AIF) is a framework naturally formulated in POMDPs and directs agents to select actions by minimizing a function called expected free energy (EFE). This supplies reward-maximizing (or exploitative) behavior, as in RL, with information-seeking (or exploratory) behavior. Despite this exploratory behavior of AIF, its use is limited to problems with small time horizons and discrete spaces due to the computational challenges associated with EFE. In this article, we propose a unified principle that establishes a theoretical connection between AIF and RL, enabling seamless integration of these two approaches and overcoming their limitations in continuous space POMDP settings. We substantiate our findings with rigorous theoretical analysis, providing novel perspectives for using AIF in designing and implementing artificial agents. Experimental results demonstrate the superior learning capabilities of our method compared to other alternative RL approaches in solving partially observable tasks with continuous spaces. Notably, our approach harnesses information-seeking exploration, enabling it to effectively solve reward-free problems and rendering explicit task reward design by an external supervisor optional.
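The expected free energy at the heart of this abstract decomposes into a risk term (pulling predicted observations toward preferred ones, i.e., exploitation) and an ambiguity term (penalizing states that produce uninformative observations, i.e., exploration). The following is a minimal one-step sketch in plain numpy; the toy matrices `A`, `B`, and the preference vector `C` are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Hypothetical toy POMDP: 2 hidden states, 2 actions, 2 observations.
# A[o, s] = P(o | s); B[a][s', s] = P(s' | s, a); columns sum to 1.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
B = [np.array([[0.7, 0.3], [0.3, 0.7]]),   # transitions under action 0
     np.array([[0.2, 0.8], [0.8, 0.2]])]   # transitions under action 1
C = np.array([0.75, 0.25])                 # preferred observation distribution
q_s = np.array([0.5, 0.5])                 # current belief over hidden states

def expected_free_energy(a):
    """One-step EFE(a) = risk + ambiguity."""
    q_s_next = B[a] @ q_s                          # predicted state belief Q(s'|a)
    q_o = A @ q_s_next                             # predicted observations Q(o|a)
    risk = np.sum(q_o * np.log(q_o / C))           # KL[Q(o|a) || C]
    H_o_given_s = -np.sum(A * np.log(A), axis=0)   # entropy of P(o|s') per state
    ambiguity = q_s_next @ H_o_given_s             # expected observation entropy
    return risk + ambiguity

G = np.array([expected_free_energy(a) for a in range(2)])
probs = np.exp(-G) / np.exp(-G).sum()   # softmax over negative EFE
print("EFE per action:", G)
print("action probabilities:", probs)
```

The risk term alone would reproduce reward-seeking behavior; the ambiguity term is what adds the information-seeking drive that the abstract credits for solving reward-free problems.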
{"title":"Active Inference and Reinforcement Learning: A Unified Inference on Continuous State and Action Spaces Under Partial Observability","authors":"Parvin Malekzadeh;Konstantinos N. Plataniotis","doi":"10.1162/neco_a_01698","DOIUrl":"10.1162/neco_a_01698","url":null,"abstract":"Reinforcement learning (RL) has garnered significant attention for developing decision-making agents that aim to maximize rewards, specified by an external supervisor, within fully observable environments. However, many real-world problems involve partial or noisy observations, where agents cannot access complete and accurate information about the environment. These problems are commonly formulated as partially observable Markov decision processes (POMDPs). Previous studies have tackled RL in POMDPs by either incorporating the memory of past actions and observations or by inferring the true state of the environment from observed data. Nevertheless, aggregating observations and actions over time becomes impractical in problems with large decision-making time horizons and high-dimensional spaces. Furthermore, inference-based RL approaches often require many environmental samples to perform well, as they focus solely on reward maximization and neglect uncertainty in the inferred state. Active inference (AIF) is a framework naturally formulated in POMDPs and directs agents to select actions by minimizing a function called expected free energy (EFE). This supplies reward-maximizing (or exploitative) behavior, as in RL, with information-seeking (or exploratory) behavior. Despite this exploratory behavior of AIF, its use is limited to problems with small time horizons and discrete spaces due to the computational challenges associated with EFE. In this article, we propose a unified principle that establishes a theoretical connection between AIF and RL, enabling seamless integration of these two approaches and overcoming their limitations in continuous space POMDP settings. 
We substantiate our findings with rigorous theoretical analysis, providing novel perspectives for using AIF in designing and implementing artificial agents. Experimental results demonstrate the superior learning capabilities of our method compared to other alternative RL approaches in solving partially observable tasks with continuous spaces. Notably, our approach harnesses information-seeking exploration, enabling it to effectively solve reward-free problems and rendering explicit task reward design by an external supervisor optional.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 10","pages":"2073-2135"},"PeriodicalIF":2.7,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142037772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
On the Search for Data-Driven and Reproducible Schizophrenia Subtypes Using Resting State fMRI Data From Multiple Sites
IF 2.7 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-19 DOI: 10.1162/neco_a_01689
Lærke Gebser Krohne;Ingeborg Helbech Hansen;Kristoffer H. Madsen
For decades, fMRI data have been used to search for biomarkers for patients with schizophrenia. Still, firm conclusions are yet to be made, which is often attributed to the high internal heterogeneity of the disorder. A promising way to disentangle the heterogeneity is to search for subgroups of patients with more homogeneous biological profiles. We applied an unsupervised multiple co-clustering (MCC) method to identify subtypes using functional connectivity data from a multisite resting-state data set. We merged data from two publicly available databases and split the data into a discovery data set (143 patients and 143 healthy controls (HC)) and an external test data set (63 patients and 63 HC) from independent sites. On the discovery data, we investigated the stability of the clustering toward data splits and initializations. Subsequently we searched for cluster solutions, also called “views,” with a significant diagnosis association and evaluated these based on their subject and feature cluster separability, and correlation to clinical manifestations as measured with the positive and negative syndrome scale (PANSS). Finally, we validated our findings by testing the diagnosis association on the external test data. A major finding of our study was that the stability of the clustering was highly dependent on variations in the data set, and even across initializations, we found only a moderate subject clustering stability. Nevertheless, we still discovered one view with a significant diagnosis association. This view reproducibly showed an overrepresentation of schizophrenia patients in three subject clusters, and one feature cluster showed a continuous trend, ranging from positive to negative connectivity values, when sorted according to the proportions of patients with schizophrenia. When investigating all patients, none of the feature clusters in the view were associated with severity of positive, negative, and generalized symptoms, indicating that the cluster solutions reflect other disease related mechanisms.
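The stability analysis described in the abstract — rerunning the clustering under different initializations and checking whether subjects stay grouped together — can be illustrated with a deliberately simplified sketch. This uses plain k-means on synthetic data rather than the authors' multiple co-clustering, and the co-assignment agreement score is a crude stand-in for standard stability indices; all data here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a subjects-by-connectivity-features matrix:
# two loose groups, mimicking a heterogeneous cohort (80 subjects, 20 features).
X = np.vstack([rng.normal(0.0, 1.0, (40, 20)),
               rng.normal(1.5, 1.0, (40, 20))])

def kmeans(X, k, seed, n_iter=50):
    """Tiny Lloyd's-algorithm k-means; seed controls the initialization."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

def coassignment_agreement(l1, l2):
    """Fraction of subject pairs on which the two clusterings agree
    (same-cluster vs. different-cluster) -- a crude stability score."""
    same1 = l1[:, None] == l1[None, :]
    same2 = l2[:, None] == l2[None, :]
    mask = ~np.eye(len(l1), dtype=bool)
    return (same1 == same2)[mask].mean()

labels_a = kmeans(X, k=2, seed=1)
labels_b = kmeans(X, k=2, seed=2)
stability = coassignment_agreement(labels_a, labels_b)
print(f"co-assignment agreement across initializations: {stability:.2f}")
```

On well-separated synthetic groups the agreement is high; the abstract's point is that on real multisite fMRI data this kind of score stayed only moderate, which is why the authors lean on the external test set for validation.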
{"title":"On the Search for Data-Driven and Reproducible Schizophrenia Subtypes Using Resting State fMRI Data From Multiple Sites","authors":"Lærke Gebser Krohne;Ingeborg Helbech Hansen;Kristoffer H. Madsen","doi":"10.1162/neco_a_01689","DOIUrl":"10.1162/neco_a_01689","url":null,"abstract":"For decades, fMRI data have been used to search for biomarkers for patients with schizophrenia. Still, firm conclusions are yet to be made, which is often attributed to the high internal heterogeneity of the disorder. A promising way to disentangle the heterogeneity is to search for subgroups of patients with more homogeneous biological profiles. We applied an unsupervised multiple co-clustering (MCC) method to identify subtypes using functional connectivity data from a multisite resting-state data set. We merged data from two publicly available databases and split the data into a discovery data set (143 patients and 143 healthy controls (HC)) and an external test data set (63 patients and 63 HC) from independent sites. On the discovery data, we investigated the stability of the clustering toward data splits and initializations. Subsequently we searched for cluster solutions, also called “views,” with a significant diagnosis association and evaluated these based on their subject and feature cluster separability, and correlation to clinical manifestations as measured with the positive and negative syndrome scale (PANSS). Finally, we validated our findings by testing the diagnosis association on the external test data. A major finding of our study was that the stability of the clustering was highly dependent on variations in the data set, and even across initializations, we found only a moderate subject clustering stability. Nevertheless, we still discovered one view with a significant diagnosis association. 
This view reproducibly showed an overrepresentation of schizophrenia patients in three subject clusters, and one feature cluster showed a continuous trend, ranging from positive to negative connectivity values, when sorted according to the proportions of patients with schizophrenia. When investigating all patients, none of the feature clusters in the view were associated with severity of positive, negative, and generalized symptoms, indicating that the cluster solutions reflect other disease related mechanisms.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 9","pages":"1799-1831"},"PeriodicalIF":2.7,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141898955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spontaneous Emergence of Robustness to Light Variation in CNNs With a Precortically Inspired Module
IF 2.7 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-19 DOI: 10.1162/neco_a_01691
J. Petkovic;R. Fioresi
The analogies between the mammalian primary visual cortex and the structure of CNNs used for image classification tasks suggest that the introduction of an additional preliminary convolutional module inspired by the mathematical modeling of the precortical neuronal circuits can improve robustness with respect to global light intensity and contrast variations in the input images. We validate this hypothesis using the popular databases MNIST, FashionMNIST, and SVHN for these variations once an extra module is added.
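The core idea — a fixed early stage that removes global intensity and contrast variation before the image reaches the classifier — can be shown with a minimal sketch. Note that this global mean/variance normalization is a toy stand-in, not the authors' precortically inspired convolutional module:

```python
import numpy as np

def precortical_normalize(img, eps=1e-6):
    """Toy precortical-style stage: subtract the global mean and divide by
    the global standard deviation, discarding absolute intensity and
    contrast scale before downstream classification."""
    return (img - img.mean()) / (img.std() + eps)

rng = np.random.default_rng(0)
img = rng.random((28, 28))        # stand-in for a 28x28 MNIST digit
brighter = 0.5 * img + 0.3        # global contrast + intensity change

out_a = precortical_normalize(img)
out_b = precortical_normalize(brighter)
print("max abs difference after normalization:",
      np.abs(out_a - out_b).max())
```

Because an affine intensity change `a*img + b` shifts the mean by `b` and scales the standard deviation by `a`, the normalized outputs coincide (up to `eps`), so any classifier stacked on top inherits the invariance — the property the paper tests on MNIST, FashionMNIST, and SVHN.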
{"title":"Spontaneous Emergence of Robustness to Light Variation in CNNs With a Precortically Inspired Module","authors":"J. Petkovic;R. Fioresi","doi":"10.1162/neco_a_01691","DOIUrl":"10.1162/neco_a_01691","url":null,"abstract":"The analogies between the mammalian primary visual cortex and the structure of CNNs used for image classification tasks suggest that the introduction of an additional preliminary convolutional module inspired by the mathematical modeling of the precortical neuronal circuits can improve robustness with respect to global light intensity and contrast variations in the input images. We validate this hypothesis using the popular databases MNIST, FashionMNIST, and SVHN for these variations once an extra module is added.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 9","pages":"1832-1853"},"PeriodicalIF":2.7,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141898956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0