Mikhail Kiselev, Alexander Ivanitsky, Denis Larionov
A purely spiking approach to reinforcement learning
DOI: 10.1016/j.cogsys.2024.101317
Cognitive Systems Research, Volume 89, Article 101317, published 2025-01-01
URL: https://www.sciencedirect.com/science/article/pii/S1389041724001116
Citations: 0
Abstract
At present, the implementation of learning mechanisms in spiking neural networks (SNNs) cannot be considered a solved scientific problem, despite the many SNN learning algorithms that have been proposed. This is also true for SNN implementations of reinforcement learning (RL), even though RL is especially important for SNNs because of its close relationship to the domains most promising for SNN application, such as robotics. In the present paper, an SNN structure is described which, seemingly, can be used in a wide range of RL tasks. The distinctive feature of our approach is that only the spike forms of all the signals involved are used: sensory input streams, output signals sent to actuators, and reward/punishment signals. In addition, the neuron and plasticity models were selected under the requirement that they be easily implemented on modern neurochips. The SNN structure considered in the paper includes spiking neurons described by a generalization of the LIFAT (leaky integrate-and-fire neuron with adaptive threshold) model and a simple spike-timing-dependent synaptic plasticity model (a generalization of dopamine-modulated plasticity). In this study, we take a model-free approach to RL, but one based on very general assumptions about RL task characteristics and with no visible limitations on its applicability (within the class of model-free RL tasks). To test our SNN, we apply it to a simple but non-trivial task: training the network to keep a chaotically moving light spot in the field of view of an emulated Dynamic Vision Sensor (DVS) camera. The successful solution of this RL problem can be considered evidence in favor of the efficiency of our approach.
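To make the neuron model concrete, the following is a minimal sketch of a standard LIF neuron with adaptive threshold (the basic LIFAT idea the abstract refers to). It is illustrative only: the function name, parameter names, and parameter values are hypothetical, and the paper's generalized LIFAT model and its plasticity rule may differ in detail.

```python
def simulate_lifat(input_current, dt=1.0, tau_v=20.0, tau_th=50.0,
                   v_rest=0.0, th_base=1.0, th_jump=0.5):
    """Discrete-time leaky integrate-and-fire neuron with adaptive threshold.

    Illustrative sketch: parameter values and the exact update equations
    are assumptions, not the paper's generalized LIFAT model.
    Returns the list of time steps at which the neuron spiked.
    """
    v = v_rest       # membrane potential
    th = th_base     # adaptive firing threshold
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration of the input current toward the resting potential
        v += dt * (-(v - v_rest) / tau_v + i_in)
        # Threshold decays back toward its baseline value
        th += dt * (-(th - th_base) / tau_th)
        if v >= th:
            spikes.append(t)
            v = v_rest      # reset the membrane potential after a spike
            th += th_jump   # raise the threshold: adaptation slows rapid firing
    return spikes
```

Under a constant input, the adaptive threshold makes successive inter-spike intervals grow, which is the qualitative behavior that distinguishes LIFAT from a plain LIF neuron.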
Journal description:
Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers which advance the understanding, design and applications of cognitive and intelligent systems, both natural and artificial.
The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and integrate ideas, concepts, constructs, theories, models and techniques from across different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures - both brain-inspired and non-brain-inspired - and the application of cognitive systems to real-world problems as far as it offers insights relevant for the understanding of cognition.
Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.