TACOS: Task Agnostic Continual Learning in Spiking Neural Networks
Nicholas Soures, Peter Helfer, Anurag Daram, Tej Pandit, Dhireesha Kudithipudi
arXiv - CS - Neural and Evolutionary Computing · arXiv:2409.00021 · 2024-08-16
Abstract
Catastrophic interference, the loss of previously learned information when learning new information, remains a major challenge in machine learning. Since living organisms do not seem to suffer from this problem, researchers have taken inspiration from biology to improve memory retention in artificial intelligence systems. However, previous attempts to use bio-inspired mechanisms have typically resulted in systems that rely on task boundary information during training and/or explicit task identification during inference, information that is not available in real-world scenarios. Here, we show that neuro-inspired mechanisms such as synaptic consolidation and metaplasticity can mitigate catastrophic interference in a spiking neural network, using only synapse-local information, with no need for task awareness, and with a fixed memory size that does not need to be increased when training on new tasks. Our model, TACOS, combines neuromodulation with complex synaptic dynamics to enable new learning while protecting previous information. We evaluate TACOS on sequential image recognition tasks and demonstrate its effectiveness in reducing catastrophic interference. Our results show that TACOS outperforms existing regularization techniques in domain-incremental learning scenarios. We also report the results of an ablation study to elucidate the contribution of each neuro-inspired mechanism separately.
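To make the abstract's core idea concrete, below is a minimal, illustrative sketch of a synapse-local, metaplasticity-style consolidation rule in a toy spiking layer. The variable names (`w`, `meta`, `modulator`), the leaky integrate-and-fire dynamics, and the exact update equations are assumptions chosen for illustration; they are not the equations used by TACOS, which are specified in the paper itself.

```python
# Illustrative sketch only: synapse-local metaplasticity in a toy spiking layer.
# Not the TACOS model; all parameters and update rules here are assumed.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 100, 10
w = rng.normal(0.0, 0.1, size=(n_out, n_in))   # synaptic weights
meta = np.zeros_like(w)                         # per-synapse metaplastic state
v = np.zeros(n_out)                             # membrane potentials
tau, v_thresh, base_lr = 20.0, 1.0, 0.01


def step(pre_spikes, modulator=1.0):
    """One simulation step with a local, neuromodulator-gated weight update.

    pre_spikes : binary vector of presynaptic spikes, shape (n_in,)
    modulator  : scalar neuromodulatory signal gating plasticity
    """
    global v, w, meta
    # Leaky integrate-and-fire dynamics with spike-and-reset.
    v += (-v / tau) + w @ pre_spikes
    post_spikes = (v >= v_thresh).astype(float)
    v[post_spikes == 1] = 0.0

    # Hebbian-style local term: co-active pre/post pairs are potentiated,
    # with a small decay toward zero.
    hebb = np.outer(post_spikes, pre_spikes) - 0.01 * w

    # Metaplasticity: synapses that have accumulated a large metaplastic
    # state change more slowly, protecting previously learned information.
    effective_lr = base_lr / (1.0 + meta)
    dw = modulator * effective_lr * hebb
    w += dw

    # Consolidation: the metaplastic state grows with the magnitude of
    # past updates, using only information local to each synapse.
    meta += np.abs(dw)
    return post_spikes


# Example usage: drive the layer with sparse random input for a few steps.
for _ in range(100):
    step((rng.random(n_in) < 0.05).astype(float), modulator=1.0)
```

The sketch is meant only to convey the flavor of the abstract's claims: plasticity is gated by a global neuromodulatory scalar, while consolidation is tracked per synapse, so no task boundaries or task identities are needed and the memory footprint stays fixed as more data arrives.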