{"title":"Emulating Brain-like Rapid Learning in Neuromorphic Edge Computing","authors":"Kenneth Stewart, Michael Neumeier, Sumit Bam Shrestha, Garrick Orchard, Emre Neftci","doi":"arxiv-2408.15800","DOIUrl":null,"url":null,"abstract":"Achieving personalized intelligence at the edge with real-time learning\ncapabilities holds enormous promise in enhancing our daily experiences and\nhelping decision making, planning, and sensing. However, efficient and reliable\nedge learning remains difficult with current technology due to the lack of\npersonalized data, insufficient hardware capabilities, and inherent challenges\nposed by online learning. Over time and across multiple developmental stages, the brain has evolved to\nefficiently incorporate new knowledge by gradually building on previous\nknowledge. In this work, we emulate the multiple stages of learning with\ndigital neuromorphic technology that simulates the neural and synaptic\nprocesses of the brain using two stages of learning. First, a meta-training\nstage trains the hyperparameters of synaptic plasticity for one-shot learning\nusing a differentiable simulation of the neuromorphic hardware. This\nmeta-training process refines a hardware local three-factor synaptic plasticity\nrule and its associated hyperparameters to align with the trained task domain.\nIn a subsequent deployment stage, these optimized hyperparameters enable fast,\ndata-efficient, and accurate learning of new classes. We demonstrate our\napproach using event-driven vision sensor data and the Intel Loihi neuromorphic\nprocessor with its plasticity dynamics, achieving real-time one-shot learning\nof new classes that is vastly improved over transfer learning. 
Our methodology\ncan be deployed with arbitrary plasticity models and can be applied to\nsituations demanding quick learning and adaptation at the edge, such as\nnavigating unfamiliar environments or learning unexpected categories of data\nthrough user engagement.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":"28 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.15800","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Achieving personalized intelligence at the edge with real-time learning
capabilities holds enormous promise in enhancing our daily experiences and
aiding decision-making, planning, and sensing. However, efficient and reliable
edge learning remains difficult with current technology due to the lack of
personalized data, insufficient hardware capabilities, and inherent challenges
posed by online learning. Over time and across multiple developmental stages, the brain has evolved to
efficiently incorporate new knowledge by gradually building on previous
knowledge. In this work, we emulate these developmental stages with
digital neuromorphic technology that simulates the brain's neural and
synaptic processes, using two stages of learning. First, a meta-training
stage trains the hyperparameters of synaptic plasticity for one-shot learning
using a differentiable simulation of the neuromorphic hardware. This
meta-training process refines a hardware-local three-factor synaptic plasticity
rule and its associated hyperparameters to align with the trained task domain.
In a subsequent deployment stage, these optimized hyperparameters enable fast,
data-efficient, and accurate learning of new classes. We demonstrate our
approach using event-driven vision sensor data and the Intel Loihi neuromorphic
processor with its plasticity dynamics, achieving real-time one-shot learning
of new classes that substantially outperforms transfer learning. Our methodology
can be deployed with arbitrary plasticity models and can be applied to
situations demanding quick learning and adaptation at the edge, such as
navigating unfamiliar environments or learning unexpected categories of data
through user engagement.
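To make the core mechanism concrete, below is a minimal, illustrative sketch of a three-factor synaptic plasticity update of the kind the abstract describes. The parameter names (`eta`, `tau_pre`, `tau_post`) and the exact form of the update are assumptions for illustration, not the paper's actual rule or Loihi's on-chip formulation; in the paper's pipeline, such hyperparameters would be meta-trained through a differentiable simulation of the hardware before deployment.

```python
import numpy as np

# Sketch of a three-factor plasticity rule: the weight change is the
# product of (1) a pre-synaptic trace, (2) a post-synaptic trace, and
# (3) a third modulatory signal (e.g., an error or reward term).
# All hyperparameter values here are placeholders a meta-training
# stage could optimize.

rng = np.random.default_rng(0)

n_pre, n_post = 8, 4
w = rng.normal(0.0, 0.1, size=(n_post, n_pre))  # synaptic weights

eta = 0.05       # learning rate (meta-trainable)
tau_pre = 20.0   # pre-synaptic trace time constant, ms (meta-trainable)
tau_post = 20.0  # post-synaptic trace time constant, ms (meta-trainable)
dt = 1.0         # simulation timestep, ms

x_trace = np.zeros(n_pre)   # low-pass-filtered pre-synaptic spikes
y_trace = np.zeros(n_post)  # low-pass-filtered post-synaptic spikes

for t in range(50):
    pre = (rng.random(n_pre) < 0.2).astype(float)    # pre-synaptic spikes
    post = (rng.random(n_post) < 0.2).astype(float)  # post-synaptic spikes
    err = rng.normal(size=n_post)                    # third (modulatory) factor

    # Exponentially decaying eligibility traces, incremented by spikes.
    x_trace += -dt / tau_pre * x_trace + pre
    y_trace += -dt / tau_post * y_trace + post

    # Three-factor update: third factor gates the Hebbian (pre x post) term.
    w += eta * err[:, None] * np.outer(y_trace, x_trace)
```

Because the update uses only locally available traces plus a broadcast third factor, a rule of this shape maps naturally onto on-chip plasticity engines such as Loihi's, which is what makes the deployment stage possible without backpropagation.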