Marissa Dominijanni
arXiv:2409.11567 · arXiv - CS - Neural and Evolutionary Computing · 2024-09-17
Inferno: An Extensible Framework for Spiking Neural Networks
This paper introduces Inferno, a software library built on top of PyTorch
that is designed to meet the distinctive challenges of using spiking neural
networks (SNNs) for machine learning tasks. We describe the architecture of
Inferno and the key differentiators that make it uniquely well-suited to these
tasks. We show how Inferno supports trainable heterogeneous delays on both CPUs
and GPUs, and how it enables a "write once, apply everywhere" development
methodology for novel models and techniques. We compare Inferno's performance
to that of BindsNET, a library aimed at machine learning with SNNs, and to
Brian2/Brian2CUDA, which are popular in neuroscience. Among several examples, we
show how Inferno's design decisions make it easy to implement the new
delay-learning methods of Nadafian and Ganjtabesh based on spike-timing
dependent plasticity.
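
The delay-learning idea mentioned above can be sketched in a few lines of plain Python: adjust each synapse's conduction delay so that the delayed presynaptic spike arrives closer in time to the postsynaptic spike. This is an illustrative toy rule only, under assumed parameters (`eta`, `tau`, and the clipping bounds are all hypothetical); it is neither Inferno's actual API nor the exact formulation of Nadafian and Ganjtabesh:

```python
import numpy as np

def delay_stdp_update(d, t_pre, t_post, eta=0.1, tau=20.0,
                      d_min=0.0, d_max=25.0):
    """Toy delay-learning step: move the synaptic delay d so that the
    delayed presynaptic arrival time (t_pre + d) shifts toward the
    postsynaptic spike time t_post. An exponential factor shrinks the
    step for pairings that are far apart in time."""
    dt = t_post - (t_pre + d)                      # arrival-to-post timing gap
    d_new = d + eta * np.sign(dt) * np.exp(-abs(dt) / tau)
    return float(np.clip(d_new, d_min, d_max))     # keep delay in valid range

# Repeated pre/post pairings pull the delayed arrival toward the post spike:
d = 2.0
for _ in range(200):
    d = delay_stdp_update(d, t_pre=0.0, t_post=10.0)
```

After enough pairings, `d` settles near 10 ms, the gap between the pre- and postsynaptic spikes, which is the qualitative behavior delay learning aims for; a real implementation would also need trained weights, spike traces, and batched tensor operations of the kind the paper describes.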