{"title":"Spiking Diffusion Models","authors":"Jiahang Cao, Hanzhong Guo, Ziqing Wang, Deming Zhou, Hao Cheng, Qiang Zhang, Renjing Xu","doi":"arxiv-2408.16467","DOIUrl":null,"url":null,"abstract":"Recent years have witnessed Spiking Neural Networks (SNNs) gaining attention\nfor their ultra-low energy consumption and high biological plausibility\ncompared with traditional Artificial Neural Networks (ANNs). Despite their\ndistinguished properties, the application of SNNs in the computationally\nintensive field of image generation is still under exploration. In this paper,\nwe propose the Spiking Diffusion Models (SDMs), an innovative family of\nSNN-based generative models that excel in producing high-quality samples with\nsignificantly reduced energy consumption. In particular, we propose a\nTemporal-wise Spiking Mechanism (TSM) that allows SNNs to capture more temporal\nfeatures from a bio-plasticity perspective. In addition, we propose a\nthreshold-guided strategy that can further improve the performances by up to\n16.7% without any additional training. We also make the first attempt to use\nthe ANN-SNN approach for SNN-based generation tasks. Extensive experimental\nresults reveal that our approach not only exhibits comparable performance to\nits ANN counterpart with few spiking time steps, but also outperforms previous\nSNN-based generative models by a large margin. Moreover, we also demonstrate\nthe high-quality generation ability of SDM on large-scale datasets, e.g., LSUN\nbedroom. This development marks a pivotal advancement in the capabilities of\nSNN-based generation, paving the way for future research avenues to realize\nlow-energy and low-latency generative applications. 
Our code is available at\nhttps://github.com/AndyCao1125/SDM.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":"160 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.16467","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Recent years have witnessed Spiking Neural Networks (SNNs) gaining attention
for their ultra-low energy consumption and high biological plausibility
compared with traditional Artificial Neural Networks (ANNs). Despite these
distinctive properties, the application of SNNs in the computationally
intensive field of image generation is still under exploration. In this paper,
we propose the Spiking Diffusion Models (SDMs), an innovative family of
SNN-based generative models that excel in producing high-quality samples with
significantly reduced energy consumption. In particular, we propose a
Temporal-wise Spiking Mechanism (TSM) that allows SNNs to capture more temporal
features from a bio-plasticity perspective. In addition, we propose a
threshold-guided strategy that can further improve performance by up to
16.7% without any additional training. We also make the first attempt to use
the ANN-SNN approach for SNN-based generation tasks. Extensive experimental
results show that our approach not only achieves performance comparable to
its ANN counterpart with only a few spiking time steps, but also outperforms previous
SNN-based generative models by a large margin. Moreover, we demonstrate
the high-quality generation ability of SDM on large-scale datasets, e.g., LSUN
bedroom. This development marks a pivotal advancement in the capabilities of
SNN-based generation, paving the way for future research avenues to realize
low-energy and low-latency generative applications. Our code is available at
https://github.com/AndyCao1125/SDM.
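The abstract assumes familiarity with how spiking neurons communicate: instead of continuous activations, they emit binary spikes over discrete time steps, which is what enables the low energy consumption claimed above. As background only, the sketch below shows a minimal leaky integrate-and-fire (LIF) neuron in NumPy. This is a generic illustration of spiking dynamics, not the paper's Temporal-wise Spiking Mechanism or its actual neuron model; the parameter values (tau, threshold) are arbitrary assumptions.

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire (LIF) neuron layer over T time steps.

    inputs: array of shape (T, N) -- input current per time step and neuron.
    Returns a binary spike train of the same shape.
    """
    v = np.zeros(inputs.shape[1])          # membrane potential per neuron
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        # leaky integration: potential decays toward reset, driven by input
        v = v + (x - (v - v_reset)) / tau
        fired = v >= v_threshold           # emit a binary spike on crossing
        spikes[t] = fired.astype(float)
        v = np.where(fired, v_reset, v)    # hard reset after a spike
    return spikes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.5, size=(8, 4))  # 8 time steps, 4 neurons
    s = lif_forward(x)
    print(s.shape)                          # spike train, same shape as input
```

Because the outputs are strictly 0/1 events, downstream layers can replace multiplications with additions gated by spikes, which is the usual source of the energy savings the abstract refers to; the paper's threshold-guided strategy plausibly adjusts `v_threshold`-like quantities at inference time, but the details are in the paper itself.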