Towards Efficient Formal Verification of Spiking Neural Network

Baekryun Seong, Jieung Kim, Sang-Ki Ko
arXiv - CS - Neural and Evolutionary Computing · arXiv:2408.10900 · Published 2024-08-20 · Citations: 0

Abstract

Recently, AI research has primarily focused on large language models (LLMs), and increasing accuracy often involves scaling up and consuming more power. The power consumption of AI has become a significant societal issue; in this context, spiking neural networks (SNNs) offer a promising solution. SNNs operate in an event-driven manner, like the human brain, and compress information temporally. These characteristics allow SNNs to reduce power consumption significantly compared to perceptron-based artificial neural networks (ANNs), highlighting them as a next-generation neural network technology. However, societal concerns about AI go beyond power consumption: the reliability of AI models is a global issue. For instance, adversarial attacks on AI models are a well-studied problem for traditional neural networks. Despite their importance, the stability and property verification of SNNs remain in the early stages of research. Most SNN verification methods are time-consuming and scale poorly, making practical application challenging. In this paper, we introduce temporal encoding to achieve practical performance in verifying the adversarial robustness of SNNs. We conduct a theoretical analysis of this approach and demonstrate its success in verifying SNNs at previously unmanageable scales. Our contribution advances SNN verification to a practical level, facilitating the safer application of SNNs.
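The abstract does not spell out which temporal encoding the paper uses. As a general illustration only, the sketch below shows one common temporal scheme, time-to-first-spike (latency) encoding, in which each input value is represented by a single spike time rather than a dense spike train; the function name and parameters here are our own, not the paper's.

```python
import numpy as np

def time_to_first_spike_encode(x, t_max=10):
    """Encode normalized intensities x in [0, 1] as spike times:
    stronger inputs fire earlier (time-to-first-spike encoding)."""
    x = np.asarray(x, dtype=float)
    # Higher intensity -> earlier spike; intensity 0 fires last, at t_max.
    return np.round((1.0 - x) * t_max).astype(int)

# Each neuron's activity collapses to one integer spike time, which is the
# kind of compact temporal representation that can make verification
# queries smaller than with rate-coded spike trains.
times = time_to_first_spike_encode([1.0, 0.5, 0.0], t_max=10)
```

With `t_max=10`, the inputs `[1.0, 0.5, 0.0]` map to spike times `[0, 5, 10]`: the strongest input fires immediately, the weakest last.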