A review of abstraction methods towards verifying neural networks

IF 2.8 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Hardware & Architecture) · ACM Transactions on Embedded Computing Systems · Pub Date: 2023-08-28 · DOI: 10.1145/3617508
Fateh Boudardara, Abderraouf Boussif, Pierre-Jean Meyer, M. Ghazel
{"title":"验证神经网络的抽象方法综述","authors":"Fateh Boudardara, Abderraouf Boussif, Pierre-Jean Meyer, M. Ghazel","doi":"10.1145/3617508","DOIUrl":null,"url":null,"abstract":"Neural networks as a machine learning technique are increasingly deployed in various domains. Despite their performances and their continuous improvement, the deployment of neural networks in safety-critical systems, in particular for autonomous mobility, remains restricted. This is mainly due to the lack of (formal) specifications and verification methods and tools that allow for getting sufficient confidence in the behavior of the neural network-based functions. Recent years have seen neural network verification getting more attention; and many verification methods were proposed, yet the practical applicability of these methods to real-world neural network models remains limited. The main challenge of neural network verification methods is related to the computational complexity and the large size of neural networks pertaining to complex functions. As a consequence, applying abstraction methods for neural network verification purposes is seen as a promising mean to cope with such issues. The aim of abstraction is to build an abstract model by omitting some irrelevant details or some details that are not highly impacting w.r.t some considered features. Thus, the verification process is made faster and easier while preserving, to some extent, the relevant behavior regarding the properties to be examined on the original model. In this paper, we review both the abstraction techniques for activation functions and model size reduction approaches, with a particular focus on the latter. The review primarily discusses the application of abstraction techniques on feed-forward neural networks, and explores the potential for applying abstraction to other types of neural networks. Throughout the paper, we present the main idea of each approach, and then discuss their respective advantages and limitations in details. Finally, we provide some insights and guidelines to improve the discussed methods.","PeriodicalId":50914,"journal":{"name":"ACM Transactions on Embedded Computing Systems","volume":" ","pages":""},"PeriodicalIF":2.8000,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A review of abstraction methods towards verifying neural networks\",\"authors\":\"Fateh Boudardara, Abderraouf Boussif, Pierre-Jean Meyer, M. Ghazel\",\"doi\":\"10.1145/3617508\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural networks as a machine learning technique are increasingly deployed in various domains. Despite their performances and their continuous improvement, the deployment of neural networks in safety-critical systems, in particular for autonomous mobility, remains restricted. This is mainly due to the lack of (formal) specifications and verification methods and tools that allow for getting sufficient confidence in the behavior of the neural network-based functions. Recent years have seen neural network verification getting more attention; and many verification methods were proposed, yet the practical applicability of these methods to real-world neural network models remains limited. The main challenge of neural network verification methods is related to the computational complexity and the large size of neural networks pertaining to complex functions. 
As a consequence, applying abstraction methods for neural network verification purposes is seen as a promising mean to cope with such issues. The aim of abstraction is to build an abstract model by omitting some irrelevant details or some details that are not highly impacting w.r.t some considered features. Thus, the verification process is made faster and easier while preserving, to some extent, the relevant behavior regarding the properties to be examined on the original model. In this paper, we review both the abstraction techniques for activation functions and model size reduction approaches, with a particular focus on the latter. The review primarily discusses the application of abstraction techniques on feed-forward neural networks, and explores the potential for applying abstraction to other types of neural networks. Throughout the paper, we present the main idea of each approach, and then discuss their respective advantages and limitations in details. Finally, we provide some insights and guidelines to improve the discussed methods.\",\"PeriodicalId\":50914,\"journal\":{\"name\":\"ACM Transactions on Embedded Computing Systems\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2023-08-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Embedded Computing Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3617508\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Embedded Computing Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3617508","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Neural networks, as a machine learning technique, are increasingly deployed in various domains. Despite their performance and continuous improvement, the deployment of neural networks in safety-critical systems, in particular for autonomous mobility, remains restricted. This is mainly due to the lack of (formal) specifications and of verification methods and tools that provide sufficient confidence in the behavior of neural-network-based functions. Neural network verification has received growing attention in recent years and many verification methods have been proposed, yet their practical applicability to real-world neural network models remains limited. The main challenge for neural network verification methods lies in the computational complexity and the large size of the networks needed for complex functions. As a consequence, applying abstraction methods for neural network verification is seen as a promising means of coping with these issues. The aim of abstraction is to build an abstract model by omitting details that are irrelevant or have little impact with respect to the considered features. The verification process thus becomes faster and easier while preserving, to some extent, the behavior relevant to the properties to be examined on the original model. In this paper, we review both abstraction techniques for activation functions and model size reduction approaches, with a particular focus on the latter. The review primarily discusses the application of abstraction techniques to feed-forward neural networks and explores the potential for applying abstraction to other types of neural networks. Throughout the paper, we present the main idea of each approach and then discuss their respective advantages and limitations in detail. Finally, we provide some insights and guidelines to improve the discussed methods.
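To make the idea of abstraction concrete, below is a minimal illustrative sketch (not taken from the paper) of interval-based over-approximation of a small feed-forward ReLU network, one simple form of activation-function abstraction: input intervals are propagated layer by layer so that the resulting output box is guaranteed to contain every concrete output, and a safety property proved on that box therefore also holds on the original network. The toy network, its weights, the threshold, and the helper names (affine_bounds, relu_bounds, output_bounds) are invented purely for illustration and do not come from the surveyed methods.

    # Illustrative sketch: interval abstraction of a feed-forward ReLU network.
    # Each layer maps an input box to an output box that soundly over-approximates
    # the set of reachable outputs.
    import numpy as np

    def affine_bounds(lo, hi, W, b):
        """Propagate the box [lo, hi] through the affine map x -> W @ x + b."""
        W_pos = np.maximum(W, 0.0)
        W_neg = np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        return new_lo, new_hi

    def relu_bounds(lo, hi):
        """ReLU is monotone, so applying it to the bounds is a sound abstraction."""
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    def output_bounds(layers, lo, hi):
        """layers is a list of (W, b) pairs; ReLU follows every layer but the last."""
        for i, (W, b) in enumerate(layers):
            lo, hi = affine_bounds(lo, hi, W, b)
            if i < len(layers) - 1:
                lo, hi = relu_bounds(lo, hi)
        return lo, hi

    if __name__ == "__main__":
        # Toy 2-2-1 network with made-up weights (for illustration only).
        layers = [
            (np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([0.1, -0.2])),
            (np.array([[0.7, -1.2]]), np.array([0.05])),
        ]
        lo, hi = output_bounds(layers, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
        # If hi stays below a chosen threshold, the property "output < threshold"
        # holds for every input in the box, because the box contains all outputs.
        print("output interval:", lo, hi)

This coarse abstraction is cheap to check but may be imprecise; the trade-off between precision and scalability is, in the same spirit, what the tighter activation relaxations and model size reduction approaches reviewed in the paper aim to balance.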
Source journal
ACM Transactions on Embedded Computing Systems (Engineering & Technology - Computer Science: Software Engineering)
CiteScore: 3.70
Self-citation rate: 0.00%
Articles published: 138
Review time: 6 months
Journal description: The design of embedded computing systems, both the software and hardware, increasingly relies on sophisticated algorithms, analytical models, and methodologies. ACM Transactions on Embedded Computing Systems (TECS) aims to present the leading work relating to the analysis, design, behavior, and experience with embedded computing systems.