Fateh Boudardara, Abderraouf Boussif, Pierre-Jean Meyer, M. Ghazel
{"title":"验证神经网络的抽象方法综述","authors":"Fateh Boudardara, Abderraouf Boussif, Pierre-Jean Meyer, M. Ghazel","doi":"10.1145/3617508","DOIUrl":null,"url":null,"abstract":"Neural networks as a machine learning technique are increasingly deployed in various domains. Despite their performances and their continuous improvement, the deployment of neural networks in safety-critical systems, in particular for autonomous mobility, remains restricted. This is mainly due to the lack of (formal) specifications and verification methods and tools that allow for getting sufficient confidence in the behavior of the neural network-based functions. Recent years have seen neural network verification getting more attention; and many verification methods were proposed, yet the practical applicability of these methods to real-world neural network models remains limited. The main challenge of neural network verification methods is related to the computational complexity and the large size of neural networks pertaining to complex functions. As a consequence, applying abstraction methods for neural network verification purposes is seen as a promising mean to cope with such issues. The aim of abstraction is to build an abstract model by omitting some irrelevant details or some details that are not highly impacting w.r.t some considered features. Thus, the verification process is made faster and easier while preserving, to some extent, the relevant behavior regarding the properties to be examined on the original model. In this paper, we review both the abstraction techniques for activation functions and model size reduction approaches, with a particular focus on the latter. The review primarily discusses the application of abstraction techniques on feed-forward neural networks, and explores the potential for applying abstraction to other types of neural networks. Throughout the paper, we present the main idea of each approach, and then discuss their respective advantages and limitations in details. Finally, we provide some insights and guidelines to improve the discussed methods.","PeriodicalId":50914,"journal":{"name":"ACM Transactions on Embedded Computing Systems","volume":" ","pages":""},"PeriodicalIF":2.8000,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A review of abstraction methods towards verifying neural networks\",\"authors\":\"Fateh Boudardara, Abderraouf Boussif, Pierre-Jean Meyer, M. Ghazel\",\"doi\":\"10.1145/3617508\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural networks as a machine learning technique are increasingly deployed in various domains. Despite their performances and their continuous improvement, the deployment of neural networks in safety-critical systems, in particular for autonomous mobility, remains restricted. This is mainly due to the lack of (formal) specifications and verification methods and tools that allow for getting sufficient confidence in the behavior of the neural network-based functions. Recent years have seen neural network verification getting more attention; and many verification methods were proposed, yet the practical applicability of these methods to real-world neural network models remains limited. The main challenge of neural network verification methods is related to the computational complexity and the large size of neural networks pertaining to complex functions. 
As a consequence, applying abstraction methods for neural network verification purposes is seen as a promising mean to cope with such issues. The aim of abstraction is to build an abstract model by omitting some irrelevant details or some details that are not highly impacting w.r.t some considered features. Thus, the verification process is made faster and easier while preserving, to some extent, the relevant behavior regarding the properties to be examined on the original model. In this paper, we review both the abstraction techniques for activation functions and model size reduction approaches, with a particular focus on the latter. The review primarily discusses the application of abstraction techniques on feed-forward neural networks, and explores the potential for applying abstraction to other types of neural networks. Throughout the paper, we present the main idea of each approach, and then discuss their respective advantages and limitations in details. Finally, we provide some insights and guidelines to improve the discussed methods.\",\"PeriodicalId\":50914,\"journal\":{\"name\":\"ACM Transactions on Embedded Computing Systems\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2023-08-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Embedded Computing Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3617508\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Embedded Computing Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3617508","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
A review of abstraction methods towards verifying neural networks
Neural networks, as a machine learning technique, are increasingly deployed in various domains. Despite their performance and continuous improvement, the deployment of neural networks in safety-critical systems, in particular for autonomous mobility, remains restricted. This is mainly due to the lack of (formal) specifications, verification methods, and tools that allow sufficient confidence to be gained in the behavior of neural network-based functions. Recent years have seen neural network verification receive more attention, and many verification methods have been proposed, yet the practical applicability of these methods to real-world neural network models remains limited. The main challenge for neural network verification methods lies in the computational complexity and the large size of neural networks implementing complex functions. As a consequence, applying abstraction methods for neural network verification purposes is seen as a promising means to cope with such issues. The aim of abstraction is to build an abstract model by omitting irrelevant details, or details that have little impact with respect to the considered features. Thus, the verification process is made faster and easier while preserving, to some extent, the relevant behavior with regard to the properties to be examined on the original model. In this paper, we review both abstraction techniques for activation functions and model size reduction approaches, with a particular focus on the latter. The review primarily discusses the application of abstraction techniques to feed-forward neural networks, and explores the potential for applying abstraction to other types of neural networks. Throughout the paper, we present the main idea of each approach and then discuss their respective advantages and limitations in detail. Finally, we provide some insights and guidelines to improve the discussed methods.
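To make the abstraction idea concrete, below is a minimal, illustrative sketch (not taken from the paper) of one common activation-function abstraction: propagating interval bounds through an affine layer followed by a ReLU. The weights, the input box, the threshold, and the helper names (`interval_affine`, `interval_relu`) are hypothetical and serve only to show how an over-approximation of the reachable outputs can support a cheap safety check on the original network.

```python
# Illustrative sketch only: interval (box) over-approximation of a small
# feed-forward ReLU network, one of the activation-abstraction ideas the
# review surveys. All weights, bounds, and names are hypothetical.
import numpy as np

def interval_affine(lb, ub, W, b):
    """Propagate an input box [lb, ub] through the affine map x -> W @ x + b."""
    W_pos = np.maximum(W, 0.0)   # positive entries keep the ordering
    W_neg = np.minimum(W, 0.0)   # negative entries swap lower/upper bounds
    return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b

def interval_relu(lb, ub):
    """ReLU is monotone, so applying it elementwise to the bounds is sound."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

# Hypothetical 2-3-1 network and input region [-1, 1]^2.
W1 = np.array([[1.0, -1.0], [0.5, 0.5], [-1.0, 2.0]])
b1 = np.zeros(3)
W2 = np.array([[1.0, 1.0, -1.0]])
b2 = np.array([0.0])

lb, ub = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lb, ub = interval_relu(*interval_affine(lb, ub, W1, b1))
lb, ub = interval_affine(lb, ub, W2, b2)

# If the computed upper bound already satisfies the property (e.g. output < 5),
# the property also holds for the exact network; otherwise the abstraction is
# inconclusive and a finer analysis is needed.
print("output bounds:", lb, ub)
```

Because the bounds are an over-approximation, a "property holds" answer transfers to the original model, while a violation of the bounds may be spurious; this soundness-versus-precision trade-off is the recurring theme of the abstraction methods surveyed in the paper.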
Journal introduction:
The design of embedded computing systems, both the software and hardware, increasingly relies on sophisticated algorithms, analytical models, and methodologies. ACM Transactions on Embedded Computing Systems (TECS) aims to present the leading work relating to the analysis, design, behavior, and experience with embedded computing systems.