Title: Reachability analysis of recurrent neural networks
Authors: Sung Woo Choi, Yuntao Li, Xiaodong Yang, Tomoya Yamaguchi, Bardh Hoxha, Georgios Fainekos, Danil Prokhorov, Hoang-Dung Tran
Journal: Nonlinear Analysis: Hybrid Systems, Volume 56, Article 101581 (Q2, Automation & Control Systems)
DOI: 10.1016/j.nahs.2025.101581
Publication date: 2025-02-21
URL: https://www.sciencedirect.com/science/article/pii/S1751570X2500007X
Citations: 0
Abstract
The paper proposes a new sparse star set representation and extends the recent star reachability method to verify the robustness of vanilla, long short-term memory (LSTM), and gated recurrent unit (GRU) recurrent neural networks (RNNs) for safety-critical applications. RNNs are a popular machine learning method for various applications, but they are vulnerable to adversarial attacks, where slightly perturbing the input sequence can lead to an unexpected result. Recent notable techniques for verifying RNNs include unrolling and invariant inference approaches. The first method has scaling issues since unrolling an RNN creates a large feedforward neural network. The second method, using invariant sets, has better scalability but can produce unknown results due to the accumulation of over-approximation errors over time. This paper introduces a complementary verification method for RNNs that is both sound and complete. A relaxation parameter can be used to convert the method into a fast over-approximation method that still provides soundness guarantees. The vanilla RNN verification method is designed to be used with NNV, a tool for verifying deep neural networks and learning-enabled cyber-physical systems, while the verification approaches for LSTM and GRU RNNs are implemented in StarV. Compared to state-of-the-art methods for verifying a vanilla RNN, the extended exact reachability method is 10× faster, and the over-approximation method is 100× to 5000× faster. Although the sparse star set is slow compared to state-of-the-art methods, it was generally able to verify more robustness cases than they were.
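The star set underlying this family of methods is a data structure of the form {x = c + Vα | Cα ≤ d}: a center vector, a basis of generators, and a linear predicate over the free variables α. The following is a minimal illustrative sketch (not the paper's NNV/StarV implementation, and the class and method names are hypothetical) showing the core exact operation used at each network step: the affine image of a star set, which maps the center and basis while leaving the predicate constraints unchanged.

```python
import numpy as np

class Star:
    """Illustrative star set {x = c + V @ a | C @ a <= d} (hypothetical class,
    not the StarV/NNV API)."""

    def __init__(self, c, V, C, d):
        self.c = np.asarray(c, dtype=float)  # center vector
        self.V = np.asarray(V, dtype=float)  # basis matrix (generators)
        self.C = np.asarray(C, dtype=float)  # predicate constraint matrix
        self.d = np.asarray(d, dtype=float)  # predicate bounds, C @ a <= d

    def affine(self, W, b):
        # Exact image under x -> W @ x + b: transform center and basis;
        # the predicate (C, d) over the free variables a is unchanged.
        W = np.asarray(W, dtype=float)
        b = np.asarray(b, dtype=float)
        return Star(W @ self.c + b, W @ self.V, self.C, self.d)

# Example: the box [-1, 1]^2 encoded as a star, pushed through a linear layer.
c = np.zeros(2)
V = np.eye(2)
C = np.vstack([np.eye(2), -np.eye(2)])  # |a_i| <= 1
d = np.ones(4)
s = Star(c, V, C, d)

W = np.array([[2.0, 0.0], [1.0, 1.0]])
b = np.array([0.5, 0.0])
t = s.affine(W, b)
print(t.c)  # [0.5 0. ]
```

Because the affine map is exact, over-approximation in star-based reachability enters only at the nonlinear activation steps, which is where the paper's relaxation parameter trades precision for speed.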
Journal introduction:
Nonlinear Analysis: Hybrid Systems welcomes all important research and expository papers in any discipline. Papers that are principally concerned with the theory of hybrid systems should contain significant results indicating relevant applications. Papers that emphasize applications should consist of important real world models and illuminating techniques. Papers that interrelate various aspects of hybrid systems will be most welcome.