Reachability analysis of recurrent neural networks

IF 3.7 · Zone 2 (Computer Science) · Q2 AUTOMATION & CONTROL SYSTEMS · Nonlinear Analysis-Hybrid Systems · Pub Date: 2025-02-21 · DOI: 10.1016/j.nahs.2025.101581
Sung Woo Choi, Yuntao Li, Xiaodong Yang, Tomoya Yamaguchi, Bardh Hoxha, Georgios Fainekos, Danil Prokhorov, Hoang-Dung Tran
Nonlinear Analysis-Hybrid Systems, Volume 56, Article 101581. Published online 2025-02-21. Full text: https://www.sciencedirect.com/science/article/pii/S1751570X2500007X
Citations: 0

Abstract

The paper proposes a new sparse star set representation and extends the recent star reachability method to verify the robustness of vanilla, long short-term memory (LSTM), and gated recurrent unit (GRU) recurrent neural networks (RNNs) for safety-critical applications. RNNs are a popular machine learning method for various applications, but they are vulnerable to adversarial attacks, where slightly perturbing the input sequence can lead to an unexpected result. Recent notable techniques for verifying RNNs include unrolling and invariant inference approaches. The first method has scaling issues, since unrolling an RNN creates a large feedforward neural network. The second method, using invariant sets, has better scalability but can produce unknown results due to the accumulation of over-approximation errors over time. This paper introduces a complementary verification method for RNNs that is both sound and complete. A relaxation parameter can be used to convert the method into a fast over-approximation method that still provides soundness guarantees. The vanilla RNN verification method is designed to be used with NNV, a tool for verifying deep neural networks and learning-enabled cyber–physical systems, while the verification approaches for LSTM and GRU RNNs are implemented in StarV. Compared to state-of-the-art methods for verifying a vanilla RNN, the extended exact reachability method is 10× faster, and the over-approximation method is 100× to 5000× faster. Although the sparse star set is slower than state-of-the-art methods, it was generally able to verify more robust cases than they were.
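For readers unfamiliar with the representation the paper builds on: a star set describes a (possibly unbounded) polyhedral set as an affine image of a constrained predicate variable, Θ = { x : x = c + Vα, Cα ≤ d }. Its key property for reachability is that affine layers of a network map a star to a star exactly, with no over-approximation. The sketch below illustrates this with a dense star in plain NumPy; the class and method names are illustrative only and are not the paper's sparse star set nor the actual NNV/StarV API.

```python
import numpy as np

class Star:
    """Dense star set: Theta = { x : x = c + V @ alpha, C @ alpha <= d }."""

    def __init__(self, c, V, C, d):
        self.c = np.asarray(c, dtype=float)  # center vector
        self.V = np.asarray(V, dtype=float)  # basis (generator) matrix
        self.C = np.asarray(C, dtype=float)  # predicate constraints on alpha
        self.d = np.asarray(d, dtype=float)

    def affine_map(self, W, b):
        """Exact image under x -> W @ x + b.

        Only the center and basis change; the predicate (C, d) on alpha
        is untouched, which is why this step introduces no error.
        """
        W = np.asarray(W, dtype=float)
        b = np.asarray(b, dtype=float)
        return Star(W @ self.c + b, W @ self.V, self.C, self.d)

# A unit box around the origin, encoded as alpha in [-1, 1]^2:
box = Star(c=[0.0, 0.0],
           V=np.eye(2),
           C=np.vstack([np.eye(2), -np.eye(2)]),
           d=np.ones(4))

# One affine (pre-activation) layer of a hypothetical RNN cell:
hidden = box.affine_map(W=[[1.0, -1.0], [0.5, 2.0]], b=[0.1, 0.0])
```

Handling the nonlinear activations (and the gating in LSTM/GRU cells) is where the paper's exact splitting and relaxed over-approximation come in; the affine step above is the part that is always exact.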
Source journal: Nonlinear Analysis-Hybrid Systems (AUTOMATION & CONTROL SYSTEMS; MATHEMATICS, APPLIED)
CiteScore: 8.30
Self-citation rate: 9.50%
Articles per year: 65
Review time: >12 weeks
Journal description: Nonlinear Analysis: Hybrid Systems welcomes all important research and expository papers in any discipline. Papers that are principally concerned with the theory of hybrid systems should contain significant results indicating relevant applications. Papers that emphasize applications should consist of important real world models and illuminating techniques. Papers that interrelate various aspects of hybrid systems will be most welcome.