Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems

Nicholas Rober, Sydney M. Katz, Chelsea Sidrane, Esen Yel, Michael Everett, Mykel J. Kochenderfer, Jonathan P. How
{"title":"Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems","authors":"Nicholas Rober;Sydney M. Katz;Chelsea Sidrane;Esen Yel;Michael Everett;Mykel J. Kochenderfer;Jonathan P. How","doi":"10.1109/OJCSYS.2023.3265901","DOIUrl":null,"url":null,"abstract":"As neural networks (NNs) become more prevalent in safety-critical applications such as control of vehicles, there is a growing need to certify that systems with NN components are safe. This paper presents a set of backward reachability approaches for safety certification of neural feedback loops (NFLs), i.e., closed-loop systems with NN control policies. While backward reachability strategies have been developed for systems without NN components, the nonlinearities in NN activation functions and general noninvertibility of NN weight matrices make backward reachability for NFLs a challenging problem. To avoid the difficulties associated with propagating sets backward through NNs, we introduce a framework that leverages standard forward NN analysis tools to efficiently find over-approximations to backprojection (BP) sets, i.e., sets of states for which an NN policy will lead a system to a given target set. We present frameworks for calculating BP over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs and propose computationally efficient strategies. We use numerical results from a variety of models to showcase the proposed algorithms, including a demonstration of safety certification for a 6D system.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"2 ","pages":"108-124"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/9552933/9973428/10097878.pdf","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE open journal of control systems","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10097878/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

As neural networks (NNs) become more prevalent in safety-critical applications such as control of vehicles, there is a growing need to certify that systems with NN components are safe. This paper presents a set of backward reachability approaches for safety certification of neural feedback loops (NFLs), i.e., closed-loop systems with NN control policies. While backward reachability strategies have been developed for systems without NN components, the nonlinearities in NN activation functions and general noninvertibility of NN weight matrices make backward reachability for NFLs a challenging problem. To avoid the difficulties associated with propagating sets backward through NNs, we introduce a framework that leverages standard forward NN analysis tools to efficiently find over-approximations to backprojection (BP) sets, i.e., sets of states for which an NN policy will lead a system to a given target set. We present frameworks for calculating BP over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs and propose computationally efficient strategies. We use numerical results from a variety of models to showcase the proposed algorithms, including a demonstration of safety certification for a 6D system.
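The central object in the abstract is the backprojection (BP) set: the set of states from which the NN control policy drives the system into a given target set. The sketch below is a minimal illustration of that idea, not the paper's algorithm: it over-approximates a one-step BP set for a hypothetical 1-D linear system, where the policy `pi` and the sampled bounds from `bound_policy` are illustrative stand-ins for a trained NN controller and the sound bounds a standard forward NN analysis tool would provide.

```python
# Minimal sketch (not the paper's algorithm): a one-step backprojection (BP)
# over-approximation for a toy 1-D linear system x_next = a*x + b*u with an
# NN control policy u = pi(x). The sampled policy bounds below stand in for
# the sound bounds a forward NN analysis tool would provide.
import numpy as np

a, b = 1.0, 0.5                     # toy dynamics: x_next = a*x + b*u (a, b > 0)
target_lo, target_hi = -0.25, 0.25  # target set X_T as an interval
dom_lo, dom_hi = -2.0, 2.0          # candidate state domain to search over

def pi(x):
    """Stand-in NN policy: a smooth saturating state-feedback controller."""
    return -np.tanh(2.0 * x)

def bound_policy(lo, hi, n=1000):
    """Crude interval bounds on pi over [lo, hi] via sampling; a verified
    forward NN tool would return sound bounds instead."""
    u = pi(np.linspace(lo, hi, n))
    return float(u.min()), float(u.max())

u_lo, u_hi = bound_policy(dom_lo, dom_hi)

# BP over-approximation: keep every state x whose one-step reachable interval
# [a*x + b*u_lo, a*x + b*u_hi] intersects the target interval (valid for a, b > 0).
bp_lo = max(dom_lo, (target_lo - b * u_hi) / a)
bp_hi = min(dom_hi, (target_hi - b * u_lo) / a)
print(f"BP set over-approximation: [{bp_lo:.2f}, {bp_hi:.2f}]")
```

Because the control is allowed to take any value the policy could produce anywhere in the domain, the resulting interval contains the true BP set. The paper's contribution is computing such over-approximations efficiently and tightly for linear and nonlinear dynamics with feedforward NN policies, using standard forward NN analysis tools rather than propagating sets backward through the network.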