{"title":"神经反馈回路安全验证的约束感知细化","authors":"Nicholas Rober;Jonathan P. How","doi":"10.1109/LCSYS.2024.3518912","DOIUrl":null,"url":null,"abstract":"This letter presents a method to efficiently reduce conservativeness in reachable set over approximations (RSOAs) to verify safety for neural feedback loops (NFLs), i.e., systems that have neural networks in their control pipelines. While generating RSOAs is a tractable alternative to calculating exact reachable sets, RSOAs can be overly conservative, especially when generated over long time horizons or for highly nonlinear NN control policies. Refinement strategies such as partitioning or symbolic propagation are typically used to limit the conservativeness of RSOAs, but these approaches come with a high computational cost and often can only be used to verify safety for simple reachability problems. This letter presents Constraint-Aware Refinement for Verification (CARV): an efficient refinement strategy that reduces the conservativeness of RSOAs by explicitly using the safety constraints on the NFL. Unlike existing approaches that seek to refine RSOAs over the entire time horizon, CARV limits the computational cost of refinement by refining RSOAs only where necessary to verify safety. We demonstrate that CARV can verify the safety of an NFL where other approaches either fail or take more than \n<inline-formula> <tex-math>$60\\times $ </tex-math></inline-formula>\n longer and \n<inline-formula> <tex-math>$40\\times $ </tex-math></inline-formula>\n the memory.","PeriodicalId":37235,"journal":{"name":"IEEE Control Systems Letters","volume":"8 ","pages":"3219-3224"},"PeriodicalIF":2.4000,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Constraint-Aware Refinement for Safety Verification of Neural Feedback Loops\",\"authors\":\"Nicholas Rober;Jonathan P. 
How\",\"doi\":\"10.1109/LCSYS.2024.3518912\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This letter presents a method to efficiently reduce conservativeness in reachable set over approximations (RSOAs) to verify safety for neural feedback loops (NFLs), i.e., systems that have neural networks in their control pipelines. While generating RSOAs is a tractable alternative to calculating exact reachable sets, RSOAs can be overly conservative, especially when generated over long time horizons or for highly nonlinear NN control policies. Refinement strategies such as partitioning or symbolic propagation are typically used to limit the conservativeness of RSOAs, but these approaches come with a high computational cost and often can only be used to verify safety for simple reachability problems. This letter presents Constraint-Aware Refinement for Verification (CARV): an efficient refinement strategy that reduces the conservativeness of RSOAs by explicitly using the safety constraints on the NFL. Unlike existing approaches that seek to refine RSOAs over the entire time horizon, CARV limits the computational cost of refinement by refining RSOAs only where necessary to verify safety. 
We demonstrate that CARV can verify the safety of an NFL where other approaches either fail or take more than \\n<inline-formula> <tex-math>$60\\\\times $ </tex-math></inline-formula>\\n longer and \\n<inline-formula> <tex-math>$40\\\\times $ </tex-math></inline-formula>\\n the memory.\",\"PeriodicalId\":37235,\"journal\":{\"name\":\"IEEE Control Systems Letters\",\"volume\":\"8 \",\"pages\":\"3219-3224\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2024-12-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Control Systems Letters\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10804193/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Control Systems Letters","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10804193/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Constraint-Aware Refinement for Safety Verification of Neural Feedback Loops
This letter presents a method to efficiently reduce conservativeness in reachable set over-approximations (RSOAs) to verify safety for neural feedback loops (NFLs), i.e., systems that have neural networks in their control pipelines. While generating RSOAs is a tractable alternative to calculating exact reachable sets, RSOAs can be overly conservative, especially when generated over long time horizons or for highly nonlinear NN control policies. Refinement strategies such as partitioning or symbolic propagation are typically used to limit the conservativeness of RSOAs, but these approaches come with a high computational cost and often can only be used to verify safety for simple reachability problems. This letter presents Constraint-Aware Refinement for Verification (CARV): an efficient refinement strategy that reduces the conservativeness of RSOAs by explicitly using the safety constraints on the NFL. Unlike existing approaches that seek to refine RSOAs over the entire time horizon, CARV limits the computational cost of refinement by refining RSOAs only where necessary to verify safety. We demonstrate that CARV can verify the safety of an NFL where other approaches either fail or take more than 60× longer and 40× the memory.
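To make the idea concrete, below is a minimal sketch of constraint-aware refinement on a toy NFL. This is not the paper's CARV algorithm: the double-integrator dynamics, the hand-set ReLU controller weights, the interval-arithmetic bound propagation, and the safety threshold `limit` are all illustrative assumptions. The sketch only demonstrates the core principle the abstract describes: propagate a box-shaped RSOA step by step, and partition a box only when its over-approximation crosses the safety constraint, rather than refining over the whole horizon.

```python
import numpy as np

# Discrete-time double integrator: x+ = A x + B u (dt = 0.1). Illustrative only.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])

# Tiny one-hidden-layer ReLU "controller" with hand-set weights (it happens to
# realize u = -(x1 + 1.5 x2), but the verifier only sees the network).
W1 = np.array([[-1.0, -1.5],
               [ 1.0,  1.5]])
b1 = np.zeros(2)
W2 = np.array([[1.0, -1.0]])
b2 = np.zeros(1)

def interval_affine(lo, hi, W, b):
    """Bound W x + b over the box [lo, hi] via interval arithmetic."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    c2, r2 = W @ c + b, np.abs(W) @ r
    return c2 - r2, c2 + r2

def step_bounds(lo, hi):
    """One-step RSOA of the closed loop: box through the NN, then the dynamics."""
    ulo, uhi = interval_affine(lo, hi, W1, b1)
    ulo, uhi = np.maximum(ulo, 0.0), np.maximum(uhi, 0.0)  # ReLU bounds
    ulo, uhi = interval_affine(ulo, uhi, W2, b2)
    xlo, xhi = interval_affine(lo, hi, A, np.zeros(2))
    blo, bhi = interval_affine(ulo, uhi, B, np.zeros(2))
    return xlo + blo, xhi + bhi

def reach(lo, hi, T, limit=1.5):
    """Propagate boxes for T steps, splitting a box only when its RSOA crosses
    the safety constraint x1 <= limit -- refinement only where needed."""
    boxes = [(lo, hi)]
    for _ in range(T):
        nxt = []
        for l, h in boxes:
            nl, nh = step_bounds(l, h)
            if nh[0] > limit:
                # Constraint-aware refinement: bisect the offending box along
                # its widest axis and re-propagate each half. (A full method
                # would iterate this until safety is decided.)
                ax = int(np.argmax(h - l))
                mid = (l[ax] + h[ax]) / 2.0
                for a, b_ in ((l[ax], mid), (mid, h[ax])):
                    ll, hh = l.copy(), h.copy()
                    ll[ax], hh[ax] = a, b_
                    nxt.append(step_bounds(ll, hh))
            else:
                nxt.append((nl, nh))
        boxes = nxt
    return boxes
```

Because interval propagation drops the correlation between the state entering the dynamics and the state entering the controller, the boxes grow each step — exactly the conservativeness the abstract describes — and the refinement branch spends partitioning effort only on boxes that actually threaten the constraint.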