Memory-based Distribution Shift Detection for Learning Enabled Cyber-Physical Systems with Statistical Guarantees

Yahan Yang, Ramneet Kaur, Souradeep Dutta, Insup Lee
{"title":"基于内存的分布偏移检测,用于具有统计保障的学习型网络物理系统","authors":"Yahan Yang, Ramneet Kaur, Souradeep Dutta, Insup Lee","doi":"10.1145/3643892","DOIUrl":null,"url":null,"abstract":"Incorporating learning based components in the current state-of-the-art cyber physical systems (CPS) has been a challenge due to the brittleness of the underlying deep neural networks. On the bright side, if executed correctly with safety guarantees, this has the ability to revolutionize domains like autonomous systems, medicine and other safety critical domains. This is because, it would allow system designers to use high dimensional outputs from sensors like camera and LiDAR. The trepidation in deploying systems with vision and LiDAR components comes from incidents of catastrophic failures in the real world. Recent reports of self-driving cars running into difficult to handle scenarios is ingrained in the software components which handle such sensor inputs.\n The ability to handle such high dimensional signals is due to the explosion of algorithms which use deep neural networks. Sadly, the reason behind the safety issues is also due to deep neural networks themselves. The pitfalls occur due to possible over-fitting, and lack of awareness about the blind spots induced by the training distribution. Ideally, system designers would wish to cover as many scenarios during training as possible. But, achieving a meaningful coverage is impossible. This naturally leads to the following question: Is it feasible to flag out-of-distribution (OOD) samples without causing too many false alarms? Such an OOD detector should be executable in a fashion that is computationally efficient. This is because OOD detectors often are executed as frequently as the sensors are sampled.\n Our aim in this paper is to build an effective anomaly detector. To this end, we propose the idea of a memory bank to cache data samples which are representative enough to cover most of the in-distribution data. The similarity with respect to such samples can be a measure of familiarity of the test input. This is made possible by an appropriate choice of distance function tailored to the type of sensor we are interested in. Additionally, we adapt conformal anomaly detection framework to capture the distribution shifts with a guarantee of false alarm rate. We report the performance of our technique on two challenging scenarios: a self-driving car setting implemented inside the simulator CARLA with image inputs and autonomous racing car navigation setting with LiDAR inputs. From the experiments, it is clear that a deviation from in-distribution setting can potentially lead to unsafe behavior. Although it should be noted that not all OOD inputs lead to precarious situations in practice, but staying in-distribution is akin to staying within a safety bubble and predictable behavior. An added benefit of our memory based approach is that the OOD detector produces interpretable feedback for a human designer. This is of utmost importance since it recommends a potential fix for the situation as well. 
In other competing approaches such a feedback is difficult to obtain due to reliance on techniques which use variational autoencoders.","PeriodicalId":505086,"journal":{"name":"ACM Transactions on Cyber-Physical Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Memory-based Distribution Shift Detection for Learning Enabled Cyber-Physical Systems with Statistical Guarantees\",\"authors\":\"Yahan Yang, Ramneet Kaur, Souradeep Dutta, Insup Lee\",\"doi\":\"10.1145/3643892\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Incorporating learning based components in the current state-of-the-art cyber physical systems (CPS) has been a challenge due to the brittleness of the underlying deep neural networks. On the bright side, if executed correctly with safety guarantees, this has the ability to revolutionize domains like autonomous systems, medicine and other safety critical domains. This is because, it would allow system designers to use high dimensional outputs from sensors like camera and LiDAR. The trepidation in deploying systems with vision and LiDAR components comes from incidents of catastrophic failures in the real world. Recent reports of self-driving cars running into difficult to handle scenarios is ingrained in the software components which handle such sensor inputs.\\n The ability to handle such high dimensional signals is due to the explosion of algorithms which use deep neural networks. Sadly, the reason behind the safety issues is also due to deep neural networks themselves. The pitfalls occur due to possible over-fitting, and lack of awareness about the blind spots induced by the training distribution. Ideally, system designers would wish to cover as many scenarios during training as possible. But, achieving a meaningful coverage is impossible. This naturally leads to the following question: Is it feasible to flag out-of-distribution (OOD) samples without causing too many false alarms? Such an OOD detector should be executable in a fashion that is computationally efficient. This is because OOD detectors often are executed as frequently as the sensors are sampled.\\n Our aim in this paper is to build an effective anomaly detector. To this end, we propose the idea of a memory bank to cache data samples which are representative enough to cover most of the in-distribution data. The similarity with respect to such samples can be a measure of familiarity of the test input. This is made possible by an appropriate choice of distance function tailored to the type of sensor we are interested in. Additionally, we adapt conformal anomaly detection framework to capture the distribution shifts with a guarantee of false alarm rate. We report the performance of our technique on two challenging scenarios: a self-driving car setting implemented inside the simulator CARLA with image inputs and autonomous racing car navigation setting with LiDAR inputs. From the experiments, it is clear that a deviation from in-distribution setting can potentially lead to unsafe behavior. Although it should be noted that not all OOD inputs lead to precarious situations in practice, but staying in-distribution is akin to staying within a safety bubble and predictable behavior. An added benefit of our memory based approach is that the OOD detector produces interpretable feedback for a human designer. 
This is of utmost importance since it recommends a potential fix for the situation as well. In other competing approaches such a feedback is difficult to obtain due to reliance on techniques which use variational autoencoders.\",\"PeriodicalId\":505086,\"journal\":{\"name\":\"ACM Transactions on Cyber-Physical Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-02-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Cyber-Physical Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3643892\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Cyber-Physical Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3643892","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Incorporating learning-based components in current state-of-the-art cyber-physical systems (CPS) has been a challenge due to the brittleness of the underlying deep neural networks. On the bright side, if executed correctly with safety guarantees, this has the potential to revolutionize autonomous systems, medicine, and other safety-critical domains, because it would allow system designers to use high-dimensional outputs from sensors such as cameras and LiDAR. The trepidation in deploying systems with vision and LiDAR components comes from incidents of catastrophic failures in the real world. Recent reports of self-driving cars running into difficult-to-handle scenarios trace back to the software components that handle such sensor inputs.

The ability to handle such high-dimensional signals is due to the explosion of algorithms that use deep neural networks. Sadly, the safety issues also stem from deep neural networks themselves. The pitfalls arise from possible over-fitting and a lack of awareness of the blind spots induced by the training distribution. Ideally, system designers would wish to cover as many scenarios during training as possible, but achieving meaningful coverage is impossible. This naturally leads to the following question: is it feasible to flag out-of-distribution (OOD) samples without causing too many false alarms? Such an OOD detector should be computationally efficient to execute, because OOD detectors are often run as frequently as the sensors are sampled.

Our aim in this paper is to build an effective anomaly detector. To this end, we propose the idea of a memory bank that caches data samples representative enough to cover most of the in-distribution data. Similarity with respect to such samples can serve as a measure of the familiarity of a test input. This is made possible by an appropriate choice of distance function tailored to the type of sensor of interest. Additionally, we adapt the conformal anomaly detection framework to capture distribution shifts with a guaranteed false alarm rate. We report the performance of our technique on two challenging scenarios: a self-driving car setting implemented inside the CARLA simulator with image inputs, and an autonomous racing car navigation setting with LiDAR inputs. The experiments make clear that a deviation from the in-distribution setting can potentially lead to unsafe behavior. It should be noted that not all OOD inputs lead to precarious situations in practice, but staying in-distribution is akin to staying within a safety bubble with predictable behavior. An added benefit of our memory-based approach is that the OOD detector produces interpretable feedback for a human designer. This is of utmost importance since it also suggests a potential fix for the situation. In competing approaches, such feedback is difficult to obtain due to their reliance on techniques based on variational autoencoders.
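The abstract describes two mechanisms: a memory bank of representative in-distribution samples whose distance to a test input measures familiarity, and a conformal anomaly detection step that turns that score into a detector with a guaranteed false alarm rate. The sketch below illustrates how those two pieces can fit together under simplifying assumptions; the random subsampling used to build the memory bank, the Euclidean distance, and all function names are illustrative stand-ins rather than the paper's actual construction.

```python
# Minimal sketch: memory-bank familiarity score + inductive conformal threshold.
# Assumptions (not from the paper): features are fixed-length vectors, the memory
# bank is a random subsample of training features, and distance is Euclidean.
import numpy as np


def build_memory_bank(features: np.ndarray, size: int, seed: int = 0) -> np.ndarray:
    """Pick `size` in-distribution feature vectors to serve as the memory bank.

    Random subsampling is used for simplicity; a coverage-driven selection of
    representative samples could be substituted here.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(features), size=min(size, len(features)), replace=False)
    return features[idx]


def nonconformity(x: np.ndarray, memory: np.ndarray) -> float:
    """Familiarity score: distance from x to its nearest memory sample.

    Larger values mean the input looks less like anything in memory, i.e.,
    more likely out-of-distribution. The distance function would be tailored
    to the sensor (images vs. LiDAR); Euclidean is a stand-in.
    """
    return float(np.min(np.linalg.norm(memory - x, axis=1)))


def calibrate_threshold(calibration: np.ndarray, memory: np.ndarray, epsilon: float) -> float:
    """Inductive conformal calibration of the detection threshold.

    Scores a held-out in-distribution calibration set and returns the
    ceil((1 - epsilon) * (n + 1))-th smallest score, so that flagging inputs
    above the threshold keeps the in-distribution false-alarm rate near epsilon.
    """
    scores = np.sort(np.array([nonconformity(x, memory) for x in calibration]))
    k = int(np.ceil((1.0 - epsilon) * (len(scores) + 1))) - 1
    k = min(max(k, 0), len(scores) - 1)
    return float(scores[k])


def is_out_of_distribution(x: np.ndarray, memory: np.ndarray, threshold: float) -> bool:
    return nonconformity(x, memory) > threshold


if __name__ == "__main__":
    # Toy in-distribution data: feature vectors clustered around the origin.
    rng = np.random.default_rng(1)
    train = rng.normal(0.0, 1.0, size=(500, 8))
    calib = rng.normal(0.0, 1.0, size=(200, 8))

    memory = build_memory_bank(train, size=100)
    tau = calibrate_threshold(calib, memory, epsilon=0.05)

    in_dist_sample = rng.normal(0.0, 1.0, size=8)
    shifted_sample = rng.normal(5.0, 1.0, size=8)  # simulated distribution shift
    print("in-distribution flagged:", is_out_of_distribution(in_dist_sample, memory, tau))
    print("shifted input flagged:  ", is_out_of_distribution(shifted_sample, memory, tau))
```

Under the exchangeability assumption that conformal anomaly detection relies on, thresholding the nonconformity score at the calibrated quantile bounds the false-alarm rate on in-distribution inputs by approximately epsilon. Returning the nearest memory sample alongside the score is one way such a detector can provide the kind of interpretable feedback to a human designer that the abstract mentions.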