The Quantitative Risk Norm - A Proposed Tailoring of HARA for ADS

Fredrik Warg, Martin A. Skoglund, Anders Thorsén, Rolf Johansson, M. Brännström, Magnus Gyllenhammar, Martin Sanfridson
DOI: 10.1109/DSN-W50199.2020.00026
Published in: 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)
Publication date: 2020-06-01
Citations: 10

Abstract

One of the major challenges of automated driving systems (ADS) is showing that they drive safely. Key to ensuring safety is eliciting a complete set of top-level safety requirements (safety goals). This is typically done with an activity called hazard analysis and risk assessment (HARA). In this paper we argue that the HARA of ISO 26262:2018 is not directly suitable for an ADS, both because the number of relevant operational situations may be vast, and because the ability of the ADS to make decisions in order to reduce risks will affect the analysis of exposure and hazards. Instead we propose a tailoring using a quantitative risk norm (QRN) with consequence classes, where each class has a limit for the frequency within which the consequences may occur. Incident types are then defined and assigned to the consequence classes; the requirements prescribing the limits of these incident types are used as safety goals to fulfil in the implementation. The main benefits of the QRN approach are the ability to show completeness of safety goals, and make sure that the safety strategy is not limited by safety goals which are not formulated in a way suitable for an ADS.
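The QRN structure described above — consequence classes, each with a frequency limit, and incident types whose budgets are assigned to those classes and serve as safety goals — can be sketched in code. This is a minimal illustrative model only; the class names, the 10⁻⁸ events/hour limit, and the incident types are invented for the example and do not come from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ConsequenceClass:
    """One consequence class of a quantitative risk norm (QRN).

    Illustrative sketch: the class has a frequency limit, and incident
    types assigned to it carry frequency budgets (the safety goals).
    """
    name: str
    max_frequency: float  # allowed events per operating hour (assumed unit)
    incident_budgets: dict = field(default_factory=dict)

    def assign(self, incident_type: str, budget: float) -> None:
        # Assigning an incident type fixes its frequency budget,
        # which acts as a top-level safety goal for the implementation.
        self.incident_budgets[incident_type] = budget

    def is_consistent(self) -> bool:
        # The combined budgets of all assigned incident types must
        # not exceed the frequency limit of the consequence class.
        return sum(self.incident_budgets.values()) <= self.max_frequency

# Hypothetical example: a "severe injury" class limited to 1e-8 events/hour,
# split between two invented incident types.
severe = ConsequenceClass("severe injury", max_frequency=1e-8)
severe.assign("collision with vulnerable road user", 4e-9)
severe.assign("high-speed rear-end collision", 4e-9)
print(severe.is_consistent())  # True: 4e-9 + 4e-9 = 8e-9 <= 1e-8
```

One appeal of modelling the norm this way is that completeness is checkable: every assigned incident type has an explicit budget, and the consistency check makes the class limit an enforceable constraint rather than an implicit assumption.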