Title: Safety assurance for automated systems in transport: A collective case study of real-world fatal crashes
Authors: Stuart Ballingall, Majid Sarvi, Peter Sweatman
DOI: 10.1016/j.jsr.2024.11.008
Journal: Journal of Safety Research, Volume 92, Pages 27–39 (Q1, Ergonomics; Impact Factor 3.9)
Publication date: 2024-11-14 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0022437524001579
Citations: 0
Abstract
Introduction: Traditional vehicle safety assurance frameworks are challenged by Automated Driving Systems (ADSs) that enable dynamic driving tasks to be performed without the active involvement of a human driver. Further, an ADS's driving functionality can be changed during in-service operation, using software updates developed using Machine Learning (ML). Learnings from real-world cases will be a key input to reforming current regulatory frameworks to assure ADS safety. However, ADSs are yet to be deployed in mass volumes, and limited data are available regarding their in-service safety performance. Method: To overcome these limitations, a collective case study was undertaken, drawing upon three relevant real-world cases involving automated control systems that were a causative factor in major transport safety incidents. Results: A range of findings were identified, which informed recommendations for reform. The study found that some assurance processes, decisions, and oversight were not commensurate with risk or safety integrity levels, including a lack of independence in reviews and approvals for safety-critical system components. Two cases were also impacted by conflict or bias in regulatory approvals. Other commonalities included a lack of safeguards to ensure systems were not operated outside their design domain, and a lack of system redundancy to ensure safe operation if a system component fails. Further, the identification and validation of system responses to scenarios that could be encountered within design domain boundaries was lacking. For the two cases in which safety-critical functionality was developed using ML, it is concerning that no regulator reports provided detailed findings regarding the role of ML models, algorithms, or training data.
About the Journal:
Journal of Safety Research is an interdisciplinary publication that provides for the exchange of ideas and scientific evidence, capturing studies through research in all areas of safety and health, including traffic, workplace, home, and community. This forum invites research using rigorous methodologies, encourages translational research, and engages the global scientific community through various partnerships (e.g., this outreach includes highlighting some of the latest findings from the U.S. Centers for Disease Control and Prevention).