Assurance Cases as Foundation Stone for Auditing AI-enabled and Autonomous Systems: Workshop Results and Political Recommendations for Action from the ExamAI Project

R. Adler, M. Klaes
{"title":"保证案例作为审计人工智能和自治系统的基石:研讨会结果和来自ExamAI项目的政治行动建议","authors":"R. Adler, M. Klaes","doi":"10.48550/arXiv.2208.08198","DOIUrl":null,"url":null,"abstract":". The European Machinery Directive and related harmonized standards do consider that software is used to generate safety-relevant behavior of the machinery but do not consider all kinds of software. In particular, software based on machine learning (ML) are not considered for the realization of safety-relevant behavior. This limits the introduction of suitable safety concepts for autonomous mobile robots and other autonomous machinery, which commonly depend on ML-based functions. We investigated this issue and the way safety standards define safety measures to be implemented against software faults. Functional safety standards use Safety Integrity Levels (SILs) to define which safety measures shall be implemented. They provide rules for determining the SIL and rules for selecting safety measures depending on the SIL. In this paper, we argue that this approach can hardly be adopted with respect to ML and other kinds of Artificial Intelligence (AI). Instead of simple rules for determining an SIL and applying related measures against faults, we propose the use of assurance cases to argue that the individually selected and applied measures are sufficient in the given case. To get a first rating regarding the feasibility and usefulness of our proposal, we presented and discussed it in a workshop with experts from industry, German statu-tory accident insurance companies, work safety and standardization commis-sions, and representatives from various national, European, and international working groups dealing with safety and AI. In this paper, we summarize the proposal and the workshop discussion. Moreover, we check to which extent our proposal is in line with the European AI Act proposal and current safety standardization initiatives addressing AI and Autonomous Systems.","PeriodicalId":129626,"journal":{"name":"Interacción","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assurance Cases as Foundation Stone for Auditing AI-enabled and Autonomous Systems: Workshop Results and Political Recommendations for Action from the ExamAI Project\",\"authors\":\"R. Adler, M. Klaes\",\"doi\":\"10.48550/arXiv.2208.08198\",\"DOIUrl\":null,\"url\":null,\"abstract\":\". The European Machinery Directive and related harmonized standards do consider that software is used to generate safety-relevant behavior of the machinery but do not consider all kinds of software. In particular, software based on machine learning (ML) are not considered for the realization of safety-relevant behavior. This limits the introduction of suitable safety concepts for autonomous mobile robots and other autonomous machinery, which commonly depend on ML-based functions. We investigated this issue and the way safety standards define safety measures to be implemented against software faults. Functional safety standards use Safety Integrity Levels (SILs) to define which safety measures shall be implemented. They provide rules for determining the SIL and rules for selecting safety measures depending on the SIL. In this paper, we argue that this approach can hardly be adopted with respect to ML and other kinds of Artificial Intelligence (AI). 
Instead of simple rules for determining an SIL and applying related measures against faults, we propose the use of assurance cases to argue that the individually selected and applied measures are sufficient in the given case. To get a first rating regarding the feasibility and usefulness of our proposal, we presented and discussed it in a workshop with experts from industry, German statu-tory accident insurance companies, work safety and standardization commis-sions, and representatives from various national, European, and international working groups dealing with safety and AI. In this paper, we summarize the proposal and the workshop discussion. Moreover, we check to which extent our proposal is in line with the European AI Act proposal and current safety standardization initiatives addressing AI and Autonomous Systems.\",\"PeriodicalId\":129626,\"journal\":{\"name\":\"Interacción\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-08-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Interacción\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.48550/arXiv.2208.08198\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Interacción","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2208.08198","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The European Machinery Directive and related harmonized standards do consider that software is used to generate safety-relevant behavior of the machinery, but they do not consider all kinds of software. In particular, software based on machine learning (ML) is not considered for the realization of safety-relevant behavior. This limits the introduction of suitable safety concepts for autonomous mobile robots and other autonomous machinery, which commonly depend on ML-based functions. We investigated this issue and the way safety standards define safety measures to be implemented against software faults. Functional safety standards use Safety Integrity Levels (SILs) to define which safety measures shall be implemented. They provide rules for determining the SIL and rules for selecting safety measures depending on the SIL. In this paper, we argue that this approach can hardly be adopted with respect to ML and other kinds of Artificial Intelligence (AI). Instead of simple rules for determining an SIL and applying related measures against faults, we propose the use of assurance cases to argue that the individually selected and applied measures are sufficient in the given case. To get a first rating regarding the feasibility and usefulness of our proposal, we presented and discussed it in a workshop with experts from industry, German statutory accident insurance companies, work safety and standardization commissions, and representatives from various national, European, and international working groups dealing with safety and AI. In this paper, we summarize the proposal and the workshop discussion. Moreover, we check to which extent our proposal is in line with the European AI Act proposal and current safety standardization initiatives addressing AI and Autonomous Systems.
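To make the contrast described in the abstract concrete, the following is a minimal, hypothetical sketch (not taken from the paper): a fixed rule table that maps risk parameters to a SIL, placed next to a small claim/argument/evidence structure of the kind an assurance case would use to argue that individually selected measures are sufficient for a specific ML-based function. All parameter names, table values, claims, and evidence items are invented purely for illustration.

```python
# Illustrative sketch only (not from the paper): contrasts a rule-based SIL
# lookup, as used in functional safety standards, with a minimal
# claim/argument/evidence structure for an assurance case.
from __future__ import annotations
from dataclasses import dataclass, field

# Toy SIL determination: a lookup over risk parameters (severity, exposure,
# controllability). Parameter names and table values are hypothetical.
SIL_TABLE = {
    ("high", "frequent", "hard"): 3,
    ("high", "frequent", "easy"): 2,
    ("high", "rare", "hard"): 2,
    ("low", "frequent", "hard"): 1,
}

def determine_sil(severity: str, exposure: str, controllability: str) -> int:
    """Return a SIL from a fixed rule table; 0 if no rule applies."""
    return SIL_TABLE.get((severity, exposure, controllability), 0)

@dataclass
class Claim:
    """One node of an assurance case: a claim, the argument supporting it,
    the evidence cited, and any sub-claims it decomposes into."""
    statement: str
    argument: str = ""
    evidence: list[str] = field(default_factory=list)
    subclaims: list[Claim] = field(default_factory=list)

# A minimal assurance case arguing that individually selected measures
# (e.g. runtime monitoring, data coverage analysis) are sufficient for one
# ML-based function. All content is invented for illustration.
case = Claim(
    statement="The ML-based obstacle detection is acceptably safe in the defined ODD.",
    argument="Argue over the residual risk of each identified failure mode.",
    subclaims=[
        Claim(
            statement="Misclassification risk is mitigated at runtime.",
            argument="A monitor restricts speed when detection confidence is low.",
            evidence=["monitor test report", "field trial logs"],
        ),
        Claim(
            statement="Training data covers the operational design domain.",
            argument="Coverage metrics computed over an ODD ontology.",
            evidence=["data coverage analysis"],
        ),
    ],
)

print("SIL by rule table:", determine_sil("high", "frequent", "hard"))
print("Top-level claim:", case.statement)
```

The point of the contrast is that the rule table yields a verdict without recording any rationale, whereas the claim tree documents why the selected measures are considered sufficient for this particular function, which is the kind of case-specific argument the paper proposes for ML-based safety functions.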