AI Safety Landscape: From short-term specific system engineering to long-term artificial general intelligence

J. Hernández-Orallo
{"title":"AI Safety Landscape From short-term specific system engineering to long-term artificial general intelligence","authors":"J. Hernández-Orallo","doi":"10.1109/DSN-W50199.2020.00023","DOIUrl":null,"url":null,"abstract":"AI Safety is an emerging area that integrates very different perspectives from mainstream AI, critical system engineering, dependable autonomous systems, artificial general intelligence, and many other areas concerned and occupied with building AI systems that are safe. Because of this diversity, there is an important level of disagreement in the terminology, the ontologies and the priorities of the field. The Consortium on the Landscape of AI Safety (CLAIS) is an international initiative to create a worldwide, consensus-based and generally-accepted knowledge base (online, interactive and constantly evolving) of structured subareas in AI Safety, including terminology, technologies, research gaps and opportunities, resources, people and groups working in the area, and connection with other subareas and disciplines. In this note we summarise early discussions around the initiative, the associated workshops, its current state and activities, including the body of knowledge, and how to contribute. On a more technical side, I will cover a few spots in the landscape, from very specific and short-term safety engineering issues appearing in specialised systems, to more long-term hazards emerging from more general and powerful intelligent systems.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"34 9","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSN-W50199.2020.00023","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

AI Safety is an emerging area that integrates very different perspectives from mainstream AI, critical system engineering, dependable autonomous systems, artificial general intelligence, and many other areas concerned with building AI systems that are safe. Because of this diversity, there is considerable disagreement over the terminology, the ontologies and the priorities of the field. The Consortium on the Landscape of AI Safety (CLAIS) is an international initiative to create a worldwide, consensus-based and generally accepted knowledge base (online, interactive and constantly evolving) of structured subareas in AI Safety, including terminology, technologies, research gaps and opportunities, resources, people and groups working in the area, and connections with other subareas and disciplines. In this note we summarise the early discussions around the initiative, the associated workshops, its current state and activities, including the body of knowledge, and how to contribute. On the more technical side, I will cover a few spots in the landscape, from very specific and short-term safety engineering issues arising in specialised systems, to longer-term hazards emerging from more general and powerful intelligent systems.
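The abstract describes each entry of the CLAIS knowledge base as a structured subarea carrying terminology, technologies, research gaps, resources, people and cross-links to other subareas. As a rough illustration only, the sketch below models one such entry as a Python dataclass; the field names and the example subarea are hypothetical assumptions for exposition, not the consortium's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a CLAIS knowledge-base entry. The field names
# mirror the components listed in the abstract; they are illustrative
# assumptions, not the consortium's real data model.
@dataclass
class SubareaEntry:
    name: str                                                    # subarea of AI Safety
    terminology: dict[str, str] = field(default_factory=dict)    # term -> consensus definition
    technologies: list[str] = field(default_factory=list)        # relevant techniques and tools
    research_gaps: list[str] = field(default_factory=list)       # open problems and opportunities
    resources: list[str] = field(default_factory=list)           # papers, datasets, courses (URLs)
    people_and_groups: list[str] = field(default_factory=list)   # who works in the area
    related_subareas: list[str] = field(default_factory=list)    # links to other entries/disciplines

# Example entry; the content is invented purely for illustration.
entry = SubareaEntry(
    name="Safe exploration",
    terminology={"safe exploration": "learning without visiting unsafe states"},
    technologies=["constrained reinforcement learning", "runtime shielding"],
    research_gaps=["scalable safety constraints for high-dimensional control"],
    resources=["https://doi.org/10.1109/DSN-W50199.2020.00023"],
    people_and_groups=["(hypothetical) university safety lab"],
    related_subareas=["Reward specification", "Dependable autonomous systems"],
)
print(entry.name, "->", ", ".join(entry.related_subareas))
```

A flat record like this would make the "connections with other subareas" explicit as ordinary cross-references, which is one plausible way an online, constantly evolving knowledge base could be kept navigable.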