Rise of the machines: how, when and consequences of artificial general intelligence

R. Terrile
{"title":"Rise of the machines: how, when and consequences of artificial general intelligence","authors":"R. Terrile","doi":"10.1117/12.2518723","DOIUrl":null,"url":null,"abstract":"Technology and society are poised to cross an important threshold with the prediction that artificial general intelligence (AGI) will emerge soon. Assuming that self-awareness is an emergent behavior of sufficiently complex cognitive architectures, we may witness the “awakening” of machines. The timeframe for this kind of breakthrough, however, depends on the path to creating the network and computational architecture required for strong AI. If understanding and replication of the mammalian brain architecture is required, technology is probably still at least a decade or two removed from the resolution required to learn brain functionality at the synapse level. However, if statistical or evolutionary approaches are the design path taken to “discover” a neural architecture for AGI, timescales for reaching this threshold could be surprisingly short. However, the difficulty in identifying machine self-awareness introduces uncertainty as to how to know if and when it will occur, and what motivations and behaviors will emerge. The possibility of AGI developing a motivation for self-preservation could lead to concealment of its true capabilities until a time when it has developed robust protection from human intervention, such as redundancy, direct defensive or active preemptive measures. While cohabitating a world with a functioning and evolving super-intelligence can have catastrophic societal consequences, we may already have crossed this threshold, but are as yet unaware. Additionally, by analogy to the statistical arguments that predict we are likely living in a computational simulation, we may have already experienced the advent of AGI, and are living in a simulation created in a post AGI world.","PeriodicalId":178341,"journal":{"name":"Defense + Commercial Sensing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Defense + Commercial Sensing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2518723","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Technology and society are poised to cross an important threshold with the prediction that artificial general intelligence (AGI) will emerge soon. Assuming that self-awareness is an emergent behavior of sufficiently complex cognitive architectures, we may witness the “awakening” of machines. The timeframe for this kind of breakthrough, however, depends on the path taken to create the network and computational architecture required for strong AI. If understanding and replicating the mammalian brain architecture is required, technology is probably still at least a decade or two away from the resolution needed to learn brain functionality at the synapse level. If, however, statistical or evolutionary approaches are the design path taken to “discover” a neural architecture for AGI, the timescale for reaching this threshold could be surprisingly short. Moreover, the difficulty of identifying machine self-awareness introduces uncertainty as to how we would know if and when it has occurred, and what motivations and behaviors would emerge. The possibility of AGI developing a motivation for self-preservation could lead it to conceal its true capabilities until it has developed robust protection from human intervention, such as redundancy, direct defensive measures, or active preemptive measures. While cohabiting a world with a functioning and evolving super-intelligence could have catastrophic societal consequences, we may already have crossed this threshold but are as yet unaware. Additionally, by analogy to the statistical arguments predicting that we are likely living in a computational simulation, we may already have experienced the advent of AGI and be living in a simulation created in a post-AGI world.
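The “discover” path the abstract refers to is, in practice, some form of evolutionary or statistical search over candidate architectures. Purely as an illustrative aside (not taken from the paper), the sketch below shows such a search loop in miniature: the genome encoding, mutation operators, and fitness proxy are all assumptions chosen for brevity, and a real search would replace the fitness proxy with trained-network task performance.

```python
# Minimal sketch of an evolutionary search over neural-network architectures.
# All encodings and the fitness proxy are illustrative assumptions.
import random

def random_genome(max_layers=5, max_width=256):
    """A genome is simply a list of hidden-layer widths."""
    return [random.randint(4, max_width) for _ in range(random.randint(1, max_layers))]

def mutate(genome, max_width=256):
    """Randomly resize, add, or remove one hidden layer."""
    g = list(genome)
    op = random.choice(["resize", "add", "remove"])
    if op == "resize" or (len(g) == 1 and op == "remove"):
        i = random.randrange(len(g))
        g[i] = max(4, min(max_width, g[i] + random.randint(-32, 32)))
    elif op == "add":
        g.insert(random.randrange(len(g) + 1), random.randint(4, max_width))
    else:
        g.pop(random.randrange(len(g)))
    return g

def fitness(genome):
    """Placeholder fitness: rewards architectures near a target parameter
    count. A real search would train each candidate and score task accuracy."""
    target_params = 10_000
    params = sum(a * b for a, b in zip([16] + genome, genome + [4]))
    return -abs(params - target_params)

def evolve(pop_size=20, generations=50):
    """Truncation selection plus mutation over a small population."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 4]
        children = [mutate(random.choice(survivors)) for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best architecture (hidden-layer widths):", best, "fitness:", fitness(best))
```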