Artificial Intelligence: Arguments for Catastrophic Risk

IF 2.1 | Tier 1, Philosophy | Philosophy Compass | Pub Date: 2024-02-10 | DOI: 10.1111/phc3.12964
Adam Bales, William D'Alessandro, Cameron Domenico Kirk-Giannini
Citations: 0

Abstract

Recent progress in artificial intelligence (AI) has drawn attention to the technology's transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that they might obtain it, that this could lead to catastrophe, and that we might build and deploy such systems anyway. The second argument claims that the development of human-level AI will unlock rapid further progress, culminating in AI systems far more capable than any human — this is the Singularity Hypothesis. Power-seeking behavior on the part of such systems might be particularly dangerous. We discuss a variety of objections to both arguments and conclude by assessing the state of the debate.
Source journal: Philosophy Compass (Arts and Humanities – Philosophy)
CiteScore: 3.50
Self-citation rate: 0.00%
Articles published: 87