Commentary: Should humans look forward to autonomous others?

Human-Computer Interaction, Vol. 37(1), pp. 251-253 · IF 4.5 · Q1 (Computer Science, Cybernetics) · CAS Zone 2, Engineering & Technology · Pub Date: 2021-11-17 · DOI: 10.1080/07370024.2021.1976639
John M. Carroll
{"title":"评论:人类应该期待自主的他人吗?","authors":"John M. Carroll","doi":"10.1080/07370024.2021.1976639","DOIUrl":null,"url":null,"abstract":"Hancock (this issue) identifies potential adverse consequences of emerging autonomous agent technology. He is not talking about Roomba and Waymo, but about future systems that could be more capable, and potentially, far more autonomous. As is often the case, it is difficult to fit a precise timeline to future technologies, and, as Hancock argues, human inability to fathom or even to perceive the timeline for autonomous agents is a specific challenge in this regard (see below). One concern Hancock raises is that humans, in pursuing the development of autonomous agents, are ipso facto ceding control of life on Earth to a new “peak predator.” At least in part, he is led to the predator framing through recognizing that autonomous systems are developing in a context of human conflict, indeed, often as “implements of human conflict.” Hancock argues that the transition to a new peak predator, even if not a Skynet/Terminator catastrophe, is unlikely to be the path “ultimately most conducive to human welfare,” and that, having ceded the peak position, it is doubtful humans could ever again regain control. Hancock also raises a set of concerns regarding the incommensurability of fundamental aspects of experience for human and autonomous agents. For example, humans and emerging autonomous agents could experience time very differently. Hancock criticizes the expression “real time” as indicative of how humans uncritically privilege a conception of time and duration indexed closely to the parameters of human perception and cognition. However, emerging autonomous agents might think through and carry out complex courses of action before humans could notice that anything even happened. Indeed, Hancock notes that the very transition from contemporary AI to truly autonomous agents could occur in what humans will experience as “a single perceptual moment.” Later in his paper, Hancock broadens the incommensurability point: “ . . . there is no necessary reason why autonomous cognition need resemble human cognition in any manner.” These concerns are worth analyzing, criticizing, and planning for now. The peak predator concern is the most vivid and concretely problematic hypothetical in Hancock’s paper. However, to the extent that emerging autonomous agents become autonomous, they would not be mere implements of human conflict. What is their stake in human conflicts? Why would they participate at all? They might be peak predators in the low-level sense of lethal capability, but predators are motivated to be predators for extrinsic reasons. What could those reasons be for autonomous agents? If we grant that they would be driven by logic, we ought to be able to come up with concrete possibilities, future scenarios that we can identify and plan for. There are, most definitely, reasons to question the human project of developing autonomous weapon systems, and, more specifically, of exploring possibilities for extremely powerful and autonomous weapon systems without a deep understanding of what that even means. Hancock cites Kahn’s (1962) scenario analysis of the “accidental war” that became the background plot for Dr. Strangelove, among other nuclear nightmare narratives of the Cold War. 
Even if we regard the peak predator scenario as more likely to be a challenging point of inflection for humans than","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"37 1","pages":"251 - 253"},"PeriodicalIF":4.5000,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Commentary: Should humans look forward to autonomous others?\",\"authors\":\"John M. Carroll\",\"doi\":\"10.1080/07370024.2021.1976639\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Hancock (this issue) identifies potential adverse consequences of emerging autonomous agent technology. He is not talking about Roomba and Waymo, but about future systems that could be more capable, and potentially, far more autonomous. As is often the case, it is difficult to fit a precise timeline to future technologies, and, as Hancock argues, human inability to fathom or even to perceive the timeline for autonomous agents is a specific challenge in this regard (see below). One concern Hancock raises is that humans, in pursuing the development of autonomous agents, are ipso facto ceding control of life on Earth to a new “peak predator.” At least in part, he is led to the predator framing through recognizing that autonomous systems are developing in a context of human conflict, indeed, often as “implements of human conflict.” Hancock argues that the transition to a new peak predator, even if not a Skynet/Terminator catastrophe, is unlikely to be the path “ultimately most conducive to human welfare,” and that, having ceded the peak position, it is doubtful humans could ever again regain control. Hancock also raises a set of concerns regarding the incommensurability of fundamental aspects of experience for human and autonomous agents. For example, humans and emerging autonomous agents could experience time very differently. Hancock criticizes the expression “real time” as indicative of how humans uncritically privilege a conception of time and duration indexed closely to the parameters of human perception and cognition. However, emerging autonomous agents might think through and carry out complex courses of action before humans could notice that anything even happened. Indeed, Hancock notes that the very transition from contemporary AI to truly autonomous agents could occur in what humans will experience as “a single perceptual moment.” Later in his paper, Hancock broadens the incommensurability point: “ . . . there is no necessary reason why autonomous cognition need resemble human cognition in any manner.” These concerns are worth analyzing, criticizing, and planning for now. The peak predator concern is the most vivid and concretely problematic hypothetical in Hancock’s paper. However, to the extent that emerging autonomous agents become autonomous, they would not be mere implements of human conflict. What is their stake in human conflicts? Why would they participate at all? They might be peak predators in the low-level sense of lethal capability, but predators are motivated to be predators for extrinsic reasons. What could those reasons be for autonomous agents? If we grant that they would be driven by logic, we ought to be able to come up with concrete possibilities, future scenarios that we can identify and plan for. 
There are, most definitely, reasons to question the human project of developing autonomous weapon systems, and, more specifically, of exploring possibilities for extremely powerful and autonomous weapon systems without a deep understanding of what that even means. Hancock cites Kahn’s (1962) scenario analysis of the “accidental war” that became the background plot for Dr. Strangelove, among other nuclear nightmare narratives of the Cold War. Even if we regard the peak predator scenario as more likely to be a challenging point of inflection for humans than\",\"PeriodicalId\":56306,\"journal\":{\"name\":\"Human-Computer Interaction\",\"volume\":\"37 1\",\"pages\":\"251 - 253\"},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2021-11-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Human-Computer Interaction\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1080/07370024.2021.1976639\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human-Computer Interaction","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1080/07370024.2021.1976639","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
引用次数: 0

Abstract

Hancock (this issue) identifies potential adverse consequences of emerging autonomous agent technology. He is not talking about Roomba and Waymo, but about future systems that could be more capable, and potentially, far more autonomous. As is often the case, it is difficult to fit a precise timeline to future technologies, and, as Hancock argues, human inability to fathom or even to perceive the timeline for autonomous agents is a specific challenge in this regard (see below).

One concern Hancock raises is that humans, in pursuing the development of autonomous agents, are ipso facto ceding control of life on Earth to a new “peak predator.” At least in part, he is led to the predator framing through recognizing that autonomous systems are developing in a context of human conflict, indeed, often as “implements of human conflict.” Hancock argues that the transition to a new peak predator, even if not a Skynet/Terminator catastrophe, is unlikely to be the path “ultimately most conducive to human welfare,” and that, having ceded the peak position, it is doubtful humans could ever again regain control.

Hancock also raises a set of concerns regarding the incommensurability of fundamental aspects of experience for human and autonomous agents. For example, humans and emerging autonomous agents could experience time very differently. Hancock criticizes the expression “real time” as indicative of how humans uncritically privilege a conception of time and duration indexed closely to the parameters of human perception and cognition. However, emerging autonomous agents might think through and carry out complex courses of action before humans could notice that anything even happened. Indeed, Hancock notes that the very transition from contemporary AI to truly autonomous agents could occur in what humans will experience as “a single perceptual moment.” Later in his paper, Hancock broadens the incommensurability point: “. . . there is no necessary reason why autonomous cognition need resemble human cognition in any manner.”

These concerns are worth analyzing, criticizing, and planning for now. The peak predator concern is the most vivid and concretely problematic hypothetical in Hancock’s paper. However, to the extent that emerging autonomous agents become autonomous, they would not be mere implements of human conflict. What is their stake in human conflicts? Why would they participate at all? They might be peak predators in the low-level sense of lethal capability, but predators are motivated to be predators for extrinsic reasons. What could those reasons be for autonomous agents? If we grant that they would be driven by logic, we ought to be able to come up with concrete possibilities, future scenarios that we can identify and plan for.

There are, most definitely, reasons to question the human project of developing autonomous weapon systems, and, more specifically, of exploring possibilities for extremely powerful and autonomous weapon systems without a deep understanding of what that even means. Hancock cites Kahn’s (1962) scenario analysis of the “accidental war” that became the background plot for Dr. Strangelove, among other nuclear nightmare narratives of the Cold War. Even if we regard the peak predator scenario as more likely to be a challenging point of inflection for humans than
Source journal: Human-Computer Interaction (Engineering & Technology: Computer Science, Cybernetics)
CiteScore: 12.20 · Self-citation rate: 3.80% · Articles per year: 15 · Review time: >12 weeks
About the journal: Human-Computer Interaction (HCI) is a multidisciplinary journal defining and reporting on fundamental research in human-computer interaction. The goal of HCI is to be a journal of the highest quality that combines the best research and design work to extend our understanding of human-computer interaction. The target audience is the research community with an interest in both the scientific implications and practical relevance of how interactive computer systems should be designed and how they are actually used. HCI is concerned with the theoretical, empirical, and methodological issues of interaction science and system design as it affects the user.