{"title":"Commentary: Should humans look forward to autonomous others?","authors":"John M. Carroll","doi":"10.1080/07370024.2021.1976639","DOIUrl":null,"url":null,"abstract":"Hancock (this issue) identifies potential adverse consequences of emerging autonomous agent technology. He is not talking about Roomba and Waymo, but about future systems that could be more capable, and potentially, far more autonomous. As is often the case, it is difficult to fit a precise timeline to future technologies, and, as Hancock argues, human inability to fathom or even to perceive the timeline for autonomous agents is a specific challenge in this regard (see below). One concern Hancock raises is that humans, in pursuing the development of autonomous agents, are ipso facto ceding control of life on Earth to a new “peak predator.” At least in part, he is led to the predator framing through recognizing that autonomous systems are developing in a context of human conflict, indeed, often as “implements of human conflict.” Hancock argues that the transition to a new peak predator, even if not a Skynet/Terminator catastrophe, is unlikely to be the path “ultimately most conducive to human welfare,” and that, having ceded the peak position, it is doubtful humans could ever again regain control. Hancock also raises a set of concerns regarding the incommensurability of fundamental aspects of experience for human and autonomous agents. For example, humans and emerging autonomous agents could experience time very differently. Hancock criticizes the expression “real time” as indicative of how humans uncritically privilege a conception of time and duration indexed closely to the parameters of human perception and cognition. However, emerging autonomous agents might think through and carry out complex courses of action before humans could notice that anything even happened. Indeed, Hancock notes that the very transition from contemporary AI to truly autonomous agents could occur in what humans will experience as “a single perceptual moment.” Later in his paper, Hancock broadens the incommensurability point: “ . . . there is no necessary reason why autonomous cognition need resemble human cognition in any manner.” These concerns are worth analyzing, criticizing, and planning for now. The peak predator concern is the most vivid and concretely problematic hypothetical in Hancock’s paper. However, to the extent that emerging autonomous agents become autonomous, they would not be mere implements of human conflict. What is their stake in human conflicts? Why would they participate at all? They might be peak predators in the low-level sense of lethal capability, but predators are motivated to be predators for extrinsic reasons. What could those reasons be for autonomous agents? If we grant that they would be driven by logic, we ought to be able to come up with concrete possibilities, future scenarios that we can identify and plan for. There are, most definitely, reasons to question the human project of developing autonomous weapon systems, and, more specifically, of exploring possibilities for extremely powerful and autonomous weapon systems without a deep understanding of what that even means. Hancock cites Kahn’s (1962) scenario analysis of the “accidental war” that became the background plot for Dr. Strangelove, among other nuclear nightmare narratives of the Cold War. 
Even if we regard the peak predator scenario as more likely to be a challenging point of inflection for humans than","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"37 1","pages":"251 - 253"},"PeriodicalIF":4.5000,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human-Computer Interaction","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1080/07370024.2021.1976639","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Citations: 0
Abstract
Hancock (this issue) identifies potential adverse consequences of emerging autonomous agent technology. He is not talking about Roomba and Waymo, but about future systems that could be more capable and, potentially, far more autonomous. As is often the case, it is difficult to fit a precise timeline to future technologies, and, as Hancock argues, human inability to fathom or even to perceive the timeline for autonomous agents is a specific challenge in this regard (see below). One concern Hancock raises is that humans, in pursuing the development of autonomous agents, are ipso facto ceding control of life on Earth to a new “peak predator.” At least in part, he is led to the predator framing through recognizing that autonomous systems are developing in a context of human conflict, indeed, often as “implements of human conflict.” Hancock argues that the transition to a new peak predator, even if not a Skynet/Terminator catastrophe, is unlikely to be the path “ultimately most conducive to human welfare,” and that, having ceded the peak position, it is doubtful humans could ever again regain control.

Hancock also raises a set of concerns regarding the incommensurability of fundamental aspects of experience for human and autonomous agents. For example, humans and emerging autonomous agents could experience time very differently. Hancock criticizes the expression “real time” as indicative of how humans uncritically privilege a conception of time and duration indexed closely to the parameters of human perception and cognition. Emerging autonomous agents, however, might think through and carry out complex courses of action before humans could notice that anything had even happened. Indeed, Hancock notes that the very transition from contemporary AI to truly autonomous agents could occur in what humans will experience as “a single perceptual moment.” Later in his paper, Hancock broadens the incommensurability point: “... there is no necessary reason why autonomous cognition need resemble human cognition in any manner.” These concerns are worth analyzing, criticizing, and planning for now.

The peak predator concern is the most vivid and concretely problematic hypothetical in Hancock’s paper. However, to the extent that emerging autonomous agents become autonomous, they would not be mere implements of human conflict. What is their stake in human conflicts? Why would they participate at all? They might be peak predators in the low-level sense of lethal capability, but predators are motivated to be predators for extrinsic reasons. What could those reasons be for autonomous agents? If we grant that they would be driven by logic, we ought to be able to come up with concrete possibilities, future scenarios that we can identify and plan for.

There are, most definitely, reasons to question the human project of developing autonomous weapon systems and, more specifically, of exploring possibilities for extremely powerful and autonomous weapon systems without a deep understanding of what that even means. Hancock cites Kahn’s (1962) scenario analysis of the “accidental war” that became the background plot for Dr. Strangelove, among other nuclear nightmare narratives of the Cold War. Even if we regard the peak predator scenario as more likely to be a challenging point of inflection for humans than
About the journal:
Human-Computer Interaction (HCI) is a multidisciplinary journal defining and reporting on fundamental research in human-computer interaction. The goal of HCI is to be a journal of the highest quality that combines the best research and design work to extend our understanding of human-computer interaction. The target audience is the research community with an interest in both the scientific implications and practical relevance of how interactive computer systems should be designed and how they are actually used. HCI is concerned with the theoretical, empirical, and methodological issues of interaction science and system design as it affects the user.