The social production of technological autonomy

Human-Computer Interaction · IF 4.5 · Tier 2 (Engineering & Technology) · JCR Q1 (Computer Science, Cybernetics) · Pub Date: 2022-01-31 · DOI: 10.1080/07370024.2021.1976641
V. Kaptelinin
{"title":"技术自主的社会生产","authors":"V. Kaptelinin","doi":"10.1080/07370024.2021.1976641","DOIUrl":null,"url":null,"abstract":"The discussion of potential dangers, brought about by intelligent machines, can be traced back at least to Wiener (1960). However, it has never been more needed than it is now. Current technological developments make these dangers increasingly concrete and real, and so the paper by Hancock (this volume) is particularly timely. By systematically presenting and analyzing some of the key issues, problems, and approaches in the current discourse on autonomous agents, the paper does a valuable job in further engaging the HCI research community in the discourse. A key strength of the paper, in my view, is that it is apparently designed to invite comments, disagreements, and alternative perspectives. In this commentary, I reflect on a central theme in Hancock’s analysis, namely, the emergence of agents’ own intentions as a (presumably inevitable) result of the ongoing progress in artificial intelligence (AI). This is one of the most fascinating issues in the entire field of AI. The theme has not only become an object of academic debates, but also made a massive impact on popular culture (as exemplified, for instance, by movies and TV series, such as Blade Runner or Westworld). The question at the heart of the issue is: How and why can an AI system be transformed from a piece of human-controlled technology with constrained autonomy (limited to deciding how to perform the task assigned to it) to a fully autonomous agent, acting on its own intentions? Current attempts to envision a future, in which fully autonomous AI systems become a reality, often gloss over the specific causes and mechanisms of such a transformation. In some cases, e.g., in “slave uprising” scenarios, is it implied that the transformation may happen because designers, when trying to create systems that are as similar to humans as possible, fall victims, often literally, to their own success. At the most basic level, the underlying assumption appears to be that increasingly more advanced cognitive capabilities of a technology – even if they are only used when acting on someone or something else’s intentions – eventually lead to the development of self-awareness, which, in turn, gives rise to full autonomy. Hancock outlines a particular perspective on how agents’ full autonomy can be expected to develop. According to this perspective, dubbed “isles of autonomy,” the path to full autonomy starts with the emergence of isolated technologies having constrained autonomy, such as autonomous vehicles or autopilots. Each of these isles, when young and unstable, is initially surrounded and supported by human attendants, who take care of them (similarly to taking care of “prematurely born neonates”). Over time, the isles grow and eventually merge into a fully autonomous system. This perspective, even if rather metaphorical, potentially provides useful guidance for thinking about autonomous agents. However, the perspective does not clarify why and how exactly a constrained autonomy transforms into a full autonomy over the course of the described development. Arguably, the entire development may, in principle, take place without ever progressing to full autonomy. 
First, when an isle expands and the technology in question becomes less dependent on human support and maintenance, the autonomy of that technology does not necessarily become less constrained, because its tasks may still be assigned to it by someone or something else. For instance,","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"21 1","pages":"256 - 258"},"PeriodicalIF":4.5000,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"The social production of technological autonomy\",\"authors\":\"V. Kaptelinin\",\"doi\":\"10.1080/07370024.2021.1976641\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The discussion of potential dangers, brought about by intelligent machines, can be traced back at least to Wiener (1960). However, it has never been more needed than it is now. Current technological developments make these dangers increasingly concrete and real, and so the paper by Hancock (this volume) is particularly timely. By systematically presenting and analyzing some of the key issues, problems, and approaches in the current discourse on autonomous agents, the paper does a valuable job in further engaging the HCI research community in the discourse. A key strength of the paper, in my view, is that it is apparently designed to invite comments, disagreements, and alternative perspectives. In this commentary, I reflect on a central theme in Hancock’s analysis, namely, the emergence of agents’ own intentions as a (presumably inevitable) result of the ongoing progress in artificial intelligence (AI). This is one of the most fascinating issues in the entire field of AI. The theme has not only become an object of academic debates, but also made a massive impact on popular culture (as exemplified, for instance, by movies and TV series, such as Blade Runner or Westworld). The question at the heart of the issue is: How and why can an AI system be transformed from a piece of human-controlled technology with constrained autonomy (limited to deciding how to perform the task assigned to it) to a fully autonomous agent, acting on its own intentions? Current attempts to envision a future, in which fully autonomous AI systems become a reality, often gloss over the specific causes and mechanisms of such a transformation. In some cases, e.g., in “slave uprising” scenarios, is it implied that the transformation may happen because designers, when trying to create systems that are as similar to humans as possible, fall victims, often literally, to their own success. At the most basic level, the underlying assumption appears to be that increasingly more advanced cognitive capabilities of a technology – even if they are only used when acting on someone or something else’s intentions – eventually lead to the development of self-awareness, which, in turn, gives rise to full autonomy. Hancock outlines a particular perspective on how agents’ full autonomy can be expected to develop. According to this perspective, dubbed “isles of autonomy,” the path to full autonomy starts with the emergence of isolated technologies having constrained autonomy, such as autonomous vehicles or autopilots. Each of these isles, when young and unstable, is initially surrounded and supported by human attendants, who take care of them (similarly to taking care of “prematurely born neonates”). Over time, the isles grow and eventually merge into a fully autonomous system. 
This perspective, even if rather metaphorical, potentially provides useful guidance for thinking about autonomous agents. However, the perspective does not clarify why and how exactly a constrained autonomy transforms into a full autonomy over the course of the described development. Arguably, the entire development may, in principle, take place without ever progressing to full autonomy. First, when an isle expands and the technology in question becomes less dependent on human support and maintenance, the autonomy of that technology does not necessarily become less constrained, because its tasks may still be assigned to it by someone or something else. For instance,\",\"PeriodicalId\":56306,\"journal\":{\"name\":\"Human-Computer Interaction\",\"volume\":\"21 1\",\"pages\":\"256 - 258\"},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2022-01-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Human-Computer Interaction\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1080/07370024.2021.1976641\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human-Computer Interaction","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1080/07370024.2021.1976641","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Citations: 1

Abstract

The discussion of potential dangers brought about by intelligent machines can be traced back at least to Wiener (1960). However, it has never been more needed than it is now. Current technological developments make these dangers increasingly concrete and real, and so the paper by Hancock (this volume) is particularly timely. By systematically presenting and analyzing some of the key issues, problems, and approaches in the current discourse on autonomous agents, the paper does a valuable job in further engaging the HCI research community in the discourse. A key strength of the paper, in my view, is that it is apparently designed to invite comments, disagreements, and alternative perspectives. In this commentary, I reflect on a central theme in Hancock’s analysis, namely, the emergence of agents’ own intentions as a (presumably inevitable) result of the ongoing progress in artificial intelligence (AI). This is one of the most fascinating issues in the entire field of AI. The theme has not only become an object of academic debates, but also made a massive impact on popular culture (as exemplified, for instance, by movies and TV series such as Blade Runner or Westworld). The question at the heart of the issue is: How and why can an AI system be transformed from a piece of human-controlled technology with constrained autonomy (limited to deciding how to perform the task assigned to it) to a fully autonomous agent, acting on its own intentions? Current attempts to envision a future in which fully autonomous AI systems become a reality often gloss over the specific causes and mechanisms of such a transformation. In some cases, e.g., in “slave uprising” scenarios, it is implied that the transformation may happen because designers, when trying to create systems that are as similar to humans as possible, fall victim, often literally, to their own success. At the most basic level, the underlying assumption appears to be that increasingly advanced cognitive capabilities of a technology – even if they are only used when acting on someone or something else’s intentions – eventually lead to the development of self-awareness, which, in turn, gives rise to full autonomy.

Hancock outlines a particular perspective on how agents’ full autonomy can be expected to develop. According to this perspective, dubbed “isles of autonomy,” the path to full autonomy starts with the emergence of isolated technologies having constrained autonomy, such as autonomous vehicles or autopilots. Each of these isles, when young and unstable, is initially surrounded and supported by human attendants, who take care of them (similarly to taking care of “prematurely born neonates”). Over time, the isles grow and eventually merge into a fully autonomous system. This perspective, even if rather metaphorical, potentially provides useful guidance for thinking about autonomous agents. However, the perspective does not clarify why and how exactly constrained autonomy transforms into full autonomy over the course of the described development. Arguably, the entire development may, in principle, take place without ever progressing to full autonomy.

First, when an isle expands and the technology in question becomes less dependent on human support and maintenance, the autonomy of that technology does not necessarily become less constrained, because its tasks may still be assigned to it by someone or something else. For instance,
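The structural difference at stake here – between an agent that only decides how to carry out an externally assigned task and one that also generates the task itself – can be made concrete with a small sketch. The following Python toy model is purely illustrative and is not drawn from the commentary; all class and method names (ConstrainedAgent, FullyAutonomousAgent, form_intention) are hypothetical:

```python
# Purely illustrative toy model (hypothetical names, not from the commentary):
# constrained autonomy = deciding *how* to perform an externally assigned task;
# full autonomy = also deciding *what* task to pursue in the first place.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ConstrainedAgent:
    """Decides how to act, but the goal is always assigned from outside."""
    plan: Callable[[str], List[str]]  # the agent's own competence: task decomposition

    def run(self, assigned_task: str) -> List[str]:
        # The agent never chooses its goal; it only works out the means.
        return self.plan(assigned_task)


@dataclass
class FullyAutonomousAgent(ConstrainedAgent):
    """Additionally generates its own goals - the step the metaphor leaves open."""
    form_intention: Callable[[], str] = lambda: "self-chosen goal"

    def act(self) -> List[str]:
        # Improving plan() makes the agent more capable, but nothing about that
        # improvement explains where an implementation of form_intention()
        # would come from.
        return self.run(self.form_intention())


if __name__ == "__main__":
    decompose = lambda task: [f"step 1 of {task}", f"step 2 of {task}"]

    constrained = ConstrainedAgent(plan=decompose)
    print(constrained.run("deliver cargo to depot"))  # goal supplied externally

    autonomous = FullyAutonomousAgent(plan=decompose)
    print(autonomous.act())  # goal generated internally
```

In these terms, the argument in the abstract is that growth along the plan axis – ever better competence at externally assigned tasks – does not by itself supply the form_intention step, which is why an isle of constrained autonomy can expand indefinitely without becoming fully autonomous.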
Source journal

Human-Computer Interaction (Engineering & Technology – Computer Science: Cybernetics)

CiteScore: 12.20
Self-citation rate: 3.80%
Annual articles: 15
Review time: >12 weeks
About the journal: Human-Computer Interaction (HCI) is a multidisciplinary journal defining and reporting on fundamental research in human-computer interaction. The goal of HCI is to be a journal of the highest quality that combines the best research and design work to extend our understanding of human-computer interaction. The target audience is the research community with an interest in both the scientific implications and practical relevance of how interactive computer systems should be designed and how they are actually used. HCI is concerned with the theoretical, empirical, and methodological issues of interaction science and system design as it affects the user.
Latest articles in this journal

File hyper-searching explained
Social fidelity in cooperative virtual reality maritime training
The future of PIM: pragmatics and potential
Clarifying and differentiating discoverability
Design and evaluation of a versatile text input device for virtual and immersive workspaces