
Latest Publications in Human-Computer Interaction

Commentary: Societal Reactions to Hopes and Threats of Autonomous Agent Actions: Reflections about Public Opinion and Technology Implementations
IF 5.3 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2021-11-24 · DOI: 10.1080/07370024.2021.1976642
Kimon Kieslich
In the paper Avoiding Adverse Autonomous Agent Actions, Hancock (2021) sketches the technological development of autonomous agents leading to a point in the (near) future where machines become truly independent agents. He further elaborates that this development comes with great promises, but also with serious, even existential, threats. Hancock concludes by highlighting the importance of preparing against problematic actions that autonomous agents might enact, and he suggests measures for humanity to take. I will not discuss if and when machine intelligence will exceed human intelligence. Instead, I will reflect on the societal challenges outlined in Hancock’s article. More specifically, I will address the role of public opinion as a factor in the implementation of autonomous agents into society. Here, public perception of potential strengths and opportunities may lead to exaggerated expectations, while public perception of potential weaknesses and threats may lead to overexceeded ...
Citations: 4
Automation and redistribution of work: the impact of social distancing on live TV production
IF 5.3 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2021-11-23 · DOI: 10.1080/07370024.2021.1984917
Pavel Okopnyi, Frode Guribye, V. Caruso, O. Juhlin
The TV industry has long been under pressure to adapt its workflows to use advanced Internet technologies. It also must face competition from social media, video blogs, and livestreaming platforms, which are enabled by lightweight production tools and new distribution channels. The social-distancing regulations introduced due to the COVID-19 pandemic added to the list of challenging adaptations. One of the remaining bastions of legacy TV production is the live broadcast of sporting events and news. These production practices rely on tight collaboration in small spaces, such as control rooms and outside broadcast vans. This paper focuses on current socio-technical changes, especially those changes and adaptations in collaborative practices and workflows in TV production. Some changes necessary during the pandemic may be imposed, temporary adjustments to the ongoing situation, but some might induce permanent changes in key work practices in TV production. Further, these imposed changes are aligned with already ongoing changes in the industry, which are now being accelerated. We characterize the changes along two main dimensions: redistribution of work and automation.
Citations: 1
Commentary: “Autonomous” agents? What should we worry about? What should we do?
IF 5.3 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2021-11-23 · DOI: 10.1080/07370024.2021.1977129
Loren Terveen
Citations: 0
Commentary: controlling the demon: autonomous agents and the urgent need for controls
IF 5.3 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2021-11-22 · DOI: 10.1080/07370024.2021.1977127
P. Salmon
In “Avoiding adverse autonomous agent actions,” Hancock (this issue) argues that controlling and exploiting autonomous systems represents one of the fundamental challenges of the 21st century. His parting shot is the disquieting and challenging observation that, with autonomous agents, we may be creating a new “peak predator” from which there will be no recovery of human control. The next generation of Artificial Intelligence (AI), Artificial General Intelligence (AGI), could see the idea of a new technological peak predator become reality. AGI will possess the capacity to learn, evolve, and modify its functional capabilities, and could quickly become intellectually superior to humans (Bostrom, 2014). Though estimates of when AGI will appear vary, the exact time of arrival is perhaps a moot point. What is more important, as Hancock alludes to, is that work is required immediately to ensure that the impact on humanity is positive rather than negative (Salmon et al., 2021). Should we take a reactive approach and only focus our efforts once AGI is created, it will already be too late (Bostrom, 2014). The first AGI system will quickly become uncontrollable.
Citations: 1
Commentary: Should humans look forward to autonomous others?
IF 5.3 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2021-11-17 · DOI: 10.1080/07370024.2021.1976639
John M. Carroll
Hancock (this issue) identifies potential adverse consequences of emerging autonomous agent technology. He is not talking about Roomba and Waymo, but about future systems that could be more capable, and potentially, far more autonomous. As is often the case, it is difficult to fit a precise timeline to future technologies, and, as Hancock argues, human inability to fathom or even to perceive the timeline for autonomous agents is a specific challenge in this regard (see below). One concern Hancock raises is that humans, in pursuing the development of autonomous agents, are ipso facto ceding control of life on Earth to a new “peak predator.” At least in part, he is led to the predator framing through recognizing that autonomous systems are developing in a context of human conflict, indeed, often as “implements of human conflict.” Hancock argues that the transition to a new peak predator, even if not a Skynet/Terminator catastrophe, is unlikely to be the path “ultimately most conducive to human welfare,” and that, having ceded the peak position, it is doubtful humans could ever again regain control. Hancock also raises a set of concerns regarding the incommensurability of fundamental aspects of experience for human and autonomous agents. For example, humans and emerging autonomous agents could experience time very differently. Hancock criticizes the expression “real time” as indicative of how humans uncritically privilege a conception of time and duration indexed closely to the parameters of human perception and cognition. However, emerging autonomous agents might think through and carry out complex courses of action before humans could notice that anything even happened. Indeed, Hancock notes that the very transition from contemporary AI to truly autonomous agents could occur in what humans will experience as “a single perceptual moment.” Later in his paper, Hancock broadens the incommensurability point: “ . . . 
there is no necessary reason why autonomous cognition need resemble human cognition in any manner.” These concerns are worth analyzing, criticizing, and planning for now. The peak predator concern is the most vivid and concretely problematic hypothetical in Hancock’s paper. However, to the extent that emerging autonomous agents become autonomous, they would not be mere implements of human conflict. What is their stake in human conflicts? Why would they participate at all? They might be peak predators in the low-level sense of lethal capability, but predators are motivated to be predators for extrinsic reasons. What could those reasons be for autonomous agents? If we grant that they would be driven by logic, we ought to be able to come up with concrete possibilities, future scenarios that we can identify and plan for. There are, most definitely, reasons to question the human project of developing autonomous weapon systems, and, more specifically, of exploring possibilities for extremely powerful and autonomous weapon systems without a deep understanding of their implications. Hancock cites Kahn’s (1962) scenario analysis of “accidental war,” which became the backstory of Dr. Strangelove and of other nuclear-nightmare narratives of the Cold War. Even if we think the peak predator scenario is more likely a challenging inflection point for humanity than ...
Citations: 0
Commentary: the intentions of washing machines
IF 5.3 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2021-11-16 · DOI: 10.1080/07370024.2021.1976640
Richard H. R. Harper
Hancock makes a range of claims, but the most important is this: if a machine ‘learns,’ then, eventually, it will become ‘self-aware.’ It is self-awareness, he argues, that will distinguish machines that are merely autonomous (i.e., which work without human intervention, of which there are many) from those which do something else, which become, in the things they do, like us. I cannot understand why one would think this move from learning to awareness would happen, but Hancock is convinced. One might add that it is not his discipline that leads to this view – there is no human factors research that asserts or demonstrates that self-awareness emerges through learning, for example; or at least none that I am aware of. Certainly, Hancock does not cite any. On the contrary, it seems that Hancock takes this idea from the AI community, though as it happens it is an argument that coat-tails on similar notions put forward by cognitive scientists. Some philosophers argue the same, too, such as Dennett (for the view from AI and computer science, see Russell, 2019; for the view of cognitive science, see Tallis, 2011; for a review of the philosophy, see Harper et al., 2016). Be that as it may, let me focus on this claim and ask what ‘self-awareness’ might mean or how it might be measured. It seems to me that this is a question to do with anthropology. Hence, one way of approaching this is through imagining how people would act when self-awareness is at issue (Pihlström, 2003, pp. 259–286). Or, put another way, one can approach it by asking what someone might mean when they say they are ‘self-aware’. One might ask, too, why would they say it? I think they do so if they are ‘conscious’ of such things as their intentions. ‘I am about to do this,’ they say when they are wanting some advice on that course of action. Intentions are a measure of self-awareness.
So, is Hancock saying that autonomous machines would be conscious of their intentions, and would that mean, too, that they would treat these intentions as accountable matters? Would that mean, say, that washing machines could have intentions of various kinds? And more, would it mean that these emerge from the learning that washing machines engage in? There are a number of thoughts that arise given this anthropological ‘vignette’ of washing machines and their intentions. How would these intentions be shown? Would these machines need to speak? Besides, when would these machines have these intentions? At what point during learning would they arise? After they have been working a while? One might presuppose some answers here – a machine might ‘speak’ (if that is its mode of accountability) only once it is switched on. Moreover, one imagines a washing machine would not have any intentions when it was being assembled, nor would it have any when it was being disassembled either (as it happens, Hancock refers to similar matters when he reminds the reader of one of his many phrases in earlier human factors articles: this time the phrase ‘islands of autonomy.’ This alludes to the fact that current machines are autonomous only at particular moments in their lives, since elsewhere in their lives they are subject to human control and management. So, here: a washing machine might only have intentions once it has been manufactured and switched on).
Citations: 1
Exploring Anima: a brain–computer interface for peripheral materialization of mindfulness states during mandala coloring
IF 5.3 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, CYBERNETICS · Pub Date: 2021-11-16 · DOI: 10.1080/07370024.2021.1968864
Claudia Daudén Roquet, C. Sas, Dominic Potts
I could feel my mind buzzing after another long day at work. Driving home, I am looking forward to my “me time” ritual of playing with colors. As I arrive, I get myself comfortable, pick up an orange crayon, and start coloring a mandala with beautiful lace-like details. For that, I have to fully concentrate, and my attention is focused on the unfolding present experience of slowly and mindfully filling in the mandala with color. Once I have filled in all the little spaces from the central layer, I pick up a green crayon and color the next layer. When I make mistakes, it is usually because I am not paying attention. I now tend to accept and work my way around them. Before I know it, my mandala is complete, and my buzzing mind has calmed down. I can even pinpoint some subtle feelings unreachable when I started, wondering also how I could do better next time. By looking at the colored mandala, I can see from my mistakes when I was less mindful and lost focus. I also know that there were other moments of lost focus, albeit I cannot see them in my mandala. Maybe because these happened while coloring larger areas, where mistakes are easier to avoid even without concentration. This scenario, inspired by our study findings, illustrates the richness of mandala coloring as an illustration of a focused attention mindfulness (FAM) practice. It shows the importance of intention, attention, and non-judgmental acceptance, with an invitation to explore how the materialization of mindfulness states onto colors may provide value to this practice. While acknowledging the complexity of mindfulness constructs (Hart et al., 2013), for the purpose of our work we adopt the working definition of mindfulness as “the awareness that emerges through paying attention on purpose, in the present moment, and non-judgmentally to the unfolding of experience moment by moment” (p. 145) (Kabat-Zinn, 2009).
Nevertheless, consistent findings in the literature indicate that the skills required to sustain and regulate attention are challenging to develop (Kerr et al., 2013; Sas & Chopra, 2015). Mindfulness practices have been broadly categorized under focused attention – involving sustained attention on an intended object – and open monitoring – with broader attentional focus, hence no explicit object of attention (Lutz et al., 2008). While FAM targets the focus and maintenance of attention by narrowing it to a selected stimulus despite competing others and, when attention is lost, disengaging from these distracting stimuli to redirect it back to the selected one, open monitoring, rather than narrowing attention, involves broadening its focus through a receptive and non-judgmental stance toward moment-to-moment internal salient stimuli such as difficult thoughts and emotions (Britton, 2018). FAM is typically the starting point for novice meditators, with the main object of attention being either internal, e.g., focus on the breathing in sitting meditation (Prpa et al., 2018; Vidyarthi et al., 2012), or bodily movement in walking meditation (Chen et al., 2015) or Tai Chi ...
{"title":"Exploring Anima: a brain–computer interface for peripheral materialization of mindfulness states during mandala coloring","authors":"Claudia Daudén Roquet, C. Sas, Dominic Potts","doi":"10.1080/07370024.2021.1968864","DOIUrl":"https://doi.org/10.1080/07370024.2021.1968864","url":null,"abstract":"I could feel my mind buzzing after another long day at work. Driving home, I am looking forward to my “me time” ritual of playing with colors. As I arrive, I get myself comfortable, pick up an orange crayon, and start coloring a mandala with beautiful lace-like details. For that, I have to fully concentrate, and my attention is focused on the unfolding present experience of slowly and mindfully filling in the mandala with color. Once I filled in all the little spaces from the central layer, I pick up a green crayon and color the next layer. When I make mistakes is usually because I am not paying attention. I now tend to accept and work my way around them. Before I know it, my mandala is complete, and my buzzing mind has calmed down. I can even pinpoint some subtle feelings unreachable when I started, wondering also how I could do better next time. By looking at the colored mandala, I can see from my mistakes when I was less mindful and lost focus. I also know that there were other moments of lost focus, albeit I cannot see them in my mandala. Maybe because these happened while coloring larger areas, and then mistakes are easier to avoid even without concentration. This scenario inspired by our study findings illustrates the richness of mandala coloring as an illustration of a focused attention mindfulness (FAM) practice. It shows the importance of intention, attention, and non-judgmental acceptance, with an invitation to explore how the materialization of mindfulness states onto colors may provide value to this practice. 
While acknowledging the complexity of mindfulness constructs (Hart et al., 2013), for the purpose of our work we adopt the working definition of mindfulness as “the awareness that emerges through paying attention on purpose, in the present moment, and non-judgmentally to the unfolding of experience moment by moment” [pp. 145] (Kabat-Zinn, 2009). Nevertheless, consistent findings in the literature indicate that the skills required to sustain and regulate attention are challenging to develop (Kerr et al., 2013; Sas & Chopra, 2015). Mindfulness practices have been broadly categorized under focused attention – involving sustained attention on an intended object, and open monitoring – with broader attentional focus, hence no explicit object of attention (Lutz et al., 2008). While FAM targets the focus and maintenance of attention by narrowing it to a selected stimulus despite competing others and, when attention is lost, disengaging from these distracting stimuli to redirect it back to the selected one, rather than narrowing it, open monitoring involves broadening the focus of attention through a receptive and non-judgmental stance toward moment-to-moment internal salient stimuli such as difficult thoughts and emotions (Britton, 2018). 
FAM is typically the starting point for novice meditators, with the main object of attention being either internal (e.g., focus on the breathing in sitting meditation (Prpa et al., 2018; Vidyarthi et al","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"44 1","pages":"259 - 299"},"PeriodicalIF":5.3,"publicationDate":"2021-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80168910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Existential time and historicity in interaction design
IF 5.3 Tier 2 Engineering & Technology Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date: 2021-11-16 DOI: 10.1080/07370024.2021.1912607
F. V. Amstel, R. Gonzatto
Time is considered a defining factor for interaction design (Kolko, 2011; Löwgren, 2002; Malouf, 2007; Mazé, 2007; Smith, 2007), yet little is known about its history in this field. The history of time is non-linear and uneven, and is understood as part of each society's cultural development (Friedman, 1990; Souza, 2016). As experienced by humans, time is socially constructed using the concepts, measurement devices, and technology available in a specific culture. Since each human culture produces its own history, there are also multiple courses of time. The absolute, chronological, and standardized clock time is just one of them, yet one often imposed on other cultures through colonialism, imperialism, globalization, and other international relationships (Nanni, 2017; Rifkin, 2017). Digital technology is vital for this imposition, and interaction design bears responsibility for it. As everyday life becomes increasingly mediated by digital technologies, their rhythms (Lefebvre, 2004) are formalized, structured, or replaced by algorithms that structure everyday life rhythms (a.k.a. algorhythms) while offering little accountability and local autonomy (Finn, 2019; Firmino et al., 2018; Miyazaki, 2013; Pagallo, 2018). These algo-rhythms enforce absolute time over other courses of time as a means to instill modern values like progress, efficiency, and profit-making. Despite their appearance of universality, these values have a local origin. They come from developed nations, where modernity and, more recently, neoliberalism were invented and dispatched to the rest of the world, as if they were the only viable modes of collective existence (Berardi, 2017; Harvey, 2007). Interaction design contributes to this dispatch by embedding, and hiding, modern and neoliberal values and modes of existence in digital technology's temporal form (Bidwell et al., 2013; Lindley, 2015, 2018; Mazé, 2007).
In the last 15 years, critical and speculative design research has questioned absolute time in interaction design (Huybrechts et al., 2017; Mazé, 2019; Nooney & Brain, 2019; Prado de O. Martins & Vieira de Oliveira, 2016). This research stream made the case that time can also be designed in relative terms: given a certain present, what are the possible pasts and futures? Looking at alternative futures (Bardzell, 2018; Coulton et al., 2016; Duggan et al., 2017; Linehan et al., 2014; Tanenbaum et al., 2016) or alternative pasts (Coulton & Lindley, 2017; Eriksson & Pargman, 2018; Huybrechts et al., 2017) enables realizing alternative presents and alternative designs (Auger, 2013; Coulton et al., 2016; Dunne & Raby, 2013). These alternatives often include deviations from the (apparently) inevitable single-story future shaped by the digital technologies envisioned by big tech companies. The deviation expands the design space, i.e., the scenarios considered in a design project (Van Amstel et al., 2016; Van Amstel & Garde, 2016), to every kind of social activity, even the noncommercial. Dystopian "what if" scenarios reveal undesirable modern futures that certain publics would object to (Dunne & Raby, 2013), while utopian "how might we" scenarios generate desirable local futures that communities may commit to (Baumann et al., 2017; DiSalvo, 2014). Every community has a different conception of time and needs different ways of representing time,
Human-Computer Interaction, pp. 29-68.
Citations: 4
Avoiding adverse autonomous agent actions
IF 5.3 Tier 2 Engineering & Technology Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date: 2021-11-16 DOI: 10.1080/07370024.2021.1970556
P. Hancock
Few today would dispute that the age of autonomous machines is nearly upon us (cf. Kurzweil, 2005; Moravec, 1988), if it is not already. While it is doubtful that one can identify any fully autonomous machine system at this point in time, especially one that is openly and publicly acknowledged to be so, it is far less debatable that our present line of technological evolution is leading toward this eventuality (Endsley, 2017; Hancock, 2017a). It is the specter of the consequences of adverse, even existentially threatening, events emanating from these penetrative autonomous systems that is the focus of the present work. The impending and imperative question is what we intend to do about these prospective challenges. As with essentially all human discourse, we can imagine two sides to this question. One side is represented by an optimistic vision of a near-utopian future, underwritten by AI support and some inherent degree of intrinsic benevolence. The opposing vision promulgates a dystopian nightmare in which machines have gained almost total ascendancy and only a few "plucky" humans remain. The latter is especially a featured trope of the human heroic narrative (Campbell, 1949). Most probably, neither of the extremes on this putative spectrum of possibilities will represent the eventual reality that we actually experience. However, the ground rules are now being set that will predispose us toward one of these directions over the other (Feng et al., 2016; Hancock, 2017a). Traditionally, many have approached this general form of technological inquiry by asking questions about strengths, weaknesses, threats, and opportunities. Consequently, it is within this general framework that the present work is offered. What follows are some overall considerations of the balance of the value of such autonomous systems' inauguration and penetration.
These observations provide the bedrock from which to consider the specific strengths, weaknesses, threats (risks), and promises (opportunities). The specific consideration of the application of the protective strategies of the well-known hierarchy of controls (Haddon, 1973) then acts as a final prefatory consideration to the concluding discussion, which examines the adverse actions of autonomous technological systems as a potential existential threat to humanity. The term autonomy is one that has been, and still is, the subject of much attention, debate, and even abuse (see Ezenkwu & Starkey, 2019). To an extent, the term seems to be flexible enough to encompass almost whatever the proximal user requires of it. For example, a simple, descriptive word cloud (Figure 1) illustrates the various terminologies that surround our present use of this focal term. It is not the present purpose here to engage in a long, polemical, and potentially unedifying dispute specifically about the term's definition. This is because the present concern is with autonomous technological systems rather than with the larger meaning of autonomy itself, whether as a property or as a process. The definition adopted here is: "Autonomous systems are generative and can learn, evolve, and permanently change their functional capacities as a result of the input of operational and contextual information." Their actions necessarily become more
Human-Computer Interaction, pp. 211-236.
Citations: 16
Commentary: extraordinary excitement empowering enhancing everyone
IF 5.3 Tier 2 Engineering & Technology Q1 COMPUTER SCIENCE, CYBERNETICS Pub Date: 2021-10-05 DOI: 10.1080/07370024.2021.1977128
B. Shneiderman
I eagerly support Peter Hancock’s desire to avoid adverse autonomous agent actions, but I think that he should change from his negative and pessimistic view to a more constructive stance about how ...
Human-Computer Interaction, pp. 243-245.
Citations: 2