{"title":"Avoiding adverse autonomous agent actions","authors":"P. Hancock","doi":"10.1080/07370024.2021.1970556","DOIUrl":null,"url":null,"abstract":"Few today would dispute that the age of autonomous machines is nearly upon us (cf., Kurzweil, 2005; Moravec, 1988), if it is not already. While it is doubtful that one can identify any fully autonomous machine system at this point in time, especially one that is openly and publicly acknowledged to be so, it is far less debatable that our present line of technological evolution is leading toward this eventuality (Endsley, 2017; Hancock, 2017a). It is this specter of the consequences of even existentially threatening adverse events, emanating from these penetrative autonomous systems, which is the focus of the present work. The impending and imperative question is what we intend to do about these prospective challenges? As with essentially all of human discourse, we can imagine two sides to this question. One side is represented by an optimistic vision of a near utopian future, underwritten by AI-support and some inherent degree of intrinsic benevolence. The opposing vision promulgates a dystopian nightmare in which machines have gained almost total ascendency and only a few “plucky” humans remain. The latter is most especially a featured trope of the human heroic narrative (Campbell, 1949). It will be most probably the case that neither of the extremes on this putative spectrum of possibilities will represent the eventual reality that we will actually experience. However, the ground rules are now in the process of being set which will predispose us toward one of these directions over the other (Feng et al., 2016; Hancock, 2017a). Traditionally, many have approached this general form of technological inquiry by asking questions about strengths, weaknesses, threats, and opportunities. Consequently, it is within this general framework that this present work is offered. What follows are some overall considerations of the balance of the value of such autonomous systems’ inauguration and penetration. These observations provide the bedrock from which to consider the specific strengths, weaknesses, threats (risks), and promises (opportunity) dimensions. The specific consideration of the application of the protective strategies of the well-known hierarchy of controls (Haddon, 1973) then acts as a final prefatory consideration to the concluding discussion which examines the adverse actions of autonomous technological systems as a potential human existential threat. The term autonomy is one that has been, and still currently is, the subject of much attention, debate, and even abuse (and see Ezenkwu & Starkey, 2019). To an extent, the term seems to be flexible enough to encompass almost whatever the proximal user requires of it. For example, a simple, descriptive word-cloud (Figure 1), illustrates the various terminologies that surrounds our present use of this focal term. It is not the present purpose here to engage in a long, polemic and potentially unedifying dispute specifically about the term’s definition. This is because the present concern is with autonomous technological systems, and not about the greater meaning of autonomy per se, either as a property or as a process. The definition which is adopted here is that: “autonomous systems are generative and learn, evolve, and permanently change their functional capacities as a result of the input of operational and contextual information. 
Their actions necessarily become more","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"29 1","pages":"211 - 236"},"PeriodicalIF":4.5000,"publicationDate":"2021-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human-Computer Interaction","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1080/07370024.2021.1970556","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
引用次数: 16
Abstract
Few today would dispute that the age of autonomous machines is nearly upon us (cf. Kurzweil, 2005; Moravec, 1988), if it is not already. While it is doubtful that one can identify any fully autonomous machine system at this point in time, especially one that is openly and publicly acknowledged to be so, it is far less debatable that our present line of technological evolution is leading toward this eventuality (Endsley, 2017; Hancock, 2017a). It is the specter of adverse, even existentially threatening, events emanating from these penetrative autonomous systems that is the focus of the present work. The impending and imperative question is what we intend to do about these prospective challenges. As with essentially all human discourse, we can imagine two sides to this question. One side is represented by an optimistic vision of a near-utopian future, underwritten by AI support and some degree of intrinsic benevolence. The opposing vision promulgates a dystopian nightmare in which machines have gained almost total ascendancy and only a few “plucky” humans remain. The latter, especially, is a featured trope of the human heroic narrative (Campbell, 1949). Most probably, neither of the extremes on this putative spectrum of possibilities will represent the eventual reality that we actually experience. However, the ground rules that will predispose us toward one of these directions over the other are now being set (Feng et al., 2016; Hancock, 2017a).

Traditionally, many have approached this general form of technological inquiry by asking questions about strengths, weaknesses, threats, and opportunities, and it is within this general framework that the present work is offered. What follows are some overall considerations of the balance of value attending such autonomous systems’ inauguration and penetration. These observations provide the bedrock from which to consider the specific dimensions of strengths, weaknesses, threats (risks), and promises (opportunities). Consideration of the protective strategies of the well-known hierarchy of controls (Haddon, 1973) then serves as a final preface to the concluding discussion, which examines the adverse actions of autonomous technological systems as a potential existential threat to humanity.

The term autonomy has been, and still is, the subject of much attention, debate, and even abuse (see Ezenkwu & Starkey, 2019). To an extent, the term seems flexible enough to encompass almost whatever the proximal user requires of it. For example, a simple descriptive word cloud (Figure 1) illustrates the various terminologies that surround our present use of this focal term. It is not the purpose here to engage in a long, polemical, and potentially unedifying dispute about the term’s definition, because the present concern is with autonomous technological systems, not with the greater meaning of autonomy per se, either as a property or as a process. The definition adopted here is that “autonomous systems are generative and learn, evolve, and permanently change their functional capacities as a result of the input of operational and contextual information. Their actions necessarily become more …”
About the journal:
Human-Computer Interaction (HCI) is a multidisciplinary journal defining and reporting on fundamental research in human-computer interaction. The goal of HCI is to be a journal of the highest quality that combines the best research and design work to extend our understanding of human-computer interaction. The target audience is the research community with an interest in both the scientific implications and practical relevance of how interactive computer systems should be designed and how they are actually used. HCI is concerned with the theoretical, empirical, and methodological issues of interaction science and system design as they affect the user.