Commentary: Societal Reactions to Hopes and Threats of Autonomous Agent Actions: Reflections about Public Opinion and Technology Implementations
Kimon Kieslich
Pub Date: 2021-11-24 | DOI: 10.1080/07370024.2021.1976642 | Human-Computer Interaction, pp. 259–262

In the paper Avoiding Adverse Autonomous Agent Actions, Hancock (2021) sketches the technological development of autonomous agents leading to a point in the (near) future where machines become truly independent agents. He further elaborates that this development comes with great promises, but also with serious, even existential, threats. Hancock concludes by highlighting the importance of preparing against problematic actions that autonomous agents might enact, and suggests measures for humanity to take. In this commentary, I will not dwell on the question of whether and when machine intelligence will exceed human intelligence. Instead, I will reflect on the societal challenges outlined in Hancock's article. More specifically, I will address the role of public opinion as a factor in the implementation of autonomous agents into society. Public perception of potential strengths and opportunities may lead to exaggerated expectations, while public perception of potential weaknesses and threats may lead to overexceeded …
{"title":"Commentary: Societal Reactions to Hopes and Threats of Autonomous Agent Actions: Reflections about Public Opinion and Technology Implementations","authors":"Kimon Kieslich","doi":"10.1080/07370024.2021.1976642","DOIUrl":"https://doi.org/10.1080/07370024.2021.1976642","url":null,"abstract":"In the paper Avoiding Adverse Autonomous Agent Actions , Hancock (2021) sketches the technolo-gical development of automomous agents leading to a point in the (near) future, where machines become truly independent agents. He further elaborates that this development comes with both great promises, but also serious, even existential threats. Hancock concludes with highlighting the importance to prepare against problematic actions that autonomous agents might enact and suggests measures for humanity to take. and when intelligence will exceed human intelligence. Instead, I will reflect on the societal challenges outlined in Hancock’s article. More specifically, I will address the role of public opinion as a factor in the implementation of autonomous agents into society. Thereby, public perception potential strengths and opportunites may lead to exaggerated expectations, while public perception of potential weaknesses and threats may lead to overexceeded","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"385 1","pages":"259 - 262"},"PeriodicalIF":5.3,"publicationDate":"2021-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77682608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Automation and redistribution of work: the impact of social distancing on live TV production
Pavel Okopnyi, Frode Guribye, V. Caruso, O. Juhlin
Pub Date: 2021-11-23 | DOI: 10.1080/07370024.2021.1984917 | Human-Computer Interaction, pp. 1–24

The TV industry has long been under pressure to adapt its workflows to use advanced Internet technologies. It must also face competition from social media, video blogs, and livestreaming platforms, which are enabled by lightweight production tools and new distribution channels. The social-distancing regulations introduced due to the COVID-19 pandemic added to the list of challenging adaptations. One of the remaining bastions of legacy TV production is the live broadcast of sporting events and news. These production practices rely on tight collaboration in small spaces, such as control rooms and outside broadcast vans. This paper focuses on current socio-technical changes, especially changes and adaptations in collaborative practices and workflows in TV production. Some of the changes necessitated by the pandemic may be imposed, temporary adjustments to the ongoing situation, but others might induce permanent changes in key work practices in TV production. Further, these imposed changes are aligned with changes already under way in the industry, which are now being accelerated. We characterize the changes along two main dimensions: redistribution of work and automation.
{"title":"Automation and redistribution of work: the impact of social distancing on live TV production","authors":"Pavel Okopnyi, Frode Guribye, V. Caruso, O. Juhlin","doi":"10.1080/07370024.2021.1984917","DOIUrl":"https://doi.org/10.1080/07370024.2021.1984917","url":null,"abstract":"ABSTRACT The TV industry has long been under pressure to adapt its workflows to use advanced Internet technologies. It also must face competition from social media, video blogs, and livestreaming platforms, which are enabled by lightweight production tools and new distribution channels. The social-distancing regulations introduced due to the COVID-19 pandemic added to the list of challenging adaptations. One of the remaining bastions of legacy TV production is the live broadcast of sporting events and news. These production practices rely on tight collaboration in small spaces, such as control rooms and outside broadcast vans. This paper focuses on current socio-technical changes, especially those changes and adaptations in collaborative practices and workflows in TV production. Some changes necessary during the pandemic may be imposed, temporary adjustments to the ongoing situation, but some might induce permanent changes in key work practices in TV production. Further, these imposed changes are aligned with already ongoing changes in the industry, which are now being accelerated. We characterize the changes along two main dimensions: redistribution of work and automation.","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"6 1","pages":"1 - 24"},"PeriodicalIF":5.3,"publicationDate":"2021-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87617503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Commentary: “Autonomous” agents? What should we worry about? What should we do?
Loren Terveen
Pub Date: 2021-11-23 | DOI: 10.1080/07370024.2021.1977129 | Human-Computer Interaction, pp. 240–242
{"title":"Commentary: “Autonomous” agents? What should we worry about? What should we do?","authors":"Loren Terveen","doi":"10.1080/07370024.2021.1977129","DOIUrl":"https://doi.org/10.1080/07370024.2021.1977129","url":null,"abstract":"","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"75 1","pages":"240 - 242"},"PeriodicalIF":5.3,"publicationDate":"2021-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77207548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Commentary: controlling the demon: autonomous agents and the urgent need for controls
P. Salmon
Pub Date: 2021-11-22 | DOI: 10.1080/07370024.2021.1977127 | Human-Computer Interaction, pp. 246–247

In “Avoiding adverse autonomous agent actions,” Hancock (this issue) argues that controlling and exploiting autonomous systems represents one of the fundamental challenges of the 21st century. His parting shot is the disquieting and challenging observation that, with autonomous agents, we may be creating a new “peak predator” from which there will be no recovery of human control. The next generation of Artificial Intelligence (AI), Artificial General Intelligence (AGI), could see the idea of a new technological peak predator become reality. AGI will possess the capacity to learn, evolve, and modify its functional capabilities, and could quickly become intellectually superior to humans (Bostrom, 2014). Though estimates of when AGI will appear vary, the exact time of arrival is perhaps a moot point. What is more important, as Hancock alludes to, is that work is required immediately to ensure that the impact on humanity is positive rather than negative (Salmon et al., 2021). Should we take a reactive approach and only focus our efforts once AGI is created, it will already be too late (Bostrom, 2014). The first AGI system will quickly become uncontrollable.
{"title":"Commentary: controlling the demon: autonomous agents and the urgent need for controls","authors":"P. Salmon","doi":"10.1080/07370024.2021.1977127","DOIUrl":"https://doi.org/10.1080/07370024.2021.1977127","url":null,"abstract":"In “Avoiding adverse autonomous agent actions,” Hancock (This issue) argues that controlling and exploiting autonomous systems represents one of the fundamental challenges of the 21 century. His parting shot is the disquieting and challenging observation that, with autonomous agents, we may be creating a new “peak predator” from which there will be no recovery of human control. The next generation of Artificial Intelligence (AI), Artificial General Intelligence (AGI) could see the idea of a new technological peak predator become reality. AGI will possess the capacity to learn, evolve and modify its functional capabilities and could quickly become intellectually superior to humans (Bostrom, 2014). Though estimates on when AGI will appear vary, the exact time of arrival is perhaps a moot point. What is more important, as Hancock alludes to, is that work is required immediately to ensure that the impact on humanity is positive rather than negative (Salmon et al., 2021). Should we take a reactive approach and only focus our efforts once AGI is created, it will already be too late (Bostrom, 2014). The first AGI system will quickly become uncontrollable.","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"5 1","pages":"246 - 247"},"PeriodicalIF":5.3,"publicationDate":"2021-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81654628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Commentary: Should humans look forward to autonomous others?
John M. Carroll
Pub Date: 2021-11-17 | DOI: 10.1080/07370024.2021.1976639 | Human-Computer Interaction, pp. 251–253

Hancock (this issue) identifies potential adverse consequences of emerging autonomous agent technology. He is not talking about Roomba and Waymo, but about future systems that could be more capable and, potentially, far more autonomous. As is often the case, it is difficult to fit a precise timeline to future technologies, and, as Hancock argues, human inability to fathom or even to perceive the timeline for autonomous agents is a specific challenge in this regard (see below).

One concern Hancock raises is that humans, in pursuing the development of autonomous agents, are ipso facto ceding control of life on Earth to a new “peak predator.” At least in part, he is led to the predator framing through recognizing that autonomous systems are developing in a context of human conflict, indeed, often as “implements of human conflict.” Hancock argues that the transition to a new peak predator, even if not a Skynet/Terminator catastrophe, is unlikely to be the path “ultimately most conducive to human welfare,” and that, having ceded the peak position, it is doubtful humans could ever again regain control.

Hancock also raises a set of concerns regarding the incommensurability of fundamental aspects of experience for human and autonomous agents. For example, humans and emerging autonomous agents could experience time very differently. Hancock criticizes the expression “real time” as indicative of how humans uncritically privilege a conception of time and duration indexed closely to the parameters of human perception and cognition. However, emerging autonomous agents might think through and carry out complex courses of action before humans could notice that anything even happened. Indeed, Hancock notes that the very transition from contemporary AI to truly autonomous agents could occur in what humans will experience as “a single perceptual moment.” Later in his paper, Hancock broadens the incommensurability point: “ . . . there is no necessary reason why autonomous cognition need resemble human cognition in any manner.” These concerns are worth analyzing, criticizing, and planning for now.

The peak predator concern is the most vivid and concretely problematic hypothetical in Hancock’s paper. However, to the extent that emerging autonomous agents become autonomous, they would not be mere implements of human conflict. What is their stake in human conflicts? Why would they participate at all? They might be peak predators in the low-level sense of lethal capability, but predators are motivated to be predators for extrinsic reasons. What could those reasons be for autonomous agents? If we grant that they would be driven by logic, we ought to be able to come up with concrete possibilities, future scenarios that we can identify and plan for.

There are, most definitely, reasons to question the human project of developing autonomous weapon systems, and, more specifically, of exploring possibilities for extremely powerful and autonomous weapon systems without a deep understanding …
{"title":"Commentary: Should humans look forward to autonomous others?","authors":"John M. Carroll","doi":"10.1080/07370024.2021.1976639","DOIUrl":"https://doi.org/10.1080/07370024.2021.1976639","url":null,"abstract":"Hancock (this issue) identifies potential adverse consequences of emerging autonomous agent technology. He is not talking about Roomba and Waymo, but about future systems that could be more capable, and potentially, far more autonomous. As is often the case, it is difficult to fit a precise timeline to future technologies, and, as Hancock argues, human inability to fathom or even to perceive the timeline for autonomous agents is a specific challenge in this regard (see below). One concern Hancock raises is that humans, in pursuing the development of autonomous agents, are ipso facto ceding control of life on Earth to a new “peak predator.” At least in part, he is led to the predator framing through recognizing that autonomous systems are developing in a context of human conflict, indeed, often as “implements of human conflict.” Hancock argues that the transition to a new peak predator, even if not a Skynet/Terminator catastrophe, is unlikely to be the path “ultimately most conducive to human welfare,” and that, having ceded the peak position, it is doubtful humans could ever again regain control. Hancock also raises a set of concerns regarding the incommensurability of fundamental aspects of experience for human and autonomous agents. For example, humans and emerging autonomous agents could experience time very differently. Hancock criticizes the expression “real time” as indicative of how humans uncritically privilege a conception of time and duration indexed closely to the parameters of human perception and cognition. However, emerging autonomous agents might think through and carry out complex courses of action before humans could notice that anything even happened. Indeed, Hancock notes that the very transition from contemporary AI to truly autonomous agents could occur in what humans will experience as “a single perceptual moment.” Later in his paper, Hancock broadens the incommensurability point: “ . . . there is no necessary reason why autonomous cognition need resemble human cognition in any manner.” These concerns are worth analyzing, criticizing, and planning for now. The peak predator concern is the most vivid and concretely problematic hypothetical in Hancock’s paper. However, to the extent that emerging autonomous agents become autonomous, they would not be mere implements of human conflict. What is their stake in human conflicts? Why would they participate at all? They might be peak predators in the low-level sense of lethal capability, but predators are motivated to be predators for extrinsic reasons. What could those reasons be for autonomous agents? If we grant that they would be driven by logic, we ought to be able to come up with concrete possibilities, future scenarios that we can identify and plan for. 
There are, most definitely, reasons to question the human project of developing autonomous weapon systems, and, more specifically, of exploring possibilities for extremely powerful and autonomous weapon systems without a deep u","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"37 1","pages":"251 - 253"},"PeriodicalIF":5.3,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74544677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Commentary: the intentions of washing machines
Richard H. R. Harper
Pub Date: 2021-11-16 | DOI: 10.1080/07370024.2021.1976640 | Human-Computer Interaction, pp. 248–250

Hancock makes a range of claims, but the most important is this: if a machine ‘learns,’ then, eventually, it will become ‘self-aware.’ It is self-awareness, he argues, that will distinguish machines that are merely autonomous (i.e., which work without human intervention, of which there are many) from those which do something else, which become, in the things they do, like us. I cannot understand why one would think this move from learning to awareness would happen, but Hancock is convinced. One might add that it is not his discipline that leads to this view – there is no human factors research that asserts or demonstrates that self-awareness emerges through learning, for example; or at least none that I am aware of. Certainly, Hancock does not cite any. On the contrary, it seems that Hancock takes this idea from the AI community, though as it happens it is an argument that coat-tails on similar notions put forward by cognitive scientists. Some philosophers argue the same, too, such as Dennett (for the view from AI and computer science, see Russell, 2019; for the view of cognitive science, see Tallis, 2011; for a review of the philosophy, see Harper et al., 2016).

Be that as it may, let me focus on this claim and ask what ‘self-awareness’ might mean or how it might be measured. It seems to me that this is a question to do with anthropology. Hence, one way of approaching this is through imagining how people would act when self-awareness is at issue (Pihlström, 2003, pp. 259–286). Or, put another way, one can approach it by asking what someone might mean when they say they are ‘self-aware.’ One might ask, too, why they would say it. I think they do so if they are ‘conscious’ of such things as their intentions. ‘I am about to do this,’ they say, when they are wanting some advice on that course of action. Intentions are a measure of self-awareness. So, is Hancock saying that autonomous machines would be conscious of their intentions, and would that mean, too, that they would treat these intentions as accountable matters? Would that mean, say, that washing machines could have intentions of various kinds? And more, would it mean that these emerge from the learning that washing machines engage in?

There are a number of thoughts that arise given this anthropological ‘vignette’ of washing machines and their intentions. How would these intentions be shown? Would these machines need to speak? Besides, when would these machines have these intentions? At what point during learning would they arise? After they have been working a while? One might presuppose some answers here – a machine might ‘speak’ (if that is its mode of accountability) only once it is switched on. Moreover, one imagines a washing machine would not have any intentions when it was being assembled, nor would it have any when it was being disassembled either (as it happens, Hancock refers to similar matters when he reminds the reader of one of his many phrases from earlier human factors articles: this time, the phrase ‘islands of autonomy.’ This is an allusion to the idea that current machines are autonomous only at particular moments in their lives, for elsewhere in their lives they are subject to human control and management. So, here: washing machines might only have intentions once they are made and switched on).
{"title":"Commentary: the intentions of washing machines","authors":"Richard H. R. Harper","doi":"10.1080/07370024.2021.1976640","DOIUrl":"https://doi.org/10.1080/07370024.2021.1976640","url":null,"abstract":"Hancock makes a range of claims but the most important is this: if a machine ‘learns,’ then, eventually, it will become ‘self-aware.’ It is self-awareness, he argues, that will distinguish machines that are merely autonomous (i.e., which work without human intervention, of which there are many) and those which do something else, which become, in the things they do, like us I cannot understand why one would think this move from learning to awareness would happen but Hancock is convinced. One might add that it is not his discipline that leads to this view – there is no human factors research that asserts or demonstrates that self-awareness emerges through learning, for example; or at least as far as I am aware of. Certainly, Hancock does not cite any. On the contrary, it seems that Hancock takes this idea from the AI community, though as it happens it is an argument that coat-tails on similar notions put forward by cognitive scientists. Some philosophers argue the same, too, such as Dennett (For the view from AI and computer science, see Russell, 2019; for the view of cognitive science, see Tallis, 2011; for a review of the philosophy see Harper et al, 2016). Be that as it may, let me focus on this claim and ask what ‘self-awareness’ might mean or how it might be measured. It seems to me that this is a question to do with anthropology. Hence, one way of approaching this is through imagining how people would act when self-awareness is at issue (Pihlström, 2003, pp. 259–286). Or, put another way, one can approach it by asking what someone might mean when they say they are ‘self-aware’? One might ask, too, why would they say it? I think they do so if they are ‘conscious’ of such things as their intentions. ‘I am about to do this’ they say when they are wanting some advice on that course of action. Intentions are a measure of selfawareness. So, is Hancock saying that autonomous machines would be conscious of their intentions and would that mean, too, that they would treat these intentions as accountable matters? Would that mean, say, that washing machines could have intentions of various kinds? And more, would it mean that these emerge from the learning that washine machines engage in? There are a number of thoughts that arise given this anthropological ‘vignette’ of washing machines and their intentions. How would these intentions be shown? Would these machines need to speak? Besides, when would these machines have these intentions? At what point during learning would they arise? After they have been working a while? One might presuppose some answers here – a machine might only ‘speak’ (if that is its mode of accountability) only once it is switched on. 
Moreover, one imagines a washing machine would not have any intentions when it was being assembled nor would it have any when it was being disassembled either (as it happens, Hancock refers to similar matters when he reminds the reader of one of his many phrases in earlier human factor articles: this t","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"28 1","pages":"248 - 250"},"PeriodicalIF":5.3,"publicationDate":"2021-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91201087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Exploring Anima: a brain–computer interface for peripheral materialization of mindfulness states during mandala coloring
Claudia Daudén Roquet, C. Sas, Dominic Potts
Pub Date: 2021-11-16 | DOI: 10.1080/07370024.2021.1968864 | Human-Computer Interaction, pp. 259–299

I could feel my mind buzzing after another long day at work. Driving home, I am looking forward to my “me time” ritual of playing with colors. As I arrive, I get myself comfortable, pick up an orange crayon, and start coloring a mandala with beautiful lace-like details. For that, I have to fully concentrate, and my attention is focused on the unfolding present experience of slowly and mindfully filling in the mandala with color. Once I have filled in all the little spaces of the central layer, I pick up a green crayon and color the next layer. When I make mistakes, it is usually because I am not paying attention. I now tend to accept them and work my way around them. Before I know it, my mandala is complete, and my buzzing mind has calmed down. I can even pinpoint some subtle feelings that were unreachable when I started, wondering also how I could do better next time. By looking at the colored mandala, I can see from my mistakes when I was less mindful and lost focus. I also know that there were other moments of lost focus, albeit ones I cannot see in my mandala, maybe because these happened while coloring larger areas, where mistakes are easier to avoid even without concentration.

This scenario, inspired by our study findings, illustrates the richness of mandala coloring as a focused attention mindfulness (FAM) practice. It shows the importance of intention, attention, and non-judgmental acceptance, with an invitation to explore how the materialization of mindfulness states onto colors may provide value to this practice. While acknowledging the complexity of mindfulness constructs (Hart et al., 2013), for the purpose of our work we adopt the working definition of mindfulness as “the awareness that emerges through paying attention on purpose, in the present moment, and non-judgmentally to the unfolding of experience moment by moment” (Kabat-Zinn, 2009, p. 145). Nevertheless, consistent findings in the literature indicate that the skills required to sustain and regulate attention are challenging to develop (Kerr et al., 2013; Sas & Chopra, 2015). Mindfulness practices have been broadly categorized under focused attention – involving sustained attention on an intended object – and open monitoring – with a broader attentional focus, hence no explicit object of attention (Lutz et al., 2008). FAM targets the focus and maintenance of attention by narrowing it to a selected stimulus despite competing ones and, when attention is lost, by disengaging from distracting stimuli to redirect attention back to the selected one. Open monitoring, rather than narrowing attention, involves broadening its focus through a receptive and non-judgmental stance toward moment-to-moment internal salient stimuli such as difficult thoughts and emotions (Britton, 2018).

FAM is typically the starting point for novice meditators, with the main object of attention being either internal (e.g., focus on the breathing in sitting meditation (Prpa et al., 2018; Vidyarthi et al., 2012)), or bodily movement in walking meditation (Chen et al., 2015) or Tai Chi …
{"title":"Exploring Anima: a brain–computer interface for peripheral materialization of mindfulness states during mandala coloring","authors":"Claudia Daudén Roquet, C. Sas, Dominic Potts","doi":"10.1080/07370024.2021.1968864","DOIUrl":"https://doi.org/10.1080/07370024.2021.1968864","url":null,"abstract":"I could feel my mind buzzing after another long day at work. Driving home, I am looking forward to my “me time” ritual of playing with colors. As I arrive, I get myself comfortable, pick up an orange crayon, and start coloring a mandala with beautiful lace-like details. For that, I have to fully concentrate, and my attention is focused on the unfolding present experience of slowly and mindfully filling in the mandala with color. Once I filled in all the little spaces from the central layer, I pick up a green crayon and color the next layer. When I make mistakes is usually because I am not paying attention. I now tend to accept and work my way around them. Before I know it, my mandala is complete, and my buzzing mind has calmed down. I can even pinpoint some subtle feelings unreachable when I started, wondering also how I could do better next time. By looking at the colored mandala, I can see from my mistakes when I was less mindful and lost focus. I also know that there were other moments of lost focus, albeit I cannot see them in my mandala. Maybe because these happened while coloring larger areas, and then mistakes are easier to avoid even without concentration. This scenario inspired by our study findings illustrates the richness of mandala coloring as an illustration of a focused attention mindfulness (FAM) practice. It shows the importance of intention, attention, and non-judgmental acceptance, with an invitation to explore how the materialization of mindfulness states onto colors may provide value to this practice. While acknowledging the complexity of mindfulness constructs (Hart et al., 2013), for the purpose of our work we adopt the working definition of mindfulness as “the awareness that emerges through paying attention on purpose, in the present moment, and non-judgmentally to the unfolding of experience moment by moment” [pp. 145] (Kabat-Zinn, 2009). Nevertheless, consistent findings in the literature indicate that the skills required to sustain and regulate attention are challenging to develop (Kerr et al., 2013; Sas & Chopra, 2015). Mindfulness practices have been broadly categorized under focused attention – involving sustained attention on an intended object, and open monitoring – with broader attentional focus, hence no explicit object of attention (Lutz et al., 2008). While FAM targets the focus and maintenance of attention by narrowing it to a selected stimulus despite competing others and, when attention is lost, disengaging from these distracting stimuli to redirect it back to the selected one, rather than narrowing it, open monitoring involves broadening the focus of attention through a receptive and non-judgmental stance toward moment-to-moment internal salient stimuli such as difficult thoughts and emotions (Britton, 2018). 
FAM is typically the starting point for novice meditators, with the main object of attention being either internal (e.g., focus on the breathing in sitting meditation (Prpa et al., 2018; Vidyarthi et al","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"44 1","pages":"259 - 299"},"PeriodicalIF":5.3,"publicationDate":"2021-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80168910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
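As a rough illustration of the kind of signal-to-color mapping a system like Anima might use, the sketch below converts a one-second EEG window into an attention estimate and desaturates the crayon color as focus drops. This is a hypothetical sketch, not the implementation reported in the paper: the sampling rate, the theta/alpha band-power heuristic, and the names (band_power, attention_index, materialize) are all assumptions made for illustration.

```python
# Hypothetical sketch of mapping an EEG-derived focus estimate onto crayon
# color, so lapses of attention become visible in the colored mandala.
# Not the Anima implementation; band choices and names are assumptions.
import numpy as np

FS = 256  # EEG sampling rate in Hz (assumed; headset-dependent)

def band_power(window: np.ndarray, lo: float, hi: float, fs: int = FS) -> float:
    """Mean spectral power of the window within the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    return float(psd[(freqs >= lo) & (freqs < hi)].mean())

def attention_index(window: np.ndarray) -> float:
    """Crude focus estimate in (0, 1): frontal theta tends to rise and alpha
    to drop during focused attention, so take theta / (theta + alpha)."""
    theta = band_power(window, 4.0, 8.0)
    alpha = band_power(window, 8.0, 13.0)
    return theta / (theta + alpha + 1e-12)

def materialize(base_rgb: tuple, focus: float) -> tuple:
    """Desaturate the crayon color toward gray as focus drops, so moments of
    lost attention leave a visible trace in the finished mandala."""
    gray = sum(base_rgb) / 3.0
    return tuple(round(gray + focus * (c - gray)) for c in base_rgb)

# One second of (synthetic) EEG modulating an orange crayon:
eeg_window = np.random.randn(FS)
print(materialize((255, 140, 0), attention_index(eeg_window)))
```

The design choice mirrors the scenario above: lapses of attention are not interrupted but quietly recorded in the color trace, where they can be revisited once the mandala is complete.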

Existential time and historicity in interaction design
F. V. Amstel, R. Gonzatto
Pub Date: 2021-11-16 | DOI: 10.1080/07370024.2021.1912607 | Human-Computer Interaction, pp. 29–68

Time is considered a defining factor for interaction design (Kolko, 2011; Löwgren, 2002; Malouf, 2007; Mazé, 2007; Smith, 2007), yet little is known about its history in this field. The history of time is non-linear and uneven, understood as part of each society’s cultural development (Friedman, 1990; Souza, 2016). As experienced by humans, time is socially constructed, using the available concepts, measurement devices, and technology in a specific culture. Since each human culture produces its own history, there are also multiple courses of time. The absolute, chronological, and standardized clock time is just one of them, yet one often imposed on other cultures through colonialism, imperialism, globalization, and other international relationships (Nanni, 2017; Rifkin, 2017). Digital technology is vital for this imposition, and interaction design has responsibility for it. As everyday life becomes increasingly mediated by digital technologies, their rhythms (Lefebvre, 2004) are formalized, structured, or replaced by algorithms that structure everyday life rhythms (a.k.a. algo-rhythms) and that offer little accountability and local autonomy (Finn, 2019; Firmino et al., 2018; Miyazaki, 2013; Pagallo, 2018). These algo-rhythms enforce absolute time over other courses of time as a means to instill modern values like progress, efficiency, and profit-making. Despite the appearance of universality, these values do have a local origin. They come from developed nations, where modernity and, more recently, neoliberalism were invented and dispatched to the rest of the world – as if they were the only viable modes of collective existence (Berardi, 2017; Harvey, 2007). Interaction design contributes to this dispatch by embedding – and hiding – modern and neoliberal values and modes of existence in digital technology’s temporal form (Bidwell et al., 2013; Lindley, 2015, 2018; Mazé, 2007).

In the last 15 years, critical and speculative design research has questioned absolute time in interaction design (Huybrechts et al., 2017; Mazé, 2019; Nooney & Brain, 2019; Prado de O. Martins & Vieira de Oliveira, 2016). This research stream made the case that time can also be designed in relative terms: given a certain present, what are the possible pasts and futures? Looking at alternative futures (Bardzell, 2018; Coulton et al., 2016; Duggan et al., 2017; Linehan et al., 2014; Tanenbaum et al., 2016) or alternative pasts (Coulton & Lindley, 2017; Eriksson & Pargman, 2018; Huybrechts et al., 2017) enables realizing alternative presents and alternative designs (Auger, 2013; Coulton et al., 2016; Dunne & Raby, 2013). These alternatives often include deviations from the (apparently) inevitable single-story future shaped by digital technologies envisioned by big tech companies. The deviation expands the design space – the scenarios considered in a design project (Van Amstel et al., 2016; Van Amstel & Garde, 2016) – to every kind of social activity, even the noncommercial. Dystopian “what if” scenarios reveal undesirable modern futures that certain publics would oppose (Dunne & Raby, 2013), whereas utopian “how might we” scenarios generate desirable local futures that communities might commit to (Baumann et al., 2017; DiSalvo, 2014). Each community has a different notion of time and requires different ways of representing time …
{"title":"Existential time and historicity in interaction design","authors":"F. V. Amstel, R. Gonzatto","doi":"10.1080/07370024.2021.1912607","DOIUrl":"https://doi.org/10.1080/07370024.2021.1912607","url":null,"abstract":"Time is considered a defining factor for interaction design (Kolko, 2011; Löwgren, 2002; Malouf, 2007; Mazé, 2007; Smith, 2007), yet little is known about its history in this field. The history of time is non-linear and uneven, understood as part of each society’s cultural development (Friedman, 1990; Souza, 2016). As experienced by humans, time is socially constructed, using the available concepts, measurement devices, and technology in a specific culture. Since each human culture produces its own history, there are also multiple courses of time. The absolute, chronological, and standardized clock time is just one of them, yet one often imposed on other cultures through colonialism, imperialism, globalization, and other international relationships (Nanni, 2017; Rifkin, 2017). Digital technology is vital for this imposition, and interaction design has responsibility for it. As everyday life becomes increasingly mediated by digital technologies, their rhythms (Lefebvre, 2004) are formalized, structured, or replaced by algorithms that structure everyday life rhythms (a.ka. algorhythms) that offer little accountability and local autonomy (Finn, 2019; Firmino et al., 2018; Miyazaki, 2013; Pagallo, 2018). These algo-rhythms enforce absolute time over other courses of time as a means to pour modern values like progress, efficiency, and profit-making. Despite the appearance of universality, these values do have a local origin. They come from developed nations, where modernity and, more recently, neoliberalism were invented and dispatched to the rest of the world – as if they were the only viable modes of collective existence (Berardi, 2017; Harvey, 2007). Interaction design contributes to this dispatch by embedding – and hiding – modern and neoliberal values and modes of existence into digital technology’s temporal form (Bidwell et al., 2013; Lindley, 2015, 2018; Mazé, 2007). In the last 15 years, critical and speculative design research has questioned absolute time in interaction design (Huybrechts et al., 2017; Mazé, 2019; Nooney & Brain, 2019; Prado de O. Martins & Vieira de Oliveira, 2016). This research stream made the case that time can also be designed in relative terms: given a certain present, what are the possible pasts and futures? Looking at alternative futures (Bardzell, 2018; Coulton et al., 2016; Duggan et al., 2017; Linehan et al., 2014; Tanenbaum et al., 2016) or alternatives pasts (Coulton & Lindley, 2017; Eriksson & Pargman, 2018; Huybrechts et al., 2017) enables realizing alternative presents and alternative designs (Auger, 2013; Coulton et al., 2016; Dunne & Raby, 2013). These alternatives often include deviations from the (apparently) inevitable single-story future shaped by digital technologies envisioned by big tech companies. The deviation expands the design space – the scenarios considered in a design project (Van Amstel et al., 2016; Van Amstel & Garde, 2016) – to every kind of social activity, even the noncommercial. 
Dystopia","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"49 1","pages":"29 - 68"},"PeriodicalIF":5.3,"publicationDate":"2021-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84841714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Avoiding adverse autonomous agent actions
P. Hancock
Pub Date: 2021-11-16 | DOI: 10.1080/07370024.2021.1970556 | Human-Computer Interaction, pp. 211–236

Few today would dispute that the age of autonomous machines is nearly upon us (cf., Kurzweil, 2005; Moravec, 1988), if it is not already. While it is doubtful that one can identify any fully autonomous machine system at this point in time, especially one that is openly and publicly acknowledged to be so, it is far less debatable that our present line of technological evolution is leading toward this eventuality (Endsley, 2017; Hancock, 2017a). It is the specter of the consequences of adverse, even existentially threatening, events emanating from these penetrative autonomous systems that is the focus of the present work. The impending and imperative question is what we intend to do about these prospective challenges.

As with essentially all of human discourse, we can imagine two sides to this question. One side is represented by an optimistic vision of a near-utopian future, underwritten by AI support and some inherent degree of intrinsic benevolence. The opposing vision promulgates a dystopian nightmare in which machines have gained almost total ascendency and only a few “plucky” humans remain. The latter is most especially a featured trope of the human heroic narrative (Campbell, 1949). It will most probably be the case that neither of the extremes on this putative spectrum of possibilities will represent the eventual reality that we will actually experience. However, the ground rules are now in the process of being set, and they will predispose us toward one of these directions over the other (Feng et al., 2016; Hancock, 2017a).

Traditionally, many have approached this general form of technological inquiry by asking questions about strengths, weaknesses, threats, and opportunities. Consequently, it is within this general framework that the present work is offered. What follows are some overall considerations of the balance of the value of such autonomous systems’ inauguration and penetration. These observations provide the bedrock from which to consider the specific strengths, weaknesses, threats (risks), and promises (opportunities) dimensions. The specific consideration of the application of the protective strategies of the well-known hierarchy of controls (Haddon, 1973) then acts as a final prefatory consideration to the concluding discussion, which examines the adverse actions of autonomous technological systems as a potential existential threat to humanity.

The term autonomy is one that has been, and still is, the subject of much attention, debate, and even abuse (and see Ezenkwu & Starkey, 2019). To an extent, the term seems to be flexible enough to encompass almost whatever the proximal user requires of it. For example, a simple, descriptive word cloud (Figure 1) illustrates the various terminologies that surround our present use of this focal term. It is not the present purpose here to engage in a long, polemical, and potentially unedifying dispute specifically about the term’s definition.

This is because the present concern is with auto …
{"title":"Avoiding adverse autonomous agent actions","authors":"P. Hancock","doi":"10.1080/07370024.2021.1970556","DOIUrl":"https://doi.org/10.1080/07370024.2021.1970556","url":null,"abstract":"Few today would dispute that the age of autonomous machines is nearly upon us (cf., Kurzweil, 2005; Moravec, 1988), if it is not already. While it is doubtful that one can identify any fully autonomous machine system at this point in time, especially one that is openly and publicly acknowledged to be so, it is far less debatable that our present line of technological evolution is leading toward this eventuality (Endsley, 2017; Hancock, 2017a). It is this specter of the consequences of even existentially threatening adverse events, emanating from these penetrative autonomous systems, which is the focus of the present work. The impending and imperative question is what we intend to do about these prospective challenges? As with essentially all of human discourse, we can imagine two sides to this question. One side is represented by an optimistic vision of a near utopian future, underwritten by AI-support and some inherent degree of intrinsic benevolence. The opposing vision promulgates a dystopian nightmare in which machines have gained almost total ascendency and only a few “plucky” humans remain. The latter is most especially a featured trope of the human heroic narrative (Campbell, 1949). It will be most probably the case that neither of the extremes on this putative spectrum of possibilities will represent the eventual reality that we will actually experience. However, the ground rules are now in the process of being set which will predispose us toward one of these directions over the other (Feng et al., 2016; Hancock, 2017a). Traditionally, many have approached this general form of technological inquiry by asking questions about strengths, weaknesses, threats, and opportunities. Consequently, it is within this general framework that this present work is offered. What follows are some overall considerations of the balance of the value of such autonomous systems’ inauguration and penetration. These observations provide the bedrock from which to consider the specific strengths, weaknesses, threats (risks), and promises (opportunity) dimensions. The specific consideration of the application of the protective strategies of the well-known hierarchy of controls (Haddon, 1973) then acts as a final prefatory consideration to the concluding discussion which examines the adverse actions of autonomous technological systems as a potential human existential threat. The term autonomy is one that has been, and still currently is, the subject of much attention, debate, and even abuse (and see Ezenkwu & Starkey, 2019). To an extent, the term seems to be flexible enough to encompass almost whatever the proximal user requires of it. For example, a simple, descriptive word-cloud (Figure 1), illustrates the various terminologies that surrounds our present use of this focal term. It is not the present purpose here to engage in a long, polemic and potentially unedifying dispute specifically about the term’s definition. 
This is because the present concern is with auto","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"29 1","pages":"211 - 236"},"PeriodicalIF":5.3,"publicationDate":"2021-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81647783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Commentary: extraordinary excitement empowering enhancing everyone
B. Shneiderman
Pub Date: 2021-10-05 | DOI: 10.1080/07370024.2021.1977128 | Human-Computer Interaction, pp. 243–245

I eagerly support Peter Hancock’s desire to avoid adverse autonomous agent actions, but I think that he should change from his negative and pessimistic view to a more constructive stance about how ...
{"title":"Commentary: extraordinary excitement empowering enhancing everyone","authors":"B. Shneiderman","doi":"10.1080/07370024.2021.1977128","DOIUrl":"https://doi.org/10.1080/07370024.2021.1977128","url":null,"abstract":"I eagerly support Peter Hancock’s desire to avoid adverse autonomous agent actions, but I think that he should change from his negative and pessimistic view to a more constructive stance about how ...","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"120 1","pages":"243 - 245"},"PeriodicalIF":5.3,"publicationDate":"2021-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86164882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}