On the Mutual Influence of Human and Artificial Life: an Experimental Investigation

Stefano Furlan, E. Medvet, Giorgia Nadizar, F. Pigozzi
The 2022 Conference on Artificial Life. DOI: 10.1162/isal_a_00492

Abstract

Our modern world is teeming with non-biological agents whose growing complexity brings them so close to living beings that they can be cataloged as artificial creatures, i.e., a form of Artificial Life (ALife). Ranging from disembodied intelligent agents to robots of conspicuous dimensions, all these artifacts are united by the fact that they are designed, built, and possibly trained by humans taking inspiration from natural elements. Hence, humans play a fundamental role in relation to ALife, both as creators and as final users, which calls attention to the need to study the mutual influence of human and artificial life. Here we attempt an experimental investigation of the reciprocal effects of human-ALife interaction. To this end, we design an artificial world populated by life-like creatures, and resort to open-ended evolution to foster the creatures' adaptation. We allow bidirectional communication between the system and humans, who can observe the artificial world and voluntarily choose to perform positive or negative actions towards the creatures populating it; those actions may have a short- or long-term impact on the artificial creatures. Our experimental results show that the creatures are capable of evolving under the influence of humans, even though the impact of the interaction remains uncertain. In addition, we find that ALife gives rise to disparate feelings in the humans who interact with it, who are not always aware of the importance of their conduct.

Introduction and related works

In the 1990s, the commercial craze of "Tamagotchi" (Clyde, 1998), a game where players nourish and care for virtual pets, swept through the world. Albeit naive, that game is a noteworthy instance of Artificial Life (ALife) (Langton, 1997), i.e., a simulation of a living system, which does not exist in isolation but in deep entanglement with human life.
It also reveals that ALife is not completely detached from humans, who might need to rethink their role and responsibilities toward ALife. We already train artificial agents by reinforcement or supervision: trained agents are notoriously as biased as the datasets we feed them (Kasperkevic, 2015), and examples abound (see, e.g., https://github.com/daviddao/awful-ai). For instance, the chatbot Tay shifted from lovely to toxic communication after a few hours of interaction with users of a social network (Hunt, 2016). The field of robotics is no exception, and while robots, a relevant example of ALife agents, are becoming pervasive in our society, we, the creators, define and influence them (Pigozzi, 2022). One day in the future, a robot could browse for videos of the very first robots ever built, eager to learn more about its ancestors. Suppose a video shows up, displaying engineers who ruthlessly beat and shove a robot in an attempt to test its resilience (Vincent, 2019). How brutal and condemnable would that act look to its electric eyes? Would our robotic brainchildren disown us and label us "a virus", as Agent Smith (the villain, himself an artificial creature) does in the "Matrix" movie (Wachowski et al., 1999)? At the same time, how would such responsibility affect the creators themselves? Broadly speaking, when dealing with complex systems involving humans and artificial agents, whose actions are deeply intertwined, what results from the mutual interaction of humans and ALife? In particular, do artificial agents react to the actions of humans, displaying short-term adaptation in response to stimuli? Do these actions influence the inherited traits of artificial creatures, steering their evolutionary path and long-term adaptation? And, conversely, are humans aware of their influence on ALife? Do they shift their conduct accordingly? We consider a system that addresses these questions in a minimalist way.
We design and implement an artificial world (Figure 1), populated by virtual creatures that actively search for food, and expose it to a pool of volunteer participants in a human experiment. We consider three design objectives: (a) interaction, which is bidirectional between human and ALife; (b) adaptation, of creatures to external stimuli, including human presence; (c) realism, so that creatures look "familiar" and engaging to participants. Participants interact with the creatures through actions that are either "good" (placing food) or "bad" (eliminating a creature): we then record the participants' reactions. At the same time, creatures can sense human presence. We achieve long-term adaptation through artificial evolution and, for the sake of realism, we design the creatures to be life-like. As a result, the goodness or badness of human actions can potentially affect the evolutionary path of creatures, as well as their relationship with humans. Humans, for their part, can feel emotions in the process. Participants thus play the role of a "superior being", free from any conditioning authority (Milgram, 1963), with power of life and death over the creatures. Whether their actions will be good or bad is up to them: a philosophical debate on human nature that goes back to Thomas Hobbes (1651) and Jean-Jacques Rousseau (1755), with their opposing views propagating through history.

Figure 1: Our artificial world: worm-like agents are creatures that search for food (the green dots).

Other studies crafted artificial worlds, e.g., Tierra (Ray, 1992), PolyWorld (Yaeger et al., 1994), and Avida (Ofria and Wilke, 2004), with several different goals: they mostly investigate questions related to evolutionary biology (Lenski et al., 2003), ecology (Ventrella, 2005), open-ended evolution (Soros and Stanley, 2014), or social learning (Bartoli et al., 2020), or are sources of entertainment and gaming (Dewdney, 1984; Grand and Cliff, 1998).
Albeit fascinating, none of these addresses the main research question of this paper, i.e., the mutual influence of human life and ALife. Our work also differs from multi-agent platforms, whose focus is on optimizing multi-agent policies for a task (Suarez et al., 2019; Terry et al., 2021). The work most similar to ours pivots around the "Twitch Plays Robotics" platform of Bongard et al. (2018). While paving the way for crowdsourced robotics experiments, it is, rather than an artificial world, an instance of "interactive evolution" (with participants issuing reinforcements to morphologically evolving creatures), and it does not detail the influence of creatures on participants. We instead concentrate on the bidirectionality of the interaction, and branch into two complementary studies: the first aimed at quantifying the effects of human interaction on artificial creatures, and the second focused on surveying how humans perceive and interact with ALife. Concerning the former, we simulate human actions on the system and analyze the progress over time of some indexes, whereas for the latter we perform a user study involving a pool of volunteer participants interacting with the creatures. The experimental results confirm the importance of focusing on the bidirectionality of human-ALife interaction, and open the way towards more in-depth analyses and studies in the field. Not surprisingly, we find that an artificial world subjected to human influence is capable of evolving, yet the real impact of human behavior on it, be it positive or negative, remains enigmatic. In addition, we discover two main currents of thought among people who interact with ALife: those who feel involved and are aware of the consequences of their actions on an artificial world, and those who perceive ALife as a far-fetched artifact unworthy of attention.

The artificial world
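The interaction loop described above, human actions feeding into an evolving population of food-seeking creatures, can be sketched in a few lines. The following is a minimal, hypothetical toy model, not the authors' actual system: all names, parameters, and dynamics (the one-dimensional "genome", energy bookkeeping, reproduction threshold, mutation size, and the 50/50 good/bad human policy) are illustrative assumptions, meant only to show how "good" (food-placing) and "bad" (creature-removing) actions can shape an evolving population.

```python
import random

class Creature:
    """Toy creature with a 1-D genome (illustrative stand-in for evolved traits)."""

    def __init__(self, genome):
        self.genome = genome  # trait subject to mutation across generations
        self.energy = 1.0

    def step(self, food, human_present):
        # Short-term adaptation (assumed): sensed human presence makes the
        # creature forage with a larger radius in this toy model.
        radius = 0.15 if human_present else 0.1
        target = min(food, key=lambda f: abs(f - self.genome), default=None)
        if target is not None and abs(target - self.genome) < radius:
            food.remove(target)
            self.energy += 0.5
        self.energy -= 0.05  # metabolic cost per step

def run(steps=100, seed=0):
    rng = random.Random(seed)
    creatures = [Creature(rng.random()) for _ in range(10)]
    food = [rng.random() for _ in range(20)]
    for _ in range(steps):
        # Human action: "good" (place food) or "bad" (eliminate a creature).
        if rng.random() < 0.5:
            food.append(rng.random())
        elif creatures:
            creatures.pop(rng.randrange(len(creatures)))
        for c in list(creatures):
            c.step(food, human_present=True)
            if c.energy <= 0:
                creatures.remove(c)  # starvation
            elif c.energy > 1.5:
                # Long-term adaptation: reproduction with a small mutation.
                child_genome = min(1.0, max(0.0, c.genome + rng.gauss(0, 0.05)))
                creatures.append(Creature(child_genome))
                c.energy -= 0.75
    return len(creatures)

print(run())
```

Under this sketch, the balance between placed food and eliminated creatures determines whether the population persists, which mirrors, in miniature, the question of how human conduct steers the creatures' evolutionary path.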