Can machines ever be sentient? Could they perceive and feel things, be conscious of their surroundings? What are the prospects of achieving sentience in a machine? What are the dangers associated with such an endeavor, and is it even ethical to embark on such a path to begin with? In this series of column articles, I discuss one possible path toward “general intelligence” in machines: using the process of Darwinian evolution to produce artificial brains that can be grafted onto mobile robotic platforms, with the goal of achieving fully embodied sentient machines.
{"title":"The Elements of Intelligence","authors":"Christoph Adami","doi":"10.1162/artl_a_00410","DOIUrl":"10.1162/artl_a_00410","url":null,"abstract":"Can machines ever be sentient? Could they perceive and feel things, be conscious of their surroundings? What are the prospects of achieving sentience in a machine? What are the dangers associated with such an endeavor, and is it even ethical to embark on such a path to begin with? In the series of articles of this column, I discuss one possible path toward “general intelligence” in machines: to use the process of Darwinian evolution to produce artificial brains that can be grafted onto mobile robotic platforms, with the goal of achieving fully embodied sentient machines","PeriodicalId":55574,"journal":{"name":"Artificial Life","volume":"29 3","pages":"293-307"},"PeriodicalIF":2.6,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10472828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Plants thrive in virtually all natural and human-adapted environments and are becoming popular models for developing robotics systems because of their strategies of morphological and behavioral adaptation. Such adaptation and high plasticity offer new approaches for designing, modeling, and controlling artificial systems acting in unstructured scenarios. At the same time, the development of artifacts based on their working principles reveals how plants promote innovative approaches for preservation and management plans and opens new applications for engineering-driven plant science. Environmentally mediated growth patterns (e.g., tropisms) are clear examples of adaptive behaviors displayed through morphological phenotyping. Plants also create networks with other plants through subterranean root–fungus symbioses and use these networks to exchange resources or warning signals. This article discusses the functional behaviors of plants and shows their close similarity to a perceptron-like model that could act as a behavior-based controller in plants. We begin by analyzing communication rules and growth behaviors of plants; we then show how we translated plant behaviors into algorithmic solutions for bioinspired robot controllers; and finally, we discuss how those solutions can be extended to embrace original approaches to networking and robotics control architectures.
{"title":"Perspectives on Computation in Plants","authors":"Emanuela Del Dottore;Barbara Mazzolai","doi":"10.1162/artl_a_00396","DOIUrl":"10.1162/artl_a_00396","url":null,"abstract":"Plants thrive in virtually all natural and human-adapted environments and are becoming popular models for developing robotics systems because of their strategies of morphological and behavioral adaptation. Such adaptation and high plasticity offer new approaches for designing, modeling, and controlling artificial systems acting in unstructured scenarios. At the same time, the development of artifacts based on their working principles reveals how plants promote innovative approaches for preservation and management plans and opens new applications for engineering-driven plant science. Environmentally mediated growth patterns (e.g., tropisms) are clear examples of adaptive behaviors displayed through morphological phenotyping. Plants also create networks with other plants through subterranean roots–fungi symbiosis and use these networks to exchange resources or warning signals. This article discusses the functional behaviors of plants and shows the close similarities with a perceptron-like model that could act as a behavior-based control model in plants. We begin by analyzing communication rules and growth behaviors of plants; we then show how we translated plant behaviors into algorithmic solutions for bioinspired robot controllers; and finally, we discuss how those solutions can be extended to embrace original approaches to networking and robotics control architectures.","PeriodicalId":55574,"journal":{"name":"Artificial Life","volume":"29 3","pages":"336-350"},"PeriodicalIF":2.6,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10170779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The design and implementation of adaptive chemical reaction networks, capable of adjusting their behavior over time in response to experience, is a key goal for the fields of molecular computing and DNA nanotechnology. Mainstream machine learning research offers powerful tools for implementing learning behavior that could one day be realized in a wet chemistry system. Here we develop an abstract chemical reaction network model that implements the backpropagation learning algorithm for a feedforward neural network whose nodes employ the nonlinear “leaky rectified linear unit” transfer function. Our network directly implements the mathematics behind this well-studied learning algorithm, and we demonstrate its capabilities by training the system to learn a linearly inseparable decision surface, specifically, the XOR logic function. We show that this simulation quantitatively follows the definition of the underlying algorithm. To implement this system, we also report ProBioSim, a simulator that enables arbitrary training protocols for simulated chemical reaction networks to be straightforwardly defined using constructs from the host programming language. This work thus provides new insight into the capabilities of learning chemical reaction networks and also develops new computational tools to simulate their behavior, which could be applied in the design and implementation of adaptive artificial life.
{"title":"Design and Simulation of a Multilayer Chemical Neural Network That Learns via Backpropagation","authors":"Matthew R. Lakin","doi":"10.1162/artl_a_00405","DOIUrl":"10.1162/artl_a_00405","url":null,"abstract":"The design and implementation of adaptive chemical reaction networks, capable of adjusting their behavior over time in response to experience, is a key goal for the fields of molecular computing and DNA nanotechnology. Mainstream machine learning research offers powerful tools for implementing learning behavior that could one day be realized in a wet chemistry system. Here we develop an abstract chemical reaction network model that implements the backpropagation learning algorithm for a feedforward neural network whose nodes employ the nonlinear “leaky rectified linear unit” transfer function. Our network directly implements the mathematics behind this well-studied learning algorithm, and we demonstrate its capabilities by training the system to learn a linearly inseparable decision surface, specifically, the XOR logic function. We show that this simulation quantitatively follows the definition of the underlying algorithm. To implement this system, we also report ProBioSim, a simulator that enables arbitrary training protocols for simulated chemical reaction networks to be straightforwardly defined using constructs from the host programming language. This work thus provides new insight into the capabilities of learning chemical reaction networks and also develops new computational tools to simulate their behavior, which could be applied in the design and implementations of adaptive artificial life.","PeriodicalId":55574,"journal":{"name":"Artificial Life","volume":"29 3","pages":"308-335"},"PeriodicalIF":2.6,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10107301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Much research in robotic artificial intelligence (AI) and Artificial Life has focused on autonomous agents as an embodied and situated approach to AI. Such systems are commonly viewed as overcoming many of the philosophical problems associated with traditional computationalist AI and cognitive science, such as the grounding problem (Harnad) or the lack of intentionality (Searle), because they have the physical and sensorimotor grounding that traditional AI was argued to lack. Robot lawn mowers and self-driving cars, for example, more or less reliably avoid obstacles, approach charging stations, and so on, and might therefore be considered to have some form of artificial intentionality or intentional directedness. It should be noted, though, that the fact that robots share physical environments with people does not necessarily mean that they are situated in the same perceptual and social world as humans. For people encountering socially interactive systems, such as social robots or automated vehicles, this poses the nontrivial challenge of interpreting them as intentional agents in order to understand and anticipate their behavior, while also keeping in mind that the intentionality of artificial bodies is fundamentally different from that of their natural counterparts. This requires, on one hand, a “suspension of disbelief” but, on the other hand, also a capacity for the “suspension of belief.” This dual nature of (attributed) artificial intentionality has been addressed only rather superficially in embodied AI and social robotics research. It is therefore argued that Bourgine and Varela’s notion of Artificial Life as the practice of autonomous systems needs to be complemented with a practice of socially interactive autonomous systems, guided by a better understanding of the differences between artificial and biological bodies and their implications in the context of social interactions between people and technology.
{"title":"Understanding Social Robots: Attribution of Intentional Agency to Artificial and Biological Bodies","authors":"Tom Ziemke","doi":"10.1162/artl_a_00404","DOIUrl":"10.1162/artl_a_00404","url":null,"abstract":"Much research in robotic artificial intelligence (AI) and Artificial Life has focused on autonomous agents as an embodied and situated approach to AI. Such systems are commonly viewed as overcoming many of the philosophical problems associated with traditional computationalist AI and cognitive science, such as the grounding problem (Harnad) or the lack of intentionality (Searle), because they have the physical and sensorimotor grounding that traditional AI was argued to lack. Robot lawn mowers and self-driving cars, for example, more or less reliably avoid obstacles, approach charging stations, and so on—and therefore might be considered to have some form of artificial intentionality or intentional directedness. It should be noted, though, that the fact that robots share physical environments with people does not necessarily mean that they are situated in the same perceptual and social world as humans. For people encountering socially interactive systems, such as social robots or automated vehicles, this poses the nontrivial challenge to interpret them as intentional agents to understand and anticipate their behavior but also to keep in mind that the intentionality of artificial bodies is fundamentally different from their natural counterparts. This requires, on one hand, a “suspension of disbelief ” but, on the other hand, also a capacity for the “suspension of belief.” This dual nature of (attributed) artificial intentionality has been addressed only rather superficially in embodied AI and social robotics research. It is therefore argued that Bourgine and Varela’s notion of Artificial Life as the practice of autonomous systems needs to be complemented with a practice of socially interactive autonomous systems, guided by a better understanding of the differences between artificial and biological bodies and their implications in the context of social interactions between people and technology.","PeriodicalId":55574,"journal":{"name":"Artificial Life","volume":"29 3","pages":"351-366"},"PeriodicalIF":2.6,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10116347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The proposal for this special issue was inspired by the main themes around which we organize a series of satellite workshops at Artificial Life conferences (including some of the latest European Conferences on Artificial Life) under the title “SB-AI: What Can Synthetic Biology (SB) Offer to Artificial Intelligence (AI)?” The workshop themes are part of a larger scenario in which we are interested and that we intend to develop. This scenario includes the entire taxonomy of new research frontiers generated within AI, based on the construction and experimental exploration of software, hardware, and wetware systems.
{"title":"Biology in AI: New Frontiers in Hardware, Software, and Wetware Modeling of Cognition","authors":"Luisa Damiano;Pasquale Stano","doi":"10.1162/artl_e_00412","DOIUrl":"10.1162/artl_e_00412","url":null,"abstract":"The proposal for this special issue was inspired by the main themes around which we organize a series of satellite workshops at Artificial Life conferences (including some of the latest European Conferences on Artificial Life), the title of which is “SB-AI: What can Synthetic Biology (SB) offer to Artificial Intelligence (AI)?” The workshop themes are part of a larger scenario in which we are interested and which we intend to develop. This scenario includes the entire taxonomy of new research frontiers generated within AI, based on the construction and experimental exploration of software, hardware, wetware","PeriodicalId":55574,"journal":{"name":"Artificial Life","volume":"29 3","pages":"289-292"},"PeriodicalIF":2.6,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10472826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collectiveness is an important property of many systems, both natural and artificial. By exploiting a large number of individuals, it is often possible to produce effects that go far beyond the capabilities of the smartest individuals, or even to produce intelligent collective behavior out of not-so-intelligent individuals. Indeed, collective intelligence, namely, the capability of a group to act collectively in a seemingly intelligent way, is increasingly often a design goal of engineered computational systems, motivated by recent technoscientific trends such as the Internet of Things, swarm robotics, and crowd computing, to name only a few. For several years, the collective intelligence observed in natural and artificial systems has served as a source of inspiration for engineering ideas, models, and mechanisms. Today, artificial and computational collective intelligence are recognized research topics, spanning various techniques, kinds of target systems, and application domains. However, there is still a lot of fragmentation in the research panorama of the topic within computer science, and the verticality of most communities and contributions makes it difficult to extract the core underlying ideas and frames of reference. The challenge is to identify, place in a common structure, and ultimately connect the different areas and methods addressing intelligent collectives. To address this gap, this article considers a set of broad scoping questions providing a map of collective intelligence research, mostly from the point of view of computer scientists and engineers. Accordingly, it covers preliminary notions, fundamental concepts, and the main research perspectives, identifying opportunities and challenges for researchers on artificial and computational collective intelligence engineering.
{"title":"Artificial Collective Intelligence Engineering: a Survey of Concepts and Perspectives","authors":"Roberto Casadei","doi":"10.48550/arXiv.2304.05147","DOIUrl":"https://doi.org/10.48550/arXiv.2304.05147","url":null,"abstract":"Collectiveness is an important property of many systems-both natural and artificial. By exploiting a large number of individuals, it is often possible to produce effects that go far beyond the capabilities of the smartest individuals or even to produce intelligent collective behavior out of not-so-intelligent individuals. Indeed, collective intelligence, namely, the capability of a group to act collectively in a seemingly intelligent way, is increasingly often a design goal of engineered computational systems-motivated by recent technoscientific trends like the Internet of Things, swarm robotics, and crowd computing, to name only a few. For several years, the collective intelligence observed in natural and artificial systems has served as a source of inspiration for engineering ideas, models, and mechanisms. Today, artificial and computational collective intelligence are recognized research topics, spanning various techniques, kinds of target systems, and application domains. However, there is still a lot of fragmentation in the research panorama of the topic within computer science, and the verticality of most communities and contributions makes it difficult to extract the core underlying ideas and frames of reference. The challenge is to identify, place in a common structure, and ultimately connect the different areas and methods addressing intelligent collectives. To address this gap, this article considers a set of broad scoping questions providing a map of collective intelligence research, mostly by the point of view of computer scientists and engineers. Accordingly, it covers preliminary notions, fundamental concepts, and the main research perspectives, identifying opportunities and challenges for researchers on artificial and computational collective intelligence engineering.","PeriodicalId":55574,"journal":{"name":"Artificial Life","volume":"1 1","pages":"1-35"},"PeriodicalIF":2.6,"publicationDate":"2023-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48045699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The article presents the DigiHive system, an artificial chemistry simulation environment, and the results of preliminary simulation experiments leading toward building a self-replicating system resembling a living cell. The two-dimensional environment is populated by particles that can bond together and form complexes of particles. Some complexes can recognize and change the structures of surrounding complexes, where the functions they perform are encoded in their structure in the form of Prolog-like language expressions. After introducing the DigiHive environment, we present the results of simulations of two fundamental parts of a self-replicating system: the operation of a universal constructor and copying machine, and the growth and division of a cell-like wall. At the end of the article, we present the limitations and difficulties that arise in modeling with the DigiHive environment, along with a discussion of possible future experiments and applications of this type of modeling.
{"title":"DigiHive: Artificial Chemistry Environment for Modeling of Self-Organization Phenomena","authors":"Rafał Sienkiewicz;Wojciech Jędruch","doi":"10.1162/artl_a_00398","DOIUrl":"10.1162/artl_a_00398","url":null,"abstract":"The article presents the DigiHive system, an artificial chemistry simulation environment, and the results of preliminary simulation experiments leading toward building a self-replicating system resembling a living cell. The two-dimensional environment is populated by particles that can bond together and form complexes of particles. Some complexes can recognize and change the structures of surrounding complexes, where the functions they perform are encoded in their structure in the form of Prolog-like language expressions. After introducing the DigiHive environment, we present the results of simulations of two fundamental parts of a self-replicating system, the work of a universal constructor and a copying machine, and the growth and division of a cell-like wall. At the end of the article, the limitations and arising difficulties of modeling in the DigiHive environment are presented, along with a discussion of possible future experiments and applications of this type of modeling.","PeriodicalId":55574,"journal":{"name":"Artificial Life","volume":"29 2","pages":"235-260"},"PeriodicalIF":2.6,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9666202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cooperative survival “games” are situations in which, during a sequence of catastrophic events, no one survives unless everyone survives. Such situations can be further exacerbated by uncertainty over the timing and scale of the recurring catastrophes, while the resource management required for survival may depend on several interdependent subgames of resource extraction, distribution, and investment with conflicting priorities and preferences between survivors. In social systems, self-organization has been a critical feature of sustainability and survival; therefore, in this article we use the lens of artificial societies to investigate the effectiveness of socially constructed self-organization for cooperative survival games. We imagine a cooperative survival scenario with four parameters: scale, that is, n in an n-player game; uncertainty, with regard to the occurrence and magnitude of each catastrophe; complexity, concerning the number of subgames to be simultaneously “solved”; and opportunity, with respect to the number of self-organizing mechanisms available to the players. We design and implement a multiagent system for a situation composed of three entangled subgames—a stag hunt game, a common-pool resource management problem, and a collective risk dilemma—and specify algorithms for three self-organizing mechanisms for governance, trading, and forecasting. A series of experiments shows, as perhaps expected, a threshold for a critical mass of survivors and also that increasing dimensions of uncertainty and complexity require increasing opportunity for self-organization. Perhaps less expected are the ways in which self-organizing mechanisms may interact in pernicious but also self-reinforcing ways, highlighting the need for some reflection as a process in collective self-governance for cooperative survival.
{"title":"Interdependent Self-Organizing Mechanisms for Cooperative Survival","authors":"Matthew Scott;Jeremy Pitt","doi":"10.1162/artl_a_00403","DOIUrl":"10.1162/artl_a_00403","url":null,"abstract":"Cooperative survival “games” are situations in which, during a sequence of catastrophic events, no one survives unless everyone survives. Such situations can be further exacerbated by uncertainty over the timing and scale of the recurring catastrophes, while the resource management required for survival may depend on several interdependent subgames of resource extraction, distribution, and investment with conflicting priorities and preferences between survivors. In social systems, self-organization has been a critical feature of sustainability and survival; therefore, in this article we use the lens of artificial societies to investigate the effectiveness of socially constructed self-organization for cooperative survival games. We imagine a cooperative survival scenario with four parameters: scale, that is, n in an n-player game; uncertainty, with regard to the occurrence and magnitude of each catastrophe; complexity, concerning the number of subgames to be simultaneously “solved”; and opportunity, with respect to the number of self-organizing mechanisms available to the players. We design and implement a multiagent system for a situation composed of three entangled subgames—a stag hunt game, a common-pool resource management problem, and a collective risk dilemma—and specify algorithms for three self-organizing mechanisms for governance, trading, and forecasting. A series of experiments shows, as perhaps expected, a threshold for a critical mass of survivors and also that increasing dimensions of uncertainty and complexity require increasing opportunity for self-organization. Perhaps less expected are the ways in which self-organizing mechanisms may interact in pernicious but also self-reinforcing ways, highlighting the need for some reflection as a process in collective self-governance for cooperative survival.","PeriodicalId":55574,"journal":{"name":"Artificial Life","volume":"29 2","pages":"198-234"},"PeriodicalIF":2.6,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9666683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this ansatz we consider theoretical constructions of RNA polymers into automata, a form of computational structure. The bases for transitions in our automata are plausible RNA enzymes that may perform ligation or cleavage. Limited to these operations, we construct RNA automata of increasing complexity, from the finite automaton (RNA-FA) to the Turing-machine-equivalent two-stack pushdown automaton (RNA-2PDA) and the universal RNA-UPDA. For each automaton we show how the enzymatic reactions match the logical operations of the RNA automaton. A critical theme of the ansatz is the self-reference in RNA automata configurations that exploits the program-data duality but results in computational undecidability. We describe how computational undecidability is exemplified in the self-referential Liar paradox that places a boundary on a logical system and, by construction, on any RNA automaton. We argue that an expansion of the evolutionary space for RNA-2PDA automata can be interpreted as a hierarchical resolution of computational undecidability by a meta-system (akin to Turing’s oracle), in a continual process analogous to Turing’s ordinal logics and Post’s extensible recursively generated logics. On this basis, we put forward the hypothesis that the resolution of undecidable configurations in RNA automata represents a novelty-generation mechanism and propose avenues for future investigation of biological automata.
{"title":"An Ansatz for Computational Undecidability in RNA Automata","authors":"Adam J. Svahn;Mikhail Prokopenko","doi":"10.1162/artl_a_00370","DOIUrl":"10.1162/artl_a_00370","url":null,"abstract":"In this ansatz we consider theoretical constructions of RNA polymers into automata, a form of computational structure. The bases for transitions in our automata are plausible RNA enzymes that may perform ligation or cleavage. Limited to these operations, we construct RNA automata of increasing complexity; from the Finite Automaton (RNA-FA) to the Turing machine equivalent 2-stack PDA (RNA-2PDA) and the universal RNA-UPDA. For each automaton we show how the enzymatic reactions match the logical operations of the RNA automaton. A critical theme of the ansatz is the self-reference in RNA automata configurations that exploits the program-data duality but results in computational undecidability. We describe how computational undecidability is exemplified in the self-referential Liar paradox that places a boundary on a logical system, and by construction, any RNA automata. We argue that an expansion of the evolutionary space for RNA-2PDA automata can be interpreted as a hierarchical resolution of computational undecidability by a meta-system (akin to Turing’s oracle), in a continual process analogous to Turing’s ordinal logics and Post’s extensible recursively generated logics. On this basis, we put forward the hypothesis that the resolution of undecidable configurations in RNA automata represent a novelty generation mechanism and propose avenues for future investigation of biological automata.","PeriodicalId":55574,"journal":{"name":"Artificial Life","volume":"29 2","pages":"261-288"},"PeriodicalIF":2.6,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10030350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Even though concepts similar to emergence have been used since antiquity, we lack an agreed definition. However, emergence has been identified as one of the main features of complex systems. Most would agree on the statement “life is complex.” Thus understanding emergence and complexity should benefit the study of living systems. It can be said that life emerges from the interactions of complex molecules. But how useful is this to understanding living systems? Artificial Life (ALife) has been developed in recent decades to study life using a synthetic approach: Build it to understand it. ALife systems are not so complex, be they soft (simulations), hard (robots), or wet (protocells). Thus, we can aim first at understanding emergence in ALife and then use this knowledge in biology. I argue that to understand emergence and life, it becomes useful to use information as a framework. In a general sense, I define emergence as information that is not present at one scale but present at another. This perspective avoids problems of studying emergence from a materialist framework and can also be useful in the study of self-organization and complexity.
{"title":"Emergence in Artificial Life","authors":"Carlos Gershenson","doi":"10.1162/artl_a_00397","DOIUrl":"10.1162/artl_a_00397","url":null,"abstract":"Even when concepts similar to emergence have been used since antiquity, we lack an agreed definition. However, emergence has been identified as one of the main features of complex systems. Most would agree on the statement “life is complex.” Thus understanding emergence and complexity should benefit the study of living systems. It can be said that life emerges from the interactions of complex molecules. But how useful is this to understanding living systems? Artificial Life (ALife) has been developed in recent decades to study life using a synthetic approach: Build it to understand it. ALife systems are not so complex, be they soft (simulations), hard (robots), or wet(protocells). Thus, we can aim at first understanding emergence in ALife, to then use this knowledge in biology. I argue that to understand emergence and life, it becomes useful to use information as a framework. In a general sense, I define emergence as information that is not present at one scale but present at another. This perspective avoids problems of studying emergence from a materialist framework and can also be useful in the study of self-organization and complexity.","PeriodicalId":55574,"journal":{"name":"Artificial Life","volume":"29 2","pages":"153-167"},"PeriodicalIF":2.6,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9666198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}