Declarative Consciousness for Reconstruction
Leslie G. Seymour
Pub Date: 2013-12-01 DOI: 10.2478/jagi-2013-0007
Abstract Existing information technology tools are harnessed and integrated to provide a digital specification of the consciousness of individual persons. An incremental compilation technology is proposed to transform LifeLog-derived persona specifications into a Canonical representation of the neocortex architecture of the human brain. The primary purpose is to gain an understanding of the semantic allocation of neocortex capacity. Novel neocortex content-allocation simulators with browsers are proposed for experimenting with various approaches to relieving the brain from overload conditions. An IT model of the neocortex is maintained and updated each time new stimuli are received from the LifeLog data stream, new information is gained from brain-signal measurements, or new functional dependencies are discovered between the signals a live persona consumes and produces.

Is Brain Emulation Dangerous?
P. Eckersley, A. Sandberg
Pub Date: 2013-12-01 DOI: 10.2478/jagi-2013-0011
Abstract Brain emulation is a hypothetical but extremely transformative technology which has a non-zero chance of appearing during the next century. This paper investigates whether such a technology would also have any predictable characteristics that give it a chance of being catastrophically dangerous, and whether there are any policy levers which might be used to make it safer. We conclude that the riskiness of brain emulation probably depends on the order of the preceding research trajectory. Broadly speaking, it appears safer for brain emulation to happen sooner, because slower CPUs would make the technology's impact more gradual. It may also be safer if brains are scanned before they are fully understood from a neuroscience perspective, thereby increasing the initial population of emulations, although this prediction is weaker and more scenario-dependent. The risks posed by brain emulation also seem strongly connected to questions about the balance of power between attackers and defenders in computer security contests. If economic property rights in CPU cycles are essentially enforceable, emulation appears to be comparatively safe; if CPU cycles are ultimately easy to steal, the appearance of brain emulation is more likely to be a destabilizing development for human geopolitics. Furthermore, if the computers used to run emulations can be kept secure, then it appears that making brain emulation technologies "open" would make them safer. If, however, computer insecurity is deep and unavoidable, openness may actually be more dangerous. We point to some arguments that suggest the former may be true, tentatively implying that it would be good policy to work towards brain emulation using open scientific methodology and free/open-source software codebases.

The Outline of Personhood Law Regarding Artificial Intelligences and Emulated Human Entities
Kamil Muzyka
Pub Date: 2013-12-01 DOI: 10.2478/jagi-2013-0010
Abstract On the verge of technological breakthroughs that redefine and revolutionize our understanding of intelligence, cognition, and personhood, especially with regard to artificial intelligences and mind uploads, one must consider the legal implications of granting personhood rights to artificial intelligences or emulated human entities.

Conceptual Commitments of the LIDA Model of Cognition
S. Franklin, Steve Strain, R. McCall, B. Baars
Pub Date: 2013-06-01 DOI: 10.2478/jagi-2013-0002
Abstract Significant debate on fundamental issues remains in the subfields of cognitive science, including perception, memory, attention, action selection, learning, and others. Psychology, neuroscience, and artificial intelligence each contribute alternative and sometimes conflicting perspectives on the supervening problem of artificial general intelligence (AGI). Current efforts toward a broad-based, systems-level model of minds cannot await theoretical convergence in each of the relevant subfields. Such work therefore requires the formulation of tentative hypotheses, based on current knowledge, that serve to connect cognitive functions into a theoretical framework for the study of the mind. We term such hypotheses "conceptual commitments" and describe the hypotheses underlying one such model, the Learning Intelligent Distribution Agent (LIDA) Model. Our intention is to initiate a discussion among AGI researchers about which conceptual commitments are essential, or particularly useful, toward creating AGI agents.

Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments
Peter Lane, F. Gobet
Pub Date: 2013-03-01 DOI: 10.2478/jagi-2013-0001
Abstract Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges, as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the 'speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high-quality models, adapted to provide a good fit to all available data.

A Measure of Real-Time Intelligence
Vaibhav Gavane
Pub Date: 2013-03-01 DOI: 10.2478/jagi-2013-0003
Abstract We propose a new measure of intelligence for general reinforcement learning agents, based on the notion that an agent's environment can change at any step of execution of the agent. That is, an agent is considered to be interacting with its environment in real-time. In this sense, the resulting intelligence measure is more general than the universal intelligence measure (Legg and Hutter, 2007) and the anytime universal intelligence test (Hernández-Orallo and Dowe, 2010). A major advantage of the measure is that an agent's computational complexity is factored into the measure in a natural manner. We show that there exist agents with intelligence arbitrarily close to the theoretical maximum, and that the intelligence of agents depends on their parallel processing capability. We thus believe that the measure can provide a better evaluation of agents and guidance for building practical agents with high intelligence.

Reasoning with Computer Code: a new Mathematical Logic
S. Pissanetzky
Pub Date: 2013-01-04 DOI: 10.2478/v10229-011-0020-6
Abstract A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code that can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature being at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean. We argue that self-programming and the fundamental question about the origin of algorithms are inextricably linked. We discuss previously published, fully automated applications to self-programming, and present a virtual machine that supports the logic, an algorithm that allows the virtual machine to be simulated on a digital computer, and a fully explained neural network implementation of the algorithm.

The AGINAO Self-Programming Engine
W. Skaba
Pub Date: 2013-01-04 DOI: 10.2478/v10229-011-0018-0
Abstract AGINAO is a project to create a human-level artificial general intelligence system (HL AGI) embodied in the Aldebaran Robotics NAO humanoid robot. The dynamical and open-ended cognitive engine of the robot is an embedded, multi-threaded control program that is self-crafted rather than hand-crafted, and is executed on a simulated Universal Turing Machine (UTM). The actual structure of the cognitive engine emerges from placing the robot in a natural preschool-like environment and running a core start-up system that executes self-programming of the cognitive layer on top of the core layer. Data from the robot's sensory devices supply the training samples for the machine learning methods, while the commands sent to actuators enable testing hypotheses and getting feedback. The individual self-created subroutines are intended to reflect the patterns and concepts of the real world, while the overall program structure reflects the spatial and temporal hierarchy of the world's dependencies. This paper focuses on the details of the self-programming approach, limiting the discussion of the applied cognitive architecture to a necessary minimum.

Editorial: Approaches and Assumptions of Self-Programming in Achieving Artificial General Intelligence
K. Thórisson, Eric Nivel, R. Sanz, Pei Wang
Pub Date: 2013-01-04 DOI: 10.2478/v10229-011-0017-1
Intuitively speaking, "self-programming" means the ability of a computer system to program its own actions. This notion is clearly related to Artificial Intelligence and has been used by many researchers. Like many other high-level concepts, however, scrutiny shows that the term can be interpreted in several different ways. To make the discussion concrete and meaningful, we introduce here a working definition of self-programming. In this definition we increase its concreteness while trying to keep the intuitive meaning of the concept. The activities of a computer system are usually considered to consist of atomic actions (which may also be called instructions, operations, or behaviors in different contexts). At any given moment the system's primitive actions form a finite and constant set A, meaning that they are distinct from each other and can be enumerated. An action may take some input arguments and produce some output arguments. The system can execute each of its actions,

Solving a Problem With or Without a Program
Pei Wang
Pub Date: 2013-01-04 DOI: 10.2478/v10229-011-0021-5
Abstract To solve a problem, an ordinary computer system executes an existing program. When no such program is available, an AGI system may still be able to solve a concrete problem instance. This paper introduces a new approach to doing so in a reasoning system that adapts to its environment and works with insufficient knowledge and resources. Related approaches are compared, and several conceptual issues are analyzed. It is concluded that an AGI system can solve a problem with or without a problem-specific program, and therefore can have human-like creativity and flexibility.