Abstract We investigate the relationship between lexical spaces and contextually-defined conceptual spaces, offering applications to creative concept discovery. We define a computational method for discovering members of concepts based on semantic spaces: starting with a standard distributional model derived from corpus co-occurrence statistics, we dynamically select characteristic dimensions associated with seed terms, and thus a subspace of terms defining the related concept. This approach performs as well as, and in some cases better than, leading distributional semantic models on a WordNet-based concept discovery task, while also providing a model of concepts as convex regions within a space with interpretable dimensions. In particular, it performs well on more specific, contextualized concepts; to investigate this further, we move beyond WordNet to a set of human empirical studies in which we compare output against human responses on a membership task for novel concepts. Finally, a separate panel of judges rates both model output and human responses, showing similar ratings in many cases, along with commonalities and divergences that reveal interesting issues for computational concept discovery.
{"title":"From Distributional Semantics to Conceptual Spaces: A Novel Computational Method for Concept Creation","authors":"Stephen McGregor, Kat R. Agres, Matthew Purver, Geraint A. Wiggins","doi":"10.1515/jagi-2015-0004","DOIUrl":"https://doi.org/10.1515/jagi-2015-0004","url":null,"abstract":"Abstract We investigate the relationship between lexical spaces and contextually-defined conceptual spaces, offering applications to creative concept discovery. We define a computational method for discovering members of concepts based on semantic spaces: starting with a standard distributional model derived from corpus co-occurrence statistics, we dynamically select characteristic dimensions associated with seed terms, and thus a subspace of terms defining the related concept. This approach performs as well as, and in some cases better than, leading distributional semantic models on a WordNet-based concept discovery task, while also providing a model of concepts as convex regions within a space with interpretable dimensions. In particular, it performs well on more specific, contextualized concepts; to investigate this we therefore move beyond WordNet to a set of human empirical studies, in which we compare output against human responses on a membership task for novel concepts. Finally, a separate panel of judges rate both model output and human responses, showing similar ratings in many cases, and some commonalities and divergences which reveal interesting issues for computational concept discovery.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116990520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Over the last decade, computational creativity as a field of scientific investigation and computational systems engineering has seen growing popularity. Still, the levels of development are diverging between projects aiming at systems for artistic production or performance and endeavours addressing creative problem-solving or models of creative cognitive capacities. While the former have already seen several great successes, the latter still remain in their infancy. This volume collects reports on work that tries to close this growing gap.
{"title":"Editorial: Computational Creativity, Concept Invention, and General Intelligence","authors":"Tarek R. Besold, Kai-Uwe Kühnberger, T. Veale","doi":"10.1515/jagi-2015-0001","DOIUrl":"https://doi.org/10.1515/jagi-2015-0001","url":null,"abstract":"Abstract Over the last decade, computational creativity as a field of scientific investigation and computational systems engineering has seen growing popularity. Still, the levels of development between projects aiming at systems for artistic production or performance and endeavours addressing creative problem-solving or models of creative cognitive capacities is diverging. While the former have already seen several great successes, the latter still remain in their infancy. This volume collects reports on work trying to close the accrued gap.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116532326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In this paper we offer a model, drawing inspiration from human cognition and based upon the pipeline developed for IBM's Watson, which solves clues in a type of word puzzle called syllacrostics. We briefly discuss where this work sits within the broader field of artificial general intelligence (AGI) and how the process and model might be applied to other types of word puzzles. We present an overview of a system that has been developed to solve syllacrostics.
{"title":"A Play on Words: Using Cognitive Computing as a Basis for AI Solvers in Word Puzzles","authors":"Thomas Manzini, Simon Ellis, J. Hendler","doi":"10.1515/jagi-2015-0006","DOIUrl":"https://doi.org/10.1515/jagi-2015-0006","url":null,"abstract":"Abstract In this paper we offer a model, drawing inspiration from human cognition and based upon the pipeline developed for IBM’s Watson, which solves clues in a type of word puzzle called syllacrostics. We briefly discuss its situation with respect to the greater field of artificial general intelligence (AGI) and how this process and model might be applied to other types of word puzzles. We present an overview of a system that has been developed to solve syllacrostics.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114332003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract This paper outlines a logical representation of certain aspects of the process of mathematical proving that are important from the point of view of Artificial Intelligence. Our starting point is the concept of proof-event, or proving, introduced by Goguen, instead of the traditional concept of mathematical proof. The reason behind this choice is that, in contrast to the traditional static concept of mathematical proof, proof-events are understood as processes, which enables their use in Artificial Intelligence in contexts in which problem-solving procedures and strategies are studied. We represent proof-events as problem-centered spatio-temporal processes by means of the language of the calculus of events, which adequately captures certain temporal aspects of proof-events (i.e. that they have a history and form sequences of proof-events evolving in time). Further, we suggest a "loose" semantics for proof-events by means of Kolmogorov's calculus of problems. Finally, we present the intended interpretations of our logical model, drawn from the fields of automated theorem proving and Web-based collective proving.
{"title":"On Mathematical Proving","authors":"P. Stefaneas, Ioannis M. Vandoulakis","doi":"10.1515/jagi-2015-0007","DOIUrl":"https://doi.org/10.1515/jagi-2015-0007","url":null,"abstract":"Abstract This paper outlines a logical representation of certain aspects of the process of mathematical proving that are important from the point of view of Artificial Intelligence. Our starting-point is the concept of proof-event or proving, introduced by Goguen, instead of the traditional concept of mathematical proof. The reason behind this choice is that in contrast to the traditional static concept of mathematical proof, proof-events are understood as processes, which enables their use in Artificial Intelligence in such contexts, in which problem-solving procedures and strategies are studied. We represent proof-events as problem-centered spatio-temporal processes by means of the language of the calculus of events, which captures adequately certain temporal aspects of proof-events (i.e. that they have history and form sequences of proof-events evolving in time). Further, we suggest a “loose” semantics for the proof-events, by means of Kolmogorov’s calculus of problems. Finally, we expose the intented interpretations for our logical model from the fields of automated theorem-proving and Web-based collective proving.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132731925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract An agent achieves its goals by interacting with its environment, cyclically choosing and executing suitable actions. An action execution process is a reasonable and critical part of an entire cognitive architecture, because the process of generating executable motor commands is not only driven by low-level environmental information, but is also initiated and affected by the agent’s high-level mental processes. This review focuses on cognitive models of action, or more specifically, of the action execution process, as implemented in a set of popular cognitive architectures. We examine the representations and procedures inside the action execution process, as well as the cooperation between action execution and other high-level cognitive modules. We finally conclude with some general observations regarding the nature of action execution.
{"title":"The Action Execution Process Implemented in Different Cognitive Architectures: A Review","authors":"Daqi Dong, S. Franklin","doi":"10.2478/jagi-2014-0002","DOIUrl":"https://doi.org/10.2478/jagi-2014-0002","url":null,"abstract":"Abstract An agent achieves its goals by interacting with its environment, cyclically choosing and executing suitable actions. An action execution process is a reasonable and critical part of an entire cognitive architecture, because the process of generating executable motor commands is not only driven by low-level environmental information, but is also initiated and affected by the agent’s high-level mental processes. This review focuses on cognitive models of action, or more specifically, of the action execution process, as implemented in a set of popular cognitive architectures. We examine the representations and procedures inside the action execution process, as well as the cooperation between action execution and other high-level cognitive modules. We finally conclude with some general observations regarding the nature of action execution.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"147 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117295825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Whole brain emulation (WBE) is the possible replication of human brain dynamics that reproduces human behavior. If created, WBE would have a significant impact on human society, and forecasts frequently place WBE as arriving within a century. However, WBE would be a complex technology with a complex network of prerequisite technologies. Most forecasts consider only a fraction of this technology network. The unconsidered portions of the network may contain bottlenecks, which are slowly-developing technologies that would impede the development of WBE. Here I describe how bottlenecks in the network can be non-obvious, and the merits of identifying them early. I show that bottlenecks may be predicted even with noisy forecasts. Accurate forecasts of WBE development must incorporate potential bottlenecks, which can be found using detailed descriptions of the WBE technology network. Bottleneck identification can also increase the impact of WBE researchers by directing effort to those technologies that will immediately affect the timeline of WBE development.
{"title":"Will We Hit a Wall? Forecasting Bottlenecks to Whole Brain Emulation Development","authors":"J. Alstott","doi":"10.2478/jagi-2013-0009","DOIUrl":"https://doi.org/10.2478/jagi-2013-0009","url":null,"abstract":"Abstract Whole brain emulation (WBE) is the possible replication of human brain dynamics that reproduces human behavior. If created, WBE would have significant impact on human society, and forecasts frequently place WBE as arriving within a century. However, WBE would be a complex technology with a complex network of prerequisite technologies. Most forecasts only consider a fraction of this technology network. The unconsidered portions of the network may contain bottlenecks, which are slowly-developing technologies that would impede the development of WBE. Here I describe how bottlenecks in the network can be non-obvious, and the merits of identifying them early. I show that bottlenecks may be predicted even with noisy forecasts. Accurate forecasts of WBE development must incorporate potential bottlenecks, which can be found using detailed descriptions of the WBE technology network. Bottlenecks identification can also increase the impact of WBE researchers by directing effort to those technologies that will immediately affect the timeline of WBE development","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122443495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Awareness of the possible existence of a yet-unknown principle of Physics that explains cognition and intelligence exists in several projects of emulation, simulation, and replication of the human brain currently under way. Brain simulation projects define their success partly in terms of the emergence of non-explicitly programmed biophysical signals such as self-oscillation and spreading cortical waves. We propose that a recently discovered theory of Physics known as Causal Mathematical Logic (CML), which links intelligence with causality and entropy and explains intelligent behavior from first principles, is the missing link. We further propose the theory as a roadway to understanding more complex biophysical signals, and to explaining the set of intelligence principles. The new theory applies to information considered as an entity by itself. The theory proposes that any device that processes information and exhibits intelligence must satisfy certain theoretical conditions irrespective of the substrate where it is being processed. The substrate can be the human brain, a part of it, a worm's brain, a motor protein that self-locomotes in response to its environment, or a computer. Here, we propose to extend the causal theory to systems in Neuroscience, because of its ability to model complex systems without heuristic approximations, and to predict emerging signals of intelligence directly from the models. The theory predicts the existence of a large number of observables (or "signals"), all of which emerge and can be directly and mathematically calculated from non-explicitly programmed detailed causal models. This approach aims at a universal and predictive language for Neuroscience and AGI based on causality and entropy, detailed enough to describe the finest structures and signals of the brain, yet general enough to accommodate the versatility and wholeness of intelligence. Experiments focus on a black box, one of the devices described above, of which both the input and the output are precisely known, but not the internal implementation. The same input is separately supplied to a causal virtual machine, and the calculated output is compared with the measured output. The virtual machine, described in a previous paper, is a computer implementation of CML, fixed for all experiments and unrelated to the device in the black box. If the two outputs are equivalent, then the experiment has quantitatively succeeded and conclusions can be drawn regarding details of the internal implementation of the device. Several small black-box experiments were successfully performed and demonstrated the emergence of non-explicitly programmed cognitive function in each case.
{"title":"Black-box Brain Experiments, Causal Mathematical Logic, and the Thermodynamics of Intelligence","authors":"S. Pissanetzky, Felix Lanzalaco","doi":"10.2478/jagi-2013-0005","DOIUrl":"https://doi.org/10.2478/jagi-2013-0005","url":null,"abstract":"Abstract Awareness of the possible existence of a yet-unknown principle of Physics that explains cognition and intelligence does exist in several projects of emulation, simulation, and replication of the human brain currently under way. Brain simulation projects define their success partly in terms of the emergence of non-explicitly programmed biophysical signals such as self-oscillation and spreading cortical waves. We propose that a recently discovered theory of Physics known as Causal Mathematical Logic (CML) that links intelligence with causality and entropy and explains intelligent behavior from first principles, is the missing link. We further propose the theory as a roadway to understanding more complex biophysical signals, and to explain the set of intelligence principles. The new theory applies to information considered as an entity by itself. The theory proposes that any device that processes information and exhibits intelligence must satisfy certain theoretical conditions irrespective of the substrate where it is being processed. The substrate can be the human brain, a part of it, a worm’s brain, a motor protein that self-locomotes in response to its environment, a computer. Here, we propose to extend the causal theory to systems in Neuroscience, because of its ability to model complex systems without heuristic approximations, and to predict emerging signals of intelligence directly from the models. The theory predicts the existence of a large number of observables (or “signals”), all of which emerge and can be directly and mathematically calculated from non-explicitly programmed detailed causal models. This approach is aiming for a universal and predictive language for Neuroscience and AGI based on causality and entropy, detailed enough to describe the finest structures and signals of the brain, yet general enough to accommodate the versatility and wholeness of intelligence. Experiments are focused on a black-box as one of the devices described above of which both the input and the output are precisely known, but not the internal implementation. The same input is separately supplied to a causal virtual machine, and the calculated output is compared with the measured output. The virtual machine, described in a previous paper, is a computer implementation of CML, fixed for all experiments and unrelated to the device in the black box. If the two outputs are equivalent, then the experiment has quantitatively succeeded and conclusions can be drawn regarding details of the internal implementation of the device. 
Several small black-box experiments were successfully performed and demonstrated the emergence of non-explicitly programmed cognitive function in each case","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115598888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
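The experimental protocol described, identical input to the black box and to the fixed virtual machine, with outputs compared for equivalence, has roughly the following shape (a sketch under strong assumptions: both systems here are trivial stand-ins, and nothing below implements CML itself):

```python
def black_box(sequence):
    """Stand-in for the device under test: only its input/output behaviour
    is observable. Sorting is used here purely as a placeholder behaviour."""
    return sorted(sequence)

def causal_virtual_machine(sequence):
    """Stand-in for the fixed virtual machine's calculated output on the
    same input. A real run would derive this from a causal model, not
    from a hand-written routine."""
    result = list(sequence)
    # Insertion sort as an explicit, step-by-step reference computation.
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

def experiment(inputs):
    """The protocol from the abstract: feed identical inputs to both
    systems and check whether the outputs are equivalent."""
    return all(black_box(x) == causal_virtual_machine(x) for x in inputs)

print(experiment([[3, 1, 2], [5, 4], []]))  # True: the outputs agree
```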
Abstract Whole Brain Emulation (WBE), the theoretical technology of modeling a human brain in its entirety on a computer, with thoughts, feelings, memories, and skills intact, is a staple of science fiction. Recently, proponents of WBE have suggested that it will be realized in the next few decades. In this paper, we investigate the plausibility of WBE being developed in the next 50 years (by 2063). We identify four essential requisite technologies: scanning the brain, translating the scan into a model, running the model on a computer, and simulating an environment and body. Additionally, we consider the cultural and social effects of WBE. We find the two most uncertain factors for WBE's future to be the development of advanced minuscule probes that can amass neural data in vivo and the degree to which the culture surrounding WBE becomes cooperative or competitive. We identify four plausible scenarios from these uncertainties and suggest the most likely scenario to be one in which WBE is realized, and the technology is used for moderately cooperative ends.
{"title":"The Prospects of Whole Brain Emulation within the next Half- Century","authors":"Daniel Eth, Juan-Carlos Foust, Brandon Whale","doi":"10.2478/jagi-2013-0008","DOIUrl":"https://doi.org/10.2478/jagi-2013-0008","url":null,"abstract":"Abstract Whole Brain Emulation (WBE), the theoretical technology of modeling a human brain in its entirety on a computer-thoughts, feelings, memories, and skills intact-is a staple of science fiction. Recently, proponents of WBE have suggested that it will be realized in the next few decades. In this paper, we investigate the plausibility of WBE being developed in the next 50 years (by 2063). We identify four essential requisite technologies: scanning the brain, translating the scan into a model, running the model on a computer, and simulating an environment and body. Additionally, we consider the cultural and social effects of WBE. We find the two most uncertain factors for WBE’s future to be the development of advanced miniscule probes that can amass neural data in vivo and the degree to which the culture surrounding WBE becomes cooperative or competitive. We identify four plausible scenarios from these uncertainties and suggest the most likely scenario to be one in which WBE is realized, and the technology is used for moderately cooperative ends","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128876888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Whole brain emulation (WBE) is a systematic approach to large-scale neuroprostheses with the intent to replicate the functions of a specific mind in some other operating substrate. The engineering practice of system identification can be applied in a way that makes this big problem a feasible collection of connected smaller system identification problems to solve.

Whole brain emulation is an essential goal for neuroscience. Following Richard Feynman's famous 1988 Caltech chalkboard quote: "What I cannot create, I do not understand." To create or build a human mind we need models, a combination of building blocks with processes. When we explain something that is observed, e.g., mental functions and behaviors, we strive to make that predictable within constraints that satisfy our interests: we create boundaries, we measure within those well-defined outlines, and then we use those measurements to derive model processes enabling outcome prediction. Within the defined system outlines of our model, taking into account defined sets of signals, we mathematically describe interactions (which may be expressed in information-theoretic terms).

Every aspect of modern science relies on creating representations of things. In each case, we focus on the signals and the observables (or behavior) that interest us. Then, we try to interpret in terms of functions what the system processes are doing. Where brain functions are concerned, some cognitive prosthetic work, such as the pioneering efforts of the labs of Theodore W. Berger at the University of Southern California, has managed to carry out these steps and produced successful experimental results (Berger et al., 2012). Berger's team has developed and tested an experimental hippocampal neural prosthetic that is implemented on a bio-mimetic chip. A transfer function was identified and used to replicate the operational properties of biological neural circuitry in a region of the rat hippocampus known as CA3. In experiments, the prosthesis is able to reproduce the way in which input to the region is turned into output from that region. This method of developing neuroprostheses, with demonstrated success in rats, is presently being tested in primates (Marmarelis et al., 2013).
{"title":"Editorial: Whole Brain Emulation seeks to Implement a Mind and its General Intelligence through System Identification","authors":"R. Koene, Diana Deca","doi":"10.2478/jagi-2013-0012","DOIUrl":"https://doi.org/10.2478/jagi-2013-0012","url":null,"abstract":"Whole brain emulation (WBE) is a systematic approach to large-scale neuroprostheses with theintent to replicate the functions of a specific mind in some other operating substrate. The engineeringpractice of system identification can be applied in a way that makes this big problem a feasiblecollection of connected smaller system identification problems to solve.Whole brain emulation is an essential goal for neuroscience. Following Richard Feynman’sfamous 1988 Caltech chalkboard quote: “What I cannot create, I do not understand.” To create orbuild a human mind we need models, a combination of building blocks with processes. When weexplain something that is observed, e.g., mental functions and behaviors, we strive to make thatpredictable within constraints that satisfy our interests: We create boundaries, we measure withinthose well-defined outlines, and then we use those measurements to derive model processes enablingoutcome prediction. Within the defined system outlines of our model, taking into account definedsets of signals, we mathematically describe interactions (which may be expressed in informationtheoretic terms).Every aspect of modern science relies on creating representations of things. In each case, wefocus on the signals and the observables (or behavior) that interest us. Then, we try to interpret interms of functions what the system processes are doing. Where brain functions are concerned, somecognitive prosthetic work, such as the pioneering efforts of the labs of Theodore W. Berger at theUniversity of Southern California, has managed to carry out these steps and produced successfulexperimental results (Berger et al., 2012). Berger’s team has developed and tested an experimentalhippocampal neural prosthetic that is implemented on a bio-mimetic chip. A transfer functionwas identified and used to replicate the operational properties of biological neural circuitry in aregion of the rat hippocampus known as CA3. In experiments, the prosthesis is able to reproducethe way in which input to the region is turned into output from that region. This method ofdeveloping neuroprostheses, with demonstrated success in rats, is presently being tested in primates(Marmarelis et al., 2013).","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132867135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract A recent theory of physical information based on the fundamental principles of causality and thermodynamics has proposed that a large number of observable life and intelligence signals can be described in terms of Causal Mathematical Logic (CML), which is proposed to encode the natural principles of intelligence across any physical domain and substrate. We attempt to expound the current definition of CML, the "Action functional", as a theory in terms of its explanatory power for the current neuroscientific data used to measure the mammalian brain's "intelligence" processes at their most general biophysical level. Brain simulation projects define their success partly in terms of the emergence of "non-explicitly programmed" complex biophysical signals such as self-oscillation and spreading cortical waves. Here we propose to extend the causal theory to predict and guide the understanding of these more complex emergent "intelligence signals". To achieve this we review whether causal logic is consistent with, and can explain and predict, the function of complete perceptual processes associated with intelligence. Primarily, these are defined as the range of Event-Related Potentials (ERPs), which include their primary subcomponents: Event-Related Desynchronization (ERD) and Event-Related Synchronization (ERS). This approach aims at a universal and predictive logic for neurosimulation and AGI. The result of this investigation is a general "Information Engine" model derived from translation of the ERD and ERS. The CML algorithm, run in terms of action cost, predicts ERP signal contents and is consistent with the fundamental laws of thermodynamics. A working substrate-independent natural information logic would be a major asset. An information theory consistent with fundamental physics can be an AGI. It can also operate within genetic information space and provides a roadmap to understand the live biophysical operation of the phenotype.
{"title":"Causal Mathematical Logic as a guiding framework for the prediction of “Intelligence Signals” in brain simulations","authors":"Felix Lanzalaco, S. Pissanetzky","doi":"10.2478/jagi-2013-0006","DOIUrl":"https://doi.org/10.2478/jagi-2013-0006","url":null,"abstract":"Abstract A recent theory of physical information based on the fundamental principles of causality and thermodynamics has proposed that a large number of observable life and intelligence signals can be described in terms of the Causal Mathematical Logic (CML), which is proposed to encode the natural principles of intelligence across any physical domain and substrate. We attempt to expound the current definition of CML, the “Action functional” as a theory in terms of its ability to possess a superior explanatory power for the current neuroscientific data we use to measure the mammalian brains “intelligence” processes at its most general biophysical level. Brain simulation projects define their success partly in terms of the emergence of “non-explicitly programmed” complex biophysical signals such as self-oscillation and spreading cortical waves. Here we propose to extend the causal theory to predict and guide the understanding of these more complex emergent “intelligence Signals”. To achieve this we review whether causal logic is consistent with, can explain and predict the function of complete perceptual processes associated with intelligence. Primarily those are defined as the range of Event Related Potentials (ERP) which include their primary subcomponents; Event Related Desynchronization (ERD) and Event Related Synchronization (ERS). This approach is aiming for a universal and predictive logic for neurosimulation and AGi. The result of this investigation has produced a general “Information Engine” model from translation of the ERD and ERS. The CML algorithm run in terms of action cost predicts ERP signal contents and is consistent with the fundamental laws of thermodynamics. A working substrate independent natural information logic would be a major asset. An information theory consistent with fundamental physics can be an AGi. It can also operate within genetic information space and provides a roadmap to understand the live biophysical operation of the phenotype","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126724926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}