Cognitive Architectures and Autonomy: A Comparative Review
Pub Date: 2012-05-21 | DOI: 10.2478/v10229-011-0015-3
K. Thórisson, Helgi Helgason
Abstract One of the original goals of artificial intelligence (AI) research was to create machines with very general cognitive capabilities and a relatively high level of autonomy. It has taken the field longer than many had expected to achieve even a fraction of this goal; the community has focused on building specific, targeted cognitive processes in isolation, and as yet no system exists that integrates a broad range of capabilities or presents a general solution to autonomous acquisition of a large set of skills. Among the reasons for this are the highly limited machine learning and adaptation techniques available, and the inherent complexity of integrating numerous cognitive and learning capabilities in a coherent architecture. In this paper we review selected systems and architectures built expressly to address integrated skills. We highlight principles and features of these systems that seem promising for creating generally intelligent systems with some level of autonomy, and discuss them in the context of the development of future cognitive architectures. In our view, autonomy is a key property for any system to be considered generally intelligent; we use this concept as an organizing principle for comparing the reviewed systems. Features that remain largely unaddressed in present research, but seem nevertheless necessary for such efforts to succeed, are also discussed.
{"title":"Cognitive Architectures and Autonomy: A Comparative Review","authors":"K. Thórisson, Helgi Helgasson","doi":"10.2478/v10229-011-0015-3","DOIUrl":"https://doi.org/10.2478/v10229-011-0015-3","url":null,"abstract":"Abstract One of the original goals of artificial intelligence (AI) research was to create machines with very general cognitive capabilities and a relatively high level of autonomy. It has taken the field longer than many had expected to achieve even a fraction of this goal; the community has focused on building specific, targeted cognitive processes in isolation, and as of yet no system exists that integrates a broad range of capabilities or presents a general solution to autonomous acquisition of a large set of skills. Among the reasons for this are the highly limited machine learning and adaptation techniques available, and the inherent complexity of integrating numerous cognitive and learning capabilities in a coherent architecture. In this paper we review selected systems and architectures built expressly to address integrated skills. We highlight principles and features of these systems that seem promising for creating generally intelligent systems with some level of autonomy, and discuss them in the context of the development of future cognitive architectures. Autonomy is a key property for any system to be considered generally intelligent, in our view; we use this concept as an organizing principle for comparing the reviewed systems. Features that remain largely unaddressed in present research, but seem nevertheless necessary for such efforts to succeed, are also discussed.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128842084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Is Logic in the Mind or in the World? Why a Philosophical Question can Affect the Understanding of Intelligence
Pub Date: 2012-05-17 | DOI: 10.2478/v10229-011-0014-4
H. Sommer, Lothar Schreiber
Abstract Dreyfus' call ‘to make artificial intelligence (AI) more Heideggerian’ echoes Heidegger's affirmation that pure calculations produce no ‘intelligence’ (Dreyfus, 2007). But what exactly does AI need beyond mathematics? The question in the title prompts a reexamination of the basic principles of cognition in Husserl's Phenomenology. Using Husserl's Phenomenological Method, a formalization of these principles is presented that provides the principal idea of cognition and, as a consequence, a ‘natural logic’. Only in a second step is mathematics obtained from this natural logic by abstraction. The limitations of pure reasoning are demonstrated both for foundational questions (Hilbert's ‘finite Einstellung’) and for the task of solving practical problems. Principles are presented for the design of general intelligent systems that make use of this natural logic.
{"title":"Is Logic in the Mind or in the World? Why a Philosophical Question can Affect the Understanding of Intelligence","authors":"H. Sommer, Lothar Schreiber","doi":"10.2478/v10229-011-0014-4","DOIUrl":"https://doi.org/10.2478/v10229-011-0014-4","url":null,"abstract":"Abstract Dreyfus' call ‘to make artificial intelligence (AI) more Heideggerian‘ echoes Heidegger's affirmation that pure calculations produce no ‘intelligence’ (Dreyfus, 2007). But what exactly is it that AI needs more than mathematics? The question in the title gives rise to a reexamination of the basic principles of cognition in Husserl's Phenomenology. Using Husserl's Phenomenological Method, a formalization of these principles is presented that provides the principal idea of cognition, and as a consequence, a ‘natural logic’. Only in a second step, mathematics is obtained from this natural logic by abstraction. The limitations of pure reasoning are demonstrated for fundamental considerations (Hilbert's ‘finite Einstellung’) as well as for the task of solving practical problems. Principles will be presented for the design of general intelligent systems, which make use of a natural logic.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127395732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model-based Utility Functions
Pub Date: 2011-11-16 | DOI: 10.2478/v10229-011-0013-5
B. Hibbard
Abstract Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. In these approaches, an agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that these behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment, so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that, under some common assumptions, agents provided with the possibility of modifying their utility functions will not choose to do so.
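The two-step formulation is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of the idea, not code from the paper: the "model" is just a table of transition counts, and `spec` stands in for the paper's prior specifications matched against learned structure; all names are invented for illustration.

```python
# Hypothetical sketch of a two-step, model-based utility scheme.
# Names (infer_model, utility, spec) are illustrative, not from the paper.
from collections import Counter

def infer_model(history):
    """Step 1: infer a (toy) environment model from the interaction history.
    Here the 'model' is just empirical transition counts; a real agent
    would learn a far richer structure."""
    counts = Counter()
    for (obs, action, next_obs) in history:
        counts[(obs, action, next_obs)] += 1
    return counts

def utility(model, spec):
    """Step 2: compute utility from the learned model, not from raw
    observations. `spec` encodes prior assumptions about which modeled
    structures are valuable (e.g., states matching a target pattern)."""
    return sum(n for (obs, a, nxt), n in model.items() if spec(nxt))

# Usage: utility rewards modeled world-states, so merely distorting the
# agent's own observation channel does not raise utility (the self-delusion
# problem the paper targets).
history = [("s0", "a", "s1"), ("s1", "b", "s_goal"), ("s_goal", "a", "s_goal")]
model = infer_model(history)
print(utility(model, spec=lambda s: s == "s_goal"))  # -> 2
```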
{"title":"Model-based Utility Functions","authors":"B. Hibbard","doi":"10.2478/v10229-011-0013-5","DOIUrl":"https://doi.org/10.2478/v10229-011-0013-5","url":null,"abstract":"Abstract Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127733898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Cognitive Architectures, Model Comparison and AGI
Pub Date: 2010-12-01 | DOI: 10.2478/v10229-011-0006-4
C. Lebiere, Cleotilde González, Walter Warwick
Cognitive Science and Artificial Intelligence share compatible goals of understanding and possibly generating broadly intelligent behavior. In order to determine if progress is made, it is essential to be able to evaluate the behavior of complex computational models, especially those built on general cognitive architectures, and compare it to benchmarks of intelligent behavior such as human performance. Significant methodological challenges arise, however, when trying to extend approaches used to compare model and human performance from tightly controlled laboratory tasks to complex tasks involving more open-ended behavior. This paper describes a model comparison challenge built around a dynamic control task, the Dynamic Stocks and Flows. We present and discuss distinct approaches to evaluating performance and comparing models. Lessons drawn from this challenge are discussed in light of the challenge of using cognitive architectures to achieve Artificial General Intelligence.
{"title":"Editorial: Cognitive Architectures, Model Comparison and AGI","authors":"C. Lebiere, Cleotilde González, Walter Warwick","doi":"10.2478/v10229-011-0006-4","DOIUrl":"https://doi.org/10.2478/v10229-011-0006-4","url":null,"abstract":"Editorial: Cognitive Architectures, Model Comparison and AGI Cognitive Science and Artificial Intelligence share compatible goals of understanding and possibly generating broadly intelligent behavior. In order to determine if progress is made, it is essential to be able to evaluate the behavior of complex computational models, especially those built on general cognitive architectures, and compare it to benchmarks of intelligent behavior such as human performance. Significant methodological challenges arise, however, when trying to extend approaches used to compare model and human performance from tightly controlled laboratory tasks to complex tasks involving more open-ended behavior. This paper describes a model comparison challenge built around a dynamic control task, the Dynamic Stocks and Flows. We present and discuss distinct approaches to evaluating performance and comparing models. Lessons drawn from this challenge are discussed in light of the challenge of using cognitive architectures to achieve Artificial General Intelligence.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121673704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerating progress in Artificial General Intelligence: Choosing a benchmark for natural world interaction
Pub Date: 2010-12-01 | DOI: 10.2478/v10229-011-0005-5
B. Rohrer
Abstract Measuring progress in the field of Artificial General Intelligence (AGI) can be difficult without commonly accepted methods of evaluation. An AGI benchmark would allow evaluation and comparison of the many computational intelligence algorithms that have been developed. In this paper I propose that a benchmark for natural world interaction would possess seven key characteristics: fitness, breadth, specificity, low cost, simplicity, range, and task focus. I also outline two benchmark examples that meet most of these criteria. In the first, the direction task, a human coach directs a machine to perform a novel task in an unfamiliar environment. The direction task is extremely broad, but may be idealistic. In the second, the AGI battery, AGI candidates are evaluated on their performance across a collection of more specific tasks. The AGI battery is designed to be appropriate to the capabilities of currently existing systems. Both the direction task and the AGI battery would require further definition before implementation. The paper concludes with a description of a task that might be included in the AGI battery: the search and retrieve task.
{"title":"Accelerating progress in Artificial General Intelligence: Choosing a benchmark for natural world interaction","authors":"B. Rohrer","doi":"10.2478/v10229-011-0005-5","DOIUrl":"https://doi.org/10.2478/v10229-011-0005-5","url":null,"abstract":"Accelerating progress in Artificial General Intelligence: Choosing a benchmark for natural world interaction Measuring progress in the field of Artificial General Intelligence (AGI) can be difficult without commonly accepted methods of evaluation. An AGI benchmark would allow evaluation and comparison of the many computational intelligence algorithms that have been developed. In this paper I propose that a benchmark for natural world interaction would possess seven key characteristics: fitness, breadth, specificity, low cost, simplicity, range, and task focus. I also outline two benchmark examples that meet most of these criteria. In the first, the direction task, a human coach directs a machine to perform a novel task in an unfamiliar environment. The direction task is extremely broad, but may be idealistic. In the second, the AGI battery, AGI candidates are evaluated based on their performance on a collection of more specific tasks. The AGI battery is designed to be appropriate to the capabilities of currently existing systems. Both the direction task and the AGI battery would require further definition before implementing. The paper concludes with a description of a task that might be included in the AGI battery: the search and retrieve task.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125728005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keep it simple - A case study of model development in the context of the Dynamic Stocks and Flows (DSF) task
Pub Date: 2010-12-01 | DOI: 10.2478/v10229-011-0008-2
M. Halbrügge
Abstract This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge. This challenge aims at comparing computational cognitive models of human behavior during an open-ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models, while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations and other parametric statistics as goodness-of-fit indicators. A new statistical measure based on rank orders and sequence-matching techniques is proposed instead. When applied to the human sample, this measure also identifies clusters of subjects who use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.
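As a sketch of how a permutation test can certify such a fit, here is a minimal, hypothetical Python example; the sequence-matching `score` function below is a toy stand-in, not the measure proposed in the paper.

```python
# Minimal permutation-test sketch for model fit, assuming a
# sequence-based goodness-of-fit score (the `score` here is illustrative).
import random

def score(model_seq, human_seq):
    """Toy goodness-of-fit: fraction of positions where the model's
    choice matches the human's."""
    return sum(m == h for m, h in zip(model_seq, human_seq)) / len(human_seq)

def permutation_test(model_seq, human_seq, n_perm=10_000, rng=random.Random(0)):
    """p-value: how often a randomly shuffled model sequence fits the
    human data at least as well as the actual model does."""
    observed = score(model_seq, human_seq)
    shuffled = list(model_seq)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if score(shuffled, human_seq) >= observed:
            hits += 1
    return observed, hits / n_perm

obs, p = permutation_test(list("ABABBA"), list("ABABAA"))
print(f"fit={obs:.2f}, p={p:.3f}")  # small p -> fit is unlikely under chance
```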
{"title":"Keep it simple - A case study of model development in the context of the Dynamic Stocks and Flows (DSF) task","authors":"M. Halbrügge","doi":"10.2478/v10229-011-0008-2","DOIUrl":"https://doi.org/10.2478/v10229-011-0008-2","url":null,"abstract":"Keep it simple - A case study of model development in the context of the Dynamic Stocks and Flows (DSF) task This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge. This challenge aims at comparing computational cognitive models for human behavior during an open ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations or other parametric statistics as goodness-of-fit indicators. A new statistical measurement based on rank orders and sequence matching techniques is being proposed instead. This measurement, when being applied to the human sample, also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134218808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validating Computational Cognitive Process Models across Multiple Timescales
Pub Date: 2010-12-01 | DOI: 10.2478/v10229-011-0012-6
Christopher W. Myers, K. Gluck, G. Gunzelmann, M. Krusmark
Abstract Model comparison is vital to evaluating progress in the fields of artificial general intelligence (AGI) and cognitive architecture. As they mature, AGI systems and cognitive architectures will become increasingly capable of providing a single model that completes a multitude of tasks, some of which the model was not specifically engineered to perform. These models will be expected to operate for extended periods of time and serve functional roles in real-world contexts. Questions arise regarding how to evaluate such models appropriately, including issues pertaining to model comparison and validation. In this paper, we specifically address model validation across multiple levels of abstraction, using an existing computational process model of unmanned aerial vehicle basic maneuvering to illustrate the relationship between validity and timescales of analysis.
{"title":"Validating Computational Cognitive Process Models across Multiple Timescales","authors":"Christopher W. Myers, K. Gluck, G. Gunzelmann, M. Krusmark","doi":"10.2478/v10229-011-0012-6","DOIUrl":"https://doi.org/10.2478/v10229-011-0012-6","url":null,"abstract":"Validating Computational Cognitive Process Models across Multiple Timescales Model comparison is vital to evaluating progress in the fields of artificial general intelligence (AGI) and cognitive architecture. As they mature, AGI and cognitive architectures will become increasingly capable of providing a single model that completes a multitude of tasks, some of which the model was not specifically engineered to perform. These models will be expected to operate for extended periods of time and serve functional roles in real-world contexts. Questions arise regarding how to evaluate such models appropriately, including issues pertaining to model comparison and validation. In this paper, we specifically address model validation across multiple levels of abstraction, using an existing computational process model of unmanned aerial vehicle basic maneuvering to illustrate the relationship between validity and timescales of analysis.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124145921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing for Equivalence: A Methodology for Computational Cognitive Modelling
Pub Date: 2010-12-01 | DOI: 10.2478/v10229-011-0010-8
T. Stewart, R. West
Abstract The equivalence test (Stewart and West, 2007; Stewart, 2007) is a statistical measure for evaluating the similarity between a model and the system being modelled. It is designed to avoid over-fitting and to generate an easily interpretable summary of the quality of a model. We apply the equivalence test to two tasks: Repeated Binary Choice (Erev et al., 2010) and Dynamic Stocks and Flows (Gonzalez and Dutt, 2007). In the first case, we find a broad range of statistically equivalent models (and win a prediction competition) while identifying particular aspects of the task that are not yet adequately captured. In the second case, we re-evaluate results from the Dynamic Stocks and Flows challenge, demonstrating how our method emphasizes the breadth of coverage of a model and how it can be used for comparing different models. We argue that the explanatory power of models hinges on numerical similarity to empirical data over a broad set of measures.
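The abstract does not spell out the test itself, so the Python below is only a generic reconstruction of an equivalence-style check under assumed conventions, not the exact Stewart and West (2007) procedure: a model counts as equivalent on a measure when its prediction falls within a tolerance band around the empirical mean, and the summary emphasizes breadth across measures.

```python
# Hedged sketch of an equivalence-style check across multiple measures.
# Tolerance rule and all data are illustrative assumptions.
import statistics

def equivalence_summary(model_preds, human_data, tol=0.5):
    """For each measure, the model passes if its prediction lies within
    tol * SD of the empirical mean; reporting pass/fail per measure
    emphasizes breadth of coverage over any single best fit."""
    results = []
    for name, pred in model_preds.items():
        samples = human_data[name]
        mu = statistics.mean(samples)
        sd = statistics.stdev(samples)
        results.append((name, abs(pred - mu) <= tol * sd))
    return results

human_data = {"mean_error": [2.1, 2.4, 1.9, 2.6], "rt_sec": [0.8, 1.1, 0.9, 1.0]}
model_preds = {"mean_error": 2.2, "rt_sec": 1.4}
for name, ok in equivalence_summary(model_preds, human_data):
    print(name, "equivalent" if ok else "not equivalent")
```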
{"title":"Testing for Equivalence: A Methodology for Computational Cognitive Modelling","authors":"T. Stewart, R. West","doi":"10.2478/v10229-011-0010-8","DOIUrl":"https://doi.org/10.2478/v10229-011-0010-8","url":null,"abstract":"Testing for Equivalence: A Methodology for Computational Cognitive Modelling The equivalence test (Stewart and West, 2007; Stewart, 2007) is a statistical measure for evaluating the similarity between a model and the system being modelled. It is designed to avoid over-fitting and to generate an easily interpretable summary of the quality of a model. We apply the equivalence test to two tasks: Repeated Binary Choice (Erev et al., 2010) and Dynamic Stocks and Flows (Gonzalez and Dutt, 2007). In the first case, we find a broad range of statistically equivalent models (and win a prediction competition) while identifying particular aspects of the task that are not yet adequately captured. In the second case, we re-evaluate results from the Dynamic Stocks and Flows challenge, demonstrating how our method emphasizes the breadth of coverage of a model and how it can be used for comparing different models. We argue that the explanatory power of models hinges on numerical similarity to empirical data over a broad set of measures.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130763314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metacognition and Multiple Strategies in a Cognitive Model of Online Control
Pub Date: 2010-12-01 | DOI: 10.2478/v10229-011-0007-3
D. Reitter
Abstract We present a cognitive model performing the Dynamic Stocks & Flows control task, in which subjects control a system by counteracting a systematically changing external variable. The model uses a metacognitive layer that chooses a task strategy drawn from two classes of strategies: precise calculation and imprecise estimation. The model, formulated within the ACT-R theory, continuously monitors the success of each strategy using instance-based learning and blended retrieval from declarative memory. The model underspecifies other portions of the task strategies, whose timing was determined as an unbiased estimate from empirical data. The model's predictions were evaluated on data collected from novel experimental conditions, which did not inform the model's development and included discontinuous and noisy environmental change functions as well as a control delay. Both the model and the data show sudden changes in subject error and general learning of control; the model also correctly predicted oscillations of plausible magnitude. With its predictions, the model ranked first among the entries to the 2009 Dynamic Stocks & Flows modeling challenge.
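The metacognitive strategy choice can be illustrated with a much-simplified sketch. The Python below replaces ACT-R's instance-based learning and blended retrieval with plain running-average utilities; all names and numbers are invented for illustration.

```python
# Simplified sketch of a metacognitive layer choosing between two strategy
# classes (precise calculation vs. imprecise estimation) based on tracked
# success. Running averages stand in for ACT-R's instance-based learning
# and blended retrieval, which the actual model uses.
import random

class MetacognitiveLayer:
    def __init__(self):
        self.utility = {"calculate": 0.0, "estimate": 0.0}
        self.count = {"calculate": 0, "estimate": 0}

    def choose(self, rng, explore=0.1):
        if rng.random() < explore:            # occasional exploration
            return rng.choice(list(self.utility))
        return max(self.utility, key=self.utility.get)

    def update(self, strategy, success):
        # incremental running average of each strategy's observed success
        self.count[strategy] += 1
        self.utility[strategy] += (success - self.utility[strategy]) / self.count[strategy]

rng = random.Random(0)
layer = MetacognitiveLayer()
for trial in range(200):
    s = layer.choose(rng)
    # toy environment: calculation succeeds more often than estimation
    success = 1.0 if rng.random() < (0.8 if s == "calculate" else 0.6) else 0.0
    layer.update(s, success)
print(layer.utility)  # 'calculate' should end up with the higher utility
```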
{"title":"Metacognition and Multiple Strategies in a Cognitive Model of Online Control","authors":"D. Reitter","doi":"10.2478/v10229-011-0007-3","DOIUrl":"https://doi.org/10.2478/v10229-011-0007-3","url":null,"abstract":"Metacognition and Multiple Strategies in a Cognitive Model of Online Control We present a cognitive model performing the Dynamic Stocks&Flows control task, in which subjects control a system by counteracting a systematically changing external variable. The model uses a metacognitive layer that chooses a task strategy drawn from of two classes of strategies: precise calculation and imprecise estimation. The model, formulated within the ACT-R theory, monitors the success of each strategy continuously using instance-based learning and blended retrieval from declarative memory. The model underspecifies other portions of the task strategies, whose timing was determined as unbiased estimate from empirical data. The model's predictions were evaluated on data collected from novel experimental conditions, which did not inform the model's development and included discontinuous and noisy environmental change functions and a control delay. The model as well as the data show sudden changes in subject error and general learning of control; the model also correctly predicted oscillations of plausible magnitude. With its predictions, the model ranked first among the entries to the 2009 Dynamic Stocks&Flows modeling challenge.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115441784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploration for Understanding in Cognitive Modeling
Pub Date: 2010-01-01 | DOI: 10.2478/v10229-011-0011-7
K. Gluck, Clayton Stanley, L. Moore, D. Reitter, M. Halbrügge
Abstract The cognitive modeling and artificial general intelligence research communities may reap greater scientific return on their research investments, and achieve an improved understanding of architectures and models, if more emphasis is placed on systematic sensitivity and necessity analyses during model development, evaluation, and comparison. We demonstrate this methodological prescription with two of the models submitted to the Dynamic Stocks and Flows (DSF) Model Comparison Challenge, exploring the complex interactions among architectural mechanisms, knowledge-level strategy variants, and task conditions. To cope with the computational demands of these analyses, we use a predictive-analytics approach similar to regression trees, combined with parallelization on high-performance computing clusters, to enable large-scale, simultaneous search and exploration.
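A hedged sketch of the regression-tree style of sensitivity analysis described here, assuming scikit-learn is available; the parameter names and the toy stand-in for actually running a cognitive model are invented for illustration.

```python
# Sketch: sweep model parameters, then fit a regression tree to see which
# parameters (and interactions) drive the outcome measure. This mirrors
# the "predictive analytics similar to regression trees" idea; everything
# concrete below is an illustrative assumption, not the DSF models.
import itertools
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# grid over two hypothetical architectural parameters and one strategy flag
grid = list(itertools.product(
    np.linspace(0.1, 1.0, 10),   # e.g., memory decay rate
    np.linspace(0.0, 2.0, 10),   # e.g., activation noise
    [0, 1],                      # e.g., knowledge-level strategy variant
))
X = np.array(grid)

def run_model(decay, noise, strategy):
    # stand-in for running the cognitive model on the task and scoring it
    return (1 - decay) * 2.0 + noise * (0.5 if strategy else 1.5)

y = np.array([run_model(*p) for p in grid])

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
# feature importances indicate which parameters the outcome is sensitive to
print(dict(zip(["decay", "noise", "strategy"], tree.feature_importances_)))
```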
{"title":"Exploration for Understanding in Cognitive Modeling","authors":"K. Gluck, Clayton Stanley, L. Moore, D. Reitter, M. Halbrügge","doi":"10.2478/v10229-011-0011-7","DOIUrl":"https://doi.org/10.2478/v10229-011-0011-7","url":null,"abstract":"Exploration for Understanding in Cognitive Modeling The cognitive modeling and artificial general intelligence research communities may reap greater scientific return on research investments - may achieve an improved understanding of architectures and models - if there is more emphasis on systematic sensitivity and necessity analyses during model development, evaluation, and comparison. We demonstrate this methodological prescription with two of the models submitted for the Dynamic Stocks and Flows (DSF) Model Comparison Challenge, exploring the complex interactions among architectural mechanisms, knowledge-level strategy variants, and task conditions. To cope with the computational demands of these analyses we use a predictive analytics approach similar to regression trees, combined with parallelization on high performance computing clusters, to enable large scale, simultaneous search and exploration.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129391957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}