Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000153
Probabilistic programming versus meta-learning as models of cognition
Desmond C Ong, Tan Zhi-Xuan, Joshua B Tenenbaum, Noah D Goodman
Behavioral and Brain Sciences, 47, e158
We summarize the recent progress made by probabilistic programming as a unifying formalism for the probabilistic, symbolic, and data-driven aspects of human cognition. We highlight differences with meta-learning in flexibility, statistical assumptions, and inferences about cognition. We suggest that the meta-learning approach could be further strengthened by considering connectionist and Bayesian approaches, rather than exclusively one or the other.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000104
The meta-learning toolkit needs stronger constraints
Erin Grant
Behavioral and Brain Sciences, 47, e152
The implementation of meta-learning targeted by Binz et al. inherits benefits and drawbacks from its nature as a connectionist model. Drawing from historical debates around bottom-up and top-down approaches to modeling in cognitive science, we should continue to bridge levels of analysis by constraining meta-learning and meta-learned models with complementary evidence from across the cognitive and computational sciences.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000086
Bayes beyond the predictive distribution
Anna Székely, Gergő Orbán
Behavioral and Brain Sciences, 47, e166
Binz et al. argue that meta-learned models offer a new paradigm to study human cognition. Meta-learned models are proposed as alternatives to Bayesian models based on their capability to learn identical posterior predictive distributions. In our commentary, we highlight several arguments that reach beyond a predictive distribution-based comparison, offering new perspectives to evaluate the advantages of these modeling paradigms.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000128
Challenges of meta-learning and rational analysis in large worlds
Margherita Calderan, Antonino Visalli
Behavioral and Brain Sciences, 47, e148
We challenge Binz et al.'s claim that meta-learned models are superior to Bayesian inference for large world problems. While comparing Bayesian priors to model-training decisions, we question whether this feature is exclusive to meta-learning. We assert that there is no special justification for rational Bayesian solutions to large world problems, and advocate exploring diverse theoretical frameworks beyond the rational analysis of cognition to advance research.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000281
Meta-learned models as tools to test theories of cognitive development
Kate Nussenbaum, Catherine A Hartley
Behavioral and Brain Sciences, 47, e157
Binz et al. argue that meta-learned models are essential tools for understanding adult cognition. Here, we propose that these models are particularly useful for testing hypotheses about why learning processes change across development. By leveraging their ability to discover optimal algorithms and account for capacity limitations, researchers can use these models to test competing theories of developmental change in learning.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000219
The reinforcement metalearner as a biologically plausible meta-learning framework
Tim Vriens, Mattias Horan, Jacqueline Gottlieb, Massimo Silvetti
Behavioral and Brain Sciences, 47, e168
We argue that the type of meta-learning proposed by Binz et al. generates models with low interpretability and falsifiability that have limited usefulness for neuroscience research. An alternative approach to meta-learning based on hyperparameter optimization obviates these concerns and can generate empirically testable hypotheses of biological computations.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000189
Is human compositionality meta-learned?
Jacob Russin, Sam Whitman McGrath, Ellie Pavlick, Michael J Frank
Behavioral and Brain Sciences, 47, e162
Recent studies suggest that meta-learning may provide an original solution to an enduring puzzle about whether neural networks can explain compositionality: in particular, by raising the prospect that compositionality can be understood as an emergent property of an inner-loop learning algorithm. We elaborate on this hypothesis and consider its empirical predictions regarding the neural mechanisms and development of human compositionality.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000116
Meta-learning as a bridge between neural networks and symbolic Bayesian models
R Thomas McCoy, Thomas L Griffiths
Behavioral and Brain Sciences, 47, e155
Meta-learning is even more broadly relevant to the study of inductive biases than Binz et al. suggest: Its implications go beyond the extensions to rational analysis that they discuss. One noteworthy example is that meta-learning can act as a bridge between the vector representations of neural networks and the symbolic hypothesis spaces used in many Bayesian models.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000098
Meta-learning modeling and the role of affective-homeostatic states in human cognition
Ignacio Cea
Behavioral and Brain Sciences, 47, e149
The meta-learning framework proposed by Binz et al. would gain significantly from the inclusion of affective and homeostatic elements, currently neglected in their work. These components are crucial because cognition as we know it is profoundly influenced by affective states, which arise as intricate forms of homeostatic regulation in living bodies.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000190
Quo vadis, planning?
Jacques Pesnot-Lerousseau, Christopher Summerfield
Behavioral and Brain Sciences, 47, e160
Deep meta-learning is the driving force behind advances in contemporary AI research, and a promising theory of flexible cognition in natural intelligence. We agree with Binz et al. that many supposedly "model-based" behaviours may be better explained by meta-learning than by classical models. We argue that this invites us to revisit our neural theories of problem solving and goal-directed planning.