Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000189
Is human compositionality meta-learned?
Jacob Russin, Sam Whitman McGrath, Ellie Pavlick, Michael J Frank
Recent studies suggest that meta-learning may provide an original solution to an enduring puzzle about whether neural networks can explain compositionality - in particular, by raising the prospect that compositionality can be understood as an emergent property of an inner-loop learning algorithm. We elaborate on this hypothesis and consider its empirical predictions regarding the neural mechanisms and development of human compositionality.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000098
Meta-learning modeling and the role of affective-homeostatic states in human cognition.
Ignacio Cea
The meta-learning framework proposed by Binz et al. would gain significantly from the inclusion of affective and homeostatic elements, currently neglected in their work. These components are crucial because cognition as we know it is profoundly influenced by affective states, which arise as intricate forms of homeostatic regulation in living bodies.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000190
Quo vadis, planning?
Jacques Pesnot-Lerousseau, Christopher Summerfield
Deep meta-learning is the driving force behind advances in contemporary AI research, and a promising theory of flexible cognition in natural intelligence. We agree with Binz et al. that many supposedly "model-based" behaviours may be better explained by meta-learning than by classical models. We argue that this invites us to revisit our neural theories of problem solving and goal-directed planning.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000116
Meta-learning as a bridge between neural networks and symbolic Bayesian models.
R Thomas McCoy, Thomas L Griffiths
Meta-learning is even more broadly relevant to the study of inductive biases than Binz et al. suggest: Its implications go beyond the extensions to rational analysis that they discuss. One noteworthy example is that meta-learning can act as a bridge between the vector representations of neural networks and the symbolic hypothesis spaces used in many Bayesian models.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X2400013X
Learning and memory are inextricable.
Sue Llewellyn
The authors' aim is to build "more biologically plausible learning algorithms" that work in naturalistic environments. Given that, first, human learning and memory are inextricable and, second, much human learning is unconscious, can the authors' first research question of how people improve their learning abilities over time be answered without addressing these two issues? I argue that it cannot.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000268
The hard problem of meta-learning is what-to-learn.
Yosef Prat, Ehud Lamm
Binz et al. highlight the potential of meta-learning to greatly enhance the flexibility of AI algorithms, as well as to approximate human behavior more accurately than traditional learning methods. We wish to emphasize a basic problem that underlies these two objectives, and in turn suggest another perspective on the required notion of "meta" in meta-learning: knowing what to learn.
Pub Date: 2024-09-23 | DOI: 10.1017/S0140525X24000165
Combining meta-learned models with process models of cognition.
Adam N Sanborn, Haijiang Yan, Christian Tsvetkov
Meta-learned models of cognition make optimal predictions for the actual stimuli presented to participants, but investigating judgment biases by constraining neural networks will be unwieldy. We suggest combining them with cognitive process models, which are more intuitive and explain biases. Rational process models, those that can sequentially sample from the posterior distributions produced by meta-learned models, seem a natural fit.
Pub Date: 2024-06-27 | DOI: 10.1017/S0140525X2300314X
Where is the baby in core knowledge?
Hyowon Gweon, Peter Zhu
What we know about what babies know - as represented by the core knowledge proposal - is perhaps missing a place for the baby itself. By studying the baby as an actor rather than an observer, we can better understand the origins of human intelligence as an interface between perception and action, and how humans think and learn about themselves in a complex world.
Pub Date: 2024-06-27 | DOI: 10.1017/S0140525X23003217
What we don't know about what babies know: Reconsidering psychophysics, exploration, and infant behavior.
Karen E Adolph, Mark A Schmuckler
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11212672/pdf/
Researchers must infer "what babies know" based on what babies do. Thus, to maximize information from doing, researchers should use tasks and tools that capture the richness of infants' behaviors. We clarify Gibson's views about the richness of infants' behavior and their exploration in the service of guiding action - what Gibson called "learning about affordances."
Pub Date: 2024-06-27 | DOI: 10.1017/S0140525X23003199
Core knowledge and its role in explaining uniquely human cognition: Some questions.
Armin W Schulz
Questions can be raised about the central status that evolutionarily ancient core knowledge systems are given in Spelke's otherwise very compelling theory. In particular, the existence of domain-general learning capacities has to be admitted, too, and no clear reason is provided to doubt the existence of uniquely human cognitive adaptations. All of these factors should be acknowledged when explaining human thought.