Similar failures of consideration arise in human and machine planning

Alice Zhang, Max Langenkamp, Max Kleiman-Weiner, Tuomas Oikarinen, Fiery Cushman

Cognition, Volume 259, Article 106108. Published 2025-03-13. DOI: 10.1016/j.cognition.2025.106108
Abstract
Humans are remarkably efficient at decision making, even in “open-ended” problems where the set of possible actions is too large for exhaustive evaluation. Our success relies, in part, on processes for calling to mind the right candidate actions. When these processes fail, the result is a kind of puzzle in which the value of a solution would be obvious once it is considered, but never gets considered in the first place. Recently, machine learning (ML) architectures have attained or even exceeded human performance on open-ended decision making tasks such as playing chess and Go. We ask whether the broad architectural principles that underlie ML success in these domains generate similar consideration failures to those observed in humans. We demonstrate a case in which they do, illuminating how humans make open-ended decisions, how this relates to ML approaches to similar problems, and how both architectures lead to characteristic patterns of success and failure.
About the journal
Cognition is an international journal that publishes theoretical and experimental papers on the study of the mind. It covers a wide variety of subjects concerning all the different aspects of cognition, ranging from biological and experimental studies to formal analysis. Contributions from the fields of psychology, neuroscience, linguistics, computer science, mathematics, ethology and philosophy are welcome in this journal provided that they have some bearing on the functioning of the mind. In addition, the journal serves as a forum for discussion of social and political aspects of cognitive science.