Pub Date: 2023-07-01. Epub Date: 2022-05-12. DOI: 10.1037/rev0000363
Johannes Burge, Tyler Burge
Psychology and philosophy have long reflected on the role of perspective in vision. Since the dawn of modern vision science (roughly, since Helmholtz in the late 1800s), scientific explanations in vision have focused on understanding the computations that transform the sensed retinal image into percepts of the three-dimensional environment. The standard view in the science is that distal properties, both viewpoint-independent properties of the environment (object shape) and viewpoint-dependent relational properties (3D orientation relative to the viewer), are perceptually represented, and that properties of the proximal stimulus (in vision, the retinal image) are not. This view is woven into the nature of scientific explanation in perceptual psychology and has guided impressive advances over the past 150 years. A recently published article suggests that in shape perception, the standard view must be revised. It argues, on the basis of new empirical data, that a new entity, perspectival shape, should be introduced into scientific explanations of shape perception. Specifically, the article's centrally advertised claim is that, in addition to distal shape, perspectival shape is perceived. We argue that this claim rests on a series of mistakes. Problems in experimental design entail that the article provides no empirical support for any claims regarding either perspective or the perception of shape. There are further problems in scientific reasoning and conceptual development. Detailing these criticisms and explaining how science treats these issues are meant to clarify method and theory, and to improve exchanges between the science and philosophy of perception. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
Shape, perspective, and what is and is not perceived: Comment on Morales, Bax, and Firestone (2020). Psychological Review, 130(4), 1125-1136. DOI: 10.1037/rev0000363. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11366222/pdf/
Robert D Hawkins, Michael Franke, Michael C Frank, Adele E Goldberg, Kenny Smith, Thomas L Griffiths, Noah D Goodman
Languages are powerful solutions to coordination problems: They provide stable, shared expectations about how the words we say correspond to the beliefs and intentions in our heads. Yet, language use in a variable and nonstationary social environment requires linguistic representations to be flexible: Old words acquire new ad hoc or partner-specific meanings on the fly. In this article, we introduce continual hierarchical adaptation through inference (CHAI), a hierarchical Bayesian theory of coordination and convention formation that aims to reconcile the long-standing tension between these two basic observations. We argue that the central computational problem of communication is not simply transmission, as in classical formulations, but continual learning and adaptation over multiple timescales. Partner-specific common ground quickly emerges from social inferences within dyadic interactions, while community-wide social conventions are stable priors that have been abstracted away from interactions with multiple partners. We present new empirical data alongside simulations showing how our model provides a computational foundation for several phenomena that have posed a challenge for previous accounts: (a) the convergence to more efficient referring expressions across repeated interaction with the same partner, (b) the gradual transfer of partner-specific common ground to strangers, and (c) the influence of communicative context on which conventions eventually form.
From partners to populations: A hierarchical Bayesian account of coordination and convention. Psychological Review, 130(4), 977-1016. DOI: 10.1037/rev0000348
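The hierarchical idea in the CHAI abstract, namely partner-specific expectations nested under slowly abstracted community-wide priors, can be caricatured in a few lines. This is purely an illustrative sketch, not the authors' CHAI implementation: the class, the beta-binomial form, and the 0.1 abstraction rate are all assumptions.

```python
from collections import defaultdict

class HierarchicalLexicon:
    """Toy two-level model: per-partner evidence shrunk toward a community prior."""

    def __init__(self, community_alpha=1.0, community_beta=1.0):
        # Community-level pseudo-counts act as the prior for every new partner
        self.alpha = community_alpha
        self.beta = community_beta
        self.partner_counts = defaultdict(lambda: [0, 0])  # [successes, failures]

    def p_meaning(self, partner):
        # Posterior mean that a word maps to the referent, for this partner
        s, f = self.partner_counts[partner]
        return (self.alpha + s) / (self.alpha + self.beta + s + f)

    def observe(self, partner, success):
        # Fast partner-specific learning ...
        s, f = self.partner_counts[partner]
        self.partner_counts[partner] = [s + success, f + (1 - success)]
        # ... plus slow abstraction of that evidence into the community prior
        self.alpha += 0.1 * success
        self.beta += 0.1 * (1 - success)
```

After a few successful interactions with partner "A", the model is confident about "A" specifically, while its expectation for a brand-new partner "B" has shifted only slightly, mirroring the gradual transfer of common ground to strangers described in the abstract.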
Respiratory rhythms sustain biological life, governing the homeostatic exchange of oxygen and carbon dioxide. Until recently, however, the influence of breathing on the brain has largely been overlooked. Yet new evidence demonstrates that the act of breathing exerts a substantive, rhythmic influence on perception, emotion, and cognition, largely through the direct modulation of neural oscillations. Here, we synthesize these findings to motivate a new predictive coding model of respiratory brain coupling, in which breathing rhythmically modulates both local and global neural gain, to optimize cognitive and affective processing. Our model further explains how respiratory rhythms interact with the topology of the functional connectome, and we highlight key implications for the computational psychiatry of disordered respiratory and interoceptive inference.
Respiratory rhythms of the predictive mind. Micah Allen, Somogy Varga, Detlef H Heck. Psychological Review, 130(4), 1066-1080. DOI: 10.1037/rev0000391
We systematically misjudge our own performance in simple economic tasks. First, we generally overestimate our ability to make correct choices, a bias called overconfidence. Second, we are more confident in our choices when we seek gains than when we try to avoid losses, a bias we refer to as the valence-induced confidence bias. Strikingly, these two biases are also present in reinforcement-learning (RL) contexts, despite the fact that outcomes are provided trial-by-trial and could, in principle, be used to recalibrate confidence judgments online. How confidence biases emerge and are maintained in RL contexts is thus puzzling and still unaccounted for. To explain this paradox, we propose that confidence biases stem from learning biases, and we test this hypothesis using data from multiple experiments in which we concomitantly assessed instrumental choices and confidence judgments during learning and transfer phases. Our results first show that participants' choices in both tasks are best accounted for by a reinforcement-learning model featuring context-dependent learning and confirmatory updating. We then demonstrate that the complex, biased pattern of confidence judgments elicited during both tasks can be explained by an overweighting of the learned value of the chosen option in the computation of confidence judgments. We finally show that, consequently, the individual learning-model parameters responsible for the learning biases, confirmatory updating and outcome context-dependency, are predictive of the individual metacognitive biases. We conclude by suggesting that metacognitive biases originate from fundamentally biased learning computations.
Linking confidence biases to reinforcement-learning processes. Nahuel Salem-Garcia, Stefano Palminteri, Maël Lebreton. Psychological Review, 130(4), 1017-1043. DOI: 10.1037/rev0000424
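The two mechanisms the abstract names, confirmatory updating and overweighting of the chosen option's learned value in confidence, admit a compact sketch. This is an assumed functional form for illustration only, not the authors' fitted model; the learning rates and the weight w are arbitrary.

```python
import math

def confirmatory_update(q_chosen, reward, alpha_confirm=0.3, alpha_disconfirm=0.1):
    # Confirmatory updating: prediction errors that confirm the choice
    # (positive delta) are learned from more strongly than disconfirming ones
    delta = reward - q_chosen  # reward prediction error
    alpha = alpha_confirm if delta > 0 else alpha_disconfirm
    return q_chosen + alpha * delta

def confidence(q_chosen, q_unchosen, w=1.5):
    # w > 1 overweights the learned value of the chosen option, so confidence
    # exceeds what a balanced value comparison would produce
    return 1.0 / (1.0 + math.exp(-(w * q_chosen - q_unchosen)))
```

With equal learned values for both options, a balanced readout (w = 1) would give confidence 0.5, whereas the overweighted readout sits above 0.5, which is one way biased learning computations could feed a stable overconfidence bias.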
The 11th version of the International Classification of Diseases (ICD-11) includes complex posttraumatic stress disorder (CPTSD) as a separate diagnostic entity alongside posttraumatic stress disorder (PTSD). ICD-11 CPTSD is defined by six sets of symptoms, three that are shared with PTSD (reexperiencing in the here and now, avoidance, and sense of current threat) and three (affective dysregulation, negative self-concept, and disturbances in relationships) representing pervasive "disturbances in self-organization" (DSO). There is considerable evidence supporting the construct validity of ICD-11 CPTSD, but no theoretical account of its development has thus far been presented. A theory is needed to explain several phenomena that are especially relevant to ICD-11 CPTSD, such as the role played by prolonged and repeated trauma exposure, the functional independence between PTSD and DSO symptoms, and diagnostic heterogeneity following trauma exposure. The memory and identity theory of ICD-11 CPTSD states that single and multiple trauma exposures occur in a context of individual vulnerability; exposure and vulnerability interact to give rise to intrusive, sensation-based traumatic memories and negative identities which, together, produce the PTSD and DSO symptoms that define ICD-11 CPTSD. The model emphasizes that the two major and related causal processes of intrusive memories and negative identities exist on a continuum from prereflective experience to full self-awareness. Theoretically derived implications for the assessment and treatment of ICD-11 CPTSD are discussed, as well as areas for future research and model testing.
The memory and identity theory of ICD-11 complex posttraumatic stress disorder. Philip Hyland, Mark Shevlin, Chris R Brewin. Psychological Review, 130(4), 1044-1065. DOI: 10.1037/rev0000418
Cas W Coopmans, Karthikeya Kaushik, Andrea E Martin
Since the cognitive revolution, language and action have been compared as cognitive systems, with cross-domain convergent views recently gaining renewed interest in biology, neuroscience, and cognitive science. Language and action are both combinatorial systems whose mode of combination has been argued to be hierarchical, combining elements into constituents of increasingly larger size. This structural similarity has led to the suggestion that they rely on shared cognitive and neural resources. In this article, we compare the conceptual and formal properties of hierarchy in language and action using set theory. We show that the strong compositionality of language requires a particular formalism, a magma, to describe the algebraic structure corresponding to the set of hierarchical structures underlying sentences. When this formalism is applied to actions, it appears to be both too strong and too weak. To overcome these limitations, which are related to the weak compositionality and sequential nature of action structures, we formalize the algebraic structure corresponding to the set of actions as a trace monoid. We aim to capture the different system properties of language and action in terms of the distinction between hierarchical sets and hierarchical sequences and discuss the implications for the way both systems could be represented in the brain.
Hierarchical structure in language and action: A formal comparison. Psychological Review, 130(4), 935-952. DOI: 10.1037/rev0000429
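The magma versus trace-monoid contrast the abstract draws can be illustrated with a toy sketch: a free magma's binary operation builds strictly hierarchical, non-associative structures (like sentence constituents), while a trace monoid concatenates action sequences but lets independent actions commute. The example words and the `independent` relation below are invented for illustration, not taken from the article's formalization.

```python
def combine(x, y):
    # Free magma operation: non-associative pairing yields a binary-branching
    # tree, so ((a b) c) and (a (b c)) are distinct structures
    return (x, y)

# A toy constituent structure for "the dog chased the cat"
sentence = combine(combine("the", "dog"), combine("chased", combine("the", "cat")))

# Trace monoid: actions form sequences, and actions declared independent
# may be reordered without changing the overall action
independent = {frozenset({"stir", "preheat"})}  # hypothetical non-interacting steps

def equivalent(seq_a, seq_b):
    # Naive check: equal, or equal after one swap of adjacent independent actions
    if seq_a == seq_b:
        return True
    for i in range(len(seq_a) - 1):
        if frozenset(seq_a[i:i + 2]) in independent:
            swapped = seq_a[:i] + [seq_a[i + 1], seq_a[i]] + seq_a[i + 2:]
            if swapped == seq_b:
                return True
    return False
```

Non-associativity is what makes the magma "too strong" for weakly compositional, sequential action structures, while the trace monoid's commutation relation captures which action steps can be interleaved.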
To achieve fluent language processing as a bilingual, a dominant theoretical framework assumes that the nontarget language is inhibited. This assumption is based on several empirical effects that are typically explained with inhibitory control. In the current article, we discuss four prominent effects linked to bilingual inhibition in language production (i.e., asymmetrical switch costs, n-2 language repetition costs, reversed language dominance, and the blocked language order effect). We argue that these effects require more empirical examination in order to arrive at a firmer basis for the assumption that inhibition plays a major role during bilingual language control. In particular, the empirical replicability of the phenomena themselves needs to be established more firmly, the underlying theoretical assumptions need further examination, and the alternative explanations of the empirical effects need to be scrutinized. In turn, we conclude that inhibitory control may provide a coherent framework for bilingual language production while outlining the challenges that the inhibition account still needs to face.
The concept of inhibition in bilingual control. Mathieu Declerck, Iring Koch. Psychological Review, 130(4), 953-976. DOI: 10.1037/rev0000367
Understanding model complexity is important for developing useful psychological models. One way to think about model complexity is in terms of the predictions a model makes and the ability of empirical evidence to falsify those predictions. We argue that existing measures of falsifiability have important limitations and develop a new measure. KL-delta uses Kullback-Leibler divergence to compare the prior predictive distributions of models to the data prior that formalizes knowledge about the plausibility of different experimental outcomes. Using introductory conceptual examples and applications with existing models and experiments, we show that KL-delta challenges widely held scientific intuitions about model complexity and falsifiability. In a psychophysics application, we show that hierarchical models with more parameters are often more falsifiable than the original nonhierarchical model. This counters the intuition that adding parameters always makes a model more complex. In a decision-making application, we show that a choice model incorporating response determinism can be harder to falsify than its special case of probability matching. This counters the intuition that if one model is a special case of another, the special case must be less complex. In a memory recall application, we show that using informative data priors based on the serial position curve allows KL-delta to distinguish models that otherwise would be indistinguishable. This shows the value in model evaluation of extending the notion of possible falsifiability, in which all data are considered equally likely, to the more general notion of plausible falsifiability, in which some data are more likely than others.
Evaluating the complexity and falsifiability of psychological models. Manuel Villarreal, Alexander Etz, Michael D Lee. Psychological Review, 130(4), 853-872. DOI: 10.1037/rev0000421
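The core ingredient of the measure described above is a Kullback-Leibler divergence between a model's prior predictive distribution and a data prior over experimental outcomes. The sketch below illustrates only that ingredient under invented distributions; the outcome space, both distributions, and the use of the bare divergence are assumptions, not the paper's full KL-delta computation.

```python
import numpy as np

def kl_divergence(p, q):
    # KL(p || q) over a discrete outcome space; assumes matching supports
    # and q > 0 wherever p > 0
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical outcome space: 0..10 correct responses in an experiment
outcomes = np.arange(11)

# Data prior: formalized knowledge that mid-range outcomes are most plausible
data_prior = np.exp(-0.5 * ((outcomes - 5) / 2.0) ** 2)
data_prior /= data_prior.sum()

# Prior predictive of a hypothetical model that predicts everything equally:
# such a model is hard to falsify because no outcome would surprise it
model_pred = np.full(11, 1.0 / 11.0)

# Divergence from the data prior quantifies how much the model's predictions
# depart from what is plausible a priori
divergence = kl_divergence(model_pred, data_prior)
```

A model whose prior predictive matches the data prior exactly yields zero divergence; models making sharper, riskier commitments relative to plausible outcomes diverge more, which is the direction of reasoning behind "plausible falsifiability".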
Fritz Günther, Marco Marelli, Sam Tureski, Marco Alessandro Petilli
Quantitative, data-driven models for mental representations have long enjoyed popularity and success in psychology (e.g., distributional semantic models in the language domain) but have largely been missing for the visual domain. To overcome this, we present ViSpa (Vision Spaces), high-dimensional vector spaces that include vision-based representations for naturalistic images as well as concept prototypes. These vectors are derived directly from visual stimuli through a deep convolutional neural network trained to classify images, and they allow us to compute vision-based similarity scores between any pair of images and/or concept prototypes. We successfully evaluate these similarities against human behavioral data in a series of large-scale studies, including off-line judgments (visual similarity judgments for the referents of word pairs in Study 1 and for image pairs in Study 2, and typicality judgments for images given a label in Study 3) as well as online processing times and error rates in a discrimination task (Study 4) and a priming task (Study 5) with naturalistic image material. ViSpa similarities predict behavioral data across all tasks, which renders ViSpa a theoretically appealing model for vision-based representations and a valuable research tool for data analysis and the construction of experimental material: ViSpa allows for precise control over experimental material consisting of images and/or words denoting imageable concepts and introduces a specifically vision-based similarity for word pairs. To make ViSpa available to a wide audience, this article (a) includes (video) tutorials on how to use ViSpa in R and (b) presents a user-friendly web interface at http://vispa.fritzguenther.de.
Günther, F., Marelli, M., Tureski, S., & Petilli, M. A. (2023). ViSpa (Vision Spaces): A computer-vision-based representation system for individual images and concept prototypes, with large-scale evaluation. Psychological Review, 130(4), 896–934. https://doi.org/10.1037/rev0000392
Nicolas Silvestrini, Sebastian Musslick, Anne S Berry, Eliana Vassena
An increasing number of cognitive, neurobiological, and computational models have been proposed in the last decade, seeking to explain how humans allocate physical or cognitive effort. Most models share conceptual similarities with motivational intensity theory (MIT), an influential classic psychological theory of motivation. Yet, little effort has been made to integrate such models, which remain confined to the explanatory level for which they were developed: psychological, computational, neurobiological, or neuronal. In this critical review, we derive novel analyses of three recent computational and neuronal models of effort allocation (the expected value of control theory, the reinforcement meta-learner (RML) model, and the neuronal model of attentional effort) and establish a formal relationship between these models and MIT. Our analyses reveal striking similarities between the predictions made by these models, with a shared key tenet: a nonmonotonic relationship between perceived task difficulty and effort, following a sawtooth or inverted-U shape. In addition, the models converge on the proposition that the dorsal anterior cingulate cortex may be responsible for determining the allocation of effort and cognitive control. We conclude by discussing the distinct contributions and strengths of each theory toward understanding the neurocomputational processes of effort allocation. Finally, we highlight the necessity of a unified understanding of effort allocation, drawing novel connections between the different accounts of adaptive effort allocation described by the presented models. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
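The shared qualitative prediction described in the abstract (effort tracking difficulty up to a limit, then collapsing, yielding a sawtooth or inverted-U profile) can be sketched in a few lines. This is a minimal illustration of the common tenet, not any of the reviewed models; the `capacity` and `justified_max` thresholds are hypothetical parameters chosen for the example.

```python
def predicted_effort(difficulty: float, capacity: float = 1.0,
                     justified_max: float = 0.8) -> float:
    """Qualitative effort-allocation rule in the spirit of MIT.

    Effort rises with perceived difficulty as long as success is both possible
    (difficulty <= capacity) and worthwhile (difficulty <= justified_max);
    otherwise the agent disengages and effort drops to zero, producing the
    nonmonotonic sawtooth profile described in the text.
    """
    if difficulty > capacity or difficulty > justified_max:
        return 0.0  # disengagement: task impossible or not worth the effort
    return difficulty  # effort matches demand (resource conservation)

# Effort increases with difficulty, then collapses past the justified maximum:
# [0.0, 0.1, ..., 0.8, 0.0, 0.0]
profile = [predicted_effort(d / 10) for d in range(11)]
```

The reviewed models differ in how `capacity` and `justified_max` arise (e.g., learned value estimates in the RML, cost-benefit optimization in expected value of control), but each predicts this same disengagement-induced nonmonotonicity.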
Silvestrini, N., Musslick, S., Berry, A. S., & Vassena, E. (2023). An integrative effort: Bridging motivational intensity theory and recent neurocomputational and neuronal models of effort and control allocation. Psychological Review, 130(4), 1081–1103. https://doi.org/10.1037/rev0000372