The Rescorla-Wagner rule remains the most popular tool for describing human behavior in reinforcement learning tasks. Nevertheless, it cannot fit human learning in complex environments. Previous work has proposed several hierarchical extensions of this learning rule. However, it remains unclear when a flat (nonhierarchical) versus a hierarchical strategy is adaptive, or when humans actually implement each. To address this question, the current work applies a nested modeling approach to evaluate multiple models across multiple reinforcement learning environments, both computationally (which approach performs best) and empirically (which approach fits human data best). We consider 10 empirical data sets (N = 407) spanning three reinforcement learning environments. Our results demonstrate that different environments are best solved with different learning strategies, and that humans adaptively select the learning strategy that affords the best performance. Specifically, while flat learning fit human data best in less complex, stable learning environments, humans employed more hierarchically complex models in more complex environments.
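As a concrete reference point for the flat strategy contrasted above, the following is a minimal sketch of the standard Rescorla-Wagner update; the learning rate, starting value, and simulated reward probability are illustrative choices, not the authors' fitted values, and the hierarchical extensions the abstract evaluates are not shown.

```python
import numpy as np

def rescorla_wagner(rewards, alpha=0.1, v0=0.5):
    """Flat Rescorla-Wagner learning: V <- V + alpha * (r - V).

    rewards: sequence of observed outcomes (e.g., 0/1)
    alpha:   fixed learning rate (the flat, nonhierarchical strategy)
    v0:      initial value estimate
    """
    v, values = v0, []
    for r in rewards:
        v = v + alpha * (r - v)  # prediction-error update
        values.append(v)
    return np.array(values)

# Example: a stable environment where the option pays off 80% of the time
rng = np.random.default_rng(0)
rewards = rng.random(200) < 0.8
print(rescorla_wagner(rewards)[-3:])  # estimates settle near 0.8
```

A hierarchical variant would, for instance, let a higher-level process adapt the learning rate itself over time; the family of extensions the abstract evaluates generalizes the flat rule in this spirit.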
Most of us have experienced moments when we could not recall some piece of information but felt that it was just out of reach. Research in metamemory has established that such judgments are often accurate, but what adaptive purpose do they serve? Here, we present an optimal model of how metacognitive monitoring (the feeling of knowing) could dynamically inform metacognitive control of memory (the direction of retrieval efforts). In two experiments, we find that, consistent with the optimal model, people report having a stronger memory for targets they are likely to recall and direct their search efforts accordingly, cutting off the search when it is unlikely to succeed and prioritizing the search for stronger memories. Our results suggest that metamemory is indeed adaptive, and they motivate the development of process-level theories that account for the dynamic interplay between monitoring and control.
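The abstract does not spell out the optimal model's form, but its core logic (cut off a search that is unlikely to succeed, persist longer for stronger memories) can be sketched as a Bayesian stopping rule. Assume, purely for illustration, that retrieval latency is exponential when a target is recallable; the searcher updates the probability of eventual recall as time passes without success and stops when the expected payoff rate of continuing drops below the search cost. All parameter values below are hypothetical.

```python
import numpy as np

def stop_time(p0, lam=1.0, reward=1.0, cost_rate=0.1, dt=0.01, t_max=20.0):
    """Search until the expected payoff rate of continuing drops below cost.

    p0:  prior probability the target is recallable (the 'feeling of knowing')
    lam: retrieval rate if the target is recallable (exponential latency)
    """
    t = 0.0
    while t < t_max:
        # Posterior that the target is recallable, given no recall by time t
        p_t = p0 * np.exp(-lam * t) / (p0 * np.exp(-lam * t) + (1 - p0))
        if p_t * lam * reward < cost_rate:  # continuing no longer pays
            return t
        t += dt
    return t_max

for p0 in (0.2, 0.5, 0.9):
    print(p0, round(stop_time(p0), 2))  # stronger memories earn longer searches
```

Under these assumptions, a higher prior (a stronger feeling of knowing) sustains the search longer, matching the qualitative pattern the experiments report.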
For over 35 years, the violation-of-expectation paradigm has been used to study the development of expectations in the first 3 years of life. A wide range of expectations has been examined, including physical, psychological, sociomoral, biological, numerical, statistical, probabilistic, and linguistic expectations. Surprisingly, despite the paradigm's widespread use and the many seminal findings it has contributed to psychological science, no one has yet provided a detailed and in-depth conceptual overview of the paradigm. Here, we attempt to do just that. We first focus on the rationale of the paradigm and discuss how it has evolved over time. We then show how improved descriptions of infants' looking behavior, together with the addition of a rich panoply of brain and behavioral measures, have helped deepen our understanding of infants' responses to violations. Next, we review the paradigm's strengths and limitations. Finally, we end with a discussion of challenges that have been leveled against the paradigm over the years. Throughout, our goal is twofold. First, we seek to provide psychologists and other scientists interested in the paradigm with an informed and constructive analysis of its theoretical origins and development. Second, we take stock of what the paradigm has revealed to date about how infants reason about events, and about how surprise at unexpected events, in or out of the laboratory, can lead to learning by prompting infants to revise their working model of the world.
People often form polarized beliefs, imbuing objects (e.g., themselves or others) with unambiguously positive or negative qualities. In clinical settings, this is referred to as dichotomous thinking or "splitting" and is a feature of several psychiatric disorders. Here, we introduce a Bayesian model of splitting that parameterizes a tendency to rigidly categorize objects as either entirely "Bad" or entirely "Good," rather than to flexibly learn dispositions along a continuous scale. Distinct from previous descriptive theories, the model makes quantitative predictions about how dichotomous beliefs emerge and are updated in light of new information. Specifically, the model addresses how splitting is context-dependent yet stable across time. A key feature of the model is that phases of devaluation and/or idealization are consolidated by rationally attributing counter-evidence to external factors. For example, when another person is idealized, their less-than-perfect behavior is attributed to unfavorable external circumstances. However, sufficient counter-evidence can trigger switches of polarity, producing bistable dynamics. We show that the model can be fitted to empirical data to measure individual susceptibility to relational instability. For example, we find that a latent categorical belief that others are "Good" accounts for less changeable, and more certain, character impressions of benevolent as opposed to malevolent others among healthy participants. By comparison, character impressions made by participants with borderline personality disorder reveal significantly greater and more symmetric splitting. The proposed generative framework invites applications for modeling oscillatory relational and affective dynamics in psychotherapeutic contexts.
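A minimal sketch of the splitting dynamic described above, under assumed likelihoods: a binary latent disposition ("Good" vs. "Bad") is updated by Bayes' rule, except that counter-evidence to the currently dominant belief is partly attributed to external factors. The attribution parameter kappa and the likelihood values are hypothetical, not the authors' parameterization.

```python
import numpy as np

def update(p_good, obs, kappa=0.7, l_nice_good=0.9, l_nice_bad=0.2):
    """One Bayesian update of P(Good), with counter-evidence to the dominant
    belief partly attributed to external factors (kappa = attribution strength).

    obs: 1 for benevolent behavior, 0 for malevolent behavior
    """
    lg = l_nice_good if obs else 1 - l_nice_good  # P(obs | Good)
    lb = l_nice_bad if obs else 1 - l_nice_bad    # P(obs | Bad)
    # Counter-evidence is 'explained away': with probability kappa the behavior
    # is externally caused, so its likelihood is mixed toward neutrality (0.5).
    if (p_good > 0.5 and obs == 0) or (p_good < 0.5 and obs == 1):
        lg = (1 - kappa) * lg + kappa * 0.5
        lb = (1 - kappa) * lb + kappa * 0.5
    return lg * p_good / (lg * p_good + lb * (1 - p_good))

# Sufficient counter-evidence still flips the polarity of the belief
p = 0.9                          # start with an idealized impression
for obs in [0, 0, 0, 0, 0, 0]:   # a run of malevolent behavior
    p = update(p, obs)
    print(round(p, 3))
```

Running the loop shows the hallmark bistability: the idealized impression erodes slowly while counter-evidence is explained away, then collapses rapidly once the polarity flips.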
Face matching is the ability to decide whether two (or more) face images belong to the same person or to different identities. It is crucial for efficient face recognition and plays an important role in applied settings such as passport control and eyewitness memory. However, despite extensive research, the mechanisms that govern face-matching performance are still not well understood. Moreover, to date, many researchers hold to the belief that match and mismatch conditions are governed by two separate systems, an assumption that has likely thwarted the development of a unified model of face matching. The present study outlines a unified, unequal-variance, confidence-similarity signal detection-based model of face-matching performance, one that facilitates the use of receiver operating characteristic (ROC) and confidence-accuracy plots to better understand the relations between match and mismatch conditions, and their relations to confidence and similarity. A binomial feature-matching mechanism is developed to support this signal detection model. The model accounts for both within-identity and between-identity sources of variation in face recognition and explains a myriad of face-matching phenomena, including the match-mismatch dissociation. It also generates new predictions concerning the roles of confidence and similarity and their intricate relations with accuracy. The new model was tested against six alternative competing models (some postulating discrete rather than continuous representations) in three experiments. Data analyses consisted of hierarchically nested model fitting, ROC curve analyses, and confidence-accuracy plot analyses. All of these provided substantial support for the signal detection-based confidence-similarity model. The model suggests that face-matching accuracy can be predicted from the degree of similarity/dissimilarity of the depicted faces and the level of confidence in the decision. Moreover, according to the model, confidence and similarity ratings are strongly correlated.
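To make the unequal-variance signal detection framing concrete, here is a minimal simulation, not the authors' fitted model: same-identity pairs yield perceived similarity drawn from a signal distribution with greater variance than the different-identity noise distribution, and sweeping a decision criterion traces out the ROC. The distribution parameters are illustrative assumptions.

```python
import numpy as np

def uvsd_roc(d_prime=1.5, sigma_match=1.25, n=100_000, seed=0):
    """ROC for an unequal-variance signal detection account of face matching.

    Match (same-identity) pairs: similarity ~ N(d_prime, sigma_match).
    Mismatch (different-identity) pairs: similarity ~ N(0, 1).
    """
    rng = np.random.default_rng(seed)
    match = rng.normal(d_prime, sigma_match, n)
    mismatch = rng.normal(0.0, 1.0, n)
    criteria = np.linspace(-4.0, 6.0, 50)
    hits = np.array([(match > c).mean() for c in criteria])      # correct 'same'
    fas = np.array([(mismatch > c).mean() for c in criteria])    # false 'same'
    return fas, hits

fas, hits = uvsd_roc()
# Trapezoid-rule area under the ROC (rates decrease as the criterion rises)
auc = np.sum((fas[:-1] - fas[1:]) * (hits[:-1] + hits[1:]) / 2)
print(round(auc, 3))
```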