Considerable research shows that causal perception emerges between 6 and 10 months of age. Yet, because this research tends to use artificial stimuli, it remains unclear how, or through what mechanisms of change, human infants learn about the causal properties of real-world categories such as animate entities and inanimate objects. One answer to this question is that this knowledge is innate (i.e., unlearned, evolutionarily ancient, and possibly present at birth) and underpinned by core knowledge and core cognition. An alternative perspective, tested here through computer simulations, is that infants acquire this knowledge via domain-general associative learning. This article demonstrates that associative learning alone, as instantiated in an artificial neural network, is sufficient to explain the data presented in four classic infancy studies: Spelke et al. (1995), Saxe et al. (2005), Saxe et al. (2007), and Markson and Spelke (2006). This work not only advances theoretical perspectives within developmental psychology but also has implications for the design of artificial intelligence systems inspired by human cognitive development. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Any series of sensorimotor actions shows fluctuations in speed and accuracy from repetition to repetition, even when the sensory input and motor output requirements remain identical over time. Such fluctuations are particularly prominent in reaction time (RT) series from laboratory neurocognitive tasks. Despite being ubiquitous, trial-to-trial fluctuations remain poorly understood. Here, we systematically analyzed RT series from various neurocognitive tasks, quantifying how much of the total trial-to-trial RT variance can be explained with general linear models (GLMs) by three sources of variability that are frequently investigated in behavioral and neuroscientific research: (1) experimental conditions, employed to induce systematic patterns in variability; (2) short-term temporal dependencies, such as the autocorrelation between subsequent trials; and (3) long-term temporal trends over experimental blocks and sessions. Furthermore, we examined to what extent the variance explained by these sources is shared or unique. We analyzed 1,913 unique RT series from 30 different cognitive control and perception-based tasks. On average, the three sources together explained ∼8%-17% of the total variance. The experimental conditions explained on average ∼2.5%-3.5% but did not share explained variance with the temporal dependencies. Thus, the largest part of the trial-to-trial fluctuations in RT remained unexplained by these three sources. Unexplained fluctuations may take on nonlinear forms that are not picked up by GLMs. They may also be partially attributable to observable endogenous factors, such as fluctuations in brain activity and bodily states. Still, some degree of randomness may be a feature of the neurobiological system rather than mere nuisance.
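The variance-partitioning logic described above can be sketched in a few lines: fit a linear model of RT on each source of variability (and on all sources together) and compare the resulting R² values. The simulated data, regressor names, and effect sizes below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 500

# Simulated sources of trial-to-trial RT variability (illustrative only).
condition = rng.integers(0, 2, n_trials)      # (1) experimental condition (two levels)
trend = np.linspace(0.0, 1.0, n_trials)       # (3) long-term temporal trend
rt = 500 + 30 * condition + 40 * trend + rng.normal(0, 60, n_trials)
# (2) inject a rough lag-1 dependence on the preceding trial's RT.
rt[1:] += 0.2 * (rt[:-1] - rt[:-1].mean())

def r_squared(y, predictors):
    """Proportion of variance in y explained by an OLS model with the given predictors."""
    X = np.column_stack([np.ones(len(y))] + predictors)  # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

y = rt[1:]                                    # drop first trial (it has no lag-1 predecessor)
full = [condition[1:], rt[:-1], trend[1:]]    # all three sources together
print(f"all three sources: R^2 = {r_squared(y, full):.3f}")
print(f"conditions only:   R^2 = {r_squared(y, [condition[1:]]):.3f}")
```

Because the conditions-only model is nested in the full model, its R² can never exceed the full model's; comparing such nested fits is one simple way to ask how much explained variance a source adds beyond the others.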
The U-shaped curve has long been recognized as a fundamental concept in psychological science, particularly in theories of motivation and cognitive control. In this study (N = 330), we empirically tested the prediction of a nonmonotonic, curvilinear relationship between task difficulty and control adaptation. Drawing on motivational intensity theory and the expected value of control framework, we hypothesized that control intensity would increase with task difficulty up to a maximum tolerable level, after which it would decrease. To examine this hypothesis, we conducted two experiments using Stroop-like conflict tasks, systematically manipulating the number of distractors to vary task difficulty. We assessed control adaptation and measured subjective task difficulty. Our results revealed a curvilinear pattern between perceived task difficulty and adaptation of control. The findings provide empirical support for motivational intensity theory and the expected value of control framework, highlighting the nonlinear nature of the relationship between task difficulty and cognitive control.
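One standard way to test for the inverted-U pattern described above is to fit a quadratic regression of control on difficulty and check for a reliably negative quadratic term. The sketch below uses simulated data with an assumed peak; none of the numbers come from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 330
difficulty = rng.uniform(1, 9, n)             # perceived task difficulty (illustrative scale)
# Simulated inverted-U: control rises with difficulty, peaks, then declines
# once difficulty exceeds a tolerable level (true peak placed at 5 here).
control = -0.5 * (difficulty - 5) ** 2 + 10 + rng.normal(0, 1, n)

# Fit control ~ b0 + b1*difficulty + b2*difficulty^2.
# A negative b2 is the signature of a nonmonotonic, inverted-U relationship.
b2, b1, b0 = np.polyfit(difficulty, control, deg=2)
peak = -b1 / (2 * b2)                         # difficulty level at which control is maximal
print(f"quadratic term b2 = {b2:.3f}, estimated peak at difficulty ~ {peak:.2f}")
```

In practice the sign and reliability of b2 would be assessed with a confidence interval or model comparison against a purely linear fit, rather than from the point estimate alone.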
The effects of emotion on memory are wide-ranging and powerful, but they are not uniform. Although there is agreement that emotion enhances memory for individual items, how it influences memory for the associated contextual details (relational memory, RM) remains debated. The prevalent view suggests that emotion impairs RM, but there is also evidence that emotion enhances RM. To reconcile these diverging results, we carried out three studies incorporating the following features: (1) testing RM with increased specificity, distinguishing between subjective (recollection-based) and objective (item-context match) RM accuracy; (2) accounting for emotion-attention interactions via eye-tracking and task manipulation; and (3) using stimuli with integrated item-context content. Challenging the prevalent view, we identified both enhancing and impairing effects. First, emotion enhanced subjective RM, separately and when confirmed by accurate objective RM. Second, emotion impaired objective RM through attentional capture, but it enhanced RM accuracy when attentional effects were statistically accounted for using eye-tracking data. Third, emotion also enhanced RM when participants were cued to focus on contextual details during encoding, likely by increasing item-context binding. Finally, functional magnetic resonance imaging data recorded from a subset of participants showed that emotional enhancement of RM was associated with increased activity in the medial temporal lobe (MTL) and ventrolateral prefrontal cortex, along with increased intra-MTL and ventrolateral prefrontal cortex-MTL functional connectivity. Overall, these findings reconcile evidence regarding opposing effects of emotion on RM and point to possible training interventions to increase RM specificity in healthy functioning, posttraumatic stress disorder, and aging, by promoting item-context binding and diminishing memory decontextualization.
It is common practice in speech research to only sample participants who self-report being "native English speakers." Although there is research on differences in language processing between native and nonnative listeners (see Lecumberri et al., 2010, for a review), the majority of speech research that aims to establish general findings (e.g., testing models of spoken word recognition) includes only native speakers in its samples. Not only is the "native English speaker" criterion poorly defined, but it also excludes historically underrepresented groups from speech perception research, often without attention to whether this exclusion is likely to affect study outcomes. The purpose of this study is to empirically test whether and how using different inclusion criteria ("native English speakers" vs. "nonnative English speakers") affects several well-known phenomena in speech perception research. Five hundred participants completed word (N = 200) and sentence (N = 300) identification tasks in quiet and in moderate levels of background noise. Results indicate that multiple classic findings in speech perception research, including the effects of noise level, lexical density, and semantic context on speech intelligibility, persist regardless of "native English" speaking status. However, the magnitude of some of these effects differed across participant groups. Taken together, these results suggest that researchers should carefully consider whether native speaker status is likely to affect outcomes and make decisions about inclusion criteria on a study-by-study basis.
Sound-shape associations (e.g., preferentially matching angular shapes with high-pitched sounds and smooth shapes with low-pitched ones) have been almost universally observed in humans. If cross-modally congruent sounds and shapes are more robustly integrated in humans, distinguishing their order in time might be expected to be more challenging than for incongruent sound-shape pairings. Supporting this premise, a highly cited work by Parise and Spence (2009; n = 12) reported worse temporal order judgment performance for audiovisual stimuli with congruent compared to incongruent sound-shape associations. Here, we report the results of five experiments across two laboratories, including a preregistered replication attempt, all (∑n = 102) failing to replicate the original results. Additionally, frequentist and Bayesian meta-analyses found no evidence against the null hypothesis, revealing a negligible effect size. The combined results indicate that multisensory temporal resolution in humans is unaffected by sound-shape associations, which might arise at a later (or parallel) processing stage compared to cross-modal temporal order judgments.
Although doing so has little economic utility, people are sometimes motivated to seek challenges (i.e., to proactively choose a more difficult task over an easier one). The present study investigated whether merely observing others' challenge-seeking behaviors could motivate people to seek more challenging tasks, that is, a social contagion effect of challenge-seeking. The participants were presented with pairs of options, each associated with a math word problem of a certain difficulty level. We examined whether the participants' preference for a more challenging (i.e., more difficult) option changed after observing the decisions of others who hold a challenge-seeking or a challenge-avoiding attitude. Five experiments consistently showed that, while the participants generally avoided challenging word problems, observing challenge-seeking in others increased the probability of participants choosing more challenging options. These results indicate that our motivation to seek challenges may be instilled, in part, through social processes.