Pub Date: 2025-09-27 | DOI: 10.1016/j.jml.2025.104700
Thomas Hikaru Clark , Greta Tuckute , Bryan Medina , Evelina Fedorenko
Prior work on visual memory has suggested that humans have a high-capacity but imperfect memory: image representations accumulate noise over time, which makes similar images confusable. This account – the noisy representation hypothesis – was recently extended to the verbal domain: in line with past evidence that words are encoded in memory by their meanings, it was shown that words with distinctive meanings are most memorable. Here, we leverage recent advances in natural language processing to ask whether the same holds true for compositional linguistic stimuli — sentences. In a recognition memory experiment with responses from 443 participants to 2500 six-word-long target sentences, we found that a sentence’s semantic distinctiveness – as estimated through contextual representations from a large language model – predicts the accuracy and speed of its recognition. These effects were observed for both intrinsic sentence memorability (distinctiveness of a sentence relative to a large corpus of sentences) and contextual memorability (distinctiveness relative to recently encountered sentences in the experiment), and cannot be reduced to properties of the sentence’s constituent words. Our findings suggest that sentence memorability, similar to image and word memorability, is related to meaning distinctiveness, thus extending the noisy representation hypothesis to compositional linguistic stimuli.
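The distinctiveness measure described above can be sketched as a mean cosine distance in embedding space. Everything below is a hypothetical illustration (toy random vectors stand in for LLM contextual representations; `distinctiveness` is not the authors' code):

```python
import numpy as np

def distinctiveness(sentence_vec, corpus_vecs):
    """Mean cosine distance from one sentence embedding to a reference
    corpus; higher values = more semantically distinctive. Hypothetical
    operationalization of the paper's LLM-based measure."""
    s = sentence_vec / np.linalg.norm(sentence_vec)
    C = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    return float(np.mean(1.0 - C @ s))

# Toy stand-in embeddings; in practice each row would be an encoder
# representation of one sentence.
rng = np.random.default_rng(0)
theme = rng.normal(size=64)                        # shared "topic" direction
corpus = theme + 0.3 * rng.normal(size=(1000, 64))
typical_sent = theme + 0.3 * rng.normal(size=64)   # blends in with the corpus
distinct_sent = -theme                             # unlike everything else
```

Under this sketch, `distinct_sent` receives a much larger distinctiveness score than `typical_sent`, mirroring the intrinsic-memorability contrast the abstract describes.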
Title: A distinctive meaning makes a sentence memorable (Journal of Memory and Language, 146, Article 104700)
Pub Date: 2025-09-24 | DOI: 10.1016/j.jml.2025.104691
Alexia Galati , Rick Dale , Camila Alviar , Moreno I. Coco
Collaborative task performance is assumed to benefit from interpersonal coordination between interacting individuals. Prominent views of language use and social behavior, including the Interactive Alignment Model (IAM; Pickering & Garrod, 2004), support this view by building on tasks that require monitoring a partner’s perspective (e.g., in route planning), proposing that behavioral alignment enables conceptual convergence. However, the role of alignment in tasks requiring complementarity (e.g., a “divide and conquer” strategy during joint visual search) remains underexplored. We address this gap by manipulating task goals (route planning vs. visual search) as forty dyads completed ten trials involving subway maps while their eye movements and speech were co-registered. We used Cross Recurrence Quantification Analysis (CRQA) to examine the temporal relationships between partners’ eye fixations and word sequences, generating measures that reveal similarity and dynamic coupling. Dyads exhibited more gaze alignment in route planning than visual search across a range of CRQA metrics. Gaze alignment also varied across the trial and related differently to accuracy: in visual search, greater alignment late in the trial predicted better performance. In speech, route planning prompted longer and more entropic word sequences, but lower overall recurrence than visual search. This finding suggests that the two modalities organize in a compensatory fashion to support distinct task demands. These results support a theoretical framework more general than IAM, in which interactive alignment emerges as a consequence of dynamic adaptation to task goals. Overall, task goals constrain how people coordinate behavior and offer insights into how collaborating partners distribute their multimodal contributions.
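The simplest CRQA quantity, the cross-recurrence rate, can be illustrated for categorical gaze sequences. This is a hedged sketch (hypothetical area-of-interest labels; real CRQA analyses also quantify diagonal-line structure such as determinism, which this does not):

```python
import numpy as np

def cross_recurrence_rate(seq_a, seq_b):
    """Fraction of (i, j) time-point pairs at which partner A's state at
    time i matches partner B's state at time j: the recurrence-rate (RR)
    metric computed from a cross-recurrence plot of two categorical
    series, e.g. gaze areas-of-interest on a shared map."""
    a = np.asarray(seq_a)
    b = np.asarray(seq_b)
    cross = a[:, None] == b[None, :]   # the cross-recurrence plot itself
    return float(cross.mean())

# Hypothetical AOI sequences for two partners on a subway map.
aligned  = cross_recurrence_rate(["map", "legend", "map"],
                                 ["map", "legend", "map"])
shuffled = cross_recurrence_rate(["map", "legend", "map"],
                                 ["exit", "exit", "exit"])
```

Identical sequences yield a high rate, fully non-overlapping ones a rate of zero, which is the intuition behind "more gaze alignment" in the route-planning condition.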
Title: Task goals constrain the alignment in eye-movements and speech during interpersonal coordination (Journal of Memory and Language, 146, Article 104691)
Pub Date: 2025-09-12 | DOI: 10.1016/j.jml.2025.104675
Alyssa Loo , Ellie Pavlick , Roman Feiman
A central goal of cognitive science is to provide a computationally explicit account of both the structure of the mind and its development: what are the primitive representational building blocks of cognition, what are the rules via which those primitives combine, and where do these primitives and rules come from in the first place? A long-standing debate concerns the adequacy of artificial neural networks as computational models that can answer these questions, in particular in domains related to abstract cognitive function, such as language and logic. This paper argues that recent advances in neural networks – specifically, the advent of large language models (LLMs) – represent an important shift in this debate. We test a variety of LLMs on an existing experimental paradigm used for studying the induction of rules formulated over logical concepts. Across four experiments, we find converging empirical evidence that LLMs provide at least as good a fit to human behavior as models that implement a Bayesian probabilistic language of thought (pLoT), which have been the best computational models of human behavior on the same task. Moreover, we show that the LLMs make qualitatively different predictions about the nature of the rules that are inferred and deployed in order to complete the task, indicating that the LLM is unlikely to be a mere implementation of the pLoT solution. Based on these results, we argue that LLMs may instantiate a novel theoretical account of the primitive representations and computations necessary to explain human logical concepts, with which future work in cognitive science should engage.
Title: LLMs model how humans induce logically structured rules (Journal of Memory and Language, 146, Article 104675)
Pub Date: 2025-09-08 | DOI: 10.1016/j.jml.2025.104693
Alexander S. LaTourrette , Charles Yang , John Trueswell
Children often encounter new words in referentially and semantically ambiguous environments. Thus, they will generally make many incorrect guesses about a word’s meaning before arriving at its correct meaning. Here, we asked whether these initial incorrect guesses might nevertheless be useful to learners by providing information about a word’s semantic neighborhood (e.g., if most guesses were food items, perhaps the word has a food-related meaning). To test this, we analyzed datasets from previous tasks in which adults guessed the word which caregivers uttered in interactions with their children. We first tested whether adults’ incorrect guesses are, indeed, semantically similar to the correct meaning. In Study 1, we established that learners’ incorrect guesses were semantically similar to the target word. We then asked whether adults successfully used these semantically similar guesses as “stepping-stones” to arrive at the correct meaning across exposures. Study 2 showed that overall, learners’ guesses were semantically consistent across exposures. However, this effect was small, and correct guesses were not judged to be similar to learners’ prior, incorrect guesses. Moreover, Study 3 revealed that semantically close-to-target guesses did not improve learners’ subsequent accuracy. Thus, even adult word learners fail to use semantic similarity in cross-situational word learning. Study 4 confirmed this result in a new word learning experiment: even for maximally similar meaning pairs, adults failed to generate thematically or taxonomically similar meanings across exposures. While learners’ incorrect guesses tend to be similar to the correct meaning, learners do not successfully use this information to learn words across exposures.
Title: Close enough isn’t good enough in word learning: successful cross-situational word mappings are semantically independent of previous mappings (Journal of Memory and Language, 146, Article 104693)
Pub Date: 2025-08-30 | DOI: 10.1016/j.jml.2025.104692
Lauren L. Richmond , Lois K. Burnett , Julia Kearley , Sam J. Gilbert , Alexandra B. Morrison , B. Hunter Ball
Title: Corrigendum to “Individual differences in prospective and retrospective memory offloading” [J. Mem. Lang. 142 (2025) 104617] (Journal of Memory and Language, 145, Article 104692)
Pub Date: 2025-08-16 | DOI: 10.1016/j.jml.2025.104690
Chuchu Li , Sin Hang Lau , Victor S. Ferreira
Priming experiments and speech error studies have found cross-linguistic differences in phonological encoding. Notably, the first selectable unit (the proximate unit) differs between English and Mandarin Chinese, with the former selecting segmental units like consonants (Cs) and vowels (Vs) first, while the latter selects syllables as a whole. Further, Mandarin Chinese is tonal, meaning the same syllable is a different word depending on the tone it is spoken with. However, it remains unclear how tone is represented and processed during phonological encoding in speech production – attached to the vowel or CV, or processed independently. Across three experiments, we investigated these questions by measuring how quickly speakers produced sequences of tone-bearing CV syllables. Unlike in English, speed of production was not directly linked to plan reuse (see Sevald & Dell, 1994). Instead, speech rate was robustly faster when each CV was produced with only one tone (i.e., about equal speech rate for ba2 di1 da1 bi2 and ba1 ba1 ba1 ba1), compared to when a particular CV was produced with more than one tone (i.e., slower speech rate for ba1 ba2 ba1 ba2). We suggest that Mandarin speakers represent CVs as syllable “chunks,” integrating tone – a part of the structural frame – with the CV (rather than with the vowel); producing the same CV with more than one tone in a sequence is difficult because different tones must be reassigned to the same CV chunk.
Title: Lexical tone is different and special: evidence from a speeded repeated production task (Journal of Memory and Language, 145, Article 104690)
The predictive processing framework suggests that the brain generates semantic and phonological predictions to facilitate real-time language comprehension. While adults engage in both types of prediction, how these abilities develop in early childhood remains unclear. The present study explores the emergence of semantic and phonological predictions in toddlers aged 18, 24, and 30 months in three preferential looking experiments. Toddlers were presented with highly constrained sentence contexts paired with visual stimuli to assess their predictive abilities. Experiment 1 measured word prediction accuracy using predictable and unpredictable sentence conditions. Experiment 2 tested semantic prediction by introducing a semantic competitor, while Experiment 3 evaluated phonological prediction using phonologically similar competitors. Results showed that by 18 months, toddlers exhibited anticipatory looks toward the expected target. By 24 months, toddlers showed anticipatory looks toward not only the predictable target word but also toward semantically related items, and by 30 months, this pattern extended to phonologically related items. This developmental pattern—characterized by the earlier emergence of semantic relations followed by phonological relations—is consistent with the idea that semantic predictions provide a foundation for the subsequent development of phonological predictions. We discuss the data considering different prediction mechanisms, such as hierarchical predictive coding, prediction-by-production, and prediction through associations; we propose that these mechanisms are complementary components of a unified predictive system.
Title: Hierarchical prediction in toddlers: Semantic and phonological development
Authors: Armando Quetzalcóatl Angulo-Chavira, Alejandra Mitzi Castellón-Flores, Natalia Arias-Trejo
Pub Date: 2025-08-14 | DOI: 10.1016/j.jml.2025.104688 (Journal of Memory and Language, 145, Article 104688)
Pub Date: 2025-08-13 | DOI: 10.1016/j.jml.2025.104689
Kurt Winsler, Steven J. Luck
Visual perception is ordinarily impaired for objects that are tightly crowded by other objects. This might be expected to make reading very difficult given that letters are tightly crowded together within words. However, a lifetime of reading experience may lead to changes in visual processing that reduce the effects of crowding on letters. Study 1 examined this hypothesis experimentally by comparing crowding thresholds (measured as the closest spacing that yields recognition accuracy of 82% correct) for upright letters, inverted letters, and Gabor patches in 60 experienced readers of English. We found that crowding thresholds were reduced for upright letters compared to other stimulus classes, especially for stimuli close to the fovea. In other words, experienced readers could tolerate closer spacing for highly familiar upright letters than for less familiar types of stimuli. Crowding thresholds were also reduced to the right of fixation, matching the left-to-right direction of English reading. Study 2 measured crowding in 250 observers and asked whether individual differences in proxies of reading experience were associated with reduced crowding. We found that higher scores on these proxy measures were associated with lower crowding thresholds for upright letters, especially in the right visual field. These results provide evidence that a lifetime of reading experience alters aspects of visual perception, such that upright letters can be perceived under more-crowded conditions than other stimuli.
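The threshold definition used in Study 1 (closest spacing yielding 82% correct) can be illustrated with a simple interpolation. This is an assumption-laden sketch with made-up accuracy data, not the authors' psychophysical fitting procedure:

```python
import numpy as np

def spacing_threshold(spacings, accuracy, criterion=0.82):
    """Smallest letter spacing at which interpolated recognition accuracy
    reaches the criterion (82% correct in the study). Assumes accuracy
    rises monotonically with spacing; a linear-interpolation sketch, not
    a full psychometric-function fit."""
    spacings = np.asarray(spacings, dtype=float)
    accuracy = np.asarray(accuracy, dtype=float)
    order = np.argsort(spacings)  # sort so accuracy is increasing in spacing
    return float(np.interp(criterion, accuracy[order], spacings[order]))

# Hypothetical data: accuracy improves as flanker spacing grows.
thr = spacing_threshold([0.5, 1.0, 1.5, 2.0], [0.55, 0.70, 0.85, 0.95])
```

A lower `thr` for upright letters than for inverted letters or Gabors is what "reduced crowding thresholds" means here.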
Title: A lifetime of reading experience facilitates the perception of crowded letters (Journal of Memory and Language, 145, Article 104689)
Pub Date: 2025-08-06 | DOI: 10.1016/j.jml.2025.104678
Paola Calabrese, Nicholas Hedger, Katherine Pritchard, Vesna Stojanovik, Emma Pagnamenta
Many children with Developmental Language Disorder (DLD) find learning new words difficult, which negatively affects their educational and psycho-social outcomes. Word learning involves encoding, consolidation and reconsolidation of words, but the most challenging phase and factors which moderate word learning remain unclear.
We conducted a systematic review and meta-analysis to determine which phase is most challenging and which factors predict oral word learning success in children with DLD. The search, covering PsycINFO, PubMed, Web of Science, and LLBA, identified forty-six studies published before April 2024 that compared children with DLD and typically developing (TD) age-matched peers on word learning tasks. Seventy-eight effect sizes were calculated for encoding (n DLD = 1462, n TD = 2161), eight for consolidation (n DLD = 107, n TD = 112), and 19 for reconsolidation (n DLD = 296, n TD = 278).
The random-effects model identified an effect for encoding (k = 78, d = 0.82, [0.66, 0.98], p < .001) but not consolidation (k = 8, d = −0.2, [−0.68, 0.29], p = .43) or reconsolidation (k = 19, d = 0.23, [−0.14, 0.59], p = .22) of new words. The moderator analysis via random-effects models identified verbal short-term memory and lexical knowledge as significant moderators of encoding, while word length was the most important task characteristic.
Despite limited data for consolidation and reconsolidation, our findings provide new insights into oral word learning difficulties in children with DLD. These insights help clinicians and teachers identify support strategies while also highlighting gaps in existing research, driving future studies forward.
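The pooled effects reported above (e.g., d = 0.82, 95% CI [0.66, 0.98] for encoding) come from a random-effects model, which pools per-study effect sizes while allowing true effects to vary between studies. A minimal sketch of one standard estimator for this, DerSimonian-Laird, is below; the abstract does not state which estimator or software the authors used, so this is an illustration of the general technique, not their analysis.

```python
import numpy as np

def dersimonian_laird(d, var):
    """Pool standardized mean differences (d) with known sampling variances
    under a DerSimonian-Laird random-effects model. Requires >= 2 studies.
    Returns the pooled effect, a 95% CI, and the between-study variance tau^2."""
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var                                 # fixed-effect weights
    d_fe = np.sum(w * d) / np.sum(w)              # fixed-effect pooled estimate
    q = np.sum(w * (d - d_fe) ** 2)               # Cochran's Q (heterogeneity)
    df = len(d) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_re = 1.0 / (var + tau2)                     # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return d_re, (d_re - 1.96 * se, d_re + 1.96 * se), tau2
```

When the studies are homogeneous, tau^2 shrinks to zero and the estimate reduces to the fixed-effect pooled mean; with heterogeneous studies, tau^2 grows and the confidence interval widens accordingly.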
{"title":"Word learning in children with developmental language disorder: A meta-analysis testing the encoding hypothesis","authors":"Paola Calabrese, Nicholas Hedger, Katherine Pritchard, Vesna Stojanovik, Emma Pagnamenta","doi":"10.1016/j.jml.2025.104678","DOIUrl":"10.1016/j.jml.2025.104678","url":null,"abstract":"<div><div>Many children with Developmental Language Disorder (DLD) find learning new words difficult, which negatively affects their educational and psycho-social outcomes. Word learning involves encoding, consolidation and reconsolidation of words, but the most challenging phase and factors which moderate word learning remain unclear.</div><div>We conducted a systematic review and meta-analysis to determine which phase is most challenging and which factors predict oral word learning success in children with DLD. The search including PsycINFO, PubMed, Web of Science, and LLBA identified forty-six studies published before April 2024 comparing children with DLD and typically developing (TD) age-matched peers in word learning tasks. Seventy-eight effect sizes were calculated for encoding (n DLD = 1462, n TD = 2161), eight for consolidation (n DLD = 107, n TD = 112), and 19 for reconsolidation (n DLD = 296, n TD = 278).</div><div>The random effect model identified an effect for encoding (k = 78, d = 0.82, [0.66, 0.98], p < .001) but not consolidation (k = 8, d = −0.2, [−0.68, 0.29], p = .43) or reconsolidation (k = 19, d = 0.23, [−0.14, 0.59], p = .22) of new words. The moderator analysis via random effects models identified verbal short-term memory and lexical knowledge as significant moderators of encoding, while word length was the most important task characteristic.</div><div>Despite limited data for consolidation and reconsolidation, our findings provide new insights into oral word learning difficulties in children with DLD. 
These insights help clinicians and teachers identify support strategies while also highlighting gaps in existing research, driving future studies forward.</div></div>","PeriodicalId":16493,"journal":{"name":"Journal of memory and language","volume":"145 ","pages":"Article 104678"},"PeriodicalIF":3.0,"publicationDate":"2025-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144780605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-08-03DOI: 10.1016/j.jml.2025.104671
Shota Momma , Norvin Richards , Victor S. Ferreira
Do speakers encode abstract structural representations devoid of perceptual-motor content, that is, phonology? In six recall-based production experiments, we examined whether English speakers encode the null complementizer in sentence production using structural priming, the tendency for speakers to reuse the structure they have recently encountered. The results show that the null complementizer can be primed across distinct construction types and that this priming effect cannot be explained as the priming of the absence of the overt complementizer. These results are difficult to capture in semantic, pragmatic, or phonological terms. Furthermore, we evaluated two varieties of neural network language models (based on transformers and long short-term memory) for their capacity to reproduce human priming patterns. Although they could reproduce basic priming effects, neural network language models were simultaneously more sensitive to constructional differences and less sensitive to abstract similarities across constructions than humans. This suggests that distributional cues alone are likely not sufficient for learning the generalization governing the distribution of English complementizers. Based on these results, we argue that the structural representations speakers construct during production go beyond what they hear and say.
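Priming in language models is commonly quantified by scoring minimal-pair targets under the model and asking whether the model's preference for one form shifts depending on the preceding prime. The sketch below shows that difference-of-differences logic for the null vs. overt complementizer case, with made-up log-probability values; the paper's actual scoring pipeline and models are not specified here.

```python
def log_odds_omission(logp_null, logp_overt):
    """Model's preference for the null complementizer: the difference between
    the log-probability of the target with 'that' omitted and with 'that'
    present (same sentence frame, scored under the same model)."""
    return logp_null - logp_overt

def priming_index(after_null_prime, after_overt_prime):
    """Priming effect as a 2x2 interaction: how much the preference for the
    null form shifts when the preceding prime itself omitted the
    complementizer. Each argument is a (logp_null_target, logp_overt_target)
    pair of scores measured after that prime type."""
    return (log_odds_omission(*after_null_prime)
            - log_odds_omission(*after_overt_prime))

# Hypothetical scores for illustration only: after a null-complementizer
# prime, the model favors the null target by 2 log-units; after an overt
# prime, it is indifferent. The positive index signals a priming effect.
effect = priming_index(after_null_prime=(-10.0, -12.0),
                       after_overt_prime=(-11.0, -11.0))
```

Because the index subtracts the overt-prime baseline, it isolates structural repetition from any overall bias the model has toward omitting the complementizer, which is the same contrast the cross-construction comparisons above rely on.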
{"title":"Speakers encode silent structures: Evidence from complementizer priming in English","authors":"Shota Momma , Norvin Richards , Victor S. Ferreira","doi":"10.1016/j.jml.2025.104671","DOIUrl":"10.1016/j.jml.2025.104671","url":null,"abstract":"<div><div>Do speakers encode abstract structural representations devoid of perceptual-motor content, that is, phonology? In six recall-based production experiments, we examined whether English speakers encode the null complementizer in sentence production using <em>structural priming</em>, the tendency for speakers to reuse the structure they have recently encountered. The results show that the null complementizer can be primed across distinct construction types and that this priming effect cannot be explained as the priming of the absence of the overt complementizer. These results are difficult to capture in semantic, pragmatic, or phonological terms. Furthermore, we evaluated two varieties of neural network language models (based on transformers and long short term memory) for their capacity to reproduce human priming patterns. Although they could reproduce basic priming effects, neural network language models were simultaneously more sensitive to constructional differences and less sensitive to abstract similarities across constructions than humans. This suggests that distributional cues alone are likely not sufficient for learning the generalization governing the distribution of English complementizers. 
Based on these results, we argue that the structural representations speakers construct during production go beyond what they hear and say.</div></div>","PeriodicalId":16493,"journal":{"name":"Journal of memory and language","volume":"145 ","pages":"Article 104671"},"PeriodicalIF":3.0,"publicationDate":"2025-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144766593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}