Pub Date: 2024-03-26. eCollection Date: 2024-01-01. DOI: 10.1162/opmi_a_00131
Margaret A McMullin, Rohit Kumar, Nathan C Higgins, Brian Gygi, Mounya Elhilali, Joel S Snyder
Theories of auditory and visual scene analysis suggest the perception of scenes relies on the identification and segregation of the objects within them, resembling a detail-oriented processing style. However, a more global process may also occur while analyzing scenes, as has been evidenced in the visual domain. To our knowledge, a similar line of research has not been pursued in the auditory domain; we therefore evaluated the contributions of high-level global and low-level acoustic information to auditory scene perception. An additional aim was to increase the field's ecological validity by using, and making available, a new collection of high-quality auditory scenes. Participants rated scenes on eight global properties (e.g., open vs. enclosed), and an acoustic analysis evaluated which low-level features predicted the ratings. We submitted the acoustic measures and the average ratings of the global properties to separate exploratory factor analyses (EFAs). The EFA of the acoustic measures revealed a seven-factor structure explaining 57% of the variance in the data, while the EFA of the global property measures revealed a two-factor structure explaining 64%. Regression analyses revealed that each global property was predicted by at least one acoustic variable (R² = 0.33-0.87). We extended these findings using deep neural network models, examining correlations between human ratings of the global properties and deep embeddings of two computational models: an object-based model and a scene-based model. The results indicate that participants' ratings are more strongly explained by a global analysis of the scene setting, though the relationship between scene perception and auditory perception is multifaceted, with differing correlation patterns evident between the two models. Taken together, our results provide evidence for the ability to perceive auditory scenes from a global perspective.
Some of the acoustic measures predicted ratings of global scene perception, suggesting representations of auditory objects may be transformed through many stages of processing in the ventral auditory stream, similar to what has been proposed in the ventral visual stream. These findings and the open availability of our scene collection will make future studies on perception, attention, and memory for natural auditory scenes possible.
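The regression step described above, predicting a global property rating from an acoustic variable and reporting R², can be illustrated with a minimal least-squares sketch. The feature values and ratings below are invented for illustration; the study used many acoustic measures and mean ratings of eight global properties, not this toy pair.

```python
# Simple linear regression with R^2, in the spirit of the regression analyses
# described above. All numbers are hypothetical, not the study's data.

def simple_ols_r2(x, y):
    """Fit y = a + b*x by least squares and return the R^2 of the fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

feature = [0.2, 0.5, 0.9, 1.3, 1.8, 2.1]   # hypothetical acoustic measure
ratings = [1.1, 1.9, 3.2, 3.8, 5.0, 5.6]   # hypothetical mean "openness" ratings
print(simple_ols_r2(feature, ratings))
```

An R² near the top of the reported 0.33-0.87 range would indicate that a single acoustic variable explains most of the variance in that property's ratings.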
Title: Preliminary Evidence for Global Properties in Human Listeners During Natural Auditory Scene Perception. Open Mind, 8, 333-365.
Pub Date: 2024-03-26. eCollection Date: 2024-01-01. DOI: 10.1162/opmi_a_00128
Shubhamkar Ayare, Nisheeth Srivastava
Multiple object tracking (MOT) involves simultaneously tracking a certain number of target objects amongst a larger set of objects as they all move unpredictably over time. The prevalent explanation for successful target tracking by humans in MOT involving visually identical objects is based on the Visual Indexing Theory. This theory assumes that each target is indexed by a pointer using a non-conceptual mechanism to maintain an object's identity even as its properties change over time. Thus, successful tracking requires successful indexing and the absence of identification errors. Identity maintenance and successful tracking are measured in terms of identification (ID) accuracy and tracking accuracy respectively, with higher accuracy indicating better identity maintenance or better tracking. Existing evidence suggests that humans have high tracking accuracy despite poor identification accuracy, suggesting that it might be possible to perform MOT without indexing. Our work adds to existing evidence for this position through two experiments, and presents a computational model of multiple object tracking that does not require indexes. Our empirical results show that identification accuracy is aligned with tracking accuracy in humans when tracking up to three objects, but falls below it when tracking more. Our computational model of MOT without indexing accounts for several empirical tracking accuracy patterns shown in earlier studies, reproduces the dissociation between tracking and identification accuracy produced earlier in the literature as well as in our experiments, and makes several novel predictions.
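The idea of tracking without identity pointers can be sketched as a nearest-neighbor update: each stored target location is simply replaced by the closest object on the next frame, with no index maintaining who is who. This is a toy 1-D illustration with invented positions, not the authors' model; it shows how two targets that cross paths can swap identities (hurting ID accuracy) while both tracked items remain targets (preserving tracking accuracy).

```python
# Toy nearest-neighbor tracker with no identity indexes (illustrative only).

def nearest(pos, objects):
    """Index of the object closest to a remembered position."""
    return min(range(len(objects)), key=lambda i: abs(objects[i] - pos))

# 1-D positions of four objects over three frames; objects 0 and 1 are targets
# and cross paths between the first two frames.
frames = [
    [0.0, 1.0, 5.0, 6.0],
    [0.9, 0.1, 5.2, 6.1],
    [1.8, -0.7, 5.1, 6.3],
]

tracked = [0, 1]  # current guesses for the two targets
for frame, nxt in zip(frames, frames[1:]):
    tracked = [nearest(frame[t], nxt) for t in tracked]

# Identities may be swapped relative to [0, 1], yet both tracked items are
# still members of the target set: tracking succeeds without indexing.
print(tracked)
```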
Title: Multiple Object Tracking Without Pre-attentive Indexing. Open Mind, 8, 278-308.
Pub Date: 2024-03-26. eCollection Date: 2024-01-01. DOI: 10.1162/opmi_a_00127
Samuel J Cheyette, Steven T Piantadosi
In a large (N = 300), pre-registered experiment and data analysis model, we find that individual variation in overall performance on Raven's Progressive Matrices is substantially driven by differential strategizing in the face of difficulty. Some participants choose to spend more time on hard problems while others choose to spend less, and these differences explain about 42% of the variance in overall performance. In a data analysis jointly predicting participants' reaction times and accuracy on each item, we find that the Raven's task captures anywhere from almost none (3%) to at most half (48%) of participants' variation in time-controlled ability, depending on which notion of ability is assumed. Our results highlight the role that confounding factors such as motivation play in explaining individuals' differential performance in IQ testing.
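The link between time allocation and overall score can be illustrated with a simple correlation sketch. The numbers below are invented; the study's ~42% figure came from a joint model of reaction times and accuracy, not a bivariate correlation like this one.

```python
# Pearson correlation between time spent on hard items and overall accuracy,
# squared to give a share of variance explained. All values are invented.

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

time_on_hard = [10, 14, 18, 25, 30, 41]          # seconds per hard item (invented)
accuracy = [0.42, 0.47, 0.55, 0.60, 0.68, 0.71]  # overall proportion correct (invented)

r = pearson_r(time_on_hard, accuracy)
print(r ** 2)  # share of performance variance tied to time allocation
```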
Title: Response to Difficulty Drives Variation in IQ Test Performance. Open Mind, 8, 265-277.
Pub Date: 2024-03-26. eCollection Date: 2024-01-01. DOI: 10.1162/opmi_a_00130
Joseph R Coffey, Margarita Zeitlin, Jean Crawford, Jesse Snedeker
Prior studies have found that children are more likely to learn words that are frequent in the input and highly imageable. Many theories of word learning, however, predict that these variables should interact, particularly early in development: frequency of a form is of little use if you cannot infer its meaning, and a concrete word cannot be acquired if you never hear it. The present study explores this interaction, how it changes over time, and its relationship to syntactic category effects in children acquiring American English. We analyzed data from 1461 monolingual English-speaking children aged 1;4-2;6 in the MB-CDI norming study (Fenson et al., 1994). Word frequency was estimated from the CHILDES database, and imageability was measured using adult ratings. There was a strong over-additive interaction between frequency and imageability, such that children were more likely to learn a word if it was both highly imageable and very frequent. This interaction was larger in younger children than in older children. There were reliable differences between syntactic categories independent of frequency and imageability, which did not interact with age. These findings are consistent with theories in which children's early words are acquired by mapping frequent word forms onto concrete, perceptually available referents, such that highly frequent items are only acquired if they are also imageable, and vice versa.
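An over-additive interaction of the kind reported can be sketched with a toy logistic model: a frequency × imageability product term pushes the probability of acquisition above what the two main effects alone would give. All coefficients below are invented for illustration.

```python
import math

# Toy logistic model of word acquisition with an interaction term.
# Coefficients are invented; they are not estimates from the MB-CDI data.
def p_known(freq, imag, b0=-3.0, bf=0.8, bi=0.8, bfi=0.9):
    """Probability a word is known, given standardized frequency and imageability."""
    z = b0 + bf * freq + bi * imag + bfi * freq * imag
    return 1 / (1 + math.exp(-z))

both_high = p_known(1.0, 1.0)                 # frequent AND imageable
additive_only = p_known(1.0, 1.0, bfi=0.0)    # same main effects, no interaction

# With the interaction term, a word that is both frequent and imageable is
# more likely to be known than the main effects alone would predict.
print(both_high, additive_only)
```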
Title: It's All in the Interaction: Early Acquired Words Are Both Frequent and Highly Imageable. Open Mind, 8, 309-332.
Pub Date: 2024-03-26. eCollection Date: 2024-01-01. DOI: 10.1162/opmi_a_00135
Rebecca Tollan, Bilge Palaz
A core goal of research in language is to understand the factors that guide choice of linguistic form where more than one option is syntactically well-formed. We discuss one case of optionality that has generated longstanding discussion: the choice of either using or dropping the English complementizer that in sentences like I think (that) the cat followed the dog. Existing psycholinguistic analyses tie that-usage to production pressures associated with sentence planning (Ferreira & Dell, 2000), avoidance of ambiguity (Hawkins, 2004), and relative information density (Jaeger, 2010). Building on observations from cross-linguistic fieldwork, we present a novel proposal in which English that can serve to mark a speaker's "epistemic authority" over the information packaged within the embedded clause; that is, it indicates that the speaker has more knowledge of the embedded proposition compared with their addressee and thus has a perspective that they believe their addressee doesn't share. Testing this proposal with a forced-choice task and a series of corpus surveys, we find that English that is keyed to the use of embedded speaker (first-person) subject pronouns and occurs in sentences containing newsworthy information. Our account of that-optionality takes into account why that is associated with both (i) a dense information signal and (ii) semantic-pragmatic content, as well as extending to cases of non-optionality in subject/sentence-initial clauses (e.g., *(That) the cat is following the dog, I already know) and fragment answers (e.g., What do you already know? *(That) the cat is following the dog), where that is required.
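The corpus-survey logic, tallying how often the complementizer co-occurs with a first-person embedded subject, can be sketched as a simple count. The mini-dataset below is invented; the study's surveys ran over real corpora.

```python
# Toy tally over invented (complementizer, embedded-subject) observations:
# what proportion of "that"-clauses have a first-person embedded subject?
data = [
    ("that", "I"), ("that", "I"), ("that", "she"),
    ("zero", "she"), ("zero", "he"), ("zero", "I"),
]

with_that = [subj for comp, subj in data if comp == "that"]
first_person_rate = sum(s == "I" for s in with_that) / len(with_that)
print(first_person_rate)  # proportion of "that"-clauses with subject "I"
```

On the proposal above, a rate elevated relative to the zero-complementizer clauses would be the signature of *that* marking the speaker's epistemic authority.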
Title: What Does That Mean? Complementizers and Epistemic Authority. Open Mind, 8, 366-394.
Pub Date: 2024-03-05. eCollection Date: 2024-01-01. DOI: 10.1162/opmi_a_00124
Vanessa Kudrnova, Elizabeth S Spelke, Ashley J Thomas
Infants are born into rich social networks and are faced with the challenge of learning about them. When infants observe social interactions, they make predictions about future behavior, but it is not clear whether these predictions are based on social dispositions, social relationships, or both. The current studies (N = 188, N = 90 males) address this question in 12-month-old infants and 16- to 18-month-old toddlers who observe social interactions involving imitation. In Studies 1 and 3, infants and toddlers expected that imitators, compared to non-imitators, would respond to their social partners' distress. Likewise, they expected the targets of imitation, compared to non-targets, to respond to their partner's distress. In Study 2, these expectations did not generalize to interactions with a new partner, providing evidence that infants learned about the relationships between individuals as opposed to their dispositions. In Study 3, infants did not make predictions about responses to laughter, suggesting that infants see imitation as indicative of a specific kind of social relationship. Together, these results provide evidence that imitative interactions support infants' and toddlers' learning about the social relationships connecting unknown individuals.
Title: Infants Infer Social Relationships Between Individuals Who Engage in Imitative Social Interactions. Open Mind, 8, 202-216.
Pub Date: 2024-03-05. eCollection Date: 2024-01-01. DOI: 10.1162/opmi_a_00119
Cory Shain
Many studies of human language processing have shown that readers slow down at less frequent or less predictable words, but there is debate about whether frequency and predictability effects reflect separable cognitive phenomena: are cognitive operations that retrieve words from the mental lexicon based on sensory cues distinct from those that predict upcoming words based on context? Previous evidence for a frequency-predictability dissociation is mostly based on small samples (both for estimating predictability and frequency and for testing their effects on human behavior), artificial materials (e.g., isolated constructed sentences), and implausible modeling assumptions (discrete-time dynamics, linearity, additivity, constant variance, and invariance over time), which raises the question: do frequency and predictability dissociate in ordinary language comprehension, such as story reading? This study leverages recent progress in open data and computational modeling to address this question at scale. A large collection of naturalistic reading data (six datasets, >2.2 M datapoints) is analyzed using nonlinear continuous-time regression, and frequency and predictability are estimated using statistical language models trained on more data than is currently typical in psycholinguistics. Despite the use of naturalistic data, strong predictability estimates, and flexible regression models, results converge with earlier experimental studies in supporting dissociable and additive frequency and predictability effects.
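The two quantities being dissociated, a word's overall frequency and its in-context predictability, can be sketched from a toy corpus: frequency as the unigram log-probability, predictability as bigram surprisal. The corpus below is invented; the study estimated these from large statistical language models.

```python
import math
from collections import Counter

# Toy corpus (invented) for estimating frequency and predictability.
corpus = "the cat sat on the mat the cat ran".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def log_frequency(word):
    """Unigram log2-probability: how common the word is overall."""
    return math.log2(unigrams[word] / len(corpus))

def surprisal(prev, word):
    """Bigram surprisal: how unpredictable the word is after `prev`."""
    return -math.log2(bigrams[(prev, word)] / unigrams[prev])

print(log_frequency("cat"))     # context-free frequency
print(surprisal("the", "cat"))  # context-dependent predictability
```

A dissociation means both quantities contribute additively to reading times, rather than one reducing to the other.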
Title: Word Frequency and Predictability Dissociate in Naturalistic Reading. Open Mind, 8, 177-201.
Pub Date: 2024-03-05. eCollection Date: 2024-01-01. DOI: 10.1162/opmi_a_00125
Tom S Juzek
The Smooth Signal Redundancy Hypothesis explains variations in syllable length as a means to more uniformly distribute information throughout the speech signal. The Uniform Information Density hypothesis seeks to generalize this to choices on all linguistic levels, particularly syntactic choices. While there is some evidence for the Uniform Information Density hypothesis, it faces several challenges, four of which are discussed in this paper. First, it is not clear what exactly counts as uniform. Second, there are syntactic alternations that occur systematically but that can cause notable fluctuations in the information signature. Third, there is an increasing body of negative results. Fourth, there is a lack of large-scale evidence. As to the fourth point, this paper provides a broader array of data (936 sentence pairs for nine syntactic constructions) and analyzes them in a test setup that treats the hypothesis as a classifier. For our data, the Uniform Information Density hypothesis showed little predictive capacity. We explore ways to reconcile our data with theory.
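Treating the hypothesis as a classifier can be sketched as follows: given per-word surprisal profiles for two syntactic alternants, predict that speakers choose the variant whose information signature is more uniform (lower variance). The surprisal values below are invented for illustration.

```python
# UID-as-classifier sketch: pick the alternant with the more uniform
# per-word surprisal profile. Profiles are invented, not corpus estimates.

def variance(xs):
    """Population variance of a sequence of surprisal values (bits)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

variant_a = [2.0, 2.5, 2.2, 2.4]   # e.g., alternant with an extra function word
variant_b = [1.0, 4.5, 0.8, 3.9]   # e.g., the more compact alternant

# The classifier predicts the smoother (lower-variance) profile is chosen.
prediction = "a" if variance(variant_a) < variance(variant_b) else "b"
print(prediction)
```

Scoring such predictions against the variants speakers actually produced is the test setup the paper reports; for its 936 sentence pairs, accuracy was low.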
Title: Signal Smoothing and Syntactic Choices: A Critical Reflection on the UID Hypothesis. Open Mind, 8, 217-234.
Expecting the Unexpected: Infants Use Others' Surprise to Revise Their Own Expectations
Yang Wu, Megan Merrick, Hyowon Gweon
Pub Date: 2024-03-01 · eCollection Date: 2024-01-01 · DOI: 10.1162/opmi_a_00117 · Open Mind, vol. 8, pp. 67-83
Human infants show systematic responses to events that violate their expectations. Can they also revise these expectations based on others' expressions of surprise? Here we ask whether infants (N = 156, mean age = 15.2 months, range: 12.0-18.0 months) can use an experimenter's expression of surprise to revise their own expectations about statistically probable vs. improbable events. An experimenter sampled a ball from a box of red and white balls and briefly displayed either a surprised or an unsurprised expression at the outcome before revealing it to the infant. Following an unsurprised expression, the results were consistent with prior work: infants looked longer at a statistically improbable outcome than at a probable one. Following a surprised expression, however, this standard pattern disappeared or was even reversed. These results suggest that even before infants can observe unexpected events themselves, they can use others' surprise to expect the unexpected. Starting early in life, human learners can leverage social information that signals others' prediction error to update their own predictions.
Systematic Human Learning and Generalization From a Brief Tutorial With Explanatory Feedback
Andrew J Nam, James L McClelland
Pub Date: 2024-03-01 · eCollection Date: 2024-01-01 · DOI: 10.1162/opmi_a_00123 · Open Mind, vol. 8, pp. 148-176
We investigate human adults' ability to learn an abstract reasoning task quickly and to generalize outside the range of training examples. Using a task based on a solution strategy in Sudoku, we provide Sudoku-naive participants with a brief instructional tutorial with explanatory feedback, drawing on a narrow range of training examples. We find that most participants who master the task do so within 10 practice trials and generalize well to puzzles outside the training range. We also find that most of those who master the task can describe a valid solution strategy, and such participants perform better on transfer puzzles than those whose strategy descriptions are vague or incomplete. Interestingly, fewer than half of our participants succeeded in acquiring a valid solution strategy, and this ability was associated with completion of high school algebra and geometry. We consider the implications of these findings for understanding human systematic reasoning, as well as the challenges they pose for building computational models that capture all aspects of our results, and we point toward a role for learning from instructions and explanations in supporting rapid learning and generalization.