Pub Date : 2024-05-22DOI: 10.1016/j.cognition.2024.105809
Jon W. Carr, Kathleen Rastle
It is widely acknowledged that opaque orthographies place additional demands on learning, often requiring many years to fully acquire. It is less widely recognized, however, that such opacity may offer certain benefits in the context of reading. For example, heterographic homophones such as ⟨knight⟩ and ⟨night⟩ (words that sound the same but which are spelled differently) impose additional costs in learning but reduce ambiguity in reading. Here, we consider the possibility that—left to evolve freely—writing systems will sometimes choose to forego some simplicity for the sake of informativeness when there is functional pressure to do so. We investigate this hypothesis by simulating the evolution of orthography as it is transmitted from one generation to the next, both with and without a communicative pressure for ambiguity avoidance. In addition, we consider two mechanisms by which informative heterography might be selected for: differentiation, in which new spellings are created to differentiate meaning (e.g., ⟨lite⟩ vs. ⟨light⟩), and conservation, in which heterography arises as a byproduct of sound change (e.g., ⟨meat⟩ vs. ⟨meet⟩). Under pressure from learning alone, orthographic systems become transparent, but when combined with communicative pressure, they tend to favor some additional informativeness. Nevertheless, our findings also suggest that, in the long term, simpler, transparent spellings may be preferred in the absence of top-down explicit teaching.
{"title":"Why do languages tolerate heterography? An experimental investigation into the emergence of informative orthography","authors":"Jon W. Carr, Kathleen Rastle","doi":"10.1016/j.cognition.2024.105809","DOIUrl":"https://doi.org/10.1016/j.cognition.2024.105809","url":null,"abstract":"<div><p>It is widely acknowledged that opaque orthographies place additional demands on learning, often requiring many years to fully acquire. It is less widely recognized, however, that such opacity may offer certain benefits in the context of reading. For example, heterographic homophones such as ⟨knight⟩ and ⟨night⟩ (words that sound the same but which are spelled differently) impose additional costs in learning but reduce ambiguity in reading. Here, we consider the possibility that—left to evolve freely—writing systems will sometimes choose to forego some simplicity for the sake of informativeness when there is functional pressure to do so. We investigate this hypothesis by simulating the evolution of orthography as it is transmitted from one generation to the next, both with and without a communicative pressure for ambiguity avoidance. In addition, we consider two mechanisms by which informative heterography might be selected for: differentiation, in which new spellings are created to differentiate meaning (e.g., ⟨lite⟩ vs. ⟨light⟩), and conservation, in which heterography arises as a byproduct of sound change (e.g., ⟨meat⟩ vs. ⟨meet⟩). Under pressure from learning alone, orthographic systems become transparent, but when combined with communicative pressure, they tend to favor some additional informativeness. 
Nevertheless, our findings also suggest that, in the long term, simpler, transparent spellings may be preferred in the absence of top-down explicit teaching.</p></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0010027724000957/pdfft?md5=50893fea53a241d025d92ce676e70219&pid=1-s2.0-S0010027724000957-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141083868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-05-21DOI: 10.1016/j.cognition.2024.105811
Laura Wagner , Carlo Geraci , Jeremy Kuhn , Kathryn Davidson , Brent Strickland
Adults with no knowledge of sign languages can perceive distinctive markers that signal event boundedness (telicity), suggesting that telicity is a cognitively natural semantic feature that can be marked iconically (Strickland et al., 2015). This study asks if non-signing children (5-year-olds) can also link telicity to iconic markers in sign. Experiment 1 attempted three close replications of Strickland et al. (2015) and found only limited success. However, Experiment 2 showed that children can both perceive the relevant visual feature and can succeed at linking the visual property to telicity semantics when allowed to filter their answer through their own linguistic choices. Children's performance demonstrates the cognitive naturalness and early availability of the semantics of telicity, supporting the idea that telicity helps guide the language acquisition process.
{"title":"Non-signing children's assessment of telicity in sign language","authors":"Laura Wagner , Carlo Geraci , Jeremy Kuhn , Kathryn Davidson , Brent Strickland","doi":"10.1016/j.cognition.2024.105811","DOIUrl":"https://doi.org/10.1016/j.cognition.2024.105811","url":null,"abstract":"<div><p>Adults with no knowledge of sign languages can perceive distinctive markers that signal event boundedness (telicity), suggesting that telicity is a cognitively natural semantic feature that can be marked iconically (<span>Strickland et al., 2015</span>). This study asks if non-signing children (5-year-olds) can also link telicity to iconic markers in sign. Experiment 1 attempted three close replications of <span>Strickland et al. (2015)</span> and found only limited success. However, Experiment 2 showed that children can both perceive the relevant visual feature and can succeed at linking the visual property to telicity semantics when allowed to filter their answer through their own linguistic choices. Children's performance demonstrates the cognitive naturalness and early availability of the semantics of telicity, supporting the idea that telicity helps guide the language acquisition process.</p></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0010027724000970/pdfft?md5=113646deebc2b06b83d329f0f8a69186&pid=1-s2.0-S0010027724000970-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141072629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-05-21DOI: 10.1016/j.cognition.2024.105808
Shengnan Zhu , Yongqi Li , Yingtao Fu , Jun Yin , Mowei Shen , Hui Chen
This study aimed to determine the unit for switching representational states in visual working memory (VWM). Two opposing hypotheses were investigated: (a) the unit of switching is a feature (feature-based hypothesis), and (b) the unit of switching is an object (object-based hypothesis). Participants (N = 180) were instructed to hold two features from either one or two objects in their VWM. The memory-driven attentional capture effect, whereby actively held information in VWM can cause attention to be drawn towards matched distractors, was employed to assess the representational states of the first and second probed colors (indicated by a retro-cue). The results showed that only the feature indicated to be probed first elicited memory-related capture in the separate-objects condition. Importantly, features from an integrated object could guide attention regardless of the probe order. These findings were observed across three experiments involving features of different dimensions, the same dimension, or perceptual objects defined by Gestalt principles.
{"title":"The object as the unit for state switching in visual working memory","authors":"Shengnan Zhu , Yongqi Li , Yingtao Fu , Jun Yin , Mowei Shen , Hui Chen","doi":"10.1016/j.cognition.2024.105808","DOIUrl":"https://doi.org/10.1016/j.cognition.2024.105808","url":null,"abstract":"<div><p>This study aimed to determine the unit for switching representational states in visual working memory (VWM). Two opposing hypotheses were investigated: (a) the unit of switching being a feature (feature-based hypothesis), and (b) the unit of switching being an object (object-based hypothesis). Participants (<em>N</em> = 180) were instructed to hold two features from either one or two objects in their VWM. The memory-driven attentional capture effect, suggesting that actively held information in VWM can cause attention to be drawn towards matched distractors, was employed to assess representational states of the first and second probed colors (indicated by a retro-cue). The results showed that only the feature indicated to be probed first could elicit memory related capture for the condition of separate objects. Importantly, features from an integrated object could guide attention regardless of the probe order. These findings were observed across three experiments involving features of different dimensions, same dimensions, or perceptual objects defined by Gestalt principles. 
They provide convergent evidence supporting the object-based hypothesis by indicating that features within a single object cannot exist in different states.</p></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141077661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-05-20DOI: 10.1016/j.cognition.2024.105765
Ethan Gotlieb Wilcox , Tiago Pimentel , Clara Meister , Ryan Cotterell
Regressions, or backward saccades, are common during reading, accounting for between 5% and 20% of all saccades. And yet, relatively little is known about what causes them. We provide an information-theoretic operationalization for two previous qualitative hypotheses about regressions, which we dub reactivation and reanalysis. We argue that these hypotheses make different predictions about the pointwise mutual information, or pmi, between a regression's source and target. Intuitively, the pmi between two words measures how much more (or less) likely one word is to be present given the other. On one hand, the reactivation hypothesis predicts that regressions occur between words that are associated, implying high positive values of pmi. On the other hand, the reanalysis hypothesis predicts that regressions should occur between words that are not associated with each other, implying negative, low values of pmi. As a second theoretical contribution, we expand on previous theories by considering not only pmi but also expected values of pmi, E[pmi], where the expectation is taken over all possible realizations of the regression's target. The rationale for this is that language processing involves making inferences under uncertainty, and readers may be uncertain about what they have read, especially if a previous word was skipped. To test both theories, we use contemporary language models to estimate pmi-based statistics over word pairs in three corpora of eye-tracking data in English, as well as in six languages across three language families (Indo-European, Uralic, and Turkic). Our results are consistent across languages and models tested: Positive values of pmi and E[pmi] consistently help to predict the patterns of regressions during reading, whereas negative values of pmi and E[pmi] do not.
Our information-theoretic interpretation increases the predictive scope of both theories and our studies present the first systematic crosslinguistic analysis of regressions in the literature. Our results support the reactivation hypothesis and, more broadly, they expand the number of language processing behaviors that can be linked to information-theoretic principles.
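The pmi statistic at the core of both hypotheses has a simple closed form, pmi(x; y) = log2 p(x, y) / (p(x) p(y)), and E[pmi] is its expectation over possible realizations of the target. A minimal sketch, using an invented toy joint distribution (not the paper's corpora or language-model estimates):

```python
import math

# Toy joint distribution over (source, target) word pairs.
# These probabilities are invented for illustration only.
joint = {
    ("knight", "night"): 0.10,
    ("knight", "day"):   0.05,
    ("meat",   "night"): 0.05,
    ("meat",   "day"):   0.80,
}

def marginal(word, axis):
    """Marginal probability of a word on one side of the pair."""
    return sum(p for pair, p in joint.items() if pair[axis] == word)

def pmi(source, target):
    # pmi(x; y) = log2 p(x, y) / (p(x) p(y))
    return math.log2(joint[(source, target)] /
                     (marginal(source, 0) * marginal(target, 1)))

def expected_pmi(source):
    # E[pmi]: average pmi over all possible targets,
    # weighted by the conditional probability p(target | source).
    p_s = marginal(source, 0)
    return sum((p / p_s) * pmi(s, t)
               for (s, t), p in joint.items() if s == source)
```

For "knight" and "night" the joint probability exceeds the product of the marginals, so pmi is positive (the words are associated); under the reactivation hypothesis, such pairs are the ones that should attract regressions.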
{"title":"An information-theoretic analysis of targeted regressions during reading","authors":"Ethan Gotlieb Wilcox , Tiago Pimentel , Clara Meister , Ryan Cotterell","doi":"10.1016/j.cognition.2024.105765","DOIUrl":"https://doi.org/10.1016/j.cognition.2024.105765","url":null,"abstract":"<div><p>Regressions, or backward saccades, are common during reading, accounting for between 5% and 20% of all saccades. And yet, relatively little is known about what causes them. We provide an information-theoretic operationalization for two previous qualitative hypotheses about regressions, which we dub <em>reactivation</em> and <em>reanalysis</em>. We argue that these hypotheses make different predictions about the pointwise mutual information or <span>pmi</span> between a regression’s source and target. Intuitively, the <span>pmi</span> between two words measures how much more (or less) likely one word is to be present given the other. On one hand, the reactivation hypothesis predicts that regressions occur between words that are associated, implying high positive values of <span>pmi</span>. On the other hand, the reanalysis hypothesis predicts that regressions should occur between words that are not associated with each other, implying negative, low values of <span>pmi</span>. As a second theoretical contribution, we expand on previous theories by considering not only <span>pmi</span> but also expected values of <span>pmi</span>, <span><math><mi>E</mi></math></span>[<span>pmi</span>], where the expectation is taken over all possible realizations of the regression’s target. The rationale for this is that language processing involves making inferences under uncertainty, and readers may be uncertain about what they have read, especially if a previous word was skipped. 
To test both theories, we use contemporary language models to estimate <span>pmi</span>-based statistics over word pairs in three corpora of eye tracking data in English, as well as in six languages across three language families (Indo-European, Uralic, and Turkic). Our results are consistent across languages and models tested: Positive values of <span>pmi</span> and <span><math><mi>E</mi></math></span>[<span>pmi</span>] consistently help to predict the patterns of regressions during reading, whereas negative values of <span>pmi</span> and <span><math><mi>E</mi></math></span>[<span>pmi</span>] do not. Our information-theoretic interpretation increases the predictive scope of both theories and our studies present the first systematic crosslinguistic analysis of regressions in the literature. Our results support the reactivation hypothesis and, more broadly, they expand the number of language processing behaviors that can be linked to information-theoretic principles.</p></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0010027724000519/pdfft?md5=4f323cb270d662df6ed6abd6df2bfac1&pid=1-s2.0-S0010027724000519-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141072627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-05-20DOI: 10.1016/j.cognition.2024.105818
Vsevolod Kapatsinski , Adam A. Bramlett , Kaori Idemaru
In language comprehension, we use perceptual cues to infer meanings. Some of these cues reside on perceptual dimensions. For example, the difference between bear and pear is cued by a difference in voice onset time (VOT), which is a continuous perceptual dimension. The present paper asks whether, and when, experience with a single value on a dimension behaving unexpectedly is used by the learner to reweight the whole dimension. We show that learners reweight the whole VOT dimension when exposed to a single VOT value (e.g., 45 ms) and provided with feedback indicating that the speaker intended to produce a /b/ 50% of the time and a /p/ the other 50% of the time. Importantly, dimensional reweighting occurs only if 1) the 50/50 feedback is unexpected for the VOT value, and 2) there is another dimension that is predictive of feedback. When no predictive dimension is available, listeners reassociate the experienced VOT value with the more surprising outcome but do not downweight the entire VOT dimension. These results provide support for perceptual representations of speech sounds that combine cues and dimensions, for viewing perceptual learning in speech as a combination of error-driven cue reassociation and dimensional reweighting, and for considering dimensional reweighting to be reallocation of attention that occurs only when there is evidence that reallocating attention would improve prediction accuracy (Harmon, Z., Idemaru, K., & Kapatsinski, V. 2019. Learning mechanisms in cue reweighting. Cognition, 189, 76–88.).
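Error-driven cue reassociation of the kind described here is standardly modeled with a Rescorla-Wagner delta rule. The sketch below is illustrative only — the cue names, trial structure, and learning rate are invented, and this is not the authors' model:

```python
def rescorla_wagner(trials, cues, learning_rate=0.1):
    """Delta-rule updating of cue-outcome association weights."""
    weights = {c: {"b": 0.0, "p": 0.0} for c in cues}
    for present_cues, outcome in trials:
        for target in ("b", "p"):
            teacher = 1.0 if target == outcome else 0.0
            # Prediction is the summed weight of all cues present on this trial.
            prediction = sum(weights[c][target] for c in present_cues)
            error = teacher - prediction
            # Each present cue absorbs a share of the prediction error.
            for c in present_cues:
                weights[c][target] += learning_rate * error
    return weights

# A hypothetical ambiguous VOT value (45 ms) paired 50/50 with /b/ and /p/,
# while a co-present f0 cue reliably predicts the outcome.
cues = ["vot45", "f0_high", "f0_low"]
trials = [(["vot45", "f0_high"], "p"), (["vot45", "f0_low"], "b")] * 50
weights = rescorla_wagner(trials, cues)
```

After training, the f0 cues come to carry the /b/-/p/ distinction while the ambiguous VOT value ends up roughly equally associated with both outcomes: cue reassociation, without the dimensional reweighting that the abstract treats as a separate, attention-like mechanism.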
{"title":"What do you learn from a single cue? Dimensional reweighting and cue reassociation from experience with a newly unreliable phonetic cue","authors":"Vsevolod Kapatsinski , Adam A. Bramlett , Kaori Idemaru","doi":"10.1016/j.cognition.2024.105818","DOIUrl":"https://doi.org/10.1016/j.cognition.2024.105818","url":null,"abstract":"<div><p>In language comprehension, we use perceptual cues to infer meanings. Some of these cues reside on perceptual dimensions. For example, the difference between <em>bear</em> and <em>pear</em> is cued by a difference in voice onset time (VOT), which is a continuous perceptual dimension. The present paper asks whether, and when, experience with a single value on a dimension behaving unexpectedly is used by the learner to reweight the whole dimension. We show that learners reweight the whole VOT dimension when exposed to a single VOT value (e.g., 45 ms) and provided with feedback indicating that the speaker intended to produce a /b/ 50% of the time and a /p/ the other 50% of the time. Importantly, dimensional reweighting occurs only if 1) the 50/50 feedback is unexpected for the VOT value, and 2) there is another dimension that is predictive of feedback. When no predictive dimension is available, listeners reassociate the experienced VOT value with the more surprising outcome but do not downweight the entire VOT dimension. These results provide support for perceptual representations of speech sounds that combine cues and dimensions, for viewing perceptual learning in speech as a combination of error-driven cue reassociation and dimensional reweighting, and for considering dimensional reweighting to be reallocation of attention that occurs only when there is evidence that reallocating attention would improve prediction accuracy (Harmon, Z., Idemaru, K., & Kapatsinski, V. 2019. Learning mechanisms in cue reweighting. 
<em>Cognition</em>, <em>189</em>, 76–88.).</p></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141072628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-05-18DOI: 10.1016/j.cognition.2024.105812
Dóra Fogd, Natalie Sebanz, Ágnes Melinda Kovács
Successful interactions require not only representing others’ mental states but also flexibly updating them whenever one’s original inferences may no longer hold. Such situations arise, for instance, when a partner’s behavior is incongruent with one’s expectations. Although these situations are rather common, the question of whether people update others’ mental states spontaneously upon encountering unexpected behaviors, and whether they use the updated mental states in novel contexts, has been largely unexplored. We addressed these issues in two experiments. In each experiment, participants first performed an anticipatory looking task, reacting to a virtual ‘partner’ who categorized pictures based on their ambiguous or non-ambiguous color. Importantly, to perform the task participants did not have to track their partner’s perspective. Following a correct categorization phase, the ‘partner’ started to systematically miscategorize one of the ambiguous colors (e.g., as if she would now believe that the greenish blue is green). We measured how participants’ anticipatory looking preceding the partner’s categorization changed across trials. Afterward, we asked whether participants implicitly transferred their knowledge about the partner’s updated perspective to a new task. Finally, they performed an explicit perspective-taking task, to test whether they selectively updated the partner’s perspective, but not their own. Results revealed that correct anticipations started to emerge only after a few miscategorizations, indicating spontaneous updating of the other’s perspective regarding the miscategorized color. Signatures of updating emerged somewhat earlier when the partner made similarity judgments (Experiment 2), highlighting the subjective nature of her decisions, than when she followed an explicit color-categorization rule (Experiment 1).
In the explicit perspective-taking task of both experiments, roughly half of the participants could categorize items according to the partner’s (spontaneously updated) perspective; these participants also used the partner’s updated perspective to some degree in the implicit transfer task, and they displayed the more pronounced anticipatory patterns. Such data provide strong evidence that the observed changes in anticipatory looking reflect spontaneous and flexible mental state updating. In addition, the findings point to high individual variability in both the updating of attributed mental states and the use of the updated mental-state content.
{"title":"Flexible social monitoring as revealed by eye movements: Spontaneous mental state updating triggered by others’ unexpected actions","authors":"Dóra Fogd, Natalie Sebanz, Ágnes Melinda Kovács","doi":"10.1016/j.cognition.2024.105812","DOIUrl":"10.1016/j.cognition.2024.105812","url":null,"abstract":"<div><p>Successful interactions require not only representing others’ mental states but also flexibly updating them, whenever one’s original inferences may no longer hold. Such situations arise, for instance, when a partner’s behavior is incongruent with one’s expectations. Although these situations are rather common, the question whether people update others’ mental states spontaneously upon encountering unexpected behaviors and whether they use the updated mental states in novel contexts, has been largely unexplored. We addressed these issues in two experiments. In each experiment participants first performed an anticipatory looking task, reacting to a virtual ‘partner’, who categorized pictures based on their ambiguous or non-ambiguous color. Importantly, to perform the task participants did not have to track their partner’s perspective. Following a correct categorization phase, the ‘partner’ started to systematically miscategorize one of the ambiguous colors (e.g., as if she would now believe that the greenish blue is green). We measured how participants’ anticipatory looking preceding the partner’s categorization changed across trials. Afterward, we asked whether participants implicitly transferred their knowledge about the partner’s updated perspective to a new task. Finally, they performed an explicit perspective-taking task, to test whether they selectively updated the partner’s perspective, but not their own. Results revealed that correct anticipations started to emerge only after a few miscategorizations, indicating the spontaneous updating of the other’s perspective regarding the miscategorized color. 
Signatures of updating emerged somewhat earlier when the partner made similarity judgments (Experiment 2), highlighting the subjective nature of her decisions, compared to when following an explicit color-categorization rule (Experiment 1). In the explicit perspective-taking task of both experiments, roughly half of the participants could categorize items according to the partner’s (spontaneously updated) perspective and also used their partner’s updated perspective in the implicit transfer task to some degree, while they were the ones who displayed more pronounced anticipatory patterns as well. Such data provides strong evidence that the observed changes in anticipatory looking reflect spontaneous and flexible mental state updating. In addition, the findings also point to a high individual variability both in the updating of attributed mental states and the use of the updated mental state content.</p></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0010027724000982/pdfft?md5=197aaca3e056261ca4ff8f855259bd27&pid=1-s2.0-S0010027724000982-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141058593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-05-18DOI: 10.1016/j.cognition.2024.105814
Teresa Flanagan , Nicholas C. Georgiou , Brian Scassellati , Tamar Kushnir
We expect children to learn new words, skills, and ideas from various technologies. When learning from humans, children prefer people who are reliable and trustworthy, yet children also forgive people's occasional mistakes. Are the dynamics of children learning from technologies, which can also be unreliable, similar to learning from humans? We tackle this question by focusing on early childhood, an age at which children are expected to master foundational academic skills. In this project, 168 4–7-year-old children (Study 1) and 168 adults (Study 2) played a word-guessing game with either a human or robot. The partner first gave a sequence of correct answers, but then followed this with a sequence of wrong answers, with a reaction following each one. Reactions varied by condition, either expressing an accident, an accident marked with an apology, or an unhelpful intention. We found that older children were less trusting than both younger children and adults and were even more skeptical after errors. Trust decreased most rapidly when errors were intentional, but only children (and especially older children) outright rejected help from intentionally unhelpful partners. As an exception to this general trend, older children maintained their trust for longer when a robot (but not a human) apologized for its mistake. Our work suggests that educational technology design cannot be one size fits all but rather must account for developmental changes in children's learning goals.
{"title":"School-age children are more skeptical of inaccurate robots than adults","authors":"Teresa Flanagan , Nicholas C. Georgiou , Brian Scassellati , Tamar Kushnir","doi":"10.1016/j.cognition.2024.105814","DOIUrl":"10.1016/j.cognition.2024.105814","url":null,"abstract":"<div><p>We expect children to learn new words, skills, and ideas from various technologies. When learning from humans, children prefer people who are reliable and trustworthy, yet children also forgive people's occasional mistakes. Are the dynamics of children learning from technologies, which can also be unreliable, similar to learning from humans? We tackle this question by focusing on early childhood, an age at which children are expected to master foundational academic skills. In this project, 168 4–7-year-old children (Study 1) and 168 adults (Study 2) played a word-guessing game with either a human or robot. The partner first gave a sequence of correct answers, but then followed this with a sequence of wrong answers, with a reaction following each one. Reactions varied by condition, either expressing an accident, an accident marked with an apology, or an unhelpful intention. We found that older children were less trusting than both younger children and adults and were even more skeptical after errors. Trust decreased most rapidly when errors were intentional, but only children (and especially older children) outright rejected help from intentionally unhelpful partners. As an exception to this general trend, older children maintained their trust for longer when a robot (but not a human) apologized for its mistake. 
Our work suggests that educational technology design cannot be one size fits all but rather must account for developmental changes in children's learning goals.</p></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141058594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-05-18DOI: 10.1016/j.cognition.2024.105792
Jonathan E. Prunty , Rob Jenkins , Rana Qarooni , Markus Bindemann
Faces are highly informative social stimuli, yet before any information can be accessed, the face must first be detected in the visual field. A detection template that serves this purpose must be able to accommodate the wide variety of face images we encounter, but how this generality could be achieved remains unknown. In this study, we investigate whether statistical averages of previously encountered faces can form the basis of a general face detection template. We provide converging evidence from a range of methods—human similarity judgements and PCA-based image analysis of face averages (Experiment 1–3), human detection behaviour for faces embedded in complex scenes (Experiment 4 and 5), and simulations with a template-matching algorithm (Experiment 6 and 7)—to examine the formation, stability and robustness of statistical image averages as cognitive templates for human face detection. We integrate these findings with existing knowledge of face identification, ensemble coding, and the development of face perception.
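A template-matching detector of the kind used in the simulations slides a stored template across the image and scores local similarity at each location. The toy scene, template values, and sum-of-squared-differences score below are invented for illustration and are not the authors' algorithm or stimuli:

```python
# A tiny grayscale "scene" with a face-like pattern embedded at row 1, col 1,
# and an average-face "template" to search for. Values are invented.
scene = [
    [0, 0, 0, 0, 0],
    [0, 9, 7, 9, 0],
    [0, 7, 1, 7, 0],
    [0, 9, 7, 9, 0],
    [0, 0, 0, 0, 0],
]
template = [
    [9, 7, 9],
    [7, 1, 7],
    [9, 7, 9],
]

def match_score(scene, template, row, col):
    """Negative sum of squared differences: higher means a better match."""
    h, w = len(template), len(template[0])
    return -sum((scene[row + i][col + j] - template[i][j]) ** 2
                for i in range(h) for j in range(w))

def detect(scene, template):
    """Slide the template over the scene; return the best top-left position."""
    h, w = len(template), len(template[0])
    positions = [(r, c)
                 for r in range(len(scene) - h + 1)
                 for c in range(len(scene[0]) - w + 1)]
    return max(positions, key=lambda rc: match_score(scene, template, *rc))
```

Here `detect(scene, template)` returns (1, 1), the window where the embedded pattern matches the template exactly; a statistical average of previously encountered faces would serve as the template in the account the abstract tests.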
{"title":"A cognitive template for human face detection","authors":"Jonathan E. Prunty , Rob Jenkins , Rana Qarooni , Markus Bindemann","doi":"10.1016/j.cognition.2024.105792","DOIUrl":"10.1016/j.cognition.2024.105792","url":null,"abstract":"<div><p>Faces are highly informative social stimuli, yet before any information can be accessed, the face must first be detected in the visual field. A detection template that serves this purpose must be able to accommodate the wide variety of face images we encounter, but how this generality could be achieved remains unknown. In this study, we investigate whether statistical averages of previously encountered faces can form the basis of a general face detection template. We provide converging evidence from a range of methods—human similarity judgements and PCA-based image analysis of face averages (Experiment 1–3), human detection behaviour for faces embedded in complex scenes (Experiment 4 and 5), and simulations with a template-matching algorithm (Experiment 6 and 7)—to examine the formation, stability and robustness of statistical image averages as cognitive templates for human face detection. We integrate these findings with existing knowledge of face identification, ensemble coding, and the development of face perception.</p></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141058653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-17 DOI: 10.1016/j.cognition.2024.105815
Sonja Walcher, Živa Korda, Christof Körner, Mathias Benedek
Eyes are active in memory recall and visual imagination, yet our grasp of the underlying qualities and factors of these internally coupled eye movements is limited. To explore this, we studied 50 participants, examining how workload, spatial reference availability, and imagined movement direction influence the internal coupling of eye movements. We designed a visuospatial working memory task in which participants mentally moved a black patch along a path within a matrix; each trial involved one step along this path (presented via speakers: up, down, left, or right). We varied workload by adjusting matrix size (3 × 3 vs. 5 × 5), manipulated the availability of a spatial frame of reference by presenting either a blank screen (requiring participants to rely solely on their mental representation of the matrix) or a spatial reference in the form of an empty matrix, and contrasted active task performance with two control conditions involving only active or passive listening. Our findings show that eye movements consistently matched the imagined movement of the patch in the matrix and were not driven solely by auditory or semantic cues. While workload influenced pupil diameter, perceived demand, and performance, it had no observable impact on internal coupling. The availability of a spatial reference enhanced the coupling of eye movements, leading to more frequent and more precise saccades that were resilient against noise and bias. The absence of workload effects on coupled saccades in our study, in combination with the relatively high degree of coupling observed even in the invisible-matrix condition, indicates that eye movements align with shifts in attention across both visually and internally represented information. This suggests that coupled eye movements are not merely strategic efforts to reduce workload, but rather a natural response to where attention is directed.
{"title":"How workload and availability of spatial reference shape eye movement coupling in visuospatial working memory","authors":"Sonja Walcher , Živa Korda , Christof Körner , Mathias Benedek","doi":"10.1016/j.cognition.2024.105815","DOIUrl":"10.1016/j.cognition.2024.105815","url":null,"abstract":"<div><p>Eyes are active in memory recall and visual imagination, yet our grasp of the underlying qualities and factors of these internally coupled eye movements is limited. To explore this, we studied 50 participants, examining how workload, spatial reference availability, and imagined movement direction influence internal coupling of eye movements. We designed a visuospatial working memory task in which participants mentally moved a black patch along a path within a matrix and each trial involved one step along this path (presented via speakers: up, down, left, or right). We varied workload by adjusting matrix size (3 × 3 vs. 5 × 5), manipulated availability of a spatial frame of reference by presenting either a blank screen (requiring participants to rely solely on their mental representation of the matrix) or spatial reference in the form of an empty matrix, and contrasted active task performance to two control conditions involving only active or passive listening. Our findings show that eye movements consistently matched the imagined movement of the patch in the matrix, not driven solely by auditory or semantic cues. While workload influenced pupil diameter, perceived demand, and performance, it had no observable impact on internal coupling. The availability of spatial reference enhanced coupling of eye movements, leading more frequent, precise, and resilient saccades against noise and bias. 
The absence of workload effects on coupled saccades in our study, in combination with the relatively high degree of coupling observed even in the invisible matrix condition, indicates that eye movements align with shifts in attention across both visually and internally represented information. This suggests that coupled eye movements are not merely strategic efforts to reduce workload, but rather a natural response to where attention is directed.</p></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S001002772400101X/pdfft?md5=0bcdbc0f417e60c3ab4be70ae1946cf0&pid=1-s2.0-S001002772400101X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140960249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
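The trial logic described in the abstract — a patch occupying one cell of an N × N matrix, moved one step per spoken cue — can be sketched as follows. This is purely illustrative; the abstract does not specify how the paths were generated or how off-grid moves were prevented, so the boundary check here is an assumption:

```python
# One cued step of the (hypothetical) patch-movement task:
# row/column offsets for each spoken direction cue.
STEPS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def move(pos, cue, n):
    """Apply one cued step on an n x n matrix; reject off-grid moves
    (assumption: experimental paths never left the matrix)."""
    dr, dc = STEPS[cue]
    r, c = pos[0] + dr, pos[1] + dc
    if not (0 <= r < n and 0 <= c < n):
        raise ValueError("cue would move the patch off the matrix")
    return (r, c)

def run_path(start, cues, n):
    """Follow a whole sequence of cues from a starting cell."""
    pos = start
    for cue in cues:
        pos = move(pos, cue, n)
    return pos
```

For example, starting at the centre of a 3 × 3 matrix, the cue sequence up, right, down ends one cell to the right of the start.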
Pub Date: 2024-05-17 DOI: 10.1016/j.cognition.2024.105805
Nicola Di Stefano, Charles Spence
Absolute pitch is the name given to the rare ability to identify a musical note in an automatic and effortless manner without the need for a reference tone. Individuals with absolute pitch can, for example, name the note they hear, identify all of the tones of a given chord, and/or name the pitches of everyday sounds, such as car horns or sirens. Hence, absolute pitch can be seen as providing a rare example of absolute sensory judgment in audition. Surprisingly, however, the intriguing question of whether this ability is unique in the domain of sensory perception, or whether similar perceptual skills also exist in other sensory domains, has not been explicitly addressed previously. In this paper, this question is addressed by systematically reviewing research on absolute pitch using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method. Thereafter, we compare absolute pitch with two rare types of sensory experience, namely synaesthesia and eidetic memory, to understand if and how these phenomena exhibit features similar to absolute pitch. Furthermore, a common absolute perceptual ability that has often been compared to absolute pitch, namely colour perception, is also discussed. Arguments are provided supporting the notion that none of the examined abilities can be considered equivalent to absolute pitch. Therefore, we conclude by suggesting that absolute pitch does indeed appear to constitute a unique kind of absolute sensory judgment in humans, and we discuss some open issues and novel directions for future research on absolute pitch.
{"title":"Should absolute pitch be considered as a unique kind of absolute sensory judgment in humans? A systematic and theoretical review of the literature","authors":"Nicola Di Stefano , Charles Spence","doi":"10.1016/j.cognition.2024.105805","DOIUrl":"10.1016/j.cognition.2024.105805","url":null,"abstract":"<div><p>Absolute pitch is the name given to the rare ability to identify a musical note in an automatic and effortless manner without the need for a reference tone. Those individuals with absolute pitch can, for example, name the note they hear, identify all of the tones of a given chord, and/or name the pitches of everyday sounds, such as car horns or sirens. Hence, absolute pitch can be seen as providing a rare example of absolute sensory judgment in audition. Surprisingly, however, the intriguing question of whether such an ability presents unique features in the domain of sensory perception, or whether instead similar perceptual skills also exist in other sensory domains, has not been explicitly addressed previously. In this paper, this question is addressed by systematically reviewing research on absolute pitch using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method. Thereafter, we compare absolute pitch with two rare types of sensory experience, namely synaesthesia and eidetic memory, to understand if and how these phenomena exhibit similar features to absolute pitch. Furthermore, a common absolute perceptual ability that has been often compared to absolute pitch, namely colour perception, is also discussed. Arguments are provided supporting the notion that none of the examined abilities can be considered like absolute pitch. 
Therefore, we conclude by suggesting that absolute pitch does indeed appear to constitute a unique kind of absolute sensory judgment in humans, and we discuss some open issues and novel directions for future research in absolute pitch.</p></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S001002772400091X/pdfft?md5=a13e56231c52d3d75bb4dbf2cdbdba92&pid=1-s2.0-S001002772400091X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140960251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}