
Latest Publications in Cognitive Science

Visual Statistical Learning in Children Aged 3−9 Years
IF 2.4 | CAS Zone 2 (Psychology) | Q2 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-10-23 | DOI: 10.1111/cogs.70130
Anton Rogachev, Tatiana Logvinenko, Anna Rebreikina, Olga Sysoeva

Visual statistical learning (visual SL) is the ability to implicitly extract statistical patterns from visual stimuli. Visual SL can be assessed using online measures, which evaluate reaction times (RTs) to stimuli during task performance, and offline measures, which assess recognition of the presented patterns. We examined 96 children aged 3−9 years using a visual SL task that included online and offline measures. In the online phase, children viewed sequences of cartoon aliens presented one at a time, organized into triplets. The task was to press a button to two target stimuli: one predictable (the last alien in the triplet) and one unpredictable (the first in the triplet). In the offline phase, children performed a two-alternative forced-choice task in which they viewed two triplets and selected the one matching the sequence from the online phase. In the online measures, we observed a gradual increase in RT for the unpredictable stimulus and a slight decrease in RT for the predictable stimulus over the course of the experiment, with fewer errors for the predictable stimulus, indicating an SL effect. In the offline measures, the SL effect was also observed, though it was less robust: recognition rates for correct triplets exceeded chance level only for triplets containing predictable stimuli. Notably, while online measures remained stable across age, offline recognition rates increased with age, suggesting a link to the development of cognitive functions needed for explicit task performance. We propose that SL is not a purely implicit process but rather an active learning process shaped by experimental task requirements and goal setting.
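The offline measure hinges on whether two-alternative forced-choice recognition exceeds the 50% chance level. As a minimal illustration of that comparison (not the authors' analysis; the trial counts below are hypothetical), an exact one-sided binomial test can be computed directly:

```python
from math import comb

def p_above_chance(correct: int, trials: int, chance: float = 0.5) -> float:
    """One-sided exact binomial test: P(X >= correct) if responses were at chance."""
    return sum(comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
               for k in range(correct, trials + 1))

# Hypothetical example: a child picks the familiar triplet on 20 of 32 2AFC trials.
print(f"P(>= 20/32 correct | chance) = {p_above_chance(20, 32):.3f}")
```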

{"title":"Visual Statistical Learning in Children Aged 3−9 Years","authors":"Anton Rogachev,&nbsp;Tatiana Logvinenko,&nbsp;Anna Rebreikina,&nbsp;Olga Sysoeva","doi":"10.1111/cogs.70130","DOIUrl":"10.1111/cogs.70130","url":null,"abstract":"<p>Visual statistical learning (visual SL) is the ability to implicitly extract statistical patterns from visual stimuli. Visual SL could be assessed using online measures, evaluating reaction times (RTs) to stimuli during task performance, and offline measures, which assess recognition of the presented patterns. We examined 96 children aged 3−9 years using a visual SL task that included online and offline measures. In the online phase, children viewed sequences of cartoon aliens presented one at a time, organized into triplets. The task was to press a button to two target stimuli: one predictable (the last alien in the triplet), and one unpredictable (the first in the triplet). In the offline phase, children performed a two-alternative-forced choice task, where they viewed two triplets and selected the one matching the sequence from the online phase. In online measures, we observed a gradual increase in RT for unpredictable stimulus and a slight decrease in RT for predictable stimulus over the experiment, with fewer errors for predictable stimulus, indicating an SL effect. In offline measures, the SL effect was also observed, though less robust: recognition rates for correct triplets exceeded chance level only for triplets containing predictable stimuli. Notably, while online measures remained stable across age, offline recognition rates increased with age, suggesting a link to the development of cognitive functions needed for explicit task performance. We propose that SL is not purely an implicit process but rather an active learning process shaped by experimental task requirements and goal setting.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 10","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145356567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Individualization Without Internalization
IF 2.4 | CAS Zone 2 (Psychology) | Q2 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-10-23 | DOI: 10.1111/cogs.70132
Ludger van Dijk

What is that “inner” voice that keeps you up at night or that tells you to stop as you reach for another chocolate? Advances in embodied cognitive science raise doubts about explaining the “self” as the result of internalizing our shared world. On that emerging view, there is nothing to transport from outside to inside the skull. But, if not an inner state of mind, then how should we understand the experience of a self? This paper develops a relational approach to individualization by aligning ecological thinking with practice theory through Meadian considerations. On this account, we continuously experience a meaningful world, filled with possibilities for action, tied to things in places and practices. Practices are intergenerational processes in which materials get organized by what we do, while in turn organizing us. Becoming a “self” requires learning to attend to such communal organizations as one's relation to the world expands across development. As we learn to engage various such organizations skillfully, we can experience them responding to us. Situated across practices, the “self” develops as a reciprocal relation between multiple timescales: notably between communal practices and a person's skilled activities. When we close our eyes and our thoughts come to the fore, we experience this reciprocal relation directly. To get this relational self into view, psychology needs to get out of our heads and study the worldly conditions that make us.

{"title":"Individualization Without Internalization","authors":"Ludger van Dijk","doi":"10.1111/cogs.70132","DOIUrl":"10.1111/cogs.70132","url":null,"abstract":"<p>What is that “inner” voice that keeps you up at night or that tells you to stop as you reach for another chocolate? Advances in embodied cognitive science raise doubts about explaining the “self” as the result of internalizing our shared world. On that emerging view, there is nothing to transport from outside to inside the skull. But, if not an inner state of mind, then how should we understand the experience of a self? This paper develops a relational approach to individualization by aligning ecological thinking with practice theory through Meadian considerations. On this account, we continuously experience a meaningful world, filled with possibilities for action, tied to things in places and practices. Practices are intergenerational processes in which materials get organized by what we do, while in turn organizing us. Becoming a “self” requires learning to attend to such communal organizations as one's relation to the world expands across development. As we learn to engage various such organizations skillfully, we can experience them responding to us. Situated across practices, the “self” develops as a reciprocal relation between multiple timescales: notably between communal practices and a person's skilled activities. When we close our eyes and our thoughts come to the fore, we experience this reciprocal relation directly. To get this relational self into view, psychology needs to get out of our heads and study the worldly conditions that make us.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 10","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70132","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145356529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Ideological Turing Test: A Behavioral Measure of Open-Mindedness and Perspective-Taking
IF 2.4 | CAS Zone 2 (Psychology) | Q2 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-10-14 | DOI: 10.1111/cogs.70126
Charlotte O. Brand, Daniel Brady, Tom Stafford

Understanding our ideological opponents is crucial for the effective exchange of arguments, the avoidance of escalation, and the reduction of conflict. We operationalize the idea of an “Ideological Turing Test” to measure the accuracy with which people represent the arguments of their ideological opponents. Crucially, this offers a behavioral measure of open-mindedness that goes beyond mere self-report. We recruited 200 participants from each side of three topics with potential for polarization in the UK of the early 2020s (1200 participants total). Participants were asked to provide reasons both for and against their position. Their reasons were then rated by participants from the opposite side. Our criterion for “passing” the test was whether an argument was agreed with by opponents to the same extent as, or more than, arguments made by proponents. We found evidence for high levels of mutual understanding across all three topics. We also found that those who passed were more open-minded toward their opponents, in that they were less likely to rate them as ignorant, immoral, or irrational. Our method provides a behavioral measure of open-mindedness and of the ability to mimic counterpartisan perspectives that goes beyond self-report measures. Our results offer encouragement that, even in highly polarized debates, high levels of mutual understanding persist.
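The “passing” criterion compares how opponents rate a mimicked argument with how they rate arguments from genuine proponents. A minimal sketch of that comparison (an illustrative reading only; the function name, rating scale, and use of the mean as the aggregate are assumptions, not the authors' exact procedure):

```python
from statistics import mean

def passes_itt(opponent_ratings_of_mimic: list[float],
               opponent_ratings_of_proponents: list[float]) -> bool:
    """An argument written to mimic the other side 'passes' if opponents agree with it
    at least as much, on average, as with arguments written by genuine proponents."""
    return mean(opponent_ratings_of_mimic) >= mean(opponent_ratings_of_proponents)

# Hypothetical agreement ratings (e.g., on a 1-7 scale) from opposite-side raters
print(passes_itt([5, 6, 4, 5], [4, 5, 5, 4]))  # True: mean 5.0 >= mean 4.5
```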

{"title":"The Ideological Turing Test: A Behavioral Measure of Open-Mindedness and Perspective-Taking","authors":"Charlotte O. Brand,&nbsp;Daniel Brady,&nbsp;Tom Stafford","doi":"10.1111/cogs.70126","DOIUrl":"10.1111/cogs.70126","url":null,"abstract":"<p>Understanding our ideological opponents is crucial for the effective exchange of arguments and the avoidance of escalation, and the reduction of conflict. We operationalize the idea of an “Ideological Turing Test” to measure the accuracy with which people represent the arguments of their ideological opponents. Crucially, this offers a behavioral measure of open-mindedness which goes beyond mere self-report. We recruited 200 participants from opposite sides of three topics with potential for polarization in the UK of the early 2020s (1200 participants total). Participants were asked to provide reasons both for and against their position. Their reasons were then rated by participants from the opposite side. Our criteria for “passing” the test was if an argument was agreed with by opponents to the same extent or higher than arguments made by proponents. We found evidence for high levels of mutual understanding across all three topics. We also found that those who passed were more open-minded toward their opponents, in that they were less likely to rate them as ignorant, immoral, or irrational. Our method provides a behavioral measure of open-mindedness and ability to mimic counterpartisan perspectives that goes beyond self-report measures. Our results offer encouragement that, even in highly polarized debates, high levels of mutual understanding persist.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 10","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12519043/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145287222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Scope of Message Planning: Evidence From Production of Sentences With Heavy Sentence-Final NPs
IF 2.4 | CAS Zone 2 (Psychology) | Q2 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-10-14 | DOI: 10.1111/cogs.70110
Agnieszka E. Konopka

Speaking begins with the generation of a preverbal message. While a common assumption is that the scope of message-level planning (i.e., the size of message-level increments) can be more extensive than the scope of sentence-level planning, it is unclear how much information is typically encoded at the message level in advance of sentence-level planning during spontaneous production. This study assessed the scope and granularity of early message-level planning in English by tracking production of sentences with light versus heavy sentence-final NPs. Speakers produced SVO sentences to describe pictures showing an agent acting on a patient. Half of the pictures showed one-patient events, eliciting sentences with unmodified patient names (e.g., “The tailor is cutting the dress”), and half showed two-patient events with a target patient and a non-target patient. The presence of a non-target patient required production of a prenominal or postnominal modifier to uniquely identify the target patient (e.g., “The tailor is cutting the long dress” / “the dress with sleeves”). Analyses of speech onsets and eye movements before speech onset showed strong effects of the complexity of the sentence-final character, suggesting that early message-level planning does not proceed strictly word by word (or “from left to right”) but instead includes basic information about the identity of both the sentence-initial and sentence-final characters. This is consistent with theories that assume extensive message-level planning before the start of sentence-level encoding and provides new evidence about the level of conceptual detail incorporated into early message plans.

{"title":"Scope of Message Planning: Evidence From Production of Sentences With Heavy Sentence-Final NPs","authors":"Agnieszka E. Konopka","doi":"10.1111/cogs.70110","DOIUrl":"10.1111/cogs.70110","url":null,"abstract":"<p>Speaking begins with the generation of a preverbal message. While a common assumption is that the scope of message-level planning (i.e., the size of message-level increments) can be more extensive than the scope of sentence-level planning, it is unclear how much information is typically encoded at the message level in advance of sentence-level planning during spontaneous production. This study assessed the scope and granularity of early message-level planning in English by tracking production of sentences with light versus heavy sentence-final NPs. Speakers produced SVO sentences to describe pictures showing an agent acting on a patient. Half of the pictures showed one-patient events, eliciting sentences with unmodified patient names (e.g., “<i>The tailor is cutting the dress</i>”), and half showed two-patient events with a target patient and a non-target patient. The presence of a non-target patient required production of a prenominal or postnominal modifier to uniquely identify the target patient (e.g., “<i>The tailor is cutting the long dress</i>” / “<i>the dress with sleeves</i>”). Analyses of speech onsets and eye movements before speech onset showed strong effects of the complexity of the sentence-final character, suggesting that early message-level planning does not proceed strictly word by word (or “from left to right”) but instead includes basic information about the identity of both the sentence-initial and sentence-final characters. This is consistent with theories that assume extensive message-level planning before the start of sentence-level encoding and provides new evidence about the level of conceptual detail incorporated into early message plans.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 10","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12519050/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145287237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Gestural and Verbal Evidence of Conceptual Representation Differences in Blind and Sighted Individuals
IF 2.4 | CAS Zone 2 (Psychology) | Q2 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-10-13 | DOI: 10.1111/cogs.70125
Ezgi Mamus, Laura J. Speed, Gerardo Ortega, Asifa Majid, Aslı Özyürek

This preregistered study examined whether visual experience influences conceptual representations by examining both gestural expression and feature listing. Gestures—mostly driven by analog mappings of visuospatial and motoric experiences onto the body—offer a unique window into conceptual representations and provide complementary information not offered by language-based features, which have been the focus of previous work. Thirty congenitally or early blind and 30 sighted Turkish speakers produced silent gestures and features for concepts from semantic categories that differentially rely on experience in visual (non-manipulable objects and animals) and motor (manipulable objects) information. Blind individuals were less likely than sighted individuals to produce gestures for non-manipulable objects and animals, but not for manipulable objects. Overall, the tendency to use a particular gesture strategy for specific semantic categories was similar across groups. However, blind participants relied less on drawing and personification strategies depicting visuospatial aspects of concepts than sighted participants. Feature-listing revealed that blind participants share considerable conceptual knowledge with sighted participants, but their understanding differs in fine-grained details, particularly for animals. Thus, while concepts appear broadly similar in blind and sighted individuals, this study reveals nuanced differences, too, highlighting the intricate role of visual experience in conceptual representations.

{"title":"Gestural and Verbal Evidence of Conceptual Representation Differences in Blind and Sighted Individuals","authors":"Ezgi Mamus,&nbsp;Laura J. Speed,&nbsp;Gerardo Ortega,&nbsp;Asifa Majid,&nbsp;Aslı Özyürek","doi":"10.1111/cogs.70125","DOIUrl":"10.1111/cogs.70125","url":null,"abstract":"<p>This preregistered study examined whether visual experience influences conceptual representations by examining both gestural expression and feature listing. Gestures—mostly driven by analog mappings of visuospatial and motoric experiences onto the body—offer a unique window into conceptual representations and provide complementary information not offered by language-based features, which have been the focus of previous work. Thirty congenitally or early blind and 30 sighted Turkish speakers produced silent gestures and features for concepts from semantic categories that differentially rely on experience in visual (non-manipulable objects and animals) and motor (manipulable objects) information. Blind individuals were less likely than sighted individuals to produce gestures for non-manipulable objects and animals, but not for manipulable objects. Overall, the tendency to use a particular gesture strategy for specific semantic categories was similar across groups. However, blind participants relied less on drawing and personification strategies depicting visuospatial aspects of concepts than sighted participants. Feature-listing revealed that blind participants share considerable conceptual knowledge with sighted participants, but their understanding differs in fine-grained details, particularly for animals. Thus, while concepts appear broadly similar in blind and sighted individuals, this study reveals nuanced differences, too, highlighting the intricate role of visual experience in conceptual representations.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 10","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12517398/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145281441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Social Context Matters for Turn-Taking Dynamics: A Comparative Study of Autistic and Typically Developing Children
IF 2.4 | CAS Zone 2 (Psychology) | Q2 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-10-13 | DOI: 10.1111/cogs.70124
Christopher Cox, Riccardo Fusaroli, Yngwie A. Nielsen, Sunghye Cho, Roberta Rocca, Arndis Simonsen, Azia Knox, Meg Lyons, Mark Liberman, Christopher Cieri, Sarah Schillinger, Amanda L. Lee, Aili Hauptmann, Kimberly Tena, Christopher Chatham, Judith S. Miller, Juhi Pandey, Alison S. Russell, Robert T. Schultz, Julia Parish-Morris

Engaging in fluent conversation is a surprisingly complex task that requires interlocutors to promptly respond to each other in a way that is appropriate to the social context. In this study, we disentangled different dimensions of turn-taking by investigating how the dynamics of child–adult interactions changed according to the activity (task-oriented vs. freer conversation) and the familiarity of the interlocutor (familiar vs. unfamiliar). Twenty-eight autistic children (16 male; M_age = 10.8 years) and 20 age-matched typically developing children (8 male; M_age = 9.6 years) participated in seven task-oriented face-to-face conversations with their caregivers (336 total conversations) and seven more telephone conversations alternately with their caregivers (144 total conversations, 60 with the typical development group) and an experimenter (191 total conversations, 112 with the autism group). By modeling inter-turn response latencies in multi-level Bayesian location-scale models, we found that inter-turn response latencies were consistent across repeated measures within social contexts, but exhibited substantial differences across social contexts. Autistic children exhibited more overlaps, produced faster response latencies and shorter pauses than typically developing children—and these group differences were stronger when conversing with the unfamiliar experimenter. Unfamiliarity also made the relation between individual differences and latencies evident: only in conversations with the experimenter were higher sociocognitive skills and lower social awareness associated with faster responses. Information flow and shared tempo were also influenced by familiarity: children adapted their response latencies to the predictability and tempo of their interlocutor's turn, but only when interacting with their caregivers and not the experimenter. These results highlight the need to construe turn-taking as a multicomponential construct that is shaped by individual differences, interpersonal dynamics, and the affordances of the context.
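The key dependent measure here, inter-turn response latency, is the gap between the end of one speaker's turn and the start of the other's reply, with negative values indicating overlap. The multi-level Bayesian location-scale models are beyond a short sketch, but the latencies they take as input can be derived as below (hypothetical timestamps; the Turn structure and sign convention are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str
    start: float  # seconds
    end: float    # seconds

def inter_turn_latencies(turns: list[Turn]) -> list[float]:
    """Latency between consecutive turns by different speakers: next.start - prev.end.
    Negative latencies correspond to overlapping speech."""
    return [nxt.start - prev.end
            for prev, nxt in zip(turns, turns[1:])
            if prev.speaker != nxt.speaker]

# Hypothetical child-adult exchange
turns = [Turn("adult", 0.0, 1.8), Turn("child", 2.0, 3.1), Turn("adult", 2.9, 4.5)]
print([round(x, 2) for x in inter_turn_latencies(turns)])  # [0.2, -0.2]: a gap, then an overlap
```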

{"title":"Social Context Matters for Turn-Taking Dynamics: A Comparative Study of Autistic and Typically Developing Children","authors":"Christopher Cox,&nbsp;Riccardo Fusaroli,&nbsp;Yngwie A. Nielsen,&nbsp;Sunghye Cho,&nbsp;Roberta Rocca,&nbsp;Arndis Simonsen,&nbsp;Azia Knox,&nbsp;Meg Lyons,&nbsp;Mark Liberman,&nbsp;Christopher Cieri,&nbsp;Sarah Schillinger,&nbsp;Amanda L. Lee,&nbsp;Aili Hauptmann,&nbsp;Kimberly Tena,&nbsp;Christopher Chatham,&nbsp;Judith S. Miller,&nbsp;Juhi Pandey,&nbsp;Alison S. Russell,&nbsp;Robert T. Schultz,&nbsp;Julia Parish-Morris","doi":"10.1111/cogs.70124","DOIUrl":"10.1111/cogs.70124","url":null,"abstract":"<p>Engaging in fluent conversation is a surprisingly complex task that requires interlocutors to promptly respond to each other in a way that is appropriate to the social context. In this study, we disentangled different dimensions of turn-taking by investigating how the dynamics of child–adult interactions changed according to the activity (task-oriented vs. freer conversation) and the familiarity of the interlocutor (familiar vs. unfamiliar). Twenty-eight autistic children (16 male; <span></span><math>\u0000 <semantics>\u0000 <msub>\u0000 <mi>M</mi>\u0000 <mrow>\u0000 <mi>a</mi>\u0000 <mi>g</mi>\u0000 <mi>e</mi>\u0000 </mrow>\u0000 </msub>\u0000 <annotation>$M_{age}$</annotation>\u0000 </semantics></math> = 10.8 years) and 20 age-matched typically developing children (8 male; <span></span><math>\u0000 <semantics>\u0000 <msub>\u0000 <mi>M</mi>\u0000 <mrow>\u0000 <mi>a</mi>\u0000 <mi>g</mi>\u0000 <mi>e</mi>\u0000 </mrow>\u0000 </msub>\u0000 <annotation>$M_{age}$</annotation>\u0000 </semantics></math> = 9.6 years) participated in seven task-orientated face-to-face conversations with their caregivers (336 total conversations) and seven more telephone conversations alternately with their caregivers (144 total conversations, 60 with the typical development group) and an experimenter (191 total conversations, 112 with the autism group). By modeling inter-turn response latencies in multi-level Bayesian location-scale models, we found that inter-turn response latencies were consistent across repeated measures within social contexts, but exhibited substantial differences across social contexts. Autistic children exhibited more overlaps, produced faster response latencies and shorter pauses than typically developing children—and these group differences were stronger when conversing with the unfamiliar experimenter. Unfamiliarity also made the relation between individual differences and latencies evident: only in conversations with the experimenter were higher sociocognitive skills and lower social awareness associated with faster responses. Information flow and shared tempo were also influenced by familiarity: children adapted their response latencies to the predictability and tempo of their interlocutor's turn, but only when interacting with their caregivers and not the experimenter. 
These results highlight the need to construe turn-taking as a multicomponential construct that is shaped by individual differences, interpersonal dynamics, and the affordances of the context.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 10","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12517399/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145281476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Can We Really “See” Through Others' Eyes? Evidence of Embodied Visual-Spatial Representation From an Altercentric Viewpoint
IF 2.4 | CAS Zone 2 (Psychology) | Q2 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-10-09 | DOI: 10.1111/cogs.70116
Nanbo Wang, Shen Zhang, Haiyan Geng

Social interactions often require the ability to “stand in others’ shoes” and perceive the world “through others’ eyes,” but it remains unclear to what extent we can actually see others’ visual worlds. Prior research has primarily focused on mental-body transformation in visual-spatial perspective taking (VSPT), yet the subsequent visual processing under the adopted perspective has been less explored. Addressing this gap, our study investigated mental representation of the visual scene as a direct outcome of perceiving from another's viewpoint. Using modified VSPT tasks, we paired avatar-perspective trials with self-perspective trials to create opportunities for observing priming effects resulting from potential mental representations formed under the avatar's perspective. We hypothesized that if individuals form embodied representations of visual scenes while explicitly processing stimuli from the avatar's viewpoint, these representations should be stored in memory and elicit priming effects when similar scenes are later encountered from their own perspective. Across four experiments, we provide the first evidence that (1) explicitly engaging in embodied VSPT produces robust mental representations of the visual scene from the adopted perspective, (2) these representations are visual-spatial rather than semantic in nature, and (3) these representations arise from embodied processing rather than from self-perspective strategies. Additionally, our findings reveal that individuals implicitly process visual stimuli from their own perspective during other-perspective tasks, forming distinct but weaker self-perspective representations. Overall, our findings demonstrate the existence of embodied representations in VSPT and offer significant insights into the processing mechanisms involved when we “stand in others’ shoes.”

{"title":"Can We Really “See” Through Others' Eyes? Evidence of Embodied Visual-Spatial Representation From an Altercentric Viewpoint","authors":"Nanbo Wang,&nbsp;Shen Zhang,&nbsp;Haiyan Geng","doi":"10.1111/cogs.70116","DOIUrl":"10.1111/cogs.70116","url":null,"abstract":"<p>Social interactions often require the ability to “stand in others’ shoes” and perceive the world “through others’ eyes,” but it remains unclear the extent to which we can actually see others’ visual worlds. Prior research has primarily focused on mental-body transformation in visual-spatial perspective taking (VSPT), yet the subsequent visual processing under the adopted perspective has been less explored. Addressing this gap, our study investigated mental representation of the visual scene as a direct outcome of perceiving from another's viewpoint. Using modified VSPT tasks, we paired avatar-perspective trials with self-perspective trials to create opportunities for observing priming effects resulting from potential mental representations formed under the avatar's perspective. We hypothesized that if individuals form embodied representations of visual scenes while explicitly processing stimuli from the avatar's viewpoint, these representations should be stored in memory, and elicit priming effects when later encountering similar scenes from their own perspective. Across four experiments, we provide the first evidence that (1) explicitly engaging in embodied VSPT produces robust mental representations of the visual scene from the adopted perspective, (2) these representations are visual-spatial rather than semantic in nature, and (3) these representations arise from embodied processing rather than from self-perspective strategies. Additionally, our findings reveal that individuals implicitly process visual stimuli from their own perspective during other-perspective tasks, forming distinct but weaker self-perspective representations. Overall, our findings demonstrate the existence of embodied representations in VSPT and offer significant insights into the processing mechanisms involved when we “stand in others’ shoes.”</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 10","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145253305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Between Two Grammatical Gender Systems: Exploring the Impact of Grammatical Gender on Memory Recall in Ukrainian−Russian Simultaneous Bilinguals
IF 2.4 | CAS Zone 2 (Psychology) | Q2 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-10-02 | DOI: 10.1111/cogs.70117
Oleksandra Osypenko, Silke Brandt, Panos Athanasopoulos

This study examines the impact of grammatical gender on memory recall among simultaneous bilinguals with two three-gendered languages (Ukrainian and Russian). Ukrainian−Russian bilinguals and English monolingual controls were tested on their ability to remember names assigned to objects with either matching or mismatching grammatical genders across their two languages. Results showed that bilinguals recalled names more accurately when the biological sex of the names was congruent with the grammatical gender of objects in both languages (e.g., recalling a male name assigned to a noun with masculine grammatical gender in both L1s, rather than a female name). English monolinguals, in contrast, showed no difference in recall. However, when grammatical gender mismatched across Ukrainian and Russian, the expected influence of the more proficient language on recall accuracy was not observed. These findings suggest that converging grammatical information from two L1s creates stronger memory associations, enhancing recall accuracy of simultaneous bilinguals. Conversely, mismatching grammatical genders appear to negate this effect. Taken together, these findings highlight the interconnected nature of bilingual conceptual representation.

{"title":"Between Two Grammatical Gender Systems: Exploring the Impact of Grammatical Gender on Memory Recall in Ukrainian−Russian Simultaneous Bilinguals","authors":"Oleksandra Osypenko,&nbsp;Silke Brandt,&nbsp;Panos Athanasopoulos","doi":"10.1111/cogs.70117","DOIUrl":"https://doi.org/10.1111/cogs.70117","url":null,"abstract":"<p>This study examines the impact of grammatical gender on memory recall among simultaneous bilinguals with two three-gendered languages (Ukrainian and Russian). Ukrainian−Russian bilinguals and English monolingual controls were tested on their ability to remember names assigned to objects with either matching or mismatching grammatical genders across their two languages. Results showed that bilinguals recalled names more accurately when the biological sex of the names was congruent with the grammatical gender of objects in both languages (e.g., recalling a male name assigned to a noun with masculine grammatical gender in both L1s, rather than a female name). English monolinguals, in contrast, showed no difference in recall. However, when grammatical gender mismatched across Ukrainian and Russian, the expected influence of the more proficient language on recall accuracy was not observed. These findings suggest that converging grammatical information from two L1s creates stronger memory associations, enhancing recall accuracy of simultaneous bilinguals. Conversely, mismatching grammatical genders appear to negate this effect. Taken together, these findings highlight the interconnected nature of bilingual conceptual representation.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 10","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70117","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145204801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Communicating Through Acting: Affording Communicative Intention in Pantomimes
IF 2.4 | CAS Zone 2 (Psychology) | Q2 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-10-02 | DOI: 10.1111/cogs.70120
Siyi Gong, Kaiwen Jiang, Jessica G. Li, Mireille Karadanaian, Ziyi Meng, Tao Gao

How do people intuitively recognize communicative intention in pantomimes, even though such actions kinematically resemble instrumental behaviors directed at changing the world? We focus on two alternative hypotheses: one posits that instrumental intention competes with communicative intention, such that the weaker the former, the stronger the latter; the other suggests that instrumental intention is nested within communicative intention, such that the presence of the former facilitates the latter. To test these hypotheses, we compiled a video dataset of action-object pairs with varying frequencies in the English corpus. Using the concept of affordance, we qualitatively varied the degree to which a scene visually supports the execution of an action. Across two empirical experiments, we found a nonmonotonic relationship between affordance and communicative ratings: partial affordance, where the scene provides some support for an action's instrumental purpose, elicited the strongest perception of communicative intention. In contrast, full affordance or no affordance resulted in weaker interpretations of communicative intention. We also found that recognizing the instrumental components of pantomime-like actions predicted a higher communicativeness rating. Our study, on top of confirming humans' ability to interpret novel pantomimes, reveals a novel mechanism of communicative intention: recognizing an instrumental goal and perceiving suboptimal conditions for achieving it together enhance the communicative signal. This work contributes toward an integrated theory of pantomimes, demonstrating how the rationality principle not only aids in distinguishing communicative intention but also supports the identification of instrumental content embedded within it.
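One way to see what "nonmonotonic" means here: if affordance is coded on an ordered scale (none, partial, full) and related to mean communicativeness ratings, an inverted-U pattern shows up as a negative quadratic term. A minimal sketch with made-up numbers (not the study's data or its statistical model):

```python
import numpy as np

# Hypothetical data: affordance coded 0 = none, 1 = partial, 2 = full;
# ratings are mean communicativeness judgments per video.
affordance = np.array([0, 0, 1, 1, 2, 2], dtype=float)
ratings = np.array([3.1, 3.4, 5.8, 6.1, 4.0, 4.3])

# A negative leading coefficient in the quadratic fit indicates an inverted-U
# (partial affordance rated most communicative), i.e., a nonmonotonic pattern.
quad = np.polyfit(affordance, ratings, 2)
print("quadratic coefficients (a, b, c):", np.round(quad, 2))
```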

{"title":"Communicating Through Acting: Affording Communicative Intention in Pantomimes","authors":"Siyi Gong,&nbsp;Kaiwen Jiang,&nbsp;Jessica G. Li,&nbsp;Mireille Karadanaian,&nbsp;Ziyi Meng,&nbsp;Tao Gao","doi":"10.1111/cogs.70120","DOIUrl":"https://doi.org/10.1111/cogs.70120","url":null,"abstract":"<p>How do people intuitively recognize communicative intention in pantomimes, even though such actions kinematically resemble instrumental behaviors directed at changing the world? We focus on two alternative hypotheses: one posits that instrumental intention competes with communicative intention, such that the weaker the former, the stronger the latter; the other suggests that instrumental intention is nested within communicative intention, such that the presence of the former facilitates the latter. To test these hypotheses, we compiled a video dataset of action-object pairs with varying frequencies in the English corpus. Using the concept of affordance, we qualitatively varied the degree to which a scene visually supports the execution of an action. Across two empirical experiments, we found a nonmonotonic relationship between affordance and communicative ratings: partial affordance, where the scene provides some support for an action's instrumental purpose, elicited the strongest perception of communicative intention. In contrast, full affordance or no affordance resulted in weaker interpretations of communicative intention. We also found that recognizing the instrumental components of pantomime-like actions predicted a higher communicativeness rating. Our study, on top of confirming humans' ability to interpret novel pantomimes, reveals a novel mechanism of communicative intention: recognizing an instrumental goal and perceiving suboptimal conditions for achieving it together enhance the communicative signal. This work contributes toward an integrated theory of pantomimes, demonstrating how the rationality principle not only aids in distinguishing communicative intention but also supports the identification of instrumental content embedded within it.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 10","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145204806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Coordinating Attention in Face-to-Face Collaboration: The Dynamics of Gaze, Pointing, and Verbal Reference
IF 2.4 | CAS Zone 2 (Psychology) | Q2 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-10-02 | DOI: 10.1111/cogs.70123
Lucas Haraped, D. Jacob Gerlofs, Olive Chung-Hui Huang, Cam Hickling, Walter F. Bischof, Pierre Sachse, Alan Kingstone

During real-world interactions, people rely on gaze, gestures, and verbal references to coordinate attention and establish shared understanding. Yet, it remains unclear if and how these modalities couple within and between interacting individuals in face-to-face settings. The current study addressed this issue by analyzing dyadic face-to-face interactions, where participants (n = 52) collaboratively ranked paintings while their gaze, pointing gestures, and verbal references were recorded. Using cross-recurrence quantification analysis, we found that participants readily used pointing gestures to complement gaze and verbal reference cues and that gaze directed toward the partner followed canonical conversational patterns, that is, more looks to the other's face when listening than speaking. Further, gaze, pointing, and verbal references showed significant coupling both within and between individuals, with pointing gestures and verbal references guiding the partner's gaze to shared targets and speaker gaze leading listener gaze. Moreover, simultaneous pointing and verbal referencing led to more sustained attention coupling compared to pointing alone. These findings highlight the multimodal nature of joint attention coordination, extending theories of embodied, interactive cognition by demonstrating how gaze, gestures, and language dynamically integrate into a shared cognitive system.
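Cross-recurrence quantification analysis over categorical gaze streams boils down to asking how often the two partners look at the same target when one stream is shifted in time relative to the other. A much-simplified sketch of that lagged match rate (hypothetical data and function; full CRQA involves more than this):

```python
import numpy as np

def cross_recurrence_rate(series_a, series_b, lag: int) -> float:
    """Fraction of samples where two categorical series (e.g., gaze targets) match
    when series_b is shifted by `lag` samples (positive lag: b follows a)."""
    a, b = np.asarray(series_a), np.asarray(series_b)
    if lag >= 0:
        a_seg, b_seg = a[:max(len(a) - lag, 0)], b[lag:]
    else:
        a_seg, b_seg = a[-lag:], b[:max(len(b) + lag, 0)]
    n = min(len(a_seg), len(b_seg))
    return float(np.mean(a_seg[:n] == b_seg[:n])) if n else float("nan")

# Hypothetical gaze-target streams (painting IDs sampled at fixed intervals)
speaker = ["p1", "p1", "p2", "p2", "p3", "p3"]
listener = ["p3", "p1", "p1", "p2", "p2", "p3"]
print([round(cross_recurrence_rate(speaker, listener, lag), 2) for lag in (-1, 0, 1)])
# [0.0, 0.5, 1.0] -> matching peaks when the listener lags the speaker by one sample
```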

{"title":"Coordinating Attention in Face-to-Face Collaboration: The Dynamics of Gaze, Pointing, and Verbal Reference","authors":"Lucas Haraped,&nbsp;D. Jacob Gerlofs,&nbsp;Olive Chung-Hui Huang,&nbsp;Cam Hickling,&nbsp;Walter F. Bischof,&nbsp;Pierre Sachse,&nbsp;Alan Kingstone","doi":"10.1111/cogs.70123","DOIUrl":"https://doi.org/10.1111/cogs.70123","url":null,"abstract":"<p>During real-world interactions, people rely on gaze, gestures, and verbal references to coordinate attention and establish shared understanding. Yet, it remains unclear if and how these modalities couple within and between interacting individuals in face-to-face settings. The current study addressed this issue by analyzing dyadic face-to-face interactions, where participants (<i>n</i> = 52) collaboratively ranked paintings while their gaze, pointing gestures, and verbal references were recorded. Using cross-recurrence quantification analysis, we found that participants readily used pointing gestures to complement gaze and verbal reference cues and that gaze directed toward the partner followed canonical conversational patterns, that is, more looks to the other's face when listening than speaking. Further, gaze, pointing, and verbal references showed significant coupling both within and between individuals, with pointing gestures and verbal references guiding the partner's gaze to shared targets and speaker gaze leading listener gaze. Moreover, simultaneous pointing and verbal referencing led to more sustained attention coupling compared to pointing alone. These findings highlight the multimodal nature of joint attention coordination, extending theories of embodied, interactive cognition by demonstrating how gaze, gestures, and language dynamically integrate into a shared cognitive system.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 10","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70123","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145204802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0