Deviation from the syntactic norm, as manifested in the absence of otherwise expected lexical and syntactic material, has been extensively studied in theoretical syntax. Such modifications are observed in headlines, telegrams, labels, and other specialized contexts, collectively referred to as "reduced" registers. Focusing on search queries, a type of reduced register, I propose that they are generated by a simpler grammar that lacks a full-fledged syntactic component. The analysis is couched in the Parallel Architecture framework, whose assumption of the relative independence of linguistic components (their parallelism) and rejection of syntactocentrism are essential to explaining the properties of queries.
Syntactic Variation in Reduced Registers Through the Lens of the Parallel Architecture. Anastasia Smirnova. Topics in Cognitive Science, 2024-07-04. DOI: 10.1111/tops.12747.
Pub Date: 2024-07-01 | Epub Date: 2024-05-23 | DOI: 10.1111/tops.12737
Janet Hui-Wen Hsiao
One important goal of cognitive science is to understand the mind in terms of its representational and computational capacities, and computational modeling plays an essential role in providing theoretical explanations and predictions of human behavior and mental phenomena. In my research, I have used computational modeling, together with behavioral experiments and cognitive neuroscience methods, to investigate the information processing mechanisms underlying learning and visual cognition in terms of perceptual representation and attention strategy. In perceptual representation, I have used neural network models to understand how the split architecture of the human visual system influences visual cognition, and to examine how perceptual representations develop as a result of expertise. In attention strategy, I have developed the Eye Movement analysis with Hidden Markov Models (EMHMM) method for quantifying eye movement patterns and their consistency using both spatial and temporal information, which has led to novel findings across disciplines that were not discoverable using traditional methods. By integrating it with deep neural networks (DNNs), I have developed DNN+HMM to account for eye movement strategy learning in human visual cognition. Understanding the human mind through computational modeling also facilitates research on the comparability of artificial intelligence (AI) with human cognition, which can in turn help explainable AI systems infer humans' beliefs about AI's operations and provide human-centered explanations that enhance human-AI interaction and mutual understanding. Together, these lines of work demonstrate the essential role of computational modeling in providing theoretical accounts of the human mind as well as its interaction with its environment and with AI systems.
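The EMHMM approach described above characterizes a viewer's scanpath by where fixations land (regions of interest) and how the viewer moves between those regions over time. As a rough numpy-only illustration of that idea, not the published EMHMM toolbox, the following sketch assigns fixations to hypothetical ROI centers and estimates a transition matrix; all coordinates, ROI locations, and the example sequence are invented.

```python
import numpy as np

def assign_rois(fixations, roi_centers):
    """Assign each (x, y) fixation to the nearest ROI center."""
    d = np.linalg.norm(fixations[:, None, :] - roi_centers[None, :, :], axis=2)
    return d.argmin(axis=1)

def transition_matrix(states, n_states):
    """Estimate ROI-to-ROI transition probabilities with add-one smoothing."""
    counts = np.ones((n_states, n_states))  # Laplace smoothing avoids zeros
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def scanpath_loglik(states, trans):
    """Log-likelihood of a state sequence under a transition model."""
    return float(sum(np.log(trans[a, b]) for a, b in zip(states[:-1], states[1:])))

# Invented example: three ROIs and a viewer who mostly alternates
# between ROI 0 and ROI 1.
roi_centers = np.array([[100.0, 100.0], [300.0, 100.0], [200.0, 300.0]])
seq = np.array([0, 1, 0, 1, 0, 1, 2, 0, 1, 0])
rng = np.random.default_rng(0)
fixations = roi_centers[seq] + rng.normal(0, 10, size=(len(seq), 2))

states = assign_rois(fixations, roi_centers)
trans = transition_matrix(states, 3)
print(trans.round(2))
```

Comparing viewers by the likelihood their scanpaths receive under each other's transition models is one simple way to quantify the "consistency" the abstract mentions; the full method additionally models fixation locations with Gaussian emissions and learns the ROIs themselves.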
Understanding Human Cognition Through Computational Modeling. Topics in Cognitive Science, pp. 349-376.
Pub Date: 2024-07-01 | Epub Date: 2022-12-27 | DOI: 10.1111/tops.12634
Filipa Correia, Francisco S Melo, Ana Paiva
Creating effective teamwork between humans and robots involves not only addressing their performance as a team but also sustaining the quality and sense of unity among teammates, also known as cohesion. This paper explores the research problem of how we can endow robotic teammates with social capabilities that improve their cohesive alliance with humans. Defining the concept of a human-robot cohesive alliance in light of the multidimensional construct of cohesion from the social sciences, we propose to address this problem through the idea of multifaceted human-robot cohesion. We present preliminary efforts from our previous work to examine each of the five dimensions of cohesion: social, collective, emotional, structural, and task. We close with a discussion of how human-robot cohesion contributes to the key questions and ongoing challenges of creating robotic teammates. Overall, cohesion in human-robot teams may be a key factor in propelling team performance, and it should be considered in the design, development, and evaluation of robotic teammates.
When a Robot Is Your Teammate. Topics in Cognitive Science, pp. 527-553.
Pub Date: 2024-07-01 | Epub Date: 2024-06-09 | DOI: 10.1111/tops.12744
Christopher W Myers, Nancy J Cooke, Jamie C Gorman, Nathan J McNeese
Teams are a fundamental aspect of life, from sports to business to defense to science to education. While the cognitive sciences tend to focus on information processing within individuals, others have argued that teams are also capable of demonstrating cognitive capacities similar to those of individuals, such as skill acquisition and forgetting (cf. Cooke, Gorman, Myers, & Duran, 2013; Fiore et al., 2010). As artificially intelligent and autonomous systems improve in their ability to learn, reason, interact, and coordinate with human teammates, and given that teams can express cognitive capacities typically seen in individuals, a cognitive science of teams is emerging. Consequently, new questions are being asked about teamness, trust, the introduction and effects of autonomous systems on teams, and how best to measure team behavior and phenomena. In this topic, four facets of human-autonomy team cognition are introduced, with leaders in the field providing in-depth articles associated with one or more of the facets: (1) defining teams; (2) how trust is established, maintained, and repaired when broken; (3) autonomous systems operating as teammates; and (4) metrics for evaluating team cognition across communication, coordination, and performance.
Introduction to the Emerging Cognitive Science of Distributed Human-Autonomy Teams. Topics in Cognitive Science, pp. 377-390.
Pub Date: 2024-07-01 | Epub Date: 2023-06-30 | DOI: 10.1111/tops.12669
Matthias Scheutz, Shuchin Aeron, Ayca Aygun, J P de Ruiter, Sergio Fantini, Cristianne Fernandez, Zachary Haga, Thuan Nguyen, Boyang Lyu
As human-machine teams are being considered for a variety of mixed-initiative tasks, detecting and responding to human cognitive states, in particular systemic cognitive states, is among the most critical capabilities an artificial system needs to ensure smooth interactions with humans and high overall team performance. Various human physiological parameters, such as heart rate, respiration rate, blood pressure, and skin conductance, as well as brain activity inferred from functional near-infrared spectroscopy or electroencephalography, have been linked to different systemic cognitive states, such as workload, distraction, and mind-wandering. Whether these multimodal signals are sufficient to isolate such cognitive states across individuals performing tasks, or whether additional contextual information (e.g., about the task state or the task environment) is required to make appropriate inferences, remains an important open problem. In this paper, we introduce an experimental and machine learning framework for investigating these questions, focusing specifically on using physiological and neurophysiological measurements to learn classifiers for systemic cognitive states such as cognitive load, distraction, sense of urgency, mind wandering, and interference. We describe a multitasking interactive experimental setting used to obtain a comprehensive multimodal data set, which provided the foundation for a first evaluation of standard state-of-the-art machine learning techniques with respect to their effectiveness in inferring systemic cognitive states. The classification success of these standard methods based on the physiological and neurophysiological signals alone was modest across subjects, which is to be expected given the complexity of the classification problem and the possibility that higher accuracy rates may not be achievable in general. Nevertheless, the results can serve as a baseline for evaluating future efforts to improve classification, especially methods that take contextual aspects such as task and environmental states into account.
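The classification setup described above maps windows of physiological features to discrete cognitive-state labels. As a hedged, minimal sketch of that kind of baseline, not the authors' actual pipeline, features, or data, the following numpy-only example trains a logistic regression by gradient descent to separate synthetic "low-load" from "high-load" windows; the two features (standardized heart rate and skin conductance) and their distributions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Synthetic feature windows: the high-load class has elevated means on
# both (invented) features; real signals overlap far more than this.
X0 = rng.normal([0.0, 0.0], 1.0, size=(n, 2))  # low workload
X1 = rng.normal([2.0, 2.0], 1.0, size=(n, 2))  # high workload
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Batch gradient descent on the logistic log-loss.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)
    g = p - y                      # gradient of log-loss w.r.t. logits
    w -= lr * (X.T @ g) / len(y)
    b -= lr * g.mean()

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print("training accuracy:", round(acc, 3))
```

On real multimodal data the key complications the paper points to remain: cross-subject variability (train on some subjects, test on others) and the question of whether adding task-context features to X improves on signal-only baselines like this one.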
Estimating Systemic Cognitive States from a Mixture of Physiological and Brain Signals. Topics in Cognitive Science, pp. 485-526.
Pub Date: 2024-07-01 | Epub Date: 2023-11-27 | DOI: 10.1111/tops.12713
Heather C Lum, Elizabeth K Phillips
The relationship between humans and animals is complex and influenced by multiple variables. Humans display a remarkably flexible and rich array of social competencies, demonstrating the ability to interpret, predict, and react appropriately to the behavior of others, as well as to engage others in a variety of complex social interactions. Developing computational systems with similar social abilities is a critical step in designing robots, animated characters, and other computer agents that appear intelligent and capable in their interactions with humans and with each other. Further, it will improve their ability to cooperate with people as capable partners, to learn from natural instruction, and to provide intuitive and engaging interactions for human partners. Human-animal team analogs can thus be one means of fostering veridical mental models of robots that more accurately represent their near-future capabilities. Some digital twins of human-animal teams currently exist but are often incomplete. This article therefore focuses on issues within and surrounding current models of human-animal teams, previous research on this connection, and the challenges of using such an analogy for human-autonomy teams.
Understanding Human-Autonomy Teams Through a Human-Animal Teaming Model. Topics in Cognitive Science, pp. 554-567.
The necessity of introducing interactionist and parallelist approaches in different branches of cognitive science emerged as a reaction to classical sequential stage-based models. Functional psychological models that emphasized and explained how different components interact to dynamically produce cognitive and perceptual states influenced multiple disciplines, chief among them experimental psycholinguistics and the many applied areas concerned with humans' ability to process different types of information in different contexts. Understanding how bilinguals represent and process verbal and visual input, how their neural and psychological states facilitate such interactions, and how linguistic and nonlinguistic processing overlap has now emerged as an important area of multidisciplinary research. In this article, we review available evidence from different language-speaking groups of bilinguals in India, with a focus on situational context. In the discussion, we address models of language processing in bilinguals within a cognitive psychological approach, focusing on existing models of inhibitory control. The goal of the paper is to show that the parallel architecture framework can serve as a theoretical foundation for examining bilingual language processing and its interface with external factors such as social context.
Parallel Interactions Between Linguistic and Contextual Factors in Bilinguals. Ramesh K Mishra, Seema Prasad. Topics in Cognitive Science, 2024-06-24. DOI: 10.1111/tops.12745.
Gesture and speech are tightly linked and form a single system in typical development. In this review, we ask whether and how the role of gesture and the relations between speech and gesture vary in atypical development, focusing on two groups of children: those with peri- or prenatal unilateral brain injury (children with BI) and preterm-born (PT) children. We describe the gestures of children with BI and PT children and the relations between gesture and speech, and we highlight various cognitive and motor antecedents of the speech-gesture link observed in these populations. We then examine possible factors contributing to the variability in gesture production among these atypically developing children. Last, we discuss the potential role of seeing others' gestures, particularly those of parents, in mediating the predictive relationships between early gestures and upcoming changes in speech. We end the review by charting new areas for future research that will help us better understand the robust roles of gestures for typical and atypically developing child populations.
Through Thick and Thin: Gesture and Speech Remain as an Integrated System in Atypical Development. Ö Ece Demir-Lira, Tilbe Göksun. Topics in Cognitive Science, 2024-06-10. DOI: 10.1111/tops.12739.
Describing our visual environments is challenging because although an enormous amount of information is simultaneously available to the visual system, the language channel must impose a linear order on that information. Moreover, the production system is at least moderately incremental, meaning that it interleaves planning and speaking processes. Here, we address how the operations of these two cognitive systems are coordinated given their different characteristics. We propose the concept of a perceptual clause, defined as an interface representation that allows the visual and linguistic systems to exchange information. The perceptual clause serves as the input to the language formulator, which translates the representation into a linguistic sequence. Perceptual clauses capture speakers' ability to describe visual scenes coherently while at the same time taking advantage of the incremental abilities of the language production system.
Perceptual Clauses as Units of Production in Visual Descriptions. Fernanda Ferreira, Madison Barker. Topics in Cognitive Science, 2024-05-23. DOI: 10.1111/tops.12738.