
Language and Cognition: latest publications

Facial cues to anger affect meaning interpretation of subsequent spoken prosody
IF 1.8 Psychology (CAS Tier 3) Q1 Arts and Humanities Pub Date: 2024-03-19 DOI: 10.1017/langcog.2024.3
Caterina Petrone, Francesca Carbone, Nicolas Audibert, Maud Champagne-Lavau
In everyday life, visual information often precedes the auditory one, hence influencing its evaluation (e.g., seeing somebody’s angry face makes us expect them to speak to us angrily). By using the cross-modal affective paradigm, we investigated the influence of facial gestures when the subsequent acoustic signal is emotionally unclear (neutral or produced with a limited repertoire of cues to anger). Auditory stimuli spoken with angry or neutral prosody were presented in isolation or preceded by pictures showing emotionally related or unrelated facial gestures (angry or neutral faces). In two experiments, participants rated the valence and emotional intensity of the auditory stimuli only. These stimuli were created from acted speech from movies and delexicalized via speech synthesis, then manipulated by partially preserving or degrading their global spectral characteristics. All participants relied on facial cues when the auditory stimuli were acoustically impoverished; however, only a subgroup of participants used angry faces to interpret subsequent neutral prosody. Thus, listeners are sensitive to facial cues for evaluating what they are about to hear, especially when the auditory input is less reliable. These results extend findings on face perception to the auditory domain and confirm inter-individual variability in considering different sources of emotional information.
Citations: 0
Word-object and action-object learning in a unimodal context during early childhood
IF 1.8 Psychology (CAS Tier 3) Q1 Arts and Humanities Pub Date: 2024-03-19 DOI: 10.1017/langcog.2024.7
Sarah Eiteljoerge, Birgit Elsner, Nivedita Mani
Word-object and action-object learning in children aged 30 to 48 months appears to develop at a similar time scale and adheres to similar attentional constraints. However, children below 36 months show different patterns of learning word-object and action-object associations when this information is presented in a bimodal context (Eiteljoerge et al., 2019b). Here, we investigated 12- and 24-month-olds’ word-object and action-object learning when this information is presented in a unimodal context. Forty 12- and 24-month-olds were presented with two novel objects that were either first associated with a novel label (word learning task) and then later with a novel action (action learning task) or vice versa. In subsequent yoked test phases, children either heard one of the novel labels or saw a hand performing one of the actions presented with the two objects on screen while we measured their target looking. Generalized linear mixed models indicate that 12-month-olds learned action-object associations but not word-object associations and 24-month-olds learned neither word- nor action-object associations. These results extend previous findings (Eiteljoerge et al., 2019b) and, together, suggest that children appear to learn action-object associations early in development while struggling with learning word-object associations in certain contexts until 2 years of age.
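The dependent measure behind these models, target looking, can be illustrated with a toy computation: the proportion of looking time directed at the target object per condition. The trial-level numbers below are invented for illustration; the study's actual analysis fit generalized linear mixed models with random effects, not raw proportions.

```python
# Hypothetical illustration of the target-looking measure.
# All looking times (ms) are invented; a proportion above 0.5
# indicates looking at the target above chance.

def target_looking_proportion(looks):
    """looks: list of (target_ms, distractor_ms) pairs, one per trial."""
    target = sum(t for t, _ in looks)
    total = sum(t + d for t, d in looks)
    return target / total

# Invented trials for two hypothetical condition cells at 12 months.
action_12mo = [(900, 600), (1100, 700), (1000, 500)]   # action-object test
word_12mo = [(700, 750), (800, 780), (760, 800)]       # word-object test

print(round(target_looking_proportion(action_12mo), 2))  # above 0.5: learning
print(round(target_looking_proportion(word_12mo), 2))    # near 0.5: chance
```

A pattern like this toy one, above-chance target looking for actions but not for words, is the shape of result the abstract reports for 12-month-olds.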
Citations: 0
Language and executive function relationships in the real world: insights from deafness
IF 1.8 Psychology (CAS Tier 3) Q1 Arts and Humanities Pub Date: 2024-03-18 DOI: 10.1017/langcog.2024.10
Mario Figueroa, Nicola Botting, Gary Morgan

Executive functions (EFs) in both regulatory and meta-cognitive contexts are important for a wide variety of children’s daily activities, including play and learning. Despite the growing literature supporting the relationship between EF and language, few studies have focused on these links during everyday behaviours. Data were collected on 208 children from 6 to 12 years old, of whom 89 were deaf children (55% female; M = 8;8; SD = 1;9) and 119 were typically hearing children (56% female; M = 8;9; SD = 1;5). Parents completed two inventories, one assessing EFs and one assessing language proficiency. Parents of deaf children reported greater difficulties with EFs in daily activities than those of hearing children. Correlation analysis between EFs and language showed significant levels only in the deaf group, especially in relation to meta-cognitive EFs. The results are discussed in terms of the role of early parent–child interaction and the relevance of EFs for everyday conversational situations.

Citations: 0
The immediate integration of semantic selectional restrictions of Chinese social hierarchical verbs with extralinguistic social hierarchical information in comprehension
IF 1.8 Psychology (CAS Tier 3) Q1 Arts and Humanities Pub Date: 2024-03-18 DOI: 10.1017/langcog.2024.11
Yajiao Shi, Tongquan Zhou, Simin Zhao, Zhenghui Sun, Zude Zhu
Social hierarchical information impacts language comprehension. Nevertheless, the specific process underlying the integration of linguistic and extralinguistic sources of social hierarchical information has not been identified. For example, the Chinese social hierarchical verb 赡养, /shan4yang3/, ‘support: provide for the needs and comfort of one’s elders’, only allows its Agent to have a lower social status than the Patient. Using eye-tracking, we examined the precise time course of the integration of these semantic selectional restrictions of Chinese social hierarchical verbs and extralinguistic social hierarchical information during natural reading. A 2 (Verb Type: hierarchical vs. non-hierarchical) × 2 (Social Hierarchy Sequence: match vs. mismatch) design was constructed to investigate the effect of the interaction on early and late eye-tracking measures. Thirty-two participants (15 males; age range: 18–24 years) read sentences and judged the plausibility of each sentence. The results showed that violations of semantic selectional restrictions of Chinese social hierarchical verbs induced shorter first fixation duration but longer regression path duration and longer total reading time on sentence-final nouns (NP2). These differences were absent under non-hierarchical conditions. The results suggest that a mismatch between linguistic and extralinguistic social hierarchical information is immediately detected and processed.
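The logic of the 2 × 2 design can be sketched as cell means plus an interaction contrast on a reading-time measure. All reading times below are invented for illustration; the study analysed several eye-tracking measures, not just total reading time.

```python
# Toy sketch of the 2x2 design (Verb Type x Social Hierarchy Sequence):
# cell means of a hypothetical total-reading-time measure on NP2,
# and the interaction as a difference of mismatch costs.

from statistics import mean

# Invented total reading times (ms) on NP2, keyed by (verb_type, sequence).
data = {
    ("hierarchical", "match"):        [520, 540, 510],
    ("hierarchical", "mismatch"):     [640, 660, 650],
    ("non-hierarchical", "match"):    [530, 525, 545],
    ("non-hierarchical", "mismatch"): [535, 540, 528],
}

cell_means = {cond: mean(times) for cond, times in data.items()}

# Mismatch cost per verb type; the reported pattern is a cost only
# for hierarchical verbs, i.e., a Verb Type x Sequence interaction.
cost_hier = cell_means[("hierarchical", "mismatch")] - cell_means[("hierarchical", "match")]
cost_non = cell_means[("non-hierarchical", "mismatch")] - cell_means[("non-hierarchical", "match")]
interaction = cost_hier - cost_non
print(cost_hier, cost_non, interaction)
```

A large `cost_hier` alongside a negligible `cost_non`, as in these made-up numbers, mirrors the reported finding that the mismatch effect was absent under non-hierarchical conditions.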
Citations: 0
The influences of narrative perspective shift and scene detail on narrative semantic processing
IF 1.8 Psychology (CAS Tier 3) Q1 Arts and Humanities Pub Date: 2024-03-15 DOI: 10.1017/langcog.2024.9
Jian Jin, Siyun Liu
The embodied view of semantic processing holds that readers achieve reading comprehension through mental simulation of the objects and events described in the narrative. However, it remains unclear whether and how the encoding of linguistic factors in narrative descriptions impacts narrative semantic processing. This study aims to explore this issue in narrative contexts with and without perspective shift, an important and common linguistic factor in narratives. A sentence-picture verification paradigm combined with eye-tracking measures was used to explore the issue. The results showed that (1) the inter-role perspective shift led participants to allocate their first fixation evenly across different elements in the scene following the new perspective; (2) the internal–external perspective shift increased participants’ total fixation count when they read the sentence containing the shift; (3) the scene detail depicted in the picture did not influence narrative semantic processing. These results suggest that perspective shift can disrupt the coherence of the situation model and increase readers’ cognitive load during reading.
Citations: 0
The role of consciousness in Chinese nominal metaphor processing: a psychophysical approach
IF 1.8 Psychology (CAS Tier 3) Q1 Arts and Humanities Pub Date: 2024-03-15 DOI: 10.1017/langcog.2023.67
Kaiwen Cheng, Yu Chen, Hongmei Yan, Ling Wang
Conceptual metaphor theory (CMT) holds that most conceptual metaphors are processed unconsciously. However, whether multiple words can be integrated into a holistic metaphoric sentence without consciousness remains controversial in cognitive science and psychology. This study aims to investigate the role of consciousness in processing Chinese nominal metaphoric sentences ‘A是B’ (A is [like] B) with a psychophysical experimental paradigm referred to as breaking continuous flash suppression (b-CFS). We manipulated sentence types (metaphoric, literal and anomalous) and word forms (upright, inverted) in a two-stage experiment (CFS and non-CFS). No difference was found in breakthrough times among the three types of sentences in the CFS stage, while literal sentences were detected more slowly than either metaphoric or anomalous sentences in the non-CFS stage. The results suggest that the integration of multiple words may not succeed without the participation of consciousness, let alone metaphoric processing. These findings may redefine ‘unconscious’ in CMT as ‘preconscious’ and support the indirect access view regarding how metaphoric meaning is processed in the brain.
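The b-CFS logic rests on comparing breakthrough times across sentence types: if metaphoric integration happened without awareness, metaphoric sentences should break suppression at different times than the others. A minimal sketch with invented breakthrough times, illustrating the null pattern reported for the CFS stage:

```python
# Toy comparison of mean breakthrough times (s) by sentence type
# in the CFS stage. All values are invented for illustration.

from statistics import mean

breakthrough = {
    "metaphoric": [2.10, 2.30, 2.20],
    "literal":    [2.20, 2.10, 2.25],
    "anomalous":  [2.15, 2.30, 2.10],
}

means = {sent_type: mean(times) for sent_type, times in breakthrough.items()}
spread = max(means.values()) - min(means.values())
print(spread)  # a negligible spread mirrors the reported null effect under CFS
```

In the actual study the comparable means under suppression, contrasted with differences in the non-CFS stage, ground the claim that sentence-level integration requires consciousness.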
Citations: 0
Prosody of focus in Turkish Sign Language
IF 1.8 Psychology (CAS Tier 3) Q1 Arts and Humanities Pub Date: 2024-03-04 DOI: 10.1017/langcog.2024.4
Serpil Karabüklü, Aslı Gürer
Prosodic realization of focus has been a widely investigated topic across languages and modalities. How simultaneous focus strategies interact in their functional and temporal alignment is an intriguing question. We explored the multichannel (manual and nonmanual) realization of focus in Turkish Sign Language. We elicited data from 20 signers, varying focus type, syntactic role and movement type. The results revealed that focus is encoded via increased duration in manual signs, and that nonmanuals do not necessarily accompany focused signs. With a multichanneled structure, sign languages use both available channels or opt for one to express focushood.
Citations: 0
Contrasting the semantic space of ‘shame’ and ‘guilt’ in English and Japanese
IF 1.8 Psychology (CAS Tier 3) Q1 Arts and Humanities Pub Date: 2024-03-01 DOI: 10.1017/langcog.2024.6
Eugenia Diegoli, Emily Öhman
This article sheds light on the significant yet nuanced roles of shame and guilt in influencing moral behaviour, a phenomenon that became particularly prominent during the COVID-19 pandemic with the community’s heightened desire to be seen as moral. These emotions are central to human interactions, and the question of how they are conveyed linguistically is a vast and important one. Our study contributes to this area by analysing the discourses around shame and guilt in English and Japanese online forums, focusing on the terms shame, guilt, haji (‘shame’) and zaiakukan (‘guilt’). We utilise a mix of corpus-based methods and natural language processing tools, including word embeddings, to examine the contexts of these emotion terms and identify semantically similar expressions. Our findings indicate both overlaps and distinct differences in the semantic landscapes of shame and guilt within and across the two languages, highlighting nuanced ways in which these emotions are expressed and distinguished. This investigation provides insights into the complex dynamics between emotion words and the internal states they denote, suggesting avenues for further research in this linguistically rich area.
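The embedding step described above, identifying semantically similar expressions for an emotion term, amounts to ranking vocabulary items by cosine similarity to that term's vector. A minimal sketch with invented 3-dimensional vectors; real work would use trained embeddings for the English and Japanese corpora.

```python
# Cosine similarity over toy word vectors: find the expression
# closest in the (invented) semantic space to a given emotion term.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented vectors; in practice these come from embeddings trained
# on the forum corpora.
vectors = {
    "shame":         [0.90, 0.10, 0.30],
    "embarrassment": [0.80, 0.20, 0.35],
    "guilt":         [0.30, 0.90, 0.20],
    "remorse":       [0.25, 0.85, 0.30],
}

def most_similar(term, vocab):
    """Return the vocabulary item (other than term) nearest to term."""
    return max((w for w in vocab if w != term),
               key=lambda w: cosine(vocab[term], vocab[w]))

print(most_similar("shame", vectors))
print(most_similar("guilt", vectors))
```

Running the same ranking for English *shame*/*guilt* and Japanese *haji*/*zaiakukan* is what lets the semantic neighbourhoods of the two languages be compared.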
Citations: 0
Better letter: iconicity in the manual alphabets of American Sign Language and Swedish Sign Language
IF 1.8 Psychology (CAS Tier 3) Q1 Arts and Humanities Pub Date: 2024-02-29 DOI: 10.1017/langcog.2024.5
Carl Börstell
While iconicity has sometimes been defined as meaning transparency, it is better defined as a subjective phenomenon bound to an individual’s perception and influenced by their previous language experience. In this article, I investigate the subjective nature of iconicity through an experiment in which 72 deaf, hard-of-hearing and hearing (signing and non-signing) participants rate the iconicity of individual letters of the American Sign Language (ASL) and Swedish Sign Language (STS) manual alphabets. It is shown that L1 signers of ASL and STS rate their own (L1) manual alphabet as more iconic than the foreign one. Hearing L2 signers of ASL and STS exhibit the same pattern as L1 signers, showing an iconic preference for their own (L2) manual alphabet. In comparison, hearing non-signers show no general iconic preference for either manual alphabet. Across all groups, some letters are consistently rated as more iconic in one sign language than the other, illustrating general iconic preferences. Overall, the results align with earlier findings from sign language linguistics that point to language experience affecting iconicity ratings and that one’s own signs are rated as more iconic than foreign signs with the same meaning, even if similar iconic mappings are used.
Citations: 0
Backchannel behavior is idiosyncratic
IF 1.8 · CAS Tier 3 (Psychology) · Q1 Arts and Humanities · Pub Date: 2024-02-22 · DOI: 10.1017/langcog.2024.1
Peter Blomsma, Julija Vaitonyté, Gabriel Skantze, Marc Swerts

In spoken conversations, speakers and their addressees constantly seek and provide different forms of audiovisual feedback, also known as backchannels, which include nodding, vocalizations and facial expressions. It has previously been shown that addressees backchannel at specific points during an interaction, namely after a speaker provided a cue to elicit feedback from the addressee. However, addressees may differ in the frequency and type of feedback that they provide, and likewise, speakers may vary the type of cues they generate to signal the backchannel opportunity points (BOPs). Research on the extent to which backchanneling is idiosyncratic is scant. In this article, we quantify and analyze the variability in feedback behavior of 14 addressees who all interacted with the same speaker stimulus. We conducted this research by means of a previously developed experimental paradigm that generates spontaneous interactions in a controlled manner. Our results show that (1) backchanneling behavior varies between listeners (some addressees are more active than others) and (2) backchanneling behavior varies between BOPs (some points trigger more responses than others). We discuss the relevance of these results for models of human–human and human–machine interactions.
Citations: 0