Gesture: Latest Publications

The road to language through gesture
IF 1 · Zone 4 (Literature) · LANGUAGE & LINGUISTICS · Pub Date: 2023-11-27 · DOI: 10.1075/gest.22001.wil
Beatrijs Wille, Hilde Nyffels, O. Capirci
This study explores the role of gestures in Flemish Sign Language (VGT) development through a longitudinal observation of three deaf children’s early interactions. The children were followed over a period of one and a half years, at the ages of 6, 9, 12, 18 and 24 months. The research compares the communicative development of a deaf child growing up in a deaf family with that of two deaf children growing up in hearing families. The latter two children received early cochlear implants at 10 and 7 months of age, respectively. It is the first study to describe the types and tokens of gestures children use in early dyadic interactions in Flanders (Belgium). Our observations reveal three distinct developmental patterns in the use of gestures and the production of combinations. The study supports the finding that children’s gestural output depends on their parental language, and it further indicates an effect of age at cochlear implantation.
Citations: 0
Weakest link or strongest link?
Zone 4 (Literature) · LANGUAGE & LINGUISTICS · Pub Date: 2023-11-14 · DOI: 10.1075/gest.21021.don
Andrea Marquardt Donovan, Sarah A. Brown, Martha W. Alibali
Teachers often use gestures to connect representations of mathematical ideas. This research examined (1) whether such linking gestures help students understand connections among representations and (2) whether sets of gestures that include repeated handshapes and motions – termed gestural catchments – are particularly beneficial. Undergraduates viewed one of four video lessons connecting two representations of multiplication. In the control lesson, the instructor produced beat gestures that did not link the representations. In the link-only lesson, the instructor used gestures to link representations, but the gestures did not form a catchment. In the consistent-catchment lesson, the instructor highlighted corresponding elements of the two representations using identical gestures. In the inconsistent-catchment lesson, the instructor highlighted non-corresponding elements of the two representations using identical gestures. Participants who saw the lesson with the consistent catchment – which highlighted similarities between representations – were most likely to understand the novel representation and to report learning from the lesson.
Citations: 0
Automatic tool to annotate smile intensities in conversational face-to-face interactions
IF 1 · Zone 4 (Literature) · LANGUAGE & LINGUISTICS · Pub Date: 2023-09-01 · DOI: 10.1075/gest.22012.rau
S. Rauzy, Mary Amoyal
This study presents an automatic tool for tracing smile intensities along a video record of conversational face-to-face interactions. The processed output proposes a sequence of adjusted time intervals labeled according to the Smiling Intensity Scale (Gironzetti, Attardo, and Pickering, 2016), a five-level scale ranging from neutral facial expression to laughing smile. The tool’s underlying statistical model is trained on a manually annotated corpus of conversations featuring spontaneous facial expressions; the model is detailed in this study. The tool can be used to advantage for annotating smiles in interactions, and the results are twofold. First, the evaluation reveals an observed agreement of 68% between manual and automatic annotations. Second, manually correcting the labels and interval boundaries of the automatic outputs reduces annotation time by a factor of 10 compared with manually annotating smile intensities without pretreatment. Our annotation engine uses the state-of-the-art toolbox OpenFace to track the face and to measure the intensities of the facial Action Units of interest throughout the video. The documentation and scripts of our tool, the SMAD software, are available for download from the HMAD open source project page at https://github.com/srauzy/HMAD (last access 31 July 2023).
Citations: 0
Review of Galhano-Rodrigues, Galvão & Cruz-Santos (2019): Recent perspectives on gesture and multimodality
IF 1 · Zone 4 (Literature) · LANGUAGE & LINGUISTICS · Pub Date: 2023-08-31 · DOI: 10.1075/gest.20031.wan
Xi Wang, Fangfei Lv
{"title":"Review of Galhano-Rodrigues, Galvão & Cruz-Santos (2019): Recent perspectives on gesture and multimodality","authors":"Xi Wang, Fangfei Lv","doi":"10.1075/gest.20031.wan","DOIUrl":"https://doi.org/10.1075/gest.20031.wan","url":null,"abstract":"","PeriodicalId":35125,"journal":{"name":"Gesture","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46986053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Iconic gestures serve as primes for both auditory and visual word forms
IF 1 · Zone 4 (Literature) · LANGUAGE & LINGUISTICS · Pub Date: 2023-08-24 · DOI: 10.1075/gest.20019.san
Iván Sánchez-Borges, C. J. Álvarez
Previous studies using cross-modal semantic priming have found that iconic gestures prime target words related to those gestures. In the present study, two analogous experiments examined this priming effect, presenting primes and targets in high synchrony. In Experiment 1, participants performed an auditory primed lexical decision task in which target words (e.g., “push”) and pseudowords had to be discriminated, primed by overlapping iconic gestures that were either semantically related to the words (e.g., moving both hands forward) or unrelated. Experiment 2 was similar, but both gestures and words were presented visually. The grammatical category of the words was also manipulated: they were nouns and verbs. In both experiments, and similarly for both types of words, words related to the gestures were recognized faster and with fewer errors than unrelated ones.
Citations: 0
The Raised Index Finger gesture in Hebrew multimodal interaction
IF 1 · Zone 4 (Literature) · LANGUAGE & LINGUISTICS · Pub Date: 2023-08-24 · DOI: 10.1075/gest.21001.inb
Anna Inbar
The present study examines the roles that the gesture of the Raised Index Finger (RIF) plays in Hebrew multimodal interaction. The study reveals that the RIF is associated with diverse linguistic phenomena and tends to appear in contexts in which the speaker presents a message or speech act that violates the hearer’s expectations (based on either general knowledge or prior discourse). The study suggests that the RIF serves the function of discourse deixis: Speakers point to their message, creating a referent in the extralinguistic context to which they refer as an object of their stance, evaluating the content of the utterance or speech act as unexpected by the hearer, and displaying epistemic authority. Setting up such a frame by which the information is to be interpreted provides the basis for a swifter update of the common ground in situations of (assumed) differences between the assumptions of the speaker and the hearer.
Citations: 0
Co-speech gestures can interfere with learning foreign language words*
IF 1 · Zone 4 (Literature) · LANGUAGE & LINGUISTICS · Pub Date: 2023-08-21 · DOI: 10.1075/gest.18020.nic
E. Nicoladis, Paula Marentette, Candace Lam
Co-speech gestures can help the learning, processing, and memory of words and concepts, particularly motoric and spatial concepts such as verbs. The purpose of the present studies was to test whether co-speech gestures support the learning of words through gist traces of movement. We asked English monolinguals to learn 40 Cantonese words (20 verbs and 20 nouns). In two studies, we found support for the idea that congruent gestures leave gist traces of movement: participants who saw congruent gestures while hearing Cantonese words thought they had seen more verbs than participants in any other condition. However, gist traces were unrelated to the accurate recall of either nouns or verbs. In both studies, learning Cantonese words accompanied by congruent gestures tended to interfere with the learning of nouns (but not verbs). In Study 2, we ruled out the possibility that this interference was due either to gestures conveying representational information in another medium or to distraction from moving hands. We argue that gestures can interfere with learning foreign language words when they represent the referents (e.g., show shape or size), because learners must interpret the hands as something other than hands.
Citations: 0
Obituary
IF 1 · Zone 4 (Literature) · LANGUAGE & LINGUISTICS · Pub Date: 2023-08-21 · DOI: 10.1075/gest.00070.mul
C. Müller
{"title":"Obituary","authors":"C. Müller","doi":"10.1075/gest.00070.mul","DOIUrl":"https://doi.org/10.1075/gest.00070.mul","url":null,"abstract":"","PeriodicalId":35125,"journal":{"name":"Gesture","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47010534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A recurring absence gesture in Northern Pastaza Kichwa
IF 1 · Zone 4 (Literature) · LANGUAGE & LINGUISTICS · Pub Date: 2023-07-25 · DOI: 10.1075/gest.21008.ric
Alexander Rice
In this paper I posit the use of a spread-fingered hand torque gesture among speakers of Northern Pastaza Kichwa (Quechuan, Ecuador) as a recurrent gesture conveying the semantic theme of absence. The data come from a documentary video corpus collected by multiple researchers. The gesture prototypically takes the form of at least one pair of rapid rotations of the palm (the torque). Fingers can be spread or slightly flexed towards the palm to varying degrees. This gesture is performed in a consistent manner across speakers (and expressions) and co-occurs with a set of speech strings with related semantic meanings. Taking a cognitive linguistic approach, I analyse the form, function, and contexts of this gesture and argue that, taken together, it should be considered a recurrent gesture that indicates absence.
Citations: 0
Coordinating and sharing gesture spaces in collaborative reasoning
IF 1 · Zone 4 (Literature) · LANGUAGE & LINGUISTICS · Pub Date: 2023-07-04 · DOI: 10.1075/gest.21005.wil
Robert F. Williams
In collaborative reasoning about what causes the seasons, phases of the moon, and tides, participants (three to four per group) introduce ideas by gesturing depictively in personal space. Other group members copy and vary these gestures, imbuing their gesture spaces with similar conceptual properties. This leads at times to gestures being produced in shared space as members elaborate and contest a developing group model. Gestures in the shared space mostly coincide with conversational turns; more rarely, participants gesture collaboratively as they enact a joint conception. An emergent shared space is sustained by the joint focus and actions of participants and may be repositioned, reoriented, or reshaped to meet changing representational demands as the discourse develops. Shared space is used alongside personal spaces, and further research could shed light on how gesture placement and other markers (such as eye gaze) contribute to the meaning or function of gestures in group activity.
Citations: 0