This study explores the role of gestures in Flemish Sign Language (VGT) development through longitudinal observation of three deaf children’s early interactions. The children were followed over a period of one and a half years, at the ages of 6, 9, 12, 18, and 24 months. The research compares the communicative development of a deaf child growing up in a deaf family with that of two deaf children growing up in hearing families; the latter two children received early cochlear implants at 10 and 7 months of age, respectively. It is the first study to describe the types and tokens of children’s gestures used in early dyadic interactions in Flanders (Belgium). Our observations show three distinct developmental patterns in the use of gestures and the production of combinations. The study supports the finding that children’s gestural output is influenced by the language of their parents, and it further indicates an impact of age at cochlear implantation.
{"title":"The road to language through gesture","authors":"Beatrijs Wille, Hilde Nyffels, O. Capirci","doi":"10.1075/gest.22001.wil","DOIUrl":"https://doi.org/10.1075/gest.22001.wil","url":null,"abstract":"This study explores the role of gestures in Flemish Sign Language (VGT) development through a longitudinal observation of three deaf children’s early interactions. These children were followed over a period of one and a half year, at the ages of 6, 9, 12, 18 and 24 months. This research compares the communicative development of a deaf child growing up in a deaf family and two deaf children growing up in hearing families. The latter two children received early cochlear implants when they were respectively 10 and 7 months old. It is the first study describing the types and tokens of children’s gestures used in early dyadic interactions in Flanders (Belgium). The description of our observations shows three distinct developmental patterns in terms of the use of gestures and the production of combinations. The study supports the finding that children’s gestural output is subject to their parental language, and it further indicates an impact of age of cochlear implantation.","PeriodicalId":35125,"journal":{"name":"Gesture","volume":"80 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139228588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teachers often use gestures to connect representations of mathematical ideas. This research examined (1) whether such linking gestures help students understand connections among representations and (2) whether sets of gestures that include repeated handshapes and motions – termed gestural catchments – are particularly beneficial. Undergraduates viewed one of four video lessons connecting two representations of multiplication. In the control lesson, the instructor produced beat gestures that did not link the representations. In the link-only lesson, the instructor used gestures to link representations, but the gestures did not form a catchment. In the consistent-catchment lesson, the instructor highlighted corresponding elements of the two representations using identical gestures. In the inconsistent-catchment lesson, the instructor highlighted non-corresponding elements of the two representations using identical gestures. Participants who saw the lesson with the consistent catchment – which highlighted similarities between representations – were most likely to understand the novel representation and to report learning from the lesson.
{"title":"Weakest link or strongest link?","authors":"Andrea Marquardt Donovan, Sarah A. Brown, Martha W. Alibali","doi":"10.1075/gest.21021.don","DOIUrl":"https://doi.org/10.1075/gest.21021.don","url":null,"abstract":"Teachers often use gestures to connect representations of mathematical ideas. This research examined (1) whether\u0000 such linking gestures help students understand connections among representations and (2) whether sets of gestures that include\u0000 repeated handshapes and motions – termed gestural catchments – are particularly beneficial. Undergraduates viewed\u0000 one of four video lessons connecting two representations of multiplication. In the control lesson, the instructor\u0000 produced beat gestures that did not link the representations. In the link-only lesson, the instructor used\u0000 gestures to link representations, but the gestures did not form a catchment. In the consistent-catchment lesson,\u0000 the instructor highlighted corresponding elements of the two representations using identical gestures. In the\u0000 inconsistent-catchment lesson, the instructor highlighted non-corresponding elements of the two\u0000 representations using identical gestures. Participants who saw the lesson with the consistent catchment – which highlighted\u0000 similarities between representations – were most likely to understand the novel representation and to report learning from the\u0000 lesson.","PeriodicalId":35125,"journal":{"name":"Gesture","volume":"31 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134901077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study presents an automatic tool for tracing smile intensities in video recordings of conversational face-to-face interactions. The output is a sequence of adjusted time intervals labeled according to the Smiling Intensity Scale (Gironzetti, Attardo, and Pickering, 2016), a five-level scale ranging from neutral facial expression to laughing smile. The tool’s underlying statistical model, detailed in this study, is trained on a manually annotated corpus of conversations featuring spontaneous facial expressions. The tool can be used to good effect for annotating smiles in interaction, and the results are twofold. First, the evaluation reveals an observed agreement of 68% between manual and automatic annotations. Second, manually correcting the labels and interval boundaries of the automatic output reduces annotation time by a factor of 10 compared with manually annotating smile intensities without any pretreatment. Our annotation engine uses the state-of-the-art OpenFace toolbox to track the face and to measure the intensities of the facial Action Units of interest throughout the video. The documentation and scripts of our tool, the SMAD software, are available for download from the HMAD open-source project page at https://github.com/srauzy/HMAD (last accessed 31 July 2023).
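The abstract gives no implementation details beyond the use of OpenFace Action Unit intensities, so the following Python fragment is a purely illustrative sketch, not the SMAD model itself (which is a statistical model trained on annotated conversation data and also adjusts interval boundaries). It shows how per-frame intensities of the two smile-related Action Units produced by OpenFace (AU06, cheek raiser; AU12, lip corner puller) might be thresholded into coarse Smiling Intensity Scale levels. The CSV column names follow OpenFace's usual output format; the thresholds and the file name are assumptions, not values from the paper.

    import pandas as pd

    def smile_level(au06: float, au12: float) -> int:
        """Map two smile-related Action Unit intensities to a coarse 0-4 level
        (0 = neutral facial expression, 4 = laughing smile, 1-3 = intermediate
        smile intensities). Cut-offs are illustrative assumptions only."""
        score = au06 + au12  # crude combined intensity of AU06 + AU12
        if score < 0.5:
            return 0         # neutral facial expression
        if score < 2.0:
            return 1
        if score < 4.0:
            return 2
        if score < 6.0:
            return 3
        return 4             # laughing smile

    def label_frames(openface_csv: str) -> pd.DataFrame:
        """Read an OpenFace output CSV and add a per-frame smile-intensity column."""
        df = pd.read_csv(openface_csv, skipinitialspace=True)  # OpenFace pads column names with spaces
        df["smile_level"] = [smile_level(a6, a12)
                             for a6, a12 in zip(df["AU06_r"], df["AU12_r"])]
        return df

    frames = label_frames("conversation_openface.csv")  # hypothetical file name
    print(frames[["timestamp", "smile_level"]].head())

A trained model such as SMAD's would replace the hand-set thresholds and smooth the per-frame labels into the adjusted time intervals described in the abstract; the sketch only illustrates the kind of per-frame signal such a model works from.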
{"title":"Automatic tool to annotate smile intensities in conversational face-to-face interactions","authors":"S. Rauzy, Mary Amoyal","doi":"10.1075/gest.22012.rau","DOIUrl":"https://doi.org/10.1075/gest.22012.rau","url":null,"abstract":"\u0000 This study presents an automatic tool that allows to trace smile intensities along a video record of\u0000 conversational face-to-face interactions. The processed output proposes a sequence of adjusted time intervals labeled following\u0000 the Smiling Intensity Scale (Gironzetti, Attardo, and Pickering,\u0000 2016), a 5 levels scale varying from neutral facial expression to laughing smile. The underlying statistical model of this\u0000 tool is trained on a manually annotated corpus of conversations featuring spontaneous facial expressions. This model will be\u0000 detailed in this study. This tool can be used with benefits for annotating smile in interactions. The results are twofold. First,\u0000 the evaluation reveals an observed agreement of 68% between manual and automatic annotations. Second, manually correcting the\u0000 labels and interval boundaries of the automatic outputs reduces by a factor 10 the annotation time as compared with the time spent\u0000 for manually annotating smile intensities without pretreatment. Our annotation engine makes use of the state-of-the-art toolbox\u0000 OpenFace for tracking the face and for measuring the intensities of the facial Action Units of interest all along the video. The\u0000 documentation and the scripts of our tool, the SMAD software, are available to download at the HMAD open source project URL page\u0000 https://github.com/srauzy/HMAD (last access 31 July 2023).","PeriodicalId":35125,"journal":{"name":"Gesture","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42991983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Previous studies using cross-modal semantic priming have found that iconic gestures prime target words that are related to the gestures. In the present study, two analogous experiments examined this priming effect by presenting primes and targets in high synchrony. In Experiment 1, participants performed an auditory primed lexical decision task in which target words (e.g., “push”) and pseudowords had to be discriminated, primed by overlapping iconic gestures that were either semantically related to the words (e.g., moving both hands forward) or not. Experiment 2 was similar, but both gestures and words were presented visually. The grammatical category of the words was also manipulated: they were nouns and verbs. In both experiments, words related to gestures were recognized faster and with fewer errors than unrelated words, and this held similarly for both types of words.
{"title":"Iconic gestures serve as primes for both auditory and visual word forms","authors":"Iván Sánchez-Borges, C. J. Álvarez","doi":"10.1075/gest.20019.san","DOIUrl":"https://doi.org/10.1075/gest.20019.san","url":null,"abstract":"\u0000Previous studies using cross-modal semantic priming have found that iconic gestures prime target words that are related with the gestures. In the present study, two analogous experiments examined this priming effect presenting prime and targets in high synchrony. In Experiment 1, participants performed an auditory primed lexical decision task where target words (e.g., “push”) and pseudowords had to be discriminated, primed by overlapping iconic gestures that could be semantically related (e.g., moving both hands forward) or not with the words. Experiment 2 was similar but with both gestures and words presented visually. The grammatical category of the words was also manipulated: they were nouns and verbs. It was found that words related to gestures were recognized faster and with fewer errors than the unrelated ones in both experiments and similarly for both types of words.","PeriodicalId":35125,"journal":{"name":"Gesture","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45461372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The present study examines the roles that the gesture of the Raised Index Finger (RIF) plays in Hebrew multimodal interaction. The study reveals that the RIF is associated with diverse linguistic phenomena and tends to appear in contexts in which the speaker presents a message or speech act that violates the hearer’s expectations (based on either general knowledge or prior discourse). The study suggests that the RIF serves the function of discourse deixis: Speakers point to their message, creating a referent in the extralinguistic context to which they refer as an object of their stance, evaluating the content of the utterance or speech act as unexpected by the hearer, and displaying epistemic authority. Setting up such a frame by which the information is to be interpreted provides the basis for a swifter update of the common ground in situations of (assumed) differences between the assumptions of the speaker and the hearer.
{"title":"The Raised Index Finger gesture in Hebrew multimodal interaction","authors":"Anna Inbar","doi":"10.1075/gest.21001.inb","DOIUrl":"https://doi.org/10.1075/gest.21001.inb","url":null,"abstract":"\u0000 The present study examines the roles that the gesture of the Raised Index Finger (RIF) plays in Hebrew multimodal\u0000 interaction. The study reveals that the RIF is associated with diverse linguistic phenomena and tends to appear in contexts in\u0000 which the speaker presents a message or speech act that violates the hearer’s expectations (based on either general knowledge or\u0000 prior discourse). The study suggests that the RIF serves the function of discourse deixis: Speakers point to\u0000 their message, creating a referent in the extralinguistic context to which they refer as an object of their stance, evaluating the\u0000 content of the utterance or speech act as unexpected by the hearer, and displaying epistemic authority. Setting up such a frame by\u0000 which the information is to be interpreted provides the basis for a swifter update of the common ground in situations of (assumed)\u0000 differences between the assumptions of the speaker and the hearer.","PeriodicalId":35125,"journal":{"name":"Gesture","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43533089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Co-speech gestures can help the learning, processing, and memory of words and concepts, particularly motoric and spatial concepts such as verbs. The purpose of the present studies was to test whether co-speech gestures support the learning of words through gist traces of movement. We asked English monolinguals to learn 40 Cantonese words (20 verbs and 20 nouns). In two studies, we found support for the gist traces of congruent gestures being movement: participants who saw congruent gestures while hearing Cantonese words thought they had seen more verbs than participants in any other condition. However, gist traces were unrelated to the accurate recall of either nouns or verbs. In both studies, learning Cantonese words accompanied by congruent gestures tended to interfere with the learning of nouns (but not verbs). In Study 2, we ruled out the possibility that this interference was due either to gestures conveying representational information in another medium or to distraction from moving hands. We argue that gestures can interfere with learning foreign language words when they represent the referents (e.g., show shape or size) because learners must interpret the hands as something other than hands.
{"title":"Co-speech gestures can interfere with learning foreign language words*","authors":"E. Nicoladis, Paula Marentette, Candace Lam","doi":"10.1075/gest.18020.nic","DOIUrl":"https://doi.org/10.1075/gest.18020.nic","url":null,"abstract":"\u0000 Co-speech gestures can help the learning, processing, and memory of words and concepts, particularly motoric and spatial\u0000 concepts such as verbs. The purpose of the present studies was to test whether co-speech gestures support the learning of words through gist\u0000 traces of movement. We asked English monolinguals to learn 40 Cantonese words (20 verbs and 20 nouns). In two studies, we found support for\u0000 the gist traces of congruent gestures being movement: participants who saw congruent gestures while hearing Cantonese words thought they had\u0000 seen more verbs than participants in any other condition. However, gist traces were unrelated to the accurate recall of either nouns or\u0000 verbs. In both studies, learning Cantonese words accompanied by congruent gestures tended to interfere with the learning of nouns (but not\u0000 verbs). In Study 2, we ruled out the possibility that this interference was due either to gestures conveying representational information in\u0000 another medium or to distraction from moving hands. We argue that gestures can interfere with learning foreign language words when they\u0000 represent the referents (e.g., show shape or size) because learners must interpret the hands as something other than hands.","PeriodicalId":35125,"journal":{"name":"Gesture","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49444759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper I posit that a spread-fingered hand torque gesture used among speakers of Northern Pastaza Kichwa (Quechuan, Ecuador) is a recurrent gesture conveying the semantic theme of absence. The data come from a documentary video corpus collected by multiple researchers. The gesture prototypically takes the form of at least one pair of rapid rotations of the palm (the torque); the fingers can be spread or slightly flexed towards the palm to varying degrees. The gesture is performed in a consistent manner across speakers (and expressions) and co-occurs with a set of speech strings with related semantic meanings. Taking a cognitive linguistic approach, I analyse the form, function, and contexts of this gesture and argue that, taken together, these features show that it should be considered a recurrent gesture indicating absence.
{"title":"A recurring absence gesture in Northern Pastaza Kichwa","authors":"Alexander Rice","doi":"10.1075/gest.21008.ric","DOIUrl":"https://doi.org/10.1075/gest.21008.ric","url":null,"abstract":"\u0000 In this paper I posit the use of a spread-fingered hand torque gesture among speakers of Northern Pastaza Kichwa\u0000 (Quechuan, Ecuador) as a recurrent gesture conveying the semantic theme of absence. The data come from a documentary\u0000 video corpus collected by multiple researchers. The gesture prototypically takes the form of at least one pair of rapid rotations\u0000 of the palm (the torque). Fingers can be spread or slightly flexed towards the palm to varying degrees. This gesture is performed\u0000 in a consistent manner across speakers (and expressions) and co-occurs with a set of speech strings with related semantic\u0000 meanings. Taking a cognitive linguistic approach, I analyse the form, function, and contexts of this gesture and argue that, taken\u0000 together, it should be considered a recurrent gesture that indicates absence.","PeriodicalId":35125,"journal":{"name":"Gesture","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43059345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In collaborative reasoning about what causes the seasons, phases of the moon, and tides, participants (three to four per group) introduce ideas by gesturing depictively in personal space. Other group members copy and vary these gestures, imbuing their gesture spaces with similar conceptual properties. This leads at times to gestures being produced in shared space as members elaborate and contest a developing group model. Gestures in the shared space mostly coincide with conversational turns; more rarely, participants gesture collaboratively as they enact a joint conception. An emergent shared space is sustained by the joint focus and actions of participants and may be repositioned, reoriented, or reshaped to meet changing representational demands as the discourse develops. Shared space is used alongside personal spaces, and further research could shed light on how gesture placement and other markers (such as eye gaze) contribute to the meaning or function of gestures in group activity.
{"title":"Coordinating and sharing gesture spaces in collaborative reasoning","authors":"Robert F. Williams","doi":"10.1075/gest.21005.wil","DOIUrl":"https://doi.org/10.1075/gest.21005.wil","url":null,"abstract":"\u0000 In collaborative reasoning about what causes the seasons, phases of the moon, and tides, participants (three to\u0000 four per group) introduce ideas by gesturing depictively in personal space. Other group members copy and vary these gestures,\u0000 imbuing their gesture spaces with similar conceptual properties. This leads at times to gestures being produced in shared space as\u0000 members elaborate and contest a developing group model. Gestures in the shared space mostly coincide with conversational turns;\u0000 more rarely, participants gesture collaboratively as they enact a joint conception. An emergent shared space is sustained by the\u0000 joint focus and actions of participants and may be repositioned, reoriented, or reshaped to meet changing representational demands\u0000 as the discourse develops. Shared space is used alongside personal spaces, and further research could shed light on how gesture\u0000 placement and other markers (such as eye gaze) contribute to the meaning or function of gestures in group activity.","PeriodicalId":35125,"journal":{"name":"Gesture","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46155409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}