Lauren Gawne, Chelsea Krajcik, Helene N. Andreassen, Andrea L. Berez-Kroeker, Barbara Kelly
Data is central to scholarly research, but the nature and location of the data used are often under-reported in research publications. Greater transparency and citation of data have positive effects on the culture of research. This article presents the results of a survey of data citation in six years of articles published in the journal Gesture (12.1–17.2). Gesture researchers draw on a broad range of data types, but the source and location of data are often not disclosed in publications. Research also still focuses on only a small range of the world’s languages, giving limited attention to linguistic diversity. Published papers rarely cite back to the primary data unless those data are already published. We discuss both the implications of these findings and the ways that scholars in the field of gesture studies can build a positive culture around open data.
{"title":"Data transparency and citation in the journal Gesture","authors":"Lauren Gawne, Chelsea Krajcik, Helene N. Andreassen, Andrea L. Berez-Kroeker, Barbara Kelly","doi":"10.1075/gest.00034.gaw","DOIUrl":"https://doi.org/10.1075/gest.00034.gaw","url":null,"abstract":"Data is central to scholarly research, but the nature and location of data used is often under-reported in research publications. Greater transparency and citation of data have positive effects for the culture of research. This article presents the results of a survey of data citation in six years of articles published in the journal GESTURE (12.1-17.2). Gesture researchers draw on a broad range of data types, but the source and location of data are often not disclosed in publications. There is also still a strong research focus on only a small range of the world’s languages and their linguistic diversity. Published papers rarely cite back to the primary data, unless it is already published. We discuss both the implications of these findings and the ways that scholars in the field of gesture studies can build a positive culture around open data.","PeriodicalId":35125,"journal":{"name":"Gesture","volume":"18 1","pages":"83-109"},"PeriodicalIF":1.0,"publicationDate":"2020-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44996217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asier Romero Andonegi, Irati de Pablo Delgado, Aintzane Etxebarria Lejarreta, Ainara Romero Andonegi
The aim of this study is to explore the multimodal communicative patterns used by infants during their transition to first words. The combinatorial patterns of twelve children living in the Basque Country with different mother tongues were analyzed longitudinally from 9 to 21 months of age. A total of 4,299 communicative behaviors were recorded and coded (vocalizations, gestures, and pragmatic functions). Results showed a significant increase in multimodal communicative patterns from 12 months onwards, as well as differences in the infants’ vocal constructions depending on the specific types of gestures involved. Thus, it was observed that gesture–speech combinations influence the children’s pragmatic functions and the structure of their vocalizations.
{"title":"Dynamic processes of intermodal coordination in the ontogenesis of language","authors":"Asier Romero Andonegi, Irati de Pablo Delgado, Aintzane Etxebarria Lejarreta, Ainara Romero Andonegi","doi":"10.1075/gest.00033.rom","DOIUrl":"https://doi.org/10.1075/gest.00033.rom","url":null,"abstract":"\u0000 The aim of this study is to explore the multimodal communicative patterns used by infants during their first-words\u0000 transition period. The combinatorial patterns of twelve children living in Basque Country with different mother tongues were analyzed\u0000 longitudinally from 9 to 21 months of age. A total of 4,299 communicative behaviors were recorded and coded (vocalizations, gestures, and\u0000 pragmatic functions). Results showed a significant increase in multimodal communicative patterns from 12 months onwards, and differences in\u0000 the infants’ vocal construction depending on the specific types of gestures involved. Thus, it was observed that gestures and speech\u0000 combinations have influence on the child’s pragmatic function and vocalizations structure.","PeriodicalId":35125,"journal":{"name":"Gesture","volume":"18 1","pages":"57-82"},"PeriodicalIF":1.0,"publicationDate":"2020-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48965584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anthropology of Gesture","authors":"","doi":"10.1075/gest.18.2-3","DOIUrl":"https://doi.org/10.1075/gest.18.2-3","url":null,"abstract":"","PeriodicalId":35125,"journal":{"name":"Gesture","volume":"1 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41448466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Further information and weblinks","authors":"","doi":"10.1075/gest.00043.fur","DOIUrl":"https://doi.org/10.1075/gest.00043.fur","url":null,"abstract":"","PeriodicalId":35125,"journal":{"name":"Gesture","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44199351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Further information and weblinks","authors":"","doi":"10.1075/gest.00037.fur","DOIUrl":"https://doi.org/10.1075/gest.00037.fur","url":null,"abstract":"","PeriodicalId":35125,"journal":{"name":"Gesture","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46658708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autumn B. Hostetter, Stuart H. Murch, Lyla Rothschild, Cierra S. Gillard
Across four studies using a dual-task paradigm, we examined the cognitive resources involved in processing speech with gesture compared to the same speech without gesture. Participants viewed videos of a woman describing spatial arrays either with or without gesture and then attempted to choose the target array from among four choices. Cognitive load during this comprehension task was indexed by how well participants could remember the location and identity of digits in a secondary task. We found that addressees experience additional visuospatial load when processing gestures compared to speech alone, and that this load arises primarily when addressees attempt to use their memory of the gestured descriptions to choose the target array. However, this cost only occurs when gestures about horizontal spatial relations (i.e., left and right) are produced from the speaker’s egocentric perspective.
{"title":"Does seeing gesture lighten or increase the\u0000 load?","authors":"Autumn B. Hostetter, Stuart H. Murch, Lyla Rothschild, Cierra S. Gillard","doi":"10.1075/GEST.17017.HOS","DOIUrl":"https://doi.org/10.1075/GEST.17017.HOS","url":null,"abstract":"\u0000 We examined the cognitive resources\u0000 involved in processing speech with gesture compared to the same speech without\u0000 gesture across four studies using a dual-task paradigm. Participants viewed videos of a woman describing\u0000 spatial arrays either with gesture or without. They then attempted to choose the\u0000 target array from among four choices. Participants’ cognitive load was measured\u0000 as they completed this comprehension task by measuring how well they could\u0000 remember the location and identity of digits in a secondary task. We found that addressees experience additional visuospatial load when processing gestures compared to speech alone, and that the load primarily comes when addressees attempt to use their memory of the descriptions with gesture to choose the target array. However,\u0000 this cost only occurs when gestures about horizontal spatial relations (i.e.,\u0000 left and right) are produced from the speaker’s egocentric perspective.","PeriodicalId":35125,"journal":{"name":"Gesture","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2018-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43093281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}