Cultural evolution leads to vocal iconicity in an experimental iterated learning task
Niklas Erben Johansson, J. Carr, S. Kirby. Journal of Language Evolution 6(1): 1-25, 2021. doi:10.1093/JOLE/LZAB001

Experimental and cross-linguistic studies have shown that vocal iconicity is prevalent in words that carry meanings related to SIZE and SHAPE. Although these studies demonstrate the importance of vocal iconicity and reveal the cognitive biases underpinning it, there is less work demonstrating how these biases lead to the evolution of a sound-symbolic lexicon in the first place. In this study, we show how words can be shaped by cognitive biases through cultural evolution. Using a simple experimental setup resembling the game of telephone, we examined how a single word form changed as it was passed from one participant to the next by a process of immediate iterated learning. About 1,500 naïve participants were recruited online and divided into five condition groups. The participants in the CONTROL-group received no information about the meaning of the word they were about to hear, while the participants in the remaining four groups were informed that the word meant either BIG or SMALL (with the meaning presented in text), or ROUND or POINTY (with the meaning presented as a picture). The first participant in a transmission chain was presented with a phonetically diverse word and asked to repeat it. The recording of the repeated word was then played for the next participant in the same chain. The sounds of the audio recordings were transcribed and categorized according to six binary sound parameters. When the proportion of vowels or consonants for each sound parameter was modelled, the SMALL-condition showed an increase in FRONT UNROUNDED vowels and the POINTY-condition an increase in ACUTE consonants. The results show that linguistic transmission is sufficient for vocal iconicity to emerge, demonstrating the role non-arbitrary associations play in the evolution of language.
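As a rough illustration of the transmission-chain logic described in this abstract, the sketch below simulates a telephone-style chain in which each "participant" repeats a word under a weak drift toward front unrounded vowels. The bias model, phoneme inventory, seed word, and parameter values are all invented for illustration; they are not the study's materials or analysis.

```python
import random

# Toy bias model: on each repetition a vowel has a small chance of drifting
# toward the biased class (standing in for the meaning-driven biases in the
# study). Inventory, word, and parameter values are invented for illustration.
VOWELS = set("aeiou")
FRONT_UNROUNDED = set("ie")

def repeat_word(word, bias_strength=0.2, target=FRONT_UNROUNDED):
    """Simulate one participant hearing and repeating the word."""
    out = []
    for segment in word:
        if segment in VOWELS and segment not in target and random.random() < bias_strength:
            out.append(random.choice(sorted(target)))  # drift toward the biased class
        else:
            out.append(segment)
    return "".join(out)

def transmission_chain(seed_word, generations=15, **kwargs):
    """Pass a word down a telephone-style chain, recording each generation."""
    chain = [seed_word]
    for _ in range(generations):
        chain.append(repeat_word(chain[-1], **kwargs))
    return chain

if __name__ == "__main__":
    random.seed(1)
    for generation, word in enumerate(transmission_chain("bodugo")):
        print(generation, word)
```

Even with a weak per-generation bias, the vowels of the seed word tend to converge on the target class within a handful of generations, which is the qualitative pattern the study reports for its meaning conditions.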
Constituent order in silent gesture reflects the perspective of the producer
Fiona Kirton, S. Kirby, Kenny Smith, J. Culbertson, M. Schouwstra. Journal of Language Evolution, 2021. doi:10.1093/JOLE/LZAA010
Understanding the relationship between human cognition and linguistic structure is a central theme in language evolution research. Numerous studies have investigated this question using the silent gesture paradigm in which participants describe events using only gesture and no speech. Research using this paradigm has found that Agent–Patient–Action (APV) is the most commonly produced gesture order, regardless of the producer’s native language. However, studies have uncovered a range of factors that influence ordering preferences. One such factor is salience, which has been suggested as a key determiner of word order. Specifically, humans, who are typically agents, are more salient than inanimate objects, so tend to be mentioned first. In this study, we investigated the role of salience in more detail and asked whether manipulating the salience of a human agent would modulate the tendency to express humans before objects. We found, first, that APV was less common than expected based on previous literature. Secondly, salience influenced the relative ordering of the patient and action, but not the agent and patient. For events involving a non-salient agent, participants typically expressed the patient before the action and vice versa for salient agents. Thirdly, participants typically omitted non-salient agents from their descriptions. We present details of a novel computational solution that infers the orders participants would have produced had they expressed all three constituents on every trial. Our analysis showed that events involving salient agents tended to elicit AVP; those involving a non-salient agent were typically described with APV, modulated by a strong tendency to omit the agent. We argue that these findings provide evidence that the effect of salience is realized through its effect on the perspective from which a producer frames an event.
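The computational solution mentioned in this abstract has to reason about productions with omitted constituents. As a minimal sketch of the underlying combinatorics (not the authors' actual probabilistic model), the snippet below enumerates the full three-constituent orders that an incomplete production is consistent with; an inference model of the kind described would then place a probability distribution over this candidate set.

```python
from itertools import permutations

CONSTITUENTS = ("A", "P", "V")  # Agent, Patient, Action (verb)

def is_subsequence(sub, full):
    """True if `sub` appears in `full` in order (other constituents omitted)."""
    it = iter(full)
    return all(c in it for c in sub)

def compatible_full_orders(observed):
    """Full three-constituent orders an incomplete production is consistent with."""
    return [p for p in permutations(CONSTITUENTS) if is_subsequence(observed, p)]

# A production that omitted the agent is consistent with three full orders:
print(compatible_full_orders(("P", "V")))
# [('A', 'P', 'V'), ('P', 'A', 'V'), ('P', 'V', 'A')]
```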
Flexibility in wild infant chimpanzee vocal behavior
G. Dezecache, K. Zuberbühler, Marina Davila-Ross, Christoph D. Dahl. Journal of Language Evolution, 2020. doi:10.1093/jole/lzaa009
How did human language evolve from earlier forms of communication? One way to address this question is to compare prelinguistic human vocal behavior with nonhuman primate calls. An important finding has been that, prior to speech and from early on, human infant vocal behavior exhibits functional flexibility, or the capacity to produce sounds that are not tied to one specific function. This is reflected in human infants' use of single categories of protophones (precursors of speech sounds) in various affective circumstances, such that a given call type can occur in and express positive, neutral, or negative affective states, depending on the occasion. Nonhuman primate vocal behavior, in contrast, is seen as comparatively inflexible, with different call types tied to specific functions and sometimes to specific affective states (e.g. screams mostly occur in negative circumstances). As a first step toward addressing this claim, we examined the vocal behavior of six wild infant chimpanzees during their first year of life. We found that the most common vocal signal, grunts, occurred in a range of contexts that were deemed positive, neutral, and negative. Using automated feature extraction and supervised learning algorithms, we also found acoustic variants of grunts across the affective contexts, suggesting gradation within this vocal category. In contrast, the second most common call type of infant chimpanzees, the whimper, was produced in only one affective context, in line with standard models of nonhuman primate vocal behavior. Insofar as our affective categorization reflects infants' true affective states, our results suggest that the most common chimpanzee vocalization, the grunt, is not affectively bound. Affective decoupling is a prerequisite for chimpanzee grunts (and other vocal categories) to be deemed 'functionally flexible'. If grunts are later confirmed to be a functionally flexible vocal type, this would indicate that the evolution of this foundational vocal capability occurred before the split between the Homo and Pan lineages.
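A hedged sketch of the kind of supervised-learning check this abstract describes: if grunt acoustics vary with affective context, a classifier trained on per-call features should beat chance on held-out calls. The feature matrix below is random placeholder data, and the pipeline (a scikit-learn SVM over precomputed features) is an assumption for illustration, not the authors' actual feature-extraction or modelling choices.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one row of acoustic features per grunt (e.g. mean MFCCs),
# labelled with the affective context it was recorded in. Real features would
# be extracted from recordings; this random matrix only makes the sketch run.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 13))                                 # 120 grunts x 13 features
y = rng.choice(["negative", "neutral", "positive"], size=120)  # context labels

# If grunt acoustics carry affective information, held-out accuracy should
# beat chance (~1/3 here); with this random data it will hover around chance.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```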
How vocal temporal parameters develop: a comparative study between humans and songbirds, two distantly related vocal learners
M. Takahasi, K. Okanoya, R. Mazuka. Journal of Language Evolution, 2020. doi:10.1093/jole/lzaa008

Human infants acquire motor patterns for speech during the first several years of their lives. Sequential vocalizations such as human speech are complex behaviors, and the ability to learn new vocalizations is limited to only a few animal species. Vocalizations are generated through the coordination of three types of organs: vocal, respiratory, and articulatory. Moreover, sophisticated temporal respiratory control might be necessary for sequential vocalization, including human speech. However, it remains unknown how this coordination develops in human infants and whether the developmental process is shared with other vocal learners. To answer these questions, we analyzed temporal parameters of sequential vocalizations during the first year in human infants and compared these developmental changes to song development in the Bengalese finch, another vocal learner. In human infants, early cry was also analyzed as an innate sequential vocalization. Three temporal parameters of sequential vocalizations were measured: note duration (ND), inter-onset interval, and inter-note interval (INI). The results showed that both human infants and Bengalese finches had longer INIs than NDs in the early phase. Gradually, the INI and ND converged to a similar range over development. While ND increased until 6 months of age in infants, the INI decreased up to 60 days post-hatching in finches. In infant cry, ND and INI were within similar ranges, but the INI was more stable in length than the ND. In sequential vocalizations, temporal parameters developed early, with subsequent articulatory stabilization, in both vocal learners. However, this developmental change was accomplished in a species-specific manner. These findings could provide important insights into our understanding of the evolution of vocal learning.
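Assuming the standard definitions of the three measures (note duration as offset minus onset, inter-onset interval as the gap between consecutive onsets, inter-note interval as the silence between one note's offset and the next onset), they can be computed directly from per-note onset/offset times, as in the sketch below; the timings are a toy example.

```python
def temporal_parameters(onsets, offsets):
    """Per-note measures from onset/offset times (in seconds):
    ND  (note duration):        offset - onset of each note
    IOI (inter-onset interval): gap between consecutive onsets
    INI (inter-note interval):  silence from one offset to the next onset
    """
    nd = [off - on for on, off in zip(onsets, offsets)]
    ioi = [b - a for a, b in zip(onsets, onsets[1:])]
    ini = [on - off for off, on in zip(offsets, onsets[1:])]
    return nd, ioi, ini

# A toy four-note bout:
onsets = [0.00, 0.50, 1.10, 1.60]
offsets = [0.20, 0.80, 1.30, 1.90]
print([[round(x, 2) for x in seq] for seq in temporal_parameters(onsets, offsets)])
# [[0.2, 0.3, 0.2, 0.3], [0.5, 0.6, 0.5], [0.3, 0.3, 0.3]]
```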
When the tune shapes morphology: The origins of vocatives
M. Sóskuthy, T. Roettger. Journal of Language Evolution, 2020. doi:10.1093/jole/lzaa007

Many languages use pitch to express pragmatic meaning (henceforth 'tune'). This requires segmental carriers with rich harmonic structure and high periodic energy, making vowels the optimal carriers of the tune. Tunes can be phonetically impoverished when there is a shortage of vowels, endangering the recovery of their function. This biases sound systems towards the optimisation of tune transmission by processes such as the insertion of vowels. Vocative constructions, used to attract and maintain the addressee's attention, are often characterised by specific tunes. Many languages additionally mark vocatives morphologically. In this article, we argue that one potential pathway for the emergence of vocative morphemes is the morphological re-analysis of tune-driven phonetic variation that helps to carry pitch patterns. Looking at a corpus of 101 languages, we compare vocatives to structural case markers in terms of their phonological make-up. We find that vocatives are often characterised by additional prosodic modulation (vowel lengthening, stress shift, tone change) and contain substantially fewer consonants, supporting our hypothesis that the acoustic properties of tunes interact with segmental features and can shape the emergence of morphological markers. This fits with the view that the efficient transmission of information is a driving force in the evolution of languages, but also highlights the importance of defining 'information' broadly to include pragmatic, social, and affectual components alongside propositional meaning.
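A toy version of the consonant-content comparison described above, with invented marker strings and a deliberately simplified consonant inventory; the study itself works from transcriptions across 101 languages, but the shape of the measurement is the same.

```python
# Invented marker strings and a simplified consonant inventory, only to show
# the shape of the comparison; the study uses transcriptions of 101 languages.
CONSONANTS = set("pbtdkgmnszrlfvhjw")

def consonant_ratio(marker):
    segments = [s for s in marker if s.isalpha()]  # drop length marks etc.
    return sum(s in CONSONANTS for s in segments) / len(segments)

vocatives = ["o", "e", "a:", "aj"]          # hypothetical vocative markers
case_markers = ["ta", "im", "os", "num"]    # hypothetical structural case markers

for name, markers in [("vocative", vocatives), ("case", case_markers)]:
    mean = sum(map(consonant_ratio, markers)) / len(markers)
    print(f"{name}: mean consonant ratio = {mean:.2f}")
```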
Biting into evolution of language
M. A. C. (Riny) Huybregts. Journal of Language Evolution 5(1): 175-183, 2020. doi:10.1093/jole/lzaa003
Pantomime as the original human-specific communicative system
J. Zlatev, Przemysław Żywiczyński, Sławomir Wacewicz. Journal of Language Evolution, 2020. doi:10.1093/jole/lzaa006
We propose reframing one of the key questions in the field of language evolution as: what was the original human-specific communicative system? With the help of cognitive semiotics, we first clarify the difference between signals, which characterize animal communication, and signs, which do not replace but complement signals in human communication. We claim that the evolution of bodily mimesis allowed the use of signs, and the social-cognitive skills needed to support them, to emerge in hominin evolution. Neither signs nor signals operate single-handedly, but as part of semiotic systems. Communicative systems can be either monosemiotic or polysemiotic: the former consisting of a single semiotic system, the latter of several. Our proposal is that pantomime, as the original human-specific communicative system, should be characterized as polysemiotic: dominated by gesture but also including vocalization, facial expression, and possibly the rudiments of depiction. Given that pantomimic gestures must have been maximally similar to bodily actions, we characterize them as typically (1) dominated by iconicity, (2) of the primary kind, (3) involving the whole body, (4) performed from a first-person perspective, (5) concerning peripersonal space, and (6) using the Enacting mode of representation.
Dispersion, communication, and alignment: an experimental study of the emergence of structure in combinatorial phonology
Gareth Roberts, R. Clark. Journal of Language Evolution 5(1): 121-139, 2020. doi:10.1093/jole/lzaa004

Languages exhibit structure at a number of levels, including at the level of phonology, the system of meaningless combinatorial units from which words are constructed. Phonological systems typically exhibit greater dispersion than would be expected by chance. Several theoretical models have been proposed to account for this, and a common theme is that such organization emerges as a result of the competing forces acting on production and perception. Fundamentally, this implies a cultural evolutionary explanation, by which emergent organization is an adaptive response to the pressures of communicative interaction. This process is hard to investigate empirically using natural-language data. We therefore designed an experimental task in which pairs of participants play a communicative game using a novel medium in which varying the position of one's finger on a trackpad produced different colors. This task allowed us to manipulate the alignment of pressures acting on production and perception. Here we used it to investigate (a) whether above-chance levels of dispersion would emerge in the resulting systems, (b) whether dispersion would correlate with communicative success, and (c) how systems would differ if the pressures acting on perception were misaligned with pressures acting on production (and which would take precedence). We found that above-chance levels of dispersion emerged when pressures were aligned, but that the primary driver of communicative success was the alignment of production and perception pressures rather than dispersion itself. When they were misaligned, participants both found the task harder and (driven by perceptual demands) created systems with lower levels of dispersion.
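One common way to operationalize "more dispersed than expected by chance" (not necessarily the paper's exact statistic) is to compare a system's mean pairwise distance against a Monte Carlo baseline of uniformly random systems of the same size. A minimal sketch, treating the trackpad as a unit square:

```python
import random
from itertools import combinations
from math import dist

def mean_pairwise_distance(points):
    pairs = list(combinations(points, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def dispersion_above_chance(system, n_sim=2000, seed=0):
    """Share of random same-size systems (uniform points on the unit square,
    standing in for the trackpad space) at least as dispersed as the observed
    one; a small value means the observed system beats chance."""
    rng = random.Random(seed)
    observed = mean_pairwise_distance(system)
    n = len(system)
    hits = sum(
        mean_pairwise_distance([(rng.random(), rng.random()) for _ in range(n)]) >= observed
        for _ in range(n_sim)
    )
    return observed, hits / n_sim

# Four signal forms pushed toward the corners of the space:
system = [(0.05, 0.05), (0.95, 0.05), (0.05, 0.95), (0.9, 0.9)]
print(dispersion_above_chance(system))
```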
Rejoinder to Huijbregts's: Biting into Evolution of Language
Steven Moran, B. Bickel. Journal of Language Evolution 5(1): 184-187, 2020. doi:10.1093/jole/lzaa005
CHIELD: the causal hypotheses in evolutionary linguistics database
Seán G. Roberts, A. Killin, Angarika Deb, Catherine Sheard, Simon J. Greenhill, Kaius Sinnemäki, José Segovia-Martín, Jonas Nölle, Aleksandrs Berdicevskis, Archie Humphreys-Balkwill, H. Little, Christopher Opie, Guillaume Jacques, L. Bromham, Peeter Tinits, R. Ross, Sean Lee, Emily Gasser, Jasmine Calladine, Matthew Spike, S. Mann, O. Shcherbakova, R. Singer, Shuya Zhang, A. Benítez‐Burraco, Christian Kliesch, Ewan Thomas-Colquhoun, Hedvig Skirgård, M. Tamariz, S. Passmore, Thomas Pellard, Fiona M. Jordan. Journal of Language Evolution, 2020. doi:10.1093/JOLE/LZAA001
Language is one of the most complex of human traits. There are many hypotheses about how it originated, what factors shaped its diversity, and what ongoing processes drive how it changes. We present the Causal Hypotheses in Evolutionary Linguistics Database (CHIELD, https://chield.excd.org/), a tool for expressing, exploring, and evaluating hypotheses. It allows researchers to integrate multiple theories into a coherent narrative, helping to design future research. We present design goals, a formal specification, and an implementation for this database. Source code is freely available for other fields to take advantage of this tool. Some initial results are presented, including identifying conflicts in theories about gossip and ritual, comparing hypotheses relating population size and morphological complexity, and an author relation network.
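A database of causal hypotheses is naturally a labelled directed graph: variables as nodes, claims as edges carrying a relation type and a source. The sketch below shows that general idea with the population-size/morphological-complexity example mentioned in the abstract; the tuple layout, relation labels, and query function are illustrative and do not reproduce CHIELD's actual schema or API.

```python
# Causal claims as labelled edges of a directed graph. The tuple layout and
# relation labels are invented for illustration and are not CHIELD's schema.
hypotheses = [
    ("population size", "morphological complexity", "decreases", "Lupyan & Dale 2010"),
    ("adult second-language learning", "morphological complexity", "decreases", "Trudgill 2011"),
    ("population size", "adult second-language learning", "increases", "illustrative"),
]

def claims_about(variable):
    """All recorded causal claims pointing into a given variable."""
    return [(src, rel, ref) for src, tgt, rel, ref in hypotheses if tgt == variable]

for src, rel, ref in claims_about("morphological complexity"):
    print(f"{src} {rel} morphological complexity  [{ref}]")
```

Representing hypotheses this way is what makes the conflict-detection and comparison queries described in the abstract possible: two edges into the same variable with opposing relations are a candidate conflict.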