Is Regularization Uniform across Linguistic Levels? Comparing Learning and Production of Unconditioned Probabilistic Variation in Morphology and Word Order
Carmen Saldana, Kenny Smith, Simon Kirby, Jennifer Culbertson
{"title":"Is Regularization Uniform across Linguistic Levels? Comparing Learning and Production of Unconditioned Probabilistic Variation in Morphology and Word Order","authors":"Carmen Saldana, Kenny Smith, S. Kirby, J. Culbertson","doi":"10.1080/15475441.2021.1876697","DOIUrl":null,"url":null,"abstract":"ABSTRACT Languages exhibit variation at all linguistic levels, from phonology, to the lexicon, to syntax. Importantly, that variation tends to be (at least partially) conditioned on some aspect of the social or linguistic context. When variation is unconditioned, language learners regularize it – removing some or all variants, or conditioning variant use on context. Previous studies using artificial language learning experiments have documented regularizing behavior in the learning of lexical, morphological, and syntactic variation. These studies implicitly assume that regularization reflects uniform mechanisms and processes across linguistic levels. However, studies on natural language learning and pidgin/creole formation suggest that morphological and syntactic variation may be treated differently. In particular, there is evidence that morphological variation may be more susceptible to regularization. Here we provide the first systematic comparison of the strength of regularization across these two linguistic levels. In line with previous studies, we find that the presence of a favored variant can induce different degrees of regularization. However, when input languages are carefully matched – with comparable initial variability, and no variant-specific biases – regularization can be comparable across morphology and word order. This is the case regardless of whether the task is explicitly communicative. Overall, our findings suggest an overarching regularizing mechanism at work, with apparent differences among levels likely due to differences in inherent complexity or variant-specific biases. Differences between production and encoding in our tasks further suggest this overarching mechanism is driven by production.","PeriodicalId":46642,"journal":{"name":"Language Learning and Development","volume":"298 1","pages":"158 - 188"},"PeriodicalIF":1.5000,"publicationDate":"2021-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Language Learning and Development","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1080/15475441.2021.1876697","RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
引用次数: 2
Abstract
Languages exhibit variation at all linguistic levels, from phonology to the lexicon to syntax. Importantly, that variation tends to be (at least partially) conditioned on some aspect of the social or linguistic context. When variation is unconditioned, language learners regularize it – removing some or all variants, or conditioning variant use on context. Previous studies using artificial language learning experiments have documented regularizing behavior in the learning of lexical, morphological, and syntactic variation. These studies implicitly assume that regularization reflects uniform mechanisms and processes across linguistic levels. However, studies on natural language learning and pidgin/creole formation suggest that morphological and syntactic variation may be treated differently. In particular, there is evidence that morphological variation may be more susceptible to regularization. Here we provide the first systematic comparison of the strength of regularization across these two linguistic levels. In line with previous studies, we find that the presence of a favored variant can induce different degrees of regularization. However, when input languages are carefully matched – with comparable initial variability, and no variant-specific biases – regularization can be comparable across morphology and word order. This is the case regardless of whether the task is explicitly communicative. Overall, our findings suggest an overarching regularizing mechanism at work, with apparent differences among levels likely due to differences in inherent complexity or variant-specific biases. Differences between production and encoding in our tasks further suggest this overarching mechanism is driven by production.
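Regularization in such experiments is typically quantified as a drop in the variability of variant use between a learner's input and their productions. The sketch below illustrates one common operationalization in this literature, Shannon entropy over variant frequencies; the variant names and counts here are hypothetical placeholders, not the paper's actual materials or analysis.

```python
import math
from collections import Counter

def entropy_bits(productions):
    """Shannon entropy (in bits) of the variant distribution in a list of
    productions. 0 bits = fully regular (a single variant used throughout);
    higher values = more unconditioned variability."""
    counts = Counter(productions)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical example: a morphological marker with 60/40 unconditioned
# variation in the input, and a learner who boosts the majority variant.
input_lang = ["suffix-a"] * 6 + ["suffix-b"] * 4
learner_out = ["suffix-a"] * 9 + ["suffix-b"] * 1

print(f"input entropy:   {entropy_bits(input_lang):.2f} bits")   # ~0.97
print(f"learner entropy: {entropy_bits(learner_out):.2f} bits")  # ~0.47
# The drop in entropy from input to output is one way to measure
# the strength of regularization, comparable across linguistic levels.
```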