{"title":"通过分布学习获得递归结构","authors":"Daoxin Li, Kathryn D. Schuler","doi":"10.1080/10489223.2023.2185522","DOIUrl":null,"url":null,"abstract":"ABSTRACT Languages differ regarding the depth, structure, and syntactic domains of recursive structures. Even within a single language, some structures allow infinite self-embedding while others are more restricted. For example, when expressing ownership relation, English allows infinite embedding of the prenominal genitive -s, whereas the postnominal genitive of is much more restricted. How do speakers learn which specific structures allow infinite embedding and which do not? The distributional learning proposal suggests that the recursion of a structure (e.g., X1’s-X2 ) is licensed if the X1 position and the X2 position are productively substitutable in non-recursive input. The present study tests this proposal with an artificial language learning experiment. We exposed adult participants to X1-ka-X2 strings. In the productive condition, almost all words attested in X1 position were also attested in X2 position; in the unproductive condition, only some were. We found that, as predicted, participants from the productive condition were more likely to accept unattested strings at both one- and two-embedding levels than participants from the unproductive condition. Our results suggest that speakers can use distributional information at one-embedding level to learn whether or not a structure is recursive.","PeriodicalId":46920,"journal":{"name":"Language Acquisition","volume":"30 1","pages":"323 - 336"},"PeriodicalIF":1.3000,"publicationDate":"2023-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Acquiring recursive structures through distributional learning\",\"authors\":\"Daoxin Li, Kathryn D. 
Schuler\",\"doi\":\"10.1080/10489223.2023.2185522\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT Languages differ regarding the depth, structure, and syntactic domains of recursive structures. Even within a single language, some structures allow infinite self-embedding while others are more restricted. For example, when expressing ownership relation, English allows infinite embedding of the prenominal genitive -s, whereas the postnominal genitive of is much more restricted. How do speakers learn which specific structures allow infinite embedding and which do not? The distributional learning proposal suggests that the recursion of a structure (e.g., X1’s-X2 ) is licensed if the X1 position and the X2 position are productively substitutable in non-recursive input. The present study tests this proposal with an artificial language learning experiment. We exposed adult participants to X1-ka-X2 strings. In the productive condition, almost all words attested in X1 position were also attested in X2 position; in the unproductive condition, only some were. We found that, as predicted, participants from the productive condition were more likely to accept unattested strings at both one- and two-embedding levels than participants from the unproductive condition. 
Our results suggest that speakers can use distributional information at one-embedding level to learn whether or not a structure is recursive.\",\"PeriodicalId\":46920,\"journal\":{\"name\":\"Language Acquisition\",\"volume\":\"30 1\",\"pages\":\"323 - 336\"},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2023-03-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Language Acquisition\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1080/10489223.2023.2185522\",\"RegionNum\":3,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"LANGUAGE & LINGUISTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Language Acquisition","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1080/10489223.2023.2185522","RegionNum":3,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Acquiring recursive structures through distributional learning
ABSTRACT Languages differ regarding the depth, structure, and syntactic domains of recursive structures. Even within a single language, some structures allow infinite self-embedding while others are more restricted. For example, when expressing an ownership relation, English allows infinite embedding of the prenominal genitive -s, whereas the postnominal genitive of is much more restricted. How do speakers learn which specific structures allow infinite embedding and which do not? The distributional learning proposal suggests that the recursion of a structure (e.g., X1's-X2) is licensed if the X1 position and the X2 position are productively substitutable in non-recursive input. The present study tests this proposal with an artificial language learning experiment. We exposed adult participants to X1-ka-X2 strings. In the productive condition, almost all words attested in X1 position were also attested in X2 position; in the unproductive condition, only some were. We found that, as predicted, participants from the productive condition were more likely to accept unattested strings at both one- and two-embedding levels than participants from the unproductive condition. Our results suggest that speakers can use distributional information at the one-embedding level to learn whether or not a structure is recursive.
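The core of the distributional learning proposal can be illustrated with a minimal sketch. This is not the authors' materials or analysis: the word lists, the overlap measure, and the licensing threshold below are all illustrative assumptions. The idea it captures is the one stated in the abstract: if words attested in the X1 slot of non-recursive X1-ka-X2 strings are also productively attested in the X2 slot, a learner may generalize that the structure can self-embed.

```python
def substitutability(corpus):
    """Fraction of slot-filler words shared between the X1 and X2 positions
    of one-embedding 'X1 ka X2' strings (Jaccard overlap)."""
    x1 = {s.split(" ka ")[0] for s in corpus}
    x2 = {s.split(" ka ")[1] for s in corpus}
    return len(x1 & x2) / len(x1 | x2)

def licenses_recursion(corpus, threshold=0.8):
    # The threshold is a hypothetical parameter, not a value from the paper:
    # high slot overlap -> positions substitutable -> recursion licensed.
    return substitutability(corpus) >= threshold

# Hypothetical mini-lexicons standing in for the two exposure conditions.
productive = ["sun ka moon", "moon ka star", "star ka sun", "sun ka star"]
unproductive = ["sun ka moon", "sun ka star", "tree ka moon", "tree ka star"]

print(substitutability(productive))    # every X1 word also occurs as X2
print(licenses_recursion(unproductive))  # X1 and X2 sets are disjoint
```

Under this toy measure, the productive corpus yields full overlap (so unattested one- and two-embedding strings like "star ka moon ka sun" would be accepted), while the unproductive corpus yields none.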
Journal introduction:
The research published in Language Acquisition: A Journal of Developmental Linguistics makes a clear contribution to linguistic theory by increasing our understanding of how language is acquired. The journal focuses on the acquisition of syntax, semantics, phonology, and morphology, and considers theoretical, experimental, and computational perspectives. Coverage includes solutions to the logical problem of language acquisition, as it arises for particular grammatical proposals; discussion of acquisition data relevant to current linguistic questions; and perspectives derived from theory-driven studies of second language acquisition, language-impaired speakers, and other domains of cognition.