Acquiring recursive structures through distributional learning

IF 1.3 · CAS Q3 (Literature) · LANGUAGE & LINGUISTICS · Language Acquisition · Pub Date: 2023-03-22 · DOI: 10.1080/10489223.2023.2185522
Daoxin Li, Kathryn D. Schuler
{"title":"通过分布学习获得递归结构","authors":"Daoxin Li, Kathryn D. Schuler","doi":"10.1080/10489223.2023.2185522","DOIUrl":null,"url":null,"abstract":"ABSTRACT Languages differ regarding the depth, structure, and syntactic domains of recursive structures. Even within a single language, some structures allow infinite self-embedding while others are more restricted. For example, when expressing ownership relation, English allows infinite embedding of the prenominal genitive -s, whereas the postnominal genitive of is much more restricted. How do speakers learn which specific structures allow infinite embedding and which do not? The distributional learning proposal suggests that the recursion of a structure (e.g., X1’s-X2 ) is licensed if the X1 position and the X2 position are productively substitutable in non-recursive input. The present study tests this proposal with an artificial language learning experiment. We exposed adult participants to X1-ka-X2 strings. In the productive condition, almost all words attested in X1 position were also attested in X2 position; in the unproductive condition, only some were. We found that, as predicted, participants from the productive condition were more likely to accept unattested strings at both one- and two-embedding levels than participants from the unproductive condition. Our results suggest that speakers can use distributional information at one-embedding level to learn whether or not a structure is recursive.","PeriodicalId":46920,"journal":{"name":"Language Acquisition","volume":"30 1","pages":"323 - 336"},"PeriodicalIF":1.3000,"publicationDate":"2023-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Acquiring recursive structures through distributional learning\",\"authors\":\"Daoxin Li, Kathryn D. Schuler\",\"doi\":\"10.1080/10489223.2023.2185522\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT Languages differ regarding the depth, structure, and syntactic domains of recursive structures. Even within a single language, some structures allow infinite self-embedding while others are more restricted. For example, when expressing ownership relation, English allows infinite embedding of the prenominal genitive -s, whereas the postnominal genitive of is much more restricted. How do speakers learn which specific structures allow infinite embedding and which do not? The distributional learning proposal suggests that the recursion of a structure (e.g., X1’s-X2 ) is licensed if the X1 position and the X2 position are productively substitutable in non-recursive input. The present study tests this proposal with an artificial language learning experiment. We exposed adult participants to X1-ka-X2 strings. In the productive condition, almost all words attested in X1 position were also attested in X2 position; in the unproductive condition, only some were. We found that, as predicted, participants from the productive condition were more likely to accept unattested strings at both one- and two-embedding levels than participants from the unproductive condition. 
Our results suggest that speakers can use distributional information at one-embedding level to learn whether or not a structure is recursive.\",\"PeriodicalId\":46920,\"journal\":{\"name\":\"Language Acquisition\",\"volume\":\"30 1\",\"pages\":\"323 - 336\"},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2023-03-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Language Acquisition\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1080/10489223.2023.2185522\",\"RegionNum\":3,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"LANGUAGE & LINGUISTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Language Acquisition","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1080/10489223.2023.2185522","RegionNum":3,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Citations: 1

Abstract

Languages differ regarding the depth, structure, and syntactic domains of recursive structures. Even within a single language, some structures allow infinite self-embedding while others are more restricted. For example, when expressing ownership relation, English allows infinite embedding of the prenominal genitive -s, whereas the postnominal genitive of is much more restricted. How do speakers learn which specific structures allow infinite embedding and which do not? The distributional learning proposal suggests that the recursion of a structure (e.g., X1's-X2) is licensed if the X1 position and the X2 position are productively substitutable in non-recursive input. The present study tests this proposal with an artificial language learning experiment. We exposed adult participants to X1-ka-X2 strings. In the productive condition, almost all words attested in X1 position were also attested in X2 position; in the unproductive condition, only some were. We found that, as predicted, participants from the productive condition were more likely to accept unattested strings at both one- and two-embedding levels than participants from the unproductive condition. Our results suggest that speakers can use distributional information at one-embedding level to learn whether or not a structure is recursive.
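To make the distributional-learning criterion concrete, below is a minimal Python sketch (not from the paper) of the core idea: recursion of an X1-ka-X2 structure is treated as licensed when the words attested in the X1 slot and those attested in the X2 slot are productively substitutable in one-embedding input. The nonce words, the overlap measure, and the 0.9 threshold are illustrative assumptions, not the authors' materials or analysis.

```python
# Minimal sketch (illustrative only): a toy distributional-learning check.
# Given one-embedding strings of the form "X1 ka X2", it measures how fully the
# words seen in the X1 slot overlap with those seen in the X2 slot, and treats
# high overlap ("productive substitutability") as licensing recursion, i.e.
# acceptance of unattested deeper embeddings such as "A ka B ka C".

def parse(string):
    """Split an 'X1 ka X2' string into its noun slots, dropping the marker 'ka'."""
    return [w for w in string.split() if w != "ka"]

def substitutability(corpus):
    """Proportion of attested words that occur in both the X1 and X2 slots."""
    x1 = {parse(s)[0] for s in corpus}
    x2 = {parse(s)[1] for s in corpus}
    return len(x1 & x2) / len(x1 | x2)

def licenses_recursion(corpus, threshold=0.9):
    """Toy criterion: the structure counts as recursive if the two positions
    are (nearly) fully substitutable in the non-recursive input."""
    return substitutability(corpus) >= threshold

# Made-up nonce words; the real stimuli differ.
# Productive condition: almost every X1 word also occurs as X2.
productive = ["blit ka daf", "daf ka norg", "norg ka blit", "blit ka norg"]
# Unproductive condition: only some words occur in both positions.
unproductive = ["blit ka daf", "blit ka norg", "mip ka daf", "mip ka norg"]

print(licenses_recursion(productive))    # True  -> accept "daf ka norg ka blit"
print(licenses_recursion(unproductive))  # False -> reject deeper embeddings
```

On this toy criterion, the productive corpus yields full overlap between the two slots and so predicts acceptance of unattested one- and two-embedding strings, while the unproductive corpus does not; how learners actually weight partial overlap is exactly what the experiment probes.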
Source journal
CiteScore: 2.30
Self-citation rate: 8.30%
Articles per year: 20
About the journal: The research published in Language Acquisition: A Journal of Developmental Linguistics makes a clear contribution to linguistic theory by increasing our understanding of how language is acquired. The journal focuses on the acquisition of syntax, semantics, phonology, and morphology, and considers theoretical, experimental, and computational perspectives. Coverage includes solutions to the logical problem of language acquisition, as it arises for particular grammatical proposals; discussion of acquisition data relevant to current linguistic questions; and perspectives derived from theory-driven studies of second language acquisition, language-impaired speakers, and other domains of cognition.
Latest articles in this journal
Wh-word acquisition in Czech: Exploring the growing trees hypothesis
Why second-language speakers sometimes, but not always, derive scalar inferences like first-language speakers: Effects of task demands
Subject position in Greek and Spanish monolingual and bilingual production: Exploring the influence of verb type and definiteness
Mandarin non-interrogative wh-words distinguished between children with Developmental Language Disorder and Language-Impaired autistic children
Children's early negative auxiliaries are true auxiliaries