{"title":"基于学习的语音层级解释","authors":"Caleb Belth","doi":"10.1162/ling_a_00530","DOIUrl":null,"url":null,"abstract":"Morphophonological alternations often involve dependencies between adjacent segments. Despite the apparent distance between relevant segments in the alternations that arise in consonant and vowel harmony, these dependencies can usually be viewed as adjacent on a tier representation. However, the tier needed to render dependencies adjacent varies crosslinguistically, and the abstract nature of tier representations in comparison to flat, string-like representations has led phonologists to seek justification for their use in phonological theory. In this paper, I propose a learning-based account of tier-like representations. I argue that humans show a proclivity for tracking dependencies between adjacent items, and propose a simple learning algorithm that incorporates this proclivity by tracking only adjacent dependencies. The model changes representations in response to being unable to predict the surface form of alternating segments—a decision governed by the Tolerance Principle, which allows for learning despite the sparsity and exceptions inevitable in naturalistic data. Tier-like representations naturally emerge from this learning procedure, and, when trained on small amounts of natural language data, the model achieves high accuracy generalizing to held-out test words, while flexibly handling cross-linguistic complexities like neutral segments and blockers. The model also makes precise predictions about human generalization behavior, and these are consistently borne out in artificial language experiments.","PeriodicalId":48044,"journal":{"name":"Linguistic Inquiry","volume":null,"pages":null},"PeriodicalIF":1.6000,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Learning-Based Account of Phonological Tiers\",\"authors\":\"Caleb Belth\",\"doi\":\"10.1162/ling_a_00530\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Morphophonological alternations often involve dependencies between adjacent segments. Despite the apparent distance between relevant segments in the alternations that arise in consonant and vowel harmony, these dependencies can usually be viewed as adjacent on a tier representation. However, the tier needed to render dependencies adjacent varies crosslinguistically, and the abstract nature of tier representations in comparison to flat, string-like representations has led phonologists to seek justification for their use in phonological theory. In this paper, I propose a learning-based account of tier-like representations. I argue that humans show a proclivity for tracking dependencies between adjacent items, and propose a simple learning algorithm that incorporates this proclivity by tracking only adjacent dependencies. The model changes representations in response to being unable to predict the surface form of alternating segments—a decision governed by the Tolerance Principle, which allows for learning despite the sparsity and exceptions inevitable in naturalistic data. Tier-like representations naturally emerge from this learning procedure, and, when trained on small amounts of natural language data, the model achieves high accuracy generalizing to held-out test words, while flexibly handling cross-linguistic complexities like neutral segments and blockers. 
The model also makes precise predictions about human generalization behavior, and these are consistently borne out in artificial language experiments.\",\"PeriodicalId\":48044,\"journal\":{\"name\":\"Linguistic Inquiry\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2024-03-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Linguistic Inquiry\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1162/ling_a_00530\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"LANGUAGE & LINGUISTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Linguistic Inquiry","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1162/ling_a_00530","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Morphophonological alternations often involve dependencies between adjacent segments. Despite the apparent distance between relevant segments in the alternations that arise in consonant and vowel harmony, these dependencies can usually be viewed as adjacent on a tier representation. However, the tier needed to render dependencies adjacent varies crosslinguistically, and the abstract nature of tier representations in comparison to flat, string-like representations has led phonologists to seek justification for their use in phonological theory. In this paper, I propose a learning-based account of tier-like representations. I argue that humans show a proclivity for tracking dependencies between adjacent items, and propose a simple learning algorithm that incorporates this proclivity by tracking only adjacent dependencies. The model changes representations in response to being unable to predict the surface form of alternating segments—a decision governed by the Tolerance Principle, which allows for learning despite the sparsity and exceptions inevitable in naturalistic data. Tier-like representations naturally emerge from this learning procedure, and, when trained on small amounts of natural language data, the model achieves high accuracy generalizing to held-out test words, while flexibly handling cross-linguistic complexities like neutral segments and blockers. The model also makes precise predictions about human generalization behavior, and these are consistently borne out in artificial language experiments.
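
The abstract names three computational ingredients: tracking only adjacent dependencies, projecting a tier when the surface form of alternating segments cannot otherwise be predicted, and gating that decision with the Tolerance Principle (on Yang's formulation, a rule over N items tolerates at most N / ln N exceptions). The Python sketch below is a rough illustration of these pieces on invented toy data; the function names, segment classes, and harmony pattern are assumptions for exposition only, not the paper's implementation.

```python
import math

# A minimal sketch of the ingredients named in the abstract:
# (1) a Tolerance Principle productivity check, (2) tier projection,
# (3) predicting an alternating segment from its adjacent neighbor
# on the projected tier. Data and segment classes are invented toy
# examples, not the paper's actual model or training data.

def tolerance_threshold(n: int) -> float:
    """Yang's Tolerance Principle threshold: a rule over n items is
    productive if its exceptions number at most n / ln(n)."""
    return n / math.log(n) if n > 1 else 0.0

def is_productive(n_items: int, n_exceptions: int) -> bool:
    """Keep a generalization only if its exceptions are tolerable."""
    return n_exceptions <= tolerance_threshold(n_items)

def project_tier(segments: str, tier: set) -> list:
    """Delete off-tier segments so that long-distance dependencies
    (e.g. vowel-to-vowel) become adjacent."""
    return [s for s in segments if s in tier]

# Invented segment classes for a toy backness-harmony pattern.
VOWELS = set("aeiou")
BACK = set("aou")

def predict_suffix_vowel(stem: str) -> str:
    """Predict the harmonizing suffix vowel from the nearest preceding
    vowel on the vowel tier (adjacent on the tier, not in the string)."""
    tier = project_tier(stem, VOWELS)
    if not tier:
        return "e"  # arbitrary default for vowelless stems
    return "a" if tier[-1] in BACK else "e"

# Toy (stem, attested suffix vowel) pairs; "tupil" is a deliberate exception.
data = [("pudak", "a"), ("kitep", "e"), ("sumot", "a"),
        ("biler", "e"), ("komit", "e"), ("tupil", "a")]

exceptions = sum(predict_suffix_vowel(stem) != suffix for stem, suffix in data)
print(is_productive(len(data), exceptions))  # True: 1 exception <= 6/ln(6), roughly 3.35
```

Running the sketch reports the adjacency-based rule as productive, since the single disharmonic item falls under the tolerance threshold; with enough exceptions the check would fail, which is the point at which a learner of this kind would revise its representation.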
Journal introduction:
Linguistic Inquiry leads the field in research on current topics in linguistics. This key resource explores new theoretical developments based on the latest international scholarship, capturing the excitement of contemporary debate in full-scale articles as well as shorter contributions (Squibs and Discussion) and more extensive commentary (Remarks and Replies).