{"title":"Adjective Order and Adnominal Modification in Naasioi","authors":"Jason Brown","doi":"10.1162/ling_a_00464","DOIUrl":"10.1162/ling_a_00464","url":null,"abstract":"","PeriodicalId":48044,"journal":{"name":"Linguistic Inquiry","volume":"55 2","pages":"423-435"},"PeriodicalIF":1.6,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44759805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Condition B and Other Conditions on Pronominal Licensing in Serbo-Croatian","authors":"Ivana Jovović","doi":"10.1162/ling_a_00475","DOIUrl":"10.1162/ling_a_00475","url":null,"abstract":"","PeriodicalId":48044,"journal":{"name":"Linguistic Inquiry","volume":"55 2","pages":"402-421"},"PeriodicalIF":1.6,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44922460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nasal Assimilation Counterfeeding and Allomorphy in Haitian: Nothing Is Still Something!","authors":"Mohamed Lahrouchi;Shanti Ulfsbjorninn","doi":"10.1162/ling_a_00469","DOIUrl":"https://doi.org/10.1162/ling_a_00469","url":null,"abstract":"","PeriodicalId":48044,"journal":{"name":"Linguistic Inquiry","volume":"55 2","pages":"255-286"},"PeriodicalIF":1.6,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140606014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large Language Models and the Argument from the Poverty of the Stimulus","authors":"Nur Lan, Emmanuel Chemla, Roni Katzir","doi":"10.1162/ling_a_00533","DOIUrl":"https://doi.org/10.1162/ling_a_00533","url":null,"abstract":"How much of our linguistic knowledge is innate? According to much of theoretical linguistics, a fair amount. One of the best-known (and most contested) kinds of evidence for a large innate endowment is the so-called argument from the poverty of the stimulus (APS). In a nutshell, an APS obtains when human learners systematically make inductive leaps that are not warranted by the linguistic evidence. A weakness of the APS has been that it is very hard to assess what is warranted by the linguistic evidence. Current Artificial Neural Networks appear to offer a handle on this challenge, and a growing literature over the past few years has started to explore the potential implications of such models to questions of innateness. We focus here on Wilcox et al. (2023), who use several different networks to examine the available evidence as it pertains to wh-movement, including island constraints. They conclude that the (presumably linguistically-neutral) networks acquire an adequate knowledge of wh-movement, thus undermining an APS in this domain. We examine the evidence further, looking in particular at parasitic gaps and across-the-board movement, and argue that current networks do not, in fact, succeed in acquiring or even adequately approximating wh-movement from training corpora that roughly correspond in size to the linguistic input that children receive. We also show that the performance of one of the models improves considerably when the training data are artificially enriched with instances of parasitic gaps and across-the-board movement. This finding suggests, albeit tentatively, that the failure of the networks when trained on natural, unenriched corpora is due to the insufficient richness of the linguistic input, thus supporting the APS.","PeriodicalId":48044,"journal":{"name":"Linguistic Inquiry","volume":"285 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140582142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Relevance of Unaccusativity to Possessive Datives","authors":"Ziv Plotnik, Aya Meltzer-Asscher, Tal Siloni","doi":"10.1162/ling_a_00532","DOIUrl":"https://doi.org/10.1162/ling_a_00532","url":null,"abstract":"The possessive dative construction has been widely adopted as an unaccusativity diagnostic (Borer and Grodzinsky 1986). Gafter (2014) casts doubt on the relevance of unaccusativity to the acceptability of the construction. We ran a series of acceptability judgment experiments to investigate the validity of the possessive dative construction as an unaccusativity diagnostic, controlling for possible confounds such as animacy, definiteness, plausibility, lexical choice, type of possession and context salience. The experiments reveal that possessive datives are significantly more acceptable with unaccusative verbs than with unergatives, including reflexive and emission verbs. We conclude that unaccusatives, but not unergatives, are grammatical in the construction, and defend a structural account of the data.","PeriodicalId":48044,"journal":{"name":"Linguistic Inquiry","volume":"10 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140602131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Learning-Based Account of Phonological Tiers","authors":"Caleb Belth","doi":"10.1162/ling_a_00530","DOIUrl":"https://doi.org/10.1162/ling_a_00530","url":null,"abstract":"Morphophonological alternations often involve dependencies between adjacent segments. Despite the apparent distance between relevant segments in the alternations that arise in consonant and vowel harmony, these dependencies can usually be viewed as adjacent on a tier representation. However, the tier needed to render dependencies adjacent varies crosslinguistically, and the abstract nature of tier representations in comparison to flat, string-like representations has led phonologists to seek justification for their use in phonological theory. In this paper, I propose a learning-based account of tier-like representations. I argue that humans show a proclivity for tracking dependencies between adjacent items, and propose a simple learning algorithm that incorporates this proclivity by tracking only adjacent dependencies. The model changes representations in response to being unable to predict the surface form of alternating segments—a decision governed by the Tolerance Principle, which allows for learning despite the sparsity and exceptions inevitable in naturalistic data. Tier-like representations naturally emerge from this learning procedure, and, when trained on small amounts of natural language data, the model achieves high accuracy generalizing to held-out test words, while flexibly handling cross-linguistic complexities like neutral segments and blockers. The model also makes precise predictions about human generalization behavior, and these are consistently borne out in artificial language experiments.","PeriodicalId":48044,"journal":{"name":"Linguistic Inquiry","volume":"13 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140071243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}