Author: Roni Katzir
Journal: Biolinguistics (JCR category: Language & Linguistics; impact factor 0.6)
DOI: 10.5964/bioling.13153
Published: 2023-12-15 (journal article)
Citations: 9
Why large language models are poor theories of human linguistic cognition: A reply to Piantadosi
In a recent manuscript entitled “Modern language models refute Chomsky’s approach to language”, Steven Piantadosi proposes that large language models such as GPT-3 can serve as serious theories of human linguistic cognition. In fact, he maintains that these models are significantly better linguistic theories than proposals emerging from within generative linguistics. The present note explains why this claim is wrong.