Data convergence in syntactic theory and the role of sentence pairs
Tom S Juzek, Jana Häussler
Zeitschrift für Sprachwissenschaft 39(1), 109–147. Journal article, published 2020-03-27.
DOI: 10.1515/zfs-2020-2008 (https://doi.org/10.1515/zfs-2020-2008)
Abstract Most acceptability judgments reported in the syntactic literature are obtained by linguists acting as their own informants. For well-represented languages like English, this method of data collection is best described as a process of community agreement, given that linguists typically discuss their judgments with colleagues. However, the process itself is comparatively opaque, and the reliability of its output has been questioned. Recent studies addressing this criticism have shown that judgments reported in the literature for English can be replicated in quantitative experiments to a near-perfect degree. However, those studies have focused on testing sentence pairs. We argue that replicating only contrasts is not sufficient, because theory building necessarily involves comparisons across pairs and across papers. Thus, we test items at large, i.e. independently of their counterparts. We created a corpus of grammaticality judgments on sequences of American English reported in articles published in Linguistic Inquiry and then collected experimental ratings for a random subset of them. Overall, expert ratings and experimental ratings converge to a good degree, but there are numerous instances in which they do not. Based on this, we argue that for theory-critical data, the process of community agreement should be accompanied by quantitative methods whenever possible.
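The convergence between expert ratings and experimental ratings that the abstract describes is typically quantified with a rank correlation between the two rating sources. The sketch below is purely illustrative and makes no claim about the paper's actual method or data: the item scores are invented, and the numeric coding of expert judgments (e.g. * = 1, ? = 2, acceptable = 3) is an assumption for demonstration.

```python
def ranks(values):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation computed on ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented data: expert judgments coded numerically (* = 1, ? = 2, ok = 3)
# and mean experimental ratings on a 1-7 scale for the same six items.
expert = [1, 3, 3, 2, 1, 3]
experimental = [1.8, 6.2, 5.9, 3.4, 2.5, 6.5]
rho = spearman(expert, experimental)
print(round(rho, 3))  # prints 0.926
```

A high coefficient indicates good convergence at the item level; individual items whose ranks diverge sharply between the two sources would be the non-converging cases the abstract mentions.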
About the journal:
The aim of the journal is to promote linguistic research by publishing high-quality contributions and thematic special issues from all fields and trends of modern linguistics. In addition to articles and reviews, the journal also features contributions to discussions on current controversies in the field, as well as overview articles outlining the state of the art of relevant research paradigms. Topics: -General Linguistics -Language Typology -Language acquisition, language change and synchronic variation -Empirical linguistics: experimental and corpus-based research -Contributions to theory-building