Data convergence in syntactic theory and the role of sentence pairs

IF 0.6 · CAS Q3 (Literature) · LANGUAGE & LINGUISTICS · Zeitschrift für Sprachwissenschaft · Pub Date: 2020-03-27 · DOI: 10.1515/zfs-2020-2008
Tom S Juzek, Jana Häussler
Zeitschrift für Sprachwissenschaft, Vol. 39(1), pp. 109–147. Journal Article.
Citations: 3

Data convergence in syntactic theory and the role of sentence pairs
Abstract
Most acceptability judgments reported in the syntactic literature are obtained by linguists acting as their own informants. For well-represented languages like English, this method of data collection is best described as a process of community agreement, given that linguists typically discuss their judgments with colleagues. However, the process itself is comparatively opaque, and the reliability of its output has been questioned. Recent studies looking into this criticism have shown that judgments reported in the literature for English can be replicated in quantitative experiments to a near-perfect degree. However, the focus of those studies has been on testing sentence pairs. We argue that replicating only contrasts is not sufficient, because theory building necessarily includes comparisons across pairs and across papers. Thus, we test items at large, i.e. independently of their counterparts. We created a corpus of grammaticality judgments on sequences of American English reported in articles published in Linguistic Inquiry and then collected experimental ratings for a random subset of them. Overall, expert ratings and experimental ratings converge to a good degree, but there are numerous instances in which they do not. Based on this, we argue that for theory-critical data, the process of community agreement should be accompanied by quantitative methods whenever possible.
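The convergence the abstract describes can be quantified in a straightforward way. The sketch below is a hypothetical illustration (it does not use the study's actual data or method): it compares ordinal expert judgments against mean experimental ratings with Spearman's rank correlation, implemented with the standard library only. The rating values are invented for demonstration.

```python
def ranks(values):
    """Assign 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: expert judgments coded ordinally (ok=3, ?=2, *=1)
# vs. mean experimental ratings on a 1-7 scale, for five items.
expert = [3, 3, 2, 1, 1]
experimental = [6.1, 5.8, 4.0, 2.2, 3.5]
rho = spearman(expert, experimental)
```

A rho near 1 would indicate the good overall convergence the authors report, while item-level disagreements (like the fourth and fifth items above, whose experimental ordering reverses the expert one) are the non-converging cases the abstract highlights.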
Source journal
CiteScore: 1.10
Self-citation rate: 0.00%
Articles per year: 19
Review time: 20 weeks
Journal description: The aim of the journal is to promote linguistic research by publishing high-quality contributions and thematic special issues from all fields and trends of modern linguistics. In addition to articles and reviews, the journal also features contributions to discussions on current controversies in the field as well as overview articles outlining the state of the art of relevant research paradigms. Topics:
- General Linguistics
- Language Typology
- Language acquisition, language change and synchronic variation
- Empirical linguistics: experimental and corpus-based research
- Contributions to theory-building
Latest articles in this journal:
- Frontmatter
- X-Wörter im Deutschen: Ein Wortbildungsmuster zur diskursiven Vermeidung von Begriffen
- An experimental investigation of the interaction of narrators' and protagonists' perspectival prominence in narrative texts
- What cues do children use to infer the meaning of unknown words while reading? Empirical data from German-speaking third graders
- In the periphery of an indefinite pronoun. Forms and functions of conceptual agreement with jemand