Learnability and semantic universals

Shane Steinert-Threlkeld, Jakub Szymanik
Semantics & Pragmatics (IF 1.4) · Published: 2019-11-16 · DOI: 10.3765/sp.12.4
Citations: 41

Abstract

One of the great successes of the application of generalized quantifiers to natural language has been the ability to formulate robust semantic universals. When such a universal is attested, the question arises as to the source of the universal. In this paper, we explore the hypothesis that many semantic universals arise because expressions satisfying the universal are easier to learn than those that do not. While the idea that learnability explains universals is not new, explicit accounts of learning that can make good on this hypothesis are few and far between. We propose a model of learning — back-propagation through a recurrent neural network — which can make good on this promise. In particular, we discuss the universals of monotonicity, quantity, and conservativity, and perform computational experiments in which such a network is trained to verify quantifiers. Our results are able to explain monotonicity and quantity quite well. We suggest that conservativity may have a different source than the other universals.