A Time-Space Lower Bound for a Large Class of Learning Problems

R. Raz
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), October 2017. DOI: 10.1109/FOCS.2017.73
Citations: 55

Abstract

We prove a general time-space lower bound that applies to a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of quadratic size or an exponential number of samples. As a special case, this gives a new proof of the time-space lower bound for parity learning [R16]. Our result is stated in terms of the norm of the matrix that corresponds to the learning problem. Let X, A be two finite sets. Let M: A × X \rightarrow \{-1,1\} be a matrix. The matrix M corresponds to the following learning problem: an unknown element x ∈ X was chosen uniformly at random. A learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), ..., where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x). Let \sigma be the largest singular value of M and note that always \sigma ≤ |A|^{1/2} ⋅ |X|^{1/2}. We show that if \sigma ≤ |A|^{1/2} ⋅ |X|^{1/2 - \epsilon}, then any learning algorithm for the corresponding learning problem requires either a memory of size quadratic in \epsilon n or a number of samples exponential in \epsilon n, where n = \log_2 |X|. As a special case, this gives a new proof of the memory-samples lower bound for parity learning [R16].
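To make the norm condition concrete, the following sketch (not from the paper; it only assumes numpy) builds the matrix for n-bit parity learning, where A = X = {0,1}^n and M(a, x) = (-1)^{⟨a,x⟩ mod 2}, and computes its largest singular value numerically. Since this M is a 2^n × 2^n Hadamard matrix, every singular value equals 2^{n/2} = |X|^{1/2}, so the hypothesis σ ≤ |A|^{1/2} ⋅ |X|^{1/2 − ε} holds with ε = 1/2, recovering the Ω(n²)-memory-or-exponential-samples bound for parity learning:

```python
import numpy as np

def learning_matrix(n):
    """Parity-learning matrix M(a, x) = (-1)^{<a,x> mod 2}
    for A = X = {0,1}^n, as a dense +/-1 matrix."""
    size = 2 ** n
    M = np.empty((size, size))
    for a in range(size):
        for x in range(size):
            # <a, x> mod 2 = parity of the bitwise AND of a and x
            M[a, x] = -1.0 if bin(a & x).count("1") % 2 else 1.0
    return M

n = 4
M = learning_matrix(n)
sigma = np.linalg.norm(M, 2)       # spectral norm = largest singular value
trivial = np.sqrt(M.shape[0] * M.shape[1])  # |A|^{1/2} * |X|^{1/2}
print(sigma, trivial)              # sigma = 2^{n/2} = 4, trivial bound = 16
```

Here σ = 4 while the trivial bound |A|^{1/2} ⋅ |X|^{1/2} = 16, illustrating the quadratic gap (ε = 1/2) that the theorem exploits; a learning problem whose matrix has σ close to the trivial bound (e.g. one with a heavily repeated column) falls outside the theorem's hypothesis.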