{"title":"一类大型学习问题的时空下界","authors":"R. Raz","doi":"10.1109/FOCS.2017.73","DOIUrl":null,"url":null,"abstract":"We prove a general time-space lower bound that applies for a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of quadratic size or an exponential number of samples. As a special case, this gives a new proof for the time-space lower bound for parity learning [R16]. Our result is stated in terms of the norm of the matrix that corresponds to the learning problem. Let X, A be two finite sets. Let M: A × X \\rightarrow \\{-1,1\\} be a matrix. The matrix M corresponds to the following learning problem: An unknown element x ∊ X was chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2)..., where for every i, a_i ∊ A is chosen uniformly at random and b_i = M(a_i,x). Let \\sigma be the largest singular value of M and note that always \\sigma ≤ |A|^{1/2} ⋅ |X|^{1/2}. We show that if \\sigma ≤ |A|^{1/2} ⋅ |X|^{1/2 - ≥ilon, then any learning algorithm for the corresponding learning problem requires either a memory of size quadratic in ≥ilon n or number of samples exponential in ≥ilon n, where n = \\log_2 |X|.As a special case, this gives a new proof for the memorysamples lower bound for parity learning [14].","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"211 4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"55","resultStr":"{\"title\":\"A Time-Space Lower Bound for a Large Class of Learning Problems\",\"authors\":\"R. Raz\",\"doi\":\"10.1109/FOCS.2017.73\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We prove a general time-space lower bound that applies for a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of quadratic size or an exponential number of samples. As a special case, this gives a new proof for the time-space lower bound for parity learning [R16]. Our result is stated in terms of the norm of the matrix that corresponds to the learning problem. Let X, A be two finite sets. Let M: A × X \\\\rightarrow \\\\{-1,1\\\\} be a matrix. The matrix M corresponds to the following learning problem: An unknown element x ∊ X was chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2)..., where for every i, a_i ∊ A is chosen uniformly at random and b_i = M(a_i,x). Let \\\\sigma be the largest singular value of M and note that always \\\\sigma ≤ |A|^{1/2} ⋅ |X|^{1/2}. 
We show that if \\\\sigma ≤ |A|^{1/2} ⋅ |X|^{1/2 - ≥ilon, then any learning algorithm for the corresponding learning problem requires either a memory of size quadratic in ≥ilon n or number of samples exponential in ≥ilon n, where n = \\\\log_2 |X|.As a special case, this gives a new proof for the memorysamples lower bound for parity learning [14].\",\"PeriodicalId\":311592,\"journal\":{\"name\":\"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)\",\"volume\":\"211 4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"55\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/FOCS.2017.73\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FOCS.2017.73","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Time-Space Lower Bound for a Large Class of Learning Problems
We prove a general time-space lower bound that applies to a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of quadratic size or an exponential number of samples. As a special case, this gives a new proof of the time-space lower bound for parity learning [R16]. Our result is stated in terms of the norm of the matrix that corresponds to the learning problem. Let X, A be two finite sets and let M: A × X \rightarrow \{-1,1\} be a matrix. The matrix M corresponds to the following learning problem: an unknown element x ∊ X is chosen uniformly at random, and a learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), …, where for every i, a_i ∊ A is chosen uniformly at random and b_i = M(a_i, x). Let \sigma be the largest singular value of M and note that always \sigma ≤ |A|^{1/2} ⋅ |X|^{1/2}. We show that if \sigma ≤ |A|^{1/2} ⋅ |X|^{1/2 - \epsilon}, then any learning algorithm for the corresponding learning problem requires either a memory of size quadratic in \epsilon n or a number of samples exponential in \epsilon n, where n = \log_2 |X|. As a special case, this gives a new proof of the memory-samples lower bound for parity learning [14].
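To make the singular-value condition concrete, here is a minimal numerical sketch (not from the paper; it assumes numpy and illustrates only the parity-learning special case). For parity learning, A = X = \{0,1\}^n and M(a, x) = (-1)^{⟨a,x⟩ mod 2}, a 2^n × 2^n Hadamard-type sign matrix; since M Mᵀ = 2^n I, every singular value of M equals 2^{n/2} = |A|^{1/2}, so the hypothesis \sigma ≤ |A|^{1/2} ⋅ |X|^{1/2 - \epsilon} holds for every \epsilon ≤ 1/2.

```python
# Sketch: the parity-learning matrix and its largest singular value.
# Assumes numpy; n is kept small so the 2^n x 2^n matrix is cheap to build.
import itertools
import numpy as np

n = 4
points = list(itertools.product([0, 1], repeat=n))  # {0,1}^n, so |A| = |X| = 2^n

# M(a, x) = (-1)^{<a, x> mod 2}: rows indexed by a, columns by x.
M = np.array([[(-1) ** (sum(ai * xi for ai, xi in zip(a, x)) % 2)
               for x in points]
              for a in points])

sigma = np.linalg.norm(M, 2)                 # ord=2 on a matrix = largest singular value
trivial = np.sqrt(M.shape[0] * M.shape[1])   # the always-valid bound |A|^{1/2} * |X|^{1/2}

print(f"sigma   = {sigma:.4f}")   # 2^{n/2} = 4.0 for n = 4
print(f"trivial = {trivial:.1f}")  # 2^n = 16.0

# One sample from the stream: a uniform a_i and its label b_i = M(a_i, x).
rng = np.random.default_rng(0)
x_idx = rng.integers(len(points))  # the hidden x, chosen uniformly from X
a_idx = rng.integers(len(points))  # a_i, chosen uniformly from A
b = M[a_idx, x_idx]                # b_i = M(a_i, x)
```

For contrast, the extreme case \sigma = |A|^{1/2} ⋅ |X|^{1/2} is attained by rank-one matrices such as the all-ones matrix, where every sample carries no information about x and the theorem gives nothing; the smaller \sigma is relative to that trivial bound, the stronger the memory-samples trade-off.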