Learning monotone decision trees in polynomial time
Ryan O'Donnell, R. Servedio
DOI: 10.1137/060669309
21st Annual IEEE Conference on Computational Complexity (CCC'06), published 2006-07-16
Citations: 109
Abstract
We give an algorithm that learns any monotone Boolean function f: {-1, 1}^n → {-1, 1} to any constant accuracy, under the uniform distribution, in time polynomial in n and in the decision tree size of f. This is the first algorithm that can learn arbitrary monotone Boolean functions to high accuracy, using random examples only, in time polynomial in a reasonable measure of the complexity of f. A key ingredient of the result is a new bound showing that the average sensitivity of any monotone function computed by a decision tree of size s must be at most √(log s). This bound has already proved to be of independent utility in the study of decision tree complexity (Schramm et al., 2005). We generalize the basic inequality and learning result described above in various ways: specifically, to partition size (a stronger complexity measure than decision tree size), to p-biased measures over the Boolean cube (rather than just the uniform distribution), and to real-valued (rather than just Boolean-valued) functions.
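The central quantity in the bound above, average sensitivity (also called total influence), can be made concrete with a small brute-force sketch. This is purely illustrative and is not the paper's learning algorithm: it simply evaluates, for x drawn uniformly from {-1, 1}^n, the expected number of coordinates whose flip changes f(x). The example function `maj3` (3-bit majority, a monotone function) is an assumption chosen here for demonstration.

```python
from itertools import product

def average_sensitivity(f, n):
    """Average sensitivity (total influence) of f: {-1,1}^n -> {-1,1}.

    Computed exactly by enumerating all 2^n inputs: for each input x,
    count the coordinates i such that flipping x_i changes f(x), then
    average that count over the uniform distribution on {-1,1}^n.
    """
    total = 0
    for x in product([-1, 1], repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] = -y[i]          # flip coordinate i
            if f(tuple(y)) != fx:
                total += 1        # coordinate i is pivotal at x
    return total / 2 ** n

# Example: 3-bit majority, a monotone Boolean function.
def maj3(x):
    return 1 if x[0] + x[1] + x[2] > 0 else -1

print(average_sensitivity(maj3, 3))  # 1.5
```

For majority on 3 bits, each coordinate is pivotal exactly when the other two bits disagree (probability 1/2), giving average sensitivity 3 · 1/2 = 1.5. The paper's inequality says this quantity is at most √(log s) for any monotone function computable by a size-s decision tree.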