Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-and-Solve
Amir Abboud, A. Backurs, K. Bringmann, Marvin Künnemann
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), October 2017. doi:10.1109/FOCS.2017.26
{"title":"分析压缩数据的细粒度复杂性:对解压缩和求解的量化改进","authors":"Amir Abboud, A. Backurs, K. Bringmann, Marvin Künnemann","doi":"10.1109/FOCS.2017.26","DOIUrl":null,"url":null,"abstract":"Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size n of data that originally has size N, and we want to solve a problem with time complexity T(⋅). The naïve strategy of decompress-and-solve gives time T(N), whereas the gold standard is time T(n): to analyze the compression as efficiently as if the original data was small.We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (Lempel-Ziv-family, dictionary methods, and others) can be unified under the elegant notion of Grammar-Compressions. A vast literature, across many disciplines, established this as an influential notion for Algorithm design.We introduce a direly needed framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are:• The O(nN√log(N/n)) bound for LCS and the O(min(N log N, nM)) bound for Pattern Matching with Wildcards are optimal up to N^{o(1)} factors, under the Strong Exponential Time Hypothesis. (Here, M denotes the uncompressed length of the compressed pattern.)• Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the k-Clique conjecture.• We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"156 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"32","resultStr":"{\"title\":\"Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-and-Solve\",\"authors\":\"Amir Abboud, A. Backurs, K. Bringmann, Marvin Künnemann\",\"doi\":\"10.1109/FOCS.2017.26\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size n of data that originally has size N, and we want to solve a problem with time complexity T(⋅). The naïve strategy of decompress-and-solve gives time T(N), whereas the gold standard is time T(n): to analyze the compression as efficiently as if the original data was small.We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (Lempel-Ziv-family, dictionary methods, and others) can be unified under the elegant notion of Grammar-Compressions. 
A vast literature, across many disciplines, established this as an influential notion for Algorithm design.We introduce a direly needed framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are:• The O(nN√log(N/n)) bound for LCS and the O(min(N log N, nM)) bound for Pattern Matching with Wildcards are optimal up to N^{o(1)} factors, under the Strong Exponential Time Hypothesis. (Here, M denotes the uncompressed length of the compressed pattern.)• Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the k-Clique conjecture.• We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.\",\"PeriodicalId\":311592,\"journal\":{\"name\":\"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)\",\"volume\":\"156 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"32\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/FOCS.2017.26\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FOCS.2017.26","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 32
Abstract
Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size n of data that originally has size N, and we want to solve a problem with time complexity T(⋅). The naïve strategy of decompress-and-solve takes time T(N), whereas the gold standard is time T(n): to analyze the compression as efficiently as if the original data were small.

We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (the Lempel-Ziv family, dictionary methods, and others) can be unified under the elegant notion of Grammar-Compressions. A vast literature, across many disciplines, has established this as an influential notion for algorithm design.

We introduce a direly needed framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are:

• The O(nN√log(N/n)) bound for LCS and the O(min(N log N, nM)) bound for Pattern Matching with Wildcards are optimal up to N^{o(1)} factors, under the Strong Exponential Time Hypothesis. (Here, M denotes the uncompressed length of the compressed pattern.)
• Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the k-Clique conjecture.
• We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.
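To make the setting concrete, here is a minimal Python sketch (a hypothetical illustration, not code from the paper) of a grammar compression in the form of a straight-line program (SLP): each rule derives either a single character or the concatenation of two earlier rules. The compressed size n is the number of rules, while the decompressed size N can be exponentially larger, which is exactly why decompress-and-solve, paying T(N), can be so wasteful. The sketch also computes one quantity, the decompressed length N, directly on the compression in O(n) time, in the spirit of beating decompress-and-solve.

```python
# A toy straight-line program (SLP): every rule derives either a single
# character or the concatenation of two earlier rules. Hypothetical
# example, not from the paper.
rules = {
    "A": "a",
    "B": "b",
    "X0": ("A", "B"),    # derives "ab"
    "X1": ("X0", "X0"),  # derives "abab"
    "X2": ("X1", "X1"),  # derives "abababab"
}

def decompress(symbol):
    """Decompress-and-solve baseline: expand the SLP fully, cost Theta(N)."""
    rule = rules[symbol]
    if isinstance(rule, str):          # terminal rule: one character
        return rule
    left, right = rule                 # nonterminal rule: concatenation
    return decompress(left) + decompress(right)

def uncompressed_length(symbol, memo=None):
    """Compute N directly on the compression in O(n) time, no decompression."""
    if memo is None:
        memo = {}
    if symbol not in memo:
        rule = rules[symbol]
        if isinstance(rule, str):
            memo[symbol] = 1
        else:
            left, right = rule
            memo[symbol] = (uncompressed_length(left, memo)
                            + uncompressed_length(right, memo))
    return memo[symbol]

text = decompress("X2")
print(text)                                    # abababab
print(len(rules), uncompressed_length("X2"))   # n = 5, N = 8

# With k doubling rules X1..Xk, N = 2**k while n = k + 2, so a T(N)
# decompress-and-solve algorithm can be exponentially slower than T(n).
```

Computing N is one of the easy cases; the paper's results quantify, for harder tasks such as LCS, pattern matching with wildcards, context-free parsing, and RNA folding, how close to the T(n) ideal one can provably get.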