Distance-Sensitive Hashing
Martin Aumüller, Tobias Christiani, R. Pagh, Francesco Silvestri
DOI: https://doi.org/10.1145/3196959.3196976
Locality-sensitive hashing (LSH) is an important tool for managing high-dimensional noisy or uncertain data, for example in connection with data cleaning (similarity join) and noise-robust search (similarity search). However, for a number of problems the LSH framework is not known to yield good solutions, and instead ad hoc solutions have been designed for particular similarity and distance measures. For example, this is true for output-sensitive similarity search/join, and for indexes supporting annulus queries that aim to report a point close to a certain given distance from the query point. In this paper we initiate the study of distance-sensitive hashing (DSH), a generalization of LSH that seeks a family of hash functions such that the probability of two points having the same hash value is a given function of the distance between them. More precisely, given a distance space (X, dist) and a "collision probability function" (CPF) f: R → [0,1], we seek a distribution over pairs of functions (h, g) such that for every pair of points x, y ∈ X the collision probability is Pr[h(x) = g(y)] = f(dist(x, y)). Locality-sensitive hashing is the study of how fast a CPF can decrease as the distance grows. For many spaces, f can be made exponentially decreasing even if we restrict attention to the symmetric case where g = h. We show that the asymmetry achieved by having a pair of functions makes it possible to achieve CPFs that are, for example, increasing or unimodal, and show how this leads to principled solutions to problems not addressed by the LSH framework. This includes a novel application to privacy-preserving distance estimation. We believe that the DSH framework will find further applications in high-dimensional data management. To put the running time bounds of the proposed constructions into perspective, we show lower bounds for the performance of DSH constructions with increasing and decreasing CPFs under angular distance. Essentially, this shows that our constructions are tight up to lower-order terms. In particular, we extend existing LSH lower bounds, showing that they also hold in the asymmetric setting.
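
To make the notion of a CPF concrete, here is a minimal sketch of the classical symmetric case g = h mentioned in the abstract: random-hyperplane ("SimHash") hashing on the unit sphere, whose collision probability for two points at angle θ is known to be f(θ) = 1 − θ/π, a decreasing CPF. This illustrates the setting only; it is not one of the paper's DSH constructions, and all names in the code are our own.

```python
# Sketch: symmetric case g = h via random-hyperplane hashing; the CPF for
# angular distance theta is f(theta) = 1 - theta/pi. We estimate
# Pr[h(x) = h(y)] empirically over many independent draws of h.
import numpy as np

rng = np.random.default_rng(0)
d, trials = 64, 100_000

# Two unit vectors at angle theta = pi/3.
theta = np.pi / 3
x = np.zeros(d); x[0] = 1.0
y = np.zeros(d); y[0], y[1] = np.cos(theta), np.sin(theta)

# Each row a of A defines one hash function h(v) = sign(<a, v>).
A = rng.standard_normal((trials, d))
empirical = np.mean((A @ x >= 0) == (A @ y >= 0))
print(f"empirical {empirical:.4f} vs f(theta) = {1 - theta / np.pi:.4f}")
```

An increasing CPF is exactly what this symmetric construction cannot provide (for g = h the collision probability at distance zero is always 1), which is where the asymmetric pair (h, g) comes in.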
{"title":"Distance-Sensitive Hashing","authors":"Martin Aumüller, Tobias Christiani, R. Pagh, Francesco Silvestri","doi":"10.1145/3196959.3196976","DOIUrl":"https://doi.org/10.1145/3196959.3196976","url":null,"abstract":"Locality-sensitive hashing (LSH) is an important tool for managing high-dimensional noisy or uncertain data, for example in connection with data cleaning (similarity join) and noise-robust search (similarity search). However, for a number of problems the LSH framework is not known to yield good solutions, and instead ad hoc solutions have been designed for particular similarity and distance measures. For example, this is true for output-sensitive similarity search/join, and for indexes supporting annulus queries that aim to report a point close to a certain given distance from the query point. In this paper we initiate the study of distance-sensitive hashing (DSH), a generalization of LSH that seeks a family of hash functions such that the probability of two points having the same hash value is a given function of the distance between them. More precisely, given a distance space (X, dist ) and a \"collision probability function\" (CPF) f: R -> [0,1] we seek a distribution over pairs of functions (h,g) such that for every pair of points x, y ın X the collision probability is ¶r[h(x)=g(y)] = f(dist(x,y)). Locality-sensitive hashing is the study of how fast a CPF can decrease as the distance grows. For many spaces, f can be made exponentially decreasing even if we restrict attention to the symmetric case where g=h. We show that the asymmetry achieved by having a pair of functions makes it possible to achieve CPFs that are, for example, increasing or unimodal, and show how this leads to principled solutions to problems not addressed by the LSH framework. This includes a novel application to privacy-preserving distance estimation. We believe that the DSH framework will find further applications in high-dimensional data management. To put the running time bounds of the proposed constructions into perspective, we show lower bounds for the performance of DSH constructions with increasing and decreasing CPFs under angular distance. Essentially, this shows that our constructions are tight up to lower order terms. In particular, we extend existing LSH lower bounds, showing that they also hold in the asymmetric setting.","PeriodicalId":344370,"journal":{"name":"Proceedings of the 37th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124402145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

In-Database Learning with Sparse Tensors
Mahmoud Abo Khamis, H. Ngo, X. Nguyen, Dan Olteanu, Maximilian Schleich
DOI: https://doi.org/10.1145/3196959.3196960
In-database analytics is of great practical importance because it avoids the costly loop data scientists otherwise face daily: select features, export the data, convert the data format, train models using an external tool, and re-import the parameters. It is also fertile ground for theoretically fundamental and challenging problems at the intersection of relational and statistical data models. This paper introduces a unified framework for training and evaluating a class of statistical learning models inside a relational database. This class includes ridge linear regression, polynomial regression, factorization machines, and principal component analysis. We show that, by combining key tools from relational database theory (schema information, query structure, and recent advances in query evaluation algorithms) with tools from linear algebra (various tensor and matrix operations), one can formulate in-database learning problems and design efficient algorithms to solve them. The algorithms and models proposed in the paper have already been implemented and deployed in retail-planning and forecasting applications, with significant performance benefits over out-of-database solutions that require the costly data-export loop.
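
As a toy illustration of the in-database idea (a simplified single-table sketch, not the paper's factorized join algorithms), ridge regression needs only the sufficient statistics X^T X and X^T y, and those are plain SUM aggregates the database can compute itself, so only O(d^2) numbers ever leave the engine. Table and column names below are our own.

```python
# Sketch: train ridge regression "inside" the database by pushing the
# sufficient statistics X^T X and X^T y down as SQL SUM aggregates,
# then solving the small d x d system (X^T X + lambda*I) beta = X^T y.
import sqlite3
import numpy as np

rng = np.random.default_rng(0)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x1 REAL, x2 REAL, y REAL)")
# Synthetic data with true coefficients (2, -1).
rows = [(float(a), float(b), float(2 * a - b + rng.normal(0, 0.1)))
        for a, b in rng.normal(size=(1000, 2))]
con.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)

cols = ["x1", "x2"]
# One aggregate per entry of X^T X and X^T y (a real system would batch these).
XtX = np.array([[con.execute(f"SELECT SUM({a}*{b}) FROM t").fetchone()[0]
                 for b in cols] for a in cols])
Xty = np.array([con.execute(f"SELECT SUM({c}*y) FROM t").fetchone()[0]
                for c in cols])

lam = 0.1  # ridge penalty
beta = np.linalg.solve(XtX + lam * np.eye(len(cols)), Xty)
print(beta)  # approximately [2, -1]
```

The point of the paper's framework is that when the training data is the result of a join, such aggregates can be computed over the join's factorized representation without ever materializing it.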
{"title":"In-Database Learning with Sparse Tensors","authors":"Mahmoud Abo Khamis, H. Ngo, X. Nguyen, Dan Olteanu, Maximilian Schleich","doi":"10.1145/3196959.3196960","DOIUrl":"https://doi.org/10.1145/3196959.3196960","url":null,"abstract":"In-database analytics is of great practical importance as it avoids the costly repeated loop data scientists have to deal with on a daily basis: select features, export the data, convert data format, train models using an external tool, reimport the parameters. It is also a fertile ground of theoretically fundamental and challenging problems at the intersection of relational and statistical data models. This paper introduces a unified framework for training and evaluating a class of statistical learning models inside a relational database. This class includes ridge linear regression, polynomial regression, factorization machines, and principal component analysis. We show that, by synergizing key tools from relational database theory such as schema information, query structure, recent advances in query evaluation algorithms, and from linear algebra such as various tensor and matrix operations, one can formulate in-database learning problems and design efficient algorithms to solve them. The algorithms and models proposed in the paper have already been implemented and deployed in retail-planning and forecasting applications, with significant performance benefits over out-of-database solutions that require the costly data-export loop.","PeriodicalId":344370,"journal":{"name":"Proceedings of the 37th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122979250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

General and Fractional Hypertree Decompositions: Hard and Easy Cases
Wolfgang Fischl, G. Gottlob, R. Pichler
DOI: https://doi.org/10.1145/3196959.3196962
Hypertree decompositions, as well as the more powerful generalized hypertree decompositions (GHDs) and the still more general fractional hypertree decompositions (FHDs), are hypergraph decomposition methods successfully used for answering conjunctive queries and for solving constraint satisfaction problems. Every hypergraph H has a width relative to each of these methods: its hypertree width hw(H), its generalized hypertree width ghw(H), and its fractional hypertree width fhw(H), respectively. It is known that hw(H) ≤ k can be checked in polynomial time for fixed k, while checking ghw(H) ≤ k is NP-complete for k ≥ 3. The complexity of checking fhw(H) ≤ k for fixed k has been open for over a decade. We settle this open problem by showing that checking fhw(H) ≤ k is NP-complete, even for k = 2. The same construction also allows us to prove NP-completeness of checking ghw(H) ≤ k for k = 2. After proving these results, we identify meaningful restrictions for which checking for bounded ghw or fhw becomes tractable.
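
For intuition about the quantity behind fhw (an illustration only, not the hardness construction): fhw(H) is the minimum over tree decompositions of the maximum fractional edge cover number ρ* of a bag, and ρ* of a single bag is just a small linear program. A sketch using scipy on the triangle hypergraph, whose bag {a, b, c} has ρ* = 3/2, which also shows fhw can be non-integral:

```python
# Sketch: the fractional edge cover number rho* of one bag as a linear
# program: minimize total edge weight subject to every vertex of the bag
# being covered by total weight >= 1.
from scipy.optimize import linprog

edges = [{"a", "b"}, {"b", "c"}, {"a", "c"}]  # triangle hypergraph
bag = ["a", "b", "c"]

c = [1.0] * len(edges)  # minimize sum of edge weights
A_ub = [[-1.0 if v in e else 0.0 for e in edges] for v in bag]
b_ub = [-1.0] * len(bag)  # i.e., coverage >= 1 for each vertex
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.fun)  # 1.5: weight 1/2 on each edge is optimal
```

Intuitively, the hardness of checking fhw(H) ≤ k stems from the search over tree decompositions, not from this per-bag LP, which is solvable in polynomial time.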
{"title":"General and Fractional Hypertree Decompositions: Hard and Easy Cases","authors":"Wolfgang Fischl, G. Gottlob, R. Pichler","doi":"10.1145/3196959.3196962","DOIUrl":"https://doi.org/10.1145/3196959.3196962","url":null,"abstract":"Hypertree decompositions, as well as the more powerful generalized hypertree decompositions (GHDs), and the yet more general fractional hypertree decompositions (FHD) are hypergraph decomposition methods successfully used for answering conjunctive queries and for the solution of constraint satisfaction problems. Every hypergraph H has a width relative to each of these methods: its hypertree width hw(H), its generalized hypertree width ghw(H), and its fractional hypertree width fhw(H), respectively. It is known that hw(H) ≤ k can be checked in polynomial time for fixed k, while checking ghw(H) ≤ k is NP-complete for k >= 3. The complexity of checking fhw(H) ≤ k for a fixed k has been open for over a decade. We settle this open problem by showing that checking fhw(H) ≤ k is NP-complete, even for k=2. The same construction allows us to prove also the NP-completeness of checking ghw(H) ≤ k for k=2. After proving these results, we identify meaningful restrictions, for which checking for bounded ghw or fhw becomes tractable.","PeriodicalId":344370,"journal":{"name":"Proceedings of the 37th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130398196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 37th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems","authors":"M. Krötzsch, M. Lenzerini, Michael Benedikt","doi":"10.1145/3196959","DOIUrl":"https://doi.org/10.1145/3196959","url":null,"abstract":"","PeriodicalId":344370,"journal":{"name":"Proceedings of the 37th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121720803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}