A Dynamic Query Optimization on a Sparql Endpoint by Approximate Inference Processing

Yuji Yamagata, Naoki Fukuta

2014 IIAI 3rd International Conference on Advanced Applied Informatics, December 2014

DOI: 10.1109/IIAI-AAI.2014.42
Citations: 7
Abstract
When retrieving Linked Open Data via SPARQL, it is important to construct efficient queries that account for execution cost, especially when a query exercises the inference capability of the endpoint. Such a query can consume an enormous amount of the endpoint's computing resources, because it is often difficult to understand and predict what computations it will trigger on the endpoint. To prevent the execution of such time-consuming queries, approximating the original query can reduce the load on endpoints. In this paper, we present a preliminary idea and concept for building endpoints with a mechanism that automatically avoids an unwanted amount of inference computation by predicting its computational cost and transforming such a query into a speed-optimized one. Our preliminary experiment shows a potential benefit of the query-rewriting approach for speeding up query execution. We also present a preliminary prototype system that uses machine-learning techniques at the endpoint side to classify whether a query execution will be time-consuming.
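The paper does not reproduce its rewriting rules here, but the kind of approximation it describes can be illustrated with a minimal sketch. The function below is hypothetical (not the authors' implementation): it weakens SPARQL 1.1 transitive property paths such as `p*` and `p+`, which commonly trigger expensive closure/inference computation on the endpoint, into at-most-one-hop paths, and caps the result size when the query carries no `LIMIT` clause.

```python
import re

def approximate_query(query: str, limit: int = 1000) -> str:
    """Rewrite a SPARQL query into a cheaper approximation (heuristic sketch).

    The rewritten query may return fewer results than the original; the
    point is to bound the endpoint's inference work, as in the paper's idea.
    """
    # Heuristic 1: replace transitive-closure path operators with bounded ones:
    # "rdfs:subClassOf*" -> "rdfs:subClassOf?" (zero or one hop),
    # "rdfs:subClassOf+" -> "rdfs:subClassOf"  (exactly one hop).
    approx = re.sub(r'(\b[\w:]+)\*', r'\1?', query)
    approx = re.sub(r'(\b[\w:]+)\+', r'\1', approx)
    # Heuristic 2: cap the result size if the query has no LIMIT clause.
    if not re.search(r'\bLIMIT\s+\d+', approx, re.IGNORECASE):
        approx = approx.rstrip() + f"\nLIMIT {limit}"
    return approx

# Example: an unbounded subclass-closure query gets bounded and capped.
original = "SELECT ?x WHERE { ?x rdfs:subClassOf* :Agent }"
print(approximate_query(original))
```

A real system along the paper's lines would apply such rewrites only when its endpoint-side cost predictor (e.g., a machine-learned classifier over query features) flags the incoming query as likely time-consuming, rather than rewriting unconditionally.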