{"title":"PFunk-H: approximate query processing using perceptual models","authors":"Daniel Alabi, Eugene Wu","doi":"10.1145/2939502.2939512","DOIUrl":null,"url":null,"abstract":"Interactive visualization tools (e.g., crossfilter) are critical to many data analysts by making the discovery and verification of hypotheses quick and seamless. Increasing data sizes has made the scalability of these tools a necessity. To bridge the gap between data sizes and interactivity, many visualization systems have turned to sampling-based approximate query processing frameworks. However, these systems are currently oblivious to human perceptual visual accuracy. This could either lead to overly aggressive sampling when the approximation accuracy is higher than needed or an incorrect visual rendering when the accuracy is too lax. Thus, for both correctness and efficiency, we propose to use empirical knowledge of human perceptual limitations to automatically bound the error of approximate answers meant for visualization.\n This paper explores a preliminary model of sampling-based approximate query processing that uses perceptual models (encoded as functions) to construct approximate answers intended for visualization. We present initial results that show that the approximate and non-approximate answers for a given query differ by a perceptually indiscernible amount, as defined by perceptual functions.","PeriodicalId":356971,"journal":{"name":"HILDA '16","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"30","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"HILDA '16","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2939502.2939512","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 30
Abstract
Interactive visualization tools (e.g., crossfilter) are critical for many data analysts, making the discovery and verification of hypotheses quick and seamless. Increasing data sizes have made the scalability of these tools a necessity. To bridge the gap between data sizes and interactivity, many visualization systems have turned to sampling-based approximate query processing frameworks. However, these systems are currently oblivious to human perceptual visual accuracy. This can lead either to overly aggressive sampling, when the approximation accuracy is higher than needed, or to an incorrect visual rendering, when the accuracy is too lax. Thus, for both correctness and efficiency, we propose to use empirical knowledge of human perceptual limitations to automatically bound the error of approximate answers meant for visualization.
This paper explores a preliminary model of sampling-based approximate query processing that uses perceptual models (encoded as functions) to construct approximate answers intended for visualization. We present initial results showing that the approximate and non-approximate answers for a given query differ by a perceptually indiscernible amount, as defined by the perceptual functions.
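To make the core idea concrete, here is a minimal sketch (not the paper's actual algorithm; the `weber_threshold` function, the Weber-like constant `k`, and the stopping rule are illustrative assumptions) of how a sampler might stop once the confidence-interval half-width of a running estimate falls below a perceptual discrimination threshold, so that further refinement would be visually indiscernible.

```python
import random
import statistics

def weber_threshold(estimate, k=0.05):
    """Hypothetical perceptual function: the smallest absolute change in a
    rendered value a viewer is assumed to discern, modeled as a fixed
    Weber-like fraction k of the current estimate."""
    return k * abs(estimate)

def perceptual_sample_mean(data, confidence_z=1.96, batch=100):
    """Sample in batches until the confidence-interval half-width of the
    running mean is below the perceptual threshold (or the data is exhausted)."""
    samples = []
    while True:
        samples.extend(random.choice(data) for _ in range(batch))
        mean = statistics.fmean(samples)
        stderr = statistics.stdev(samples) / len(samples) ** 0.5
        half_width = confidence_z * stderr
        if half_width <= weber_threshold(mean) or len(samples) >= len(data):
            return mean, half_width, len(samples)

# Example: approximate the mean of a large column with a perceptual stop rule.
population = [random.gauss(100, 15) for _ in range(1_000_000)]
approx, error, n = perceptual_sample_mean(population)
print(f"approx mean = {approx:.2f} +/- {error:.2f} from {n} samples")
```

Under this kind of stopping rule, the sample size adapts to what the viewer can actually perceive rather than to a fixed statistical error budget, which is the trade-off the abstract describes.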