Big Stream Processing Systems: An Experimental Evaluation
E. Shahverdi, Ahmed Awad, S. Sakr
Pub Date: 2019-04-01 | DOI: 10.1109/ICDEW.2019.00-35
As the world becomes more instrumented and connected, we are witnessing a flood of digital data generated by various hardware (e.g., sensors) and software in the form of continuous data streams. Real-time processing of such massive amounts of streaming data is a crucial requirement in several application domains, including financial markets, surveillance systems, manufacturing, smart cities, and scalable monitoring infrastructure. In the last few years, several big stream processing engines have been introduced to tackle this challenge. In this article, we present an extensive experimental study of five popular systems in this domain: Apache Storm, Apache Flink, Apache Spark, Kafka Streams, and Hazelcast Jet. We report and analyze the performance characteristics of these systems, and we share a set of insights and important lessons learned from conducting our experiments.
Multi-camera Background and Scene Activity Modelling Based on Spearman Correlation Analysis and Inception-V3 Network
Keyang Cheng, Muhammad Saddam Khokhar, Yunbo Rao, Rabia Tahir
Pub Date: 2019-04-01 | DOI: 10.1109/ICDEW.2019.00058
This paper introduces a novel approach to background and scene activity modelling based on Spearman correlation analysis and a customized deep learning model. It detects and provides correlation analytics between casual and temporal regional activities on the basis of similarities and primary dissimilarities in the same scene captured by several cameras. The experiments are conducted on four overlapping videos captured inside a hall by four cameras. As detected and analyzed by our model, 17.32% of the correlated co-occurrences constitute actual correlation across all videos; the remaining 82.68% is background, which shows similar and repetitive features as ties in the Spearman ranking. Simulation results demonstrate that the proposed method can detect high correlation among activities at frame rate while handling tied features.
Incorporating Latent Space Correlation Coefficients to Collaborative Filtering
Zongxi Li, Haoran Xie, Yingchao Zhao, Qing Li
Pub Date: 2019-04-01 | DOI: 10.1109/ICDEW.2019.00-17
Collaborative Filtering (CF) is a popular approach that predicts a target user's rating of an item by aggregating neighbor users' ratings, weighted by a correlation coefficient between the two users. User-user similarity computation is therefore a significant step in CF, used to select a proper neighborhood and to obtain suitable correlation coefficients for prediction, and multiple weighting techniques have been proposed to enhance performance. However, existing approaches compute the similarity directly on users' rating vectors, which exposes the system to a severe data-sparsity problem and also makes it less interpretable, because a rating only represents a user's preference for a certain item and carries no extra feature information such as attributes or genres. In this paper, we propose a method that computes user correlations in a latent space obtained through matrix factorization (MF) and exploits these correlation coefficients in the prediction step of CF. We evaluate the proposed approach against variant methods on the MovieLens dataset to validate its effectiveness.
VQL: Providing Query Efficiency and Data Authenticity in Blockchain Systems
Zhe Peng, Haotian Wu, Bin Xiao, Songtao Guo
Pub Date: 2019-04-01 | DOI: 10.1109/ICDEW.2019.00-44
Blockchain, as the underlying technology of cryptocurrencies, has triggered a wave of innovation in decentralized computing. Despite some research on blockchain data querying, a primary concern for blockchain to become fully practical is overcoming inefficient data queries while guaranteeing the authenticity of query results. To provide both efficient and verifiable data query services for blockchain-based systems, we propose a Verifiable Query Layer (VQL). This middleware layer extracts transactions stored in the underlying blockchain system and efficiently reorganizes them in databases to provide various query services to public users. To prevent falsified data from being stored in the middleware, a cryptographic hash value is calculated for each constructed database. The database fingerprint, including the hash value and some database properties, is first verified by miners and then stored in the blockchain. We implement VQL and conduct extensive experiments on Ethereum, a practical blockchain system. The evaluation results demonstrate that VQL effectively supports various data query services and guarantees the authenticity of query results.
A Framework for Self-Managing Database Systems
Jan Kossmann, R. Schlosser
Pub Date: 2019-04-01 | DOI: 10.1109/ICDEW.2019.00-27
Database systems that autonomously manage their configuration and physical database design face numerous challenges: they need to anticipate future workloads, find satisfactory and robust configurations efficiently, and learn from recent actions. We describe a component-based framework for self-managed database systems that facilitates development and database integration with low overhead by relying on a clear separation of concerns. Our framework results in exchangeable and reusable components, which simplify experiments and promote further research. Furthermore, we propose an LP-based algorithm that finds an efficient order in which to tune multiple dependent features recursively.
Guided Bayesian Optimization to AutoTune Memory-Based Analytics
Mayuresh Kunjir
Pub Date: 2019-04-01 | DOI: 10.1109/ICDEW.2019.00-22
There is a lot of interest today in building autonomous (or self-driving) data processing systems. An emerging school of thought is to leverage the "black-box" algorithm of Bayesian Optimization for problems of this flavor, both for its wide applicability and for its theoretical guarantees on the quality of the results produced. The black-box approach, however, can be time- and labor-intensive, or it can get stuck in a local minimum. We study the important problem of auto-tuning the memory allocation for applications running on modern distributed data processing systems. We develop a simple "white-box" model that can quickly separate good configurations from bad ones. To combine the benefits of the two approaches to tuning, we build a framework called Guided Bayesian Optimization (GBO) that uses the white-box model as a guide during the Bayesian Optimization exploration process. An evaluation carried out on Apache Spark using industry-standard benchmark applications shows that GBO consistently provides performance speedups across the application workload, with savings close to 2x.
Reducing Forks in the Blockchain via Probabilistic Verification
Bing Liu, Yang Qin, X. Chu
Pub Date: 2019-04-01 | DOI: 10.1109/ICDEW.2019.00-42
Blockchain is a disruptive technology with many applications in FinTech, IoT, and the token economy. Because of network asynchrony, competition in mining, and nondeterministic block propagation delays, forks occur frequently in the blockchain; they not only waste considerable computing resources but also create potential security issues. This paper introduces PvScheme, a probabilistic verification scheme that can effectively reduce the block propagation delay and hence the occurrence of blockchain forks. We further enhance the security of PvScheme to provide reliable block delivery, and we analyze its resistance to fake blocks and double-spending attacks. The results of several comparative experiments show that our scheme can indeed reduce forks and improve blockchain performance.
Driving Big Data: A First Look at Driving Behavior via a Large-Scale Private Car Dataset
Tong Li, A. Alhilal, Anlan Zhang, M. A. Hoque, Dimitris Chatzopoulos, Zhu Xiao, Yong Li, P. Hui
Pub Date: 2019-04-01 | DOI: 10.1109/ICDEW.2019.00-34
The increasing number of privately owned vehicles in large metropolitan cities has contributed to traffic congestion, increased energy waste, raised CO2 emissions, and negatively impacted our living conditions. Analyzing data that represents citizens' driving behavior can provide insights to reverse these conditions. This article presents a large-scale driving status and trajectory dataset consisting of 426,992,602 records collected from 68,069 vehicles over one month. From the dataset, we analyze driving behavior and derive distributions of trip duration and mileage to characterize car trips. We find that a private car makes four trips per day with more than 17% probability, and that a trip lasts 20-30 minutes with more than 25% probability and covers 10 kilometers with 33% probability. The collective distributions of trip mileage and duration follow a Weibull distribution, whereas hourly trip counts follow the well-known diurnal pattern, as does hourly fuel efficiency. Based on these findings, we have developed an application that recommends nearby gas stations and likely favorite places inferred from past trips. We further highlight that our dataset can be used for developing dynamic green maps for fuel-efficient routing, modeling efficient Vehicle-to-Vehicle (V2V) communications, verifying existing V2V protocols, and understanding how users drive their private cars.
TVDP: Translational Visual Data Platform for Smart Cities
S. H. Kim, Abdullah Alfarrarjeh, G. Constantinou, C. Shahabi
Pub Date: 2019-04-01 | DOI: 10.1109/ICDEW.2019.00-36
This paper proposes a platform, dubbed the "Translational Visual Data Platform" (TVDP), to collect, manage, and analyze urban visual data. It connects participating community members so that they can not only enhance their individual operations but also share visual data acquisition, access, and analysis methods and results among themselves. Specifically, we focus on geo-tagged visual data, since location information is essential in many smart city applications and provides a fundamental connection for managing and sharing data among collaborators. Furthermore, our study targets an image-based machine learning platform to prepare users for the upcoming era of machine learning (ML) and artificial intelligence (AI) applications. TVDP will be used to pilot, test, and apply various visual data-intensive applications in a collaborative way. New data, methods, and extracted knowledge from one application can be effectively translated to other applications, ultimately making visual data and its analysis part of the smart city infrastructure. The goal is to make value creation through visual data and its analysis as broadly available as possible, and thus to make social and economic problem solving more distributed and collaborative among users. This paper reports on the in-progress design and implementation of TVDP and presents partial experimental results to demonstrate its feasibility.