Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.11
Thomas Pasquier, J. Bacon, D. Eyers
Security concerns are widely seen as an obstacle to the adoption of cloud computing solutions, and although a wealth of law and regulation has emerged, the technical basis for enforcing and demonstrating compliance lags behind. Our Cloud Safety Net project aims to show that Information Flow Control (IFC) can augment existing security mechanisms and provide continuous enforcement of extended, finer-grained application-level security policy in the cloud. We present FlowK, a loadable kernel module for Linux, as part of a proof of concept that IFC can be provided for cloud computing. Following the principle of policy-mechanism separation, IFC policy is assumed to be expressed at application level, and FlowK provides mechanisms to enforce IFC policy at runtime. FlowK's design minimises the changes required to existing software when IFC is provided. To show how FlowK can be integrated with cloud software, we have designed and evaluated a framework for deploying IFC-aware web applications, suitable for use in a PaaS cloud.
Title: FlowK: Information Flow Control for the Cloud (2014 IEEE 6th International Conference on Cloud Computing Technology and Science)
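The runtime IFC enforcement described above can be illustrated with a minimal label-check sketch. The label names and the two-rule check (secrecy labels may only grow along a flow, integrity labels may only shrink) follow the standard IFC model; they are illustrative assumptions, not FlowK's actual API.

```python
# Minimal sketch of an IFC flow check in the style a kernel-level monitor
# like FlowK enforces: data may flow from src to dst only if dst's secrecy
# labels dominate src's, and src's integrity labels dominate dst's.
# Label names are hypothetical, for illustration only.

def flow_allowed(src, dst):
    """src/dst: dicts with 'secrecy' and 'integrity' label sets."""
    return (src["secrecy"] <= dst["secrecy"] and
            dst["integrity"] <= src["integrity"])

app = {"secrecy": {"medical"}, "integrity": {"verified"}}
log = {"secrecy": {"medical", "audit"}, "integrity": set()}

print(flow_allowed(app, log))  # True: secrecy grows, integrity may drop
print(flow_allowed(log, app))  # False: would leak the 'audit' secrecy tag
```

Because the check is a pure function of the two label sets, it can sit at system-call granularity without knowing anything about application semantics, which is what makes the policy/mechanism separation workable.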
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.109
Lingfang Zeng, Shijie Xu, Yang Wang, Xiang Cui, Tan Wee Kiat, David Bremner, K. Kent
This paper proposes a replication cost model and two greedy algorithms, named GS QoS and GS QoS C1, for replica placement in cloud-based storage systems. The model aims to minimize replication cost while fully accounting for the quality of user access to storage nodes. The two algorithms employ a utility measurement to guide placement decisions. Our experimental results show that (1) GS QoS outperforms GS QoS C1, and (2) both algorithms produce more economical placements than an existing greedy replica placement algorithm.
Title: Monetary-and-QoS Aware Replica Placements in Cloud-Based Storage Systems
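The utility-guided greedy placement idea can be sketched as follows. The utility definition (QoS benefit divided by monetary cost) and the node data are assumptions for illustration; the paper's actual model is not reproduced here.

```python
# Hedged sketch of utility-guided greedy replica placement: repeatedly
# place the next replica on the node with the highest utility, where
# utility = (QoS benefit to users) / (storage + transfer cost).
# The utility formula and node values are illustrative assumptions.

def greedy_place(nodes, k):
    """nodes: {name: (qos_benefit, cost)}; choose k replica sites."""
    chosen = []
    candidates = dict(nodes)
    for _ in range(k):
        best = max(candidates,
                   key=lambda n: candidates[n][0] / candidates[n][1])
        chosen.append(best)
        del candidates[best]
    return chosen

nodes = {"n1": (0.9, 3.0), "n2": (0.8, 1.0), "n3": (0.5, 2.0)}
print(greedy_place(nodes, 2))  # ['n2', 'n1']
```

The greedy structure makes each placement an O(n) scan, which is why such heuristics scale to large storage systems even though they do not guarantee a globally optimal placement.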
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.69
Zhaoxia Wang, Victor Joo Chuan Tong, Xin Xin, H. Chin
Anomaly detection in sentiment analysis refers to detecting abnormal opinions, sentiment patterns or special temporal aspects of such patterns in a collection of data. The anomalies detected may be due to sudden sentiment changes hidden in large amounts of text. If these anomalies are undetected or poorly managed, the consequences may be severe, e.g. a business whose customers reveal negative sentiments may find they no longer support the establishment. Social media platforms, such as Twitter, provide a vast source of information, including user feedback, opinions and information on most issues. Many organizations also use social media platforms to publish information about events, products, services, policies and other topics. Analyzing social media data to identify abnormal events in a timely manner is therefore beneficial: it enables businesses and government organizations to intervene early or adopt proper strategies where needed. However, it is also challenging, owing to the diversity and size of social media data. In this study, we survey existing anomaly analysis and sentiment analysis methods and analyze their limitations and challenges. To tackle these challenges, an enhanced sentiment classification method is proposed and discussed, and we study the possibility of employing it to perform anomaly detection through sentiment analysis on social media data. We tested the applicability and robustness of the method through sentiment analysis on tweet data. The results demonstrate the capabilities of the proposed method and provide meaningful insights into this research area.
Title: Anomaly Detection through Enhanced Sentiment Analysis on Social Media Data
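One common way to detect the "sudden sentiment changes" the abstract describes is to score each period's aggregate sentiment against a rolling baseline. This is a generic illustration, not the paper's classification method; the window, threshold, and data are assumptions.

```python
# Illustrative sketch (not the paper's method): flag a day as anomalous
# when its mean sentiment deviates from the rolling-window mean by more
# than `threshold` standard deviations.

from statistics import mean, pstdev

def anomalies(series, window=5, threshold=2.0):
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), pstdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

daily_sentiment = [0.2, 0.25, 0.22, 0.18, 0.21, 0.2, -0.6, 0.19]
print(anomalies(daily_sentiment))  # [6] -- the sudden negative swing
```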
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.90
Pongsakorn U-chupala, Koheix Ichikawa, Hajimu Iida, Nawawit Kessaraphong, P. Uthayopas, S. Date, H. Abe, Hiroaki Yamanaka, Eiji Kawai
Bandwidth and latency are the two factors that contribute most to network application performance. Between each pair of switches in a network there may be multiple paths, each with different properties. Traditional shortest-path routing does not take this into consideration and may result in sub-optimal application performance and underutilization of the network. We propose the concept of bandwidth and latency aware routing: overall network performance can be improved by separating applications into bandwidth-oriented and latency-oriented classes and allocating a different route to each class accordingly. We also propose a design for this network system implemented with OpenFlow, in which routes are calculated from monitored information using Dijkstra's algorithm and a variation of it. To support our design, we present a use case in which it performs better than traditional routing, together with evaluation results.
Title: Application-Oriented Bandwidth and Latency Aware Routing with OpenFlow Network
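The two routing policies can be sketched side by side: latency-oriented flows take the Dijkstra shortest path over link delays, while bandwidth-oriented flows take the "widest path" (maximising the bottleneck link), a standard Dijkstra variation. The topology values are made up; this is not the paper's controller code.

```python
# Sketch of two-policy routing over a graph {u: [(v, latency, bandwidth)]}.

import heapq

def shortest_path(graph, src, dst):
    """Minimise total latency (classic Dijkstra)."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        d, u, path = heapq.heappop(heap)
        if u == dst:
            return d, path
        if u in seen:
            continue
        seen.add(u)
        for v, lat, _bw in graph.get(u, []):
            if v not in seen:
                heapq.heappush(heap, (d + lat, v, path + [v]))
    return None

def widest_path(graph, src, dst):
    """Maximise the minimum bandwidth along the path (Dijkstra variant)."""
    heap, best = [(-float("inf"), src, [src])], {}
    while heap:
        neg_w, u, path = heapq.heappop(heap)
        w = -neg_w
        if u == dst:
            return w, path
        for v, _lat, bw in graph.get(u, []):
            cand = min(w, bw)
            if cand > best.get(v, 0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v, path + [v]))
    return None

g = {"a": [("b", 1, 10), ("c", 5, 100)],
     "b": [("d", 1, 10)],
     "c": [("d", 5, 100)]}
print(shortest_path(g, "a", "d"))  # (2, ['a', 'b', 'd'])
print(widest_path(g, "a", "d"))    # (100, ['a', 'c', 'd'])
```

The example shows why one metric cannot serve both classes: the lowest-latency path (via b) has a 10-unit bottleneck, while the high-bandwidth path (via c) costs five times the delay.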
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.42
Soramichi Akiyama, Takahiro Hirofuchi, S. Honiden
Energy efficiency of cloud data centers is of great concern today and has been tackled by many researchers. Dynamic VM placement is a well-known strategy for improving the energy efficiency of a data center: virtual machines (VMs) under light load are consolidated onto a small number of physical machines (PMs) so that idle PMs can be switched to low-power states. Although live migration is essential for dynamic VM placement, previous studies have not revealed how the energy overhead of live migration affects the energy efficiency of dynamic VM placement. To tackle this problem, we conducted an integrated simulation of the energy overhead of live migration and dynamic VM placement using SimGrid. We used three dynamic VM placement policies and two live migration mechanisms (existing pre-copy and an accelerated mechanism of our own) to thoroughly evaluate the energy overhead. The results show that, in the worst case, the energy overhead of live migration accounts for 5.8% of the total energy consumption of a data center.
Title: Evaluating Impact of Live Migration on Data Center Energy Saving
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.126
F. Indaco, Teng-Sheng Moh
This paper introduces the concept of graphing the size of a level-set against its respective density threshold. This graph is used to develop Level-Set Clustering (LSC), a new recursive version of DBSCAN that performs hierarchical clustering.
Title: Hierarchical Density-Based Clustering Using Level-Sets
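The core "level-set size vs. density threshold" curve can be sketched directly: for each threshold, count the points whose local density meets it. The density estimate here (radius neighbour counting, as in DBSCAN) is a simplification, not the paper's exact procedure.

```python
# Sketch: how many points survive each density threshold. Plateaus in
# this curve indicate stable density levels at which a recursive
# algorithm like LSC can split clusters.

def local_density(points, x, radius=1.0):
    # DBSCAN-style density: neighbours within `radius` (self included).
    return sum(1 for p in points if abs(p - x) <= radius)

def level_set_sizes(points, thresholds, radius=1.0):
    dens = [local_density(points, x, radius) for x in points]
    return [sum(1 for d in dens if d >= t) for t in thresholds]

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 9.0]   # two dense groups, one outlier
print(level_set_sizes(pts, [1, 2, 3]))  # [6, 5, 3]
```

Raising the threshold from 1 to 3 peels away first the isolated point, then the sparser pair, leaving only the densest group, which is exactly the hierarchy a recursive DBSCAN can recover.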
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.76
Dapeng Dong, J. Herbert
Analysing text-based data has become increasingly important given the volume of text from sources such as social media, web content, and web searches. The growing volume of such data creates challenges for data analysis, including efficient and scalable algorithms, effective computing platforms, and energy efficiency. Compression is a standard method for reducing data size, but current standard compression algorithms are destructive to the organisation of data contents. This work introduces Content-aware Partial Compression (CaPC) for text, using a dictionary-based approach: shorter codes simply replace strings while the original data format and structure are maintained, so that the compressed contents can be consumed directly by analytic platforms. We evaluate our approach with a set of real-world datasets and several classical MapReduce jobs on Hadoop, and provide a supplementary utility library for Hadoop so that existing MapReduce programs can be used directly on the compressed datasets with little or no modification. In evaluation, we demonstrate that CaPC works well across a wide variety of data analysis scenarios; experimental results show ~30% average data size reduction, and up to ~32% performance increase on some I/O-intensive jobs on an in-house Hadoop cluster. While the gains may seem modest, they come 'for free' and are complementary to all other optimizations.
Title: Content-Aware Partial Compression for Big Textual Data Analysis Acceleration
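The substitution idea behind CaPC can be shown in a few lines: frequent strings are replaced by shorter codes, but delimiters and field order are untouched, so record-oriented code can still split and parse the data. The dictionary and code alphabet below are illustrative, not the paper's.

```python
# Sketch of content-aware partial compression: shorter codes replace
# frequent strings; record structure (the ';' delimiter here) survives.

dictionary = {"ERROR": "#E", "WARNING": "#W", "connection": "#c"}
reverse = {v: k for k, v in dictionary.items()}

def compress(line):
    for word, code in dictionary.items():
        line = line.replace(word, code)
    return line

def decompress(line):
    for code, word in reverse.items():
        line = line.replace(code, word)
    return line

record = "ERROR connection lost;WARNING connection slow"
packed = compress(record)
print(packed)                         # '#E #c lost;#W #c slow'
assert decompress(packed) == record   # lossless round trip
```

Because the compressed form is still splittable text, a MapReduce job can run on it directly and only decompress the fields it actually inspects, which is where the reported I/O savings come from.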
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.165
B. Duncan, M. Whittington
All cloud computing standards depend on checklist methodology to implement, and then audit, the alignment of a company or an operation with the standards that have been set. An investigation of the use of checklists in other academic areas has shown significant weaknesses in the checklist approach to both implementation and audit; these weaknesses will only be exacerbated by the fast-changing and developing nature of clouds. We examine the problems inherent in using checklists and seek to identify mitigating strategies that might be adopted to improve their efficacy.
Title: Reflecting on Whether Checklists Can Tick the Box for Cloud Security
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.153
Shuli Zhang, Yan Zhang, Yifang Qin, Yanni Han, S. Ci
Due to their special topologies and communication patterns, it is common in today's data center networks for a large set of TCP flows and a small set of TCP flows to arrive at different ingress ports of a switch and compete for the same egress port. In this case, however, the throughput share of the flows in the two sets will not be fair, even though all flows have the same RTT. In this paper, we study this problem and find that TCP's fairness in data center networks is related not only to the network capacity but also to the number of flows in each set. We propose a mathematical model of the average throughput ratio of the large set of flows to the small set of flows. The model reveals how TCP's fairness varies with network parameters (including buffer size, bandwidth, and propagation delay) as well as with the number of flows in the two sets. We validate the model by comparing its numerical results with simulation results, finding that they match well.
Title: Modeling and Understanding TCP's Fairness Problem in Data Center Networks
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.44
Girma Kejela, R. Esteves, Chunming Rong
This work is based on a real-life dataset collected from sensors that monitor drilling processes and equipment in an oil and gas company. The sensor data stream in at one-second intervals, equivalent to 86,400 rows of data per day. After studying state-of-the-art Big Data analytics tools including Mahout, RHadoop and Spark, we chose 0xdata's H2O for this particular problem because of its fast in-memory processing, strong machine learning engine, and ease of use. Accurate predictive analytics of big sensor data can be used to estimate missing values, or to replace incorrect readings caused by malfunctioning sensors or broken communication channels. It can also be used to anticipate situations that support decision making, including maintenance planning and operation.
Title: Predictive Analytics of Sensor Data Using Distributed Machine Learning Techniques
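The "estimate missing values" use case above can be sketched with a trivial stand-in predictor. A moving average here substitutes for the learned H2O model, purely to show the fill-in pattern; the window size and data are assumptions.

```python
# Illustrative sketch (not the paper's H2O model): fill gaps in a 1 Hz
# sensor stream with a prediction from recent history -- here a simple
# moving average standing in for the trained model.

def fill_missing(readings, window=3):
    filled = []
    for r in readings:
        if r is None and len(filled) >= window:
            r = sum(filled[-window:]) / window  # predicted replacement
        filled.append(r)
    return filled

stream = [10.0, 10.2, 10.4, None, 10.8]
print([round(x, 1) for x in fill_missing(stream)])
# [10.0, 10.2, 10.4, 10.2, 10.8]
```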