
2014 IEEE 6th International Conference on Cloud Computing Technology and Science: Latest Publications

FlowK: Information Flow Control for the Cloud
Pub Date : 2014-12-15 DOI: 10.1109/CloudCom.2014.11
Thomas Pasquier, J. Bacon, D. Eyers
Security concerns are widely seen as an obstacle to the adoption of cloud computing solutions, and although a wealth of law and regulation has emerged, the technical basis for enforcing and demonstrating compliance lags behind. Our Cloud Safety Net project aims to show that Information Flow Control (IFC) can augment existing security mechanisms and provide continuous enforcement of extended, finer-grained application-level security policy in the cloud. We present FlowK, a loadable kernel module for Linux, as part of a proof of concept that IFC can be provided for cloud computing. Following the principle of policy-mechanism separation, IFC policy is assumed to be expressed at application level and FlowK provides mechanisms to enforce IFC policy at runtime. FlowK's design minimises the changes required to existing software when IFC is provided. To show how FlowK can be integrated with cloud software we have designed and evaluated a framework for deploying IFC-aware web applications, suitable for use in a PaaS cloud.
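The abstract gives no code; purely as an illustration of the kind of runtime check an IFC mechanism performs, the sketch below models security contexts as sets of secrecy and integrity tags and permits a flow only when the usual label ordering holds. All names here are hypothetical and are not FlowK's actual kernel interface.

```python
# Illustrative sketch only: a toy label model for Information Flow Control (IFC).
# FlowK itself is a Linux kernel module; none of these names come from the paper.

class Label:
    """A security context: a set of secrecy tags and a set of integrity tags."""
    def __init__(self, secrecy=None, integrity=None):
        self.secrecy = frozenset(secrecy or [])
        self.integrity = frozenset(integrity or [])

def can_flow(src: Label, dst: Label) -> bool:
    # Data may flow from src to dst only if dst is at least as secret as src
    # and src is at least as trustworthy (integrity-wise) as dst.
    return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

if __name__ == "__main__":
    patient_record = Label(secrecy={"medical"}, integrity={"hospital"})
    analytics_vm   = Label(secrecy={"medical", "research"}, integrity=set())
    public_log     = Label(secrecy=set(), integrity=set())

    print(can_flow(patient_record, analytics_vm))  # True: secrecy is preserved
    print(can_flow(patient_record, public_log))    # False: would leak "medical" data
```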
Citations: 23
Big Data Processing for Prediction of Traffic Time Based on Vertical Data Arrangement
Pub Date : 2014-12-15 DOI: 10.1109/CloudCom.2014.54
Seungwoo Jeon, B. Hong, Byungsoo Kim
To predict future traffic conditions on each road, each with its own spatiotemporal pattern, it is necessary to analyze the conditions based on historical traffic data and to select time-series forecasting methods that can predict the next pattern for each road according to the results of that analysis. Our goal is to create a new statistical model and a new system for predictive graphs of traffic times based on big data processing tools. First, we suggest a vertical data arrangement, gathering past traffic times in the same time slot for long-term prediction. Second, we analyze each traffic pattern to select time-series variables, because the time-series forecasting method for a location and a time will be selected according to the variables that are available. Third, we suggest a spatiotemporal prediction map, a two-dimensional map over time and location. Each element in the map represents a time-series forecasting method and an R-squared value as an indicator of prediction accuracy. Finally, we introduce a new system that includes RHive as a middle point between R and Hadoop clusters for generating predicted data efficiently from big historical data.
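As a hypothetical illustration of the vertical data arrangement (collecting the travel times that fall in the same time slot across many days, so each slot has its own history), the sketch below pivots a long table into one column per slot; the column names and sample values are invented.

```python
# Sketch of a "vertical" arrangement: one column per time slot, one row per day,
# so that all historical values for a given slot line up vertically.
# Field names (road_id, date, slot, travel_time) are assumptions for illustration.
import pandas as pd

records = pd.DataFrame({
    "road_id":     ["A1"] * 6,
    "date":        ["2014-12-01", "2014-12-01", "2014-12-01",
                    "2014-12-02", "2014-12-02", "2014-12-02"],
    "slot":        ["08:00", "08:05", "08:10"] * 2,
    "travel_time": [310, 295, 305, 320, 300, 298],   # seconds
})

# Rows = days, columns = time slots; each column is the history of one slot.
vertical = records.pivot_table(index="date", columns="slot", values="travel_time")
print(vertical)

# A long-term forecast for the 08:05 slot can now be fitted on its own column,
# e.g. with any univariate time-series model applied to vertical["08:05"].
```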
Citations: 9
Underpinning a Cloud Brokerage Service Framework for Quality Assurance and Optimization
Pub Date : 2014-12-15 DOI: 10.1109/CloudCom.2014.146
Simeon Veloudis, A. Friesen, I. Paraskakis, Giannis Verginadis, Ioannis Patiniotakis
With the pervasion of cloud computing, enterprises increasingly rely on ecosystems of distributed, task-oriented, modular, and collaborative cloud services. In order to effectively manage the complexity inherent in such ecosystems, enterprises are anticipated to depend upon brokerage mechanisms for performing policy-based governance and for recommending optimal services to consumers. Such mechanisms crucially depend upon the existence of a uniform, platform-independent representation of services, consumer preferences, and policies concerning service delivery. In this paper we propose an ontology-based approach to such a representation.
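The abstract does not show a concrete encoding; as a loose sketch of what an ontology-based, platform-independent description of a service and a consumer preference might look like, here is a toy RDF graph built with rdflib. The vocabulary, class names and property names are invented for illustration.

```python
# Toy sketch of an ontology-based service description; the broker vocabulary
# (ex:CloudService, ex:hasAvailability, ...) is invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, XSD

EX = Namespace("http://example.org/brokerage#")
g = Graph()
g.bind("ex", EX)

# Describe a service and the quality attribute it advertises.
g.add((EX.StorageService42, RDF.type, EX.CloudService))
g.add((EX.StorageService42, RDFS.label, Literal("Object storage, EU region")))
g.add((EX.StorageService42, EX.hasAvailability, Literal(99.95, datatype=XSD.decimal)))

# Describe a consumer preference the broker can match against.
g.add((EX.ConsumerPrefs1, RDF.type, EX.ConsumerPreference))
g.add((EX.ConsumerPrefs1, EX.minAvailability, Literal(99.9, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```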
Citations: 9
Seamless Enablement of Intelligent Protection for Enterprise Cloud Applications through Service Store
Pub Date : 2014-12-15 DOI: 10.1109/CloudCom.2014.92
Joshua Daniel, T. Dimitrakos, F. El-Moussa, G. Ducatel, P. Pawar, Ali Sajjad
Cloud IaaS and PaaS providers typically hold Cloud consumers accountable for protecting their applications, while Cloud users often find that protecting their proprietary system, application and data stacks on public or hybrid Cloud environments is complex, expensive and time-consuming. In this paper we demonstrate how integrating a security solution such as BT Intelligent Protection with the Service Store results in a security operations capability that can scale in line with Cloud use. By enabling "click-to-buy" security services and "click-to-build" secure applications with a few mouse clicks, this integration creates a new paradigm for self-service Cloud-based integrity and security services.
Citations: 9
Job Scheduling for Cloud Computing Integrated with Wireless Sensor Network
Pub Date : 2014-12-15 DOI: 10.1109/CloudCom.2014.106
Chunsheng Zhu, Xiuhua Li, Victor C. M. Leung, Xiping Hu, L. Yang
The powerful data storage and data processing abilities of cloud computing (CC) and the ubiquitous data gathering capability of wireless sensor network (WSN) complement each other in CC-WSN integration, which is attracting growing interest from both academia and industry. However, job scheduling for CC integrated with WSN is a critical and unexplored topic. To fill this gap, this paper first analyzes the characteristics of job scheduling with respect to CC-WSN integration and then studies two traditional and popular job scheduling algorithms (i.e., Min-Min and Max-Min). Further, two novel job scheduling algorithms, namely priority-based two-phase Min-Min (PTMM) and priority-based two-phase Max-Min (PTAM), are proposed for CC integrated with WSN. Extensive experimental results show that PTMM and PTAM achieve shorter expected completion time than Min-Min and Max-Min for CC integrated with WSN.
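For background, the classical Min-Min baseline that the paper compares against can be sketched in a few lines; the expected-time-to-compute matrix below is made up, and the paper's priority-based PTMM/PTAM extensions are not reproduced here.

```python
# Minimal sketch of the classical Min-Min scheduling heuristic (the baseline).
# etc[j][m] = expected time to compute job j on machine m (values invented here).
# This is not the paper's PTMM/PTAM algorithm, only the standard baseline.

def min_min(etc):
    n_jobs, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # time at which each machine becomes free
    unscheduled = set(range(n_jobs))
    schedule = {}                        # job -> machine

    while unscheduled:
        # Pick the job/machine pair with the smallest completion time, i.e. the
        # job whose minimum completion time is smallest (the Min-Min rule).
        completion, job, machine = min(
            (ready[m] + etc[j][m], j, m)
            for j in unscheduled
            for m in range(n_machines)
        )
        schedule[job] = machine
        ready[machine] = completion
        unscheduled.remove(job)
    return schedule, max(ready)          # assignment and makespan

if __name__ == "__main__":
    etc = [[4, 6], [3, 8], [9, 2], [5, 5]]   # 4 jobs, 2 machines
    assignment, makespan = min_min(etc)
    print(assignment, makespan)
```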
Citations: 23
Hierarchical Density-Based Clustering Using Level-Sets
Pub Date : 2014-12-15 DOI: 10.1109/CloudCom.2014.126
F. Indaco, Teng-Sheng Moh
This paper introduces the concept of graphing the size of a level-set against its respective density threshold. This is used to develop a new recursive version of DBSCAN that successfully performs hierarchical clustering, called Level-Set Clustering (LSC).
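The abstract does not spell out the construction; the core object (the size of a density level-set plotted against the density threshold) can be illustrated with a short sketch. The k-NN density proxy and the synthetic data are assumptions, not the paper's exact procedure.

```python
# Sketch: the size of the level-set |{x : density(x) >= t}| as a function of the
# density threshold t. A simple k-NN density proxy is used here; LSC itself builds
# on DBSCAN and is not reproduced in this snippet.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Two Gaussian blobs plus uniform noise, just to have something to measure.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(200, 2)),
    rng.normal(loc=3.0, scale=0.3, size=(200, 2)),
    rng.uniform(low=-2, high=5, size=(100, 2)),
])

k = 10
dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(data).kneighbors(data)
density = 1.0 / dist[:, -1]            # inverse distance to the k-th neighbour

thresholds = np.linspace(density.min(), density.max(), 50)
level_set_sizes = [(density >= t).sum() for t in thresholds]

for t, s in zip(thresholds[::10], level_set_sizes[::10]):
    print(f"threshold={t:8.3f}  level-set size={s}")
# Plateaus in this curve correspond to density ranges where the level-set is
# stable, which is the kind of structure a hierarchical method can exploit.
```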
Citations: 0
Content-Aware Partial Compression for Big Textual Data Analysis Acceleration
Pub Date : 2014-12-15 DOI: 10.1109/CloudCom.2014.76
Dapeng Dong, J. Herbert
Analysing text-based data has become increasingly important due to the prevalence of text from sources such as social media, web content and web searches. The growing volume of such data creates challenges for data analysis, including the need for efficient and scalable algorithms, effective computing platforms and energy efficiency. Compression is a standard method for reducing data size, but current standard compression algorithms are destructive to the organisation of data contents. This work introduces Content-aware Partial Compression (CaPC) for text using a dictionary-based approach. We simply use shorter codes to replace strings while maintaining the original data format and structure, so that the compressed contents can be directly consumed by analytic platforms. We evaluate our approach with a set of real-world datasets and several classical MapReduce jobs on Hadoop. We also provide a supplementary utility library for Hadoop, so existing MapReduce programs can be used directly on the compressed datasets with little or no modification. In evaluation, we demonstrate that CaPC works well with a wide variety of data analysis scenarios; experimental results show ~30% average data size reduction, and up to ~32% performance increase on some I/O-intensive jobs on an in-house Hadoop cluster. While the gains may seem modest, the point is that they come 'for free' and act as supplementary to all other optimizations.
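A toy version of the dictionary-based substitution idea (frequent strings replaced by shorter codes while the record structure stays parseable) is sketched below; the dictionary, the code bytes and the word-boundary handling are invented for the example and are not CaPC's actual encoding.

```python
# Toy sketch of content-aware partial compression: frequent words are replaced
# by shorter codes, while whitespace and line structure are preserved so that
# record-oriented tools can still split and parse the text. The dictionary
# here is invented; CaPC builds its own codes from the corpus.
import re

dictionary = {"information": "\x01a", "compression": "\x01b", "analysis": "\x01c"}
reverse = {v: k for k, v in dictionary.items()}

def compress(line: str) -> str:
    # Replace whole words only, leaving delimiters and all other tokens intact.
    pattern = r"\b(?:%s)\b" % "|".join(map(re.escape, dictionary))
    return re.sub(pattern, lambda m: dictionary[m.group(0)], line)

def decompress(line: str) -> str:
    pattern = "|".join(map(re.escape, reverse))
    return re.sub(pattern, lambda m: reverse[m.group(0)], line)

original = "big data analysis needs compression without losing information structure"
packed = compress(original)
assert decompress(packed) == original
print(len(packed), "bytes instead of", len(original))
```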
Citations: 7
Reflecting on Whether Checklists Can Tick the Box for Cloud Security
Pub Date : 2014-12-15 DOI: 10.1109/CloudCom.2014.165
B. Duncan, M. Whittington
All Cloud computing standards are dependent upon checklist methodology to implement, and then audit, the alignment of a company or an operation with the standards that have been set. An investigation of the use of checklists in other academic areas has shown there to be significant weaknesses in the checklist solution to both implementation and audit; these weaknesses will only be exacerbated by the fast-changing and developing nature of clouds. We examine the problems that are inherent in using checklists and seek to identify some mitigating strategies that might be adopted to improve their efficacy.
Citations: 22
Modeling and Understanding TCP's Fairness Problem in Data Center Networks
Pub Date : 2014-12-15 DOI: 10.1109/CloudCom.2014.153
Shuli Zhang, Yan Zhang, Yifang Qin, Yanni Han, S. Ci
Due to the special topologies and communication patterns of today's data center networks, it is common for a large set of TCP flows and a small set of TCP flows to arrive at different ingress ports of a switch and compete for the same egress port. However, in this case the throughput share of the flows in the two sets will not be fair even though all flows have the same RTT. In this paper, we study this problem and find that TCP's fairness in data center networks is related not only to the network capacity but also to the number of flows in the two sets. We propose a mathematical model of the average throughput ratio of the large set of flows to the small set of flows. This model can reveal the variation of TCP's fairness with changes in network parameters (including buffer size, bandwidth, and propagation delay) as well as in the number of flows in the two sets. We validate our model by comparing its numerical results with simulation results, finding that they match well.
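As a small illustration of the metric the model targets (the average per-flow throughput of the large set divided by that of the small set), the sketch below computes it from made-up per-flow throughputs; the paper itself derives this ratio from buffer size, bandwidth, propagation delay and the flow counts, which is not reproduced here.

```python
# Sketch of the fairness metric discussed in the paper: the ratio of the average
# per-flow throughput of the large flow set to that of the small flow set.
# The throughput numbers below are invented purely to show the computation.

def average_throughput_ratio(large_set_mbps, small_set_mbps):
    avg_large = sum(large_set_mbps) / len(large_set_mbps)
    avg_small = sum(small_set_mbps) / len(small_set_mbps)
    return avg_large / avg_small      # 1.0 would mean perfect fairness

# Hypothetical measurement: 16 flows enter through one ingress port, 2 flows
# through another, all competing for the same 1 Gbps egress port.
large_set = [45.0] * 16               # Mbps per flow (invented numbers)
small_set = [140.0, 140.0]

print(average_throughput_ratio(large_set, small_set))   # ~0.32, i.e. unfair
```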
Citations: 0
Predictive Analytics of Sensor Data Using Distributed Machine Learning Techniques
Pub Date : 2014-12-15 DOI: 10.1109/CloudCom.2014.44
Girma Kejela, R. Esteves, Chunming Rong
This work is based on a real-life data-set collected from sensors that monitor drilling processes and equipment in an oil and gas company. The sensor data stream in at an interval of one second, which is equivalent to 86400 rows of data per day. After studying state-of-the-art Big Data analytics tools including Mahout, RHadoop and Spark, we chose 0xdata's H2O for this particular problem because of its fast in-memory processing, strong machine learning engine, and ease of use. Accurate predictive analytics of big sensor data can be used to estimate missing values, or to replace incorrect readings due to malfunctioning sensors or a broken communication channel. It can also be used to anticipate situations that help in various kinds of decision making, including maintenance planning and operation.
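The abstract names H2O but not a specific estimator; the following is only a generic sketch of a supervised workflow with H2O's Python API on data of this shape. The file path, column names and the choice of gradient boosting are assumptions, not the paper's setup.

```python
# Hedged sketch of a supervised prediction workflow with H2O's Python API.
# The CSV path, the column names and the gradient-boosting model are
# illustrative assumptions; the paper does not specify which estimator was used.
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()  # starts (or connects to) a local H2O cluster

# One row per second of drilling-sensor readings (hypothetical schema).
frame = h2o.import_file("drilling_sensors.csv")
target = "standpipe_pressure"
features = [c for c in frame.columns if c != target]

train, valid = frame.split_frame(ratios=[0.8], seed=42)

model = H2OGradientBoostingEstimator(ntrees=100, max_depth=5, seed=42)
model.train(x=features, y=target, training_frame=train, validation_frame=valid)

# Predicted values can stand in for missing or clearly faulty readings.
predictions = model.predict(valid)
print(model.model_performance(valid))
```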
Citations: 28