VSE: Virtual Switch Extension for Adaptive CPU Core Assignment in Softirq
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.68
S. Muramatsu, Ryota Kawashima, S. Saito, H. Matsuo
An Edge-Overlay model, which constructs virtual networks using both virtual switches and IP tunnels, is promising for cloud datacenter networks. However, software-implemented virtual switches can cause performance problems because the packet processing load is concentrated on a particular CPU core. Although multi-queue functions like Receive Side Scaling (RSS) can distribute the load across multiple CPU cores, problems remain, such as IRQ core collisions between heavy traffic flows and competition between physical and virtual machines for packet-processing resources. In this paper, we propose a software packet processing unit named VSE (Virtual Switch Extension) that addresses these problems by adaptively selecting softirq cores based on both CPU load and VM-running information. Furthermore, the behavior of VSE can be managed by OpenFlow controllers. Our performance evaluation results showed that the throughput of our approach exceeded that of an existing RSS-based model as the packet processing load increased. In addition, we show that our method, through priority-based CPU core selection, prevented the performance of heavily loaded flows from degrading.
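The abstract does not spell out the core-selection algorithm; the following is a minimal sketch of the idea of adaptively picking a softirq core from per-core load and VM-running information. All names and the 0.7 load threshold are our assumptions, not VSE's actual logic:

```python
# Hypothetical sketch of adaptive softirq core selection in the spirit of VSE.
# Assumes we know per-core utilization and which cores currently run VM vCPUs;
# the real VSE policy (and its OpenFlow-managed behavior) is not reproduced here.

def pick_softirq_core(core_load, vm_cores, high_priority=False):
    """Return the core index that should handle a flow's softirq processing.

    core_load -- list of utilization fractions per core, e.g. [0.9, 0.2, ...]
    vm_cores  -- set of core indices busy running VM vCPUs (avoid if possible)
    high_priority -- heavy flows may claim the least-loaded core outright
    """
    candidates = [c for c in range(len(core_load)) if c not in vm_cores]
    if not candidates:                     # every core hosts a VM: fall back to any
        candidates = list(range(len(core_load)))
    if high_priority:
        return min(candidates, key=lambda c: core_load[c])
    # For normal flows, skip cores above a load threshold to avoid IRQ collisions.
    light = [c for c in candidates if core_load[c] < 0.7]
    return min(light or candidates, key=lambda c: core_load[c])

# Example: core 0 is saturated, cores 2-3 run VMs; a heavy flow lands on core 1.
print(pick_softirq_core([0.95, 0.10, 0.50, 0.60], vm_cores={2, 3}, high_priority=True))
```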
{"title":"VSE: Virtual Switch Extension for Adaptive CPU Core Assignment in Softirq","authors":"S. Muramatsu, Ryota Kawashima, S. Saito, H. Matsuo","doi":"10.1109/CloudCom.2014.68","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.68","url":null,"abstract":"An Edge-Overlay model constructing virtual networks using both virtual switches and IP tunnels is promising in cloud datacenter networks. But software-implemented virtual switches can cause performance problems because the packet processing load is concentrated on a particular CPU core. Although multi queue functions like Receive Side Scaling (RSS) can distribute the load onto multiple CPU cores, there are still problems to be solved such as IRQ core collision of heavy traffic flows as well as competitive resource use between physical and virtual for packet processing. In this paper, we propose a software packet processing unit named VSE (Virtual Switch Extension) to address these problems by adaptively determining softirq cores based on both CPU load and VM-running information. Furthermore, the behavior of VSE can be managed by Open Flow controllers. Our performance evaluation results showed that throughput of our approach was higher than an existing RSSbased model as packet processing load increased. In addition, we show that our method prevented performance of high-loaded flows from being degraded by priority-based CPU core selection.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116298812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Network Allocation for Fault Tolerance with Bandwidth Efficiency in a Multi-tenant Data Center
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.64
Yukio Ogawa, G. Hasegawa, M. Murata
In a multitenant data center, nodes and links of tenants' virtual networks (VNs) share a single component of the physical substrate network (SN). A failure of that single SN component can therefore cause simultaneous failures of multiple nodes and links in a VN, and such correlated failures can significantly disrupt the services offered on the VN. In the present paper, we clarify how the fault tolerance of a VN is affected by an SN failure, especially from the perspective of VN allocation in the SN. We propose a VN allocation model for multitenant data centers and formulate a problem that captures the bandwidth lost in the VN due to an SN failure. We conduct numerical simulations in which each VN has a bandwidth demand of 1.7 × 10⁸ bit/s. The results show that the bandwidth loss can be reduced to 5.3 × 10² bit/s per VN, but the required bandwidth between physical servers in the SN increases to 1.0 × 10⁹ bit/s per VN when each node in the VN is mapped to an individual physical server. The balance between the bandwidth loss and the required inter-server bandwidth can be optimized by assigning every four nodes of a VN to each physical server, meaning that we minimize the bandwidth loss without provisioning excessive bandwidth in the core area of the SN.
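The optimum the abstract reports (four VN nodes per physical server) reflects a tradeoff it states only numerically. The toy model below, which assumes uniform all-to-all traffic between VN nodes (our assumption, not the paper's formulation), illustrates why consolidating nodes reduces inter-server bandwidth while concentrating failure impact:

```python
# Toy illustration of the consolidation tradeoff described in the abstract:
# packing more VN nodes per physical server keeps more traffic server-local
# (less inter-server bandwidth) but widens the blast radius of a server failure.
# The uniform all-to-all traffic model is our assumption, not the paper's.

def inter_server_fraction(n_nodes, nodes_per_server):
    """Fraction of VN node pairs whose traffic must cross between servers."""
    total_pairs = n_nodes * (n_nodes - 1) / 2
    servers, rem = divmod(n_nodes, nodes_per_server)
    local_pairs = servers * nodes_per_server * (nodes_per_server - 1) / 2
    local_pairs += rem * (rem - 1) / 2     # a partially filled last server
    return 1 - local_pairs / total_pairs

demand = 1.7e8  # bit/s per VN, the demand used in the paper's simulations
for k in (1, 2, 4, 8):
    frac = inter_server_fraction(16, k)
    print(f"{k} nodes/server: ~{frac:.0%} of pairs cross servers, "
          f"~{frac * demand:.2e} bit/s inter-server demand")
```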
{"title":"Virtual Network Allocation for Fault Tolerance with Bandwidth Efficiency in a Multi-tenant Data Center","authors":"Yukio Ogawa, G. Hasegawa, M. Murata","doi":"10.1109/CloudCom.2014.64","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.64","url":null,"abstract":"In a multitenant data center, nodes and links of tenants' virtual networks (VNs) share a single component of the physical substrate network (SN). A failure of the single SN component can thereby cause simultaneous failures of multiple nodes and links in a VN, this complex of failures must significantly disrupt the services offered on the VN. In the present paper, we clarify how the fault tolerance of a VN is affected by a SN failure, especially from the perspective of VN allocation in the SN. We propose a VN allocation model for multitenant data centers and formulate a problem that deals with the bandwidth loss in the VN due the SN failure. We conduct numerical simulations with the setting that has 1.7 × 108 bit/s bandwidth demand on each VN. The results show that the bandwidth loss can be reduced to 5.3 × 102 bit/s per VN, but the required bandwidth between physical servers in the SN increases to 1.0 × 109 bit/s per VN when each node in the VN is mapped to an individual physical server. The balance between the bandwidth loss and the required bandwidth between physical servers can be optimized by assigning every four nodes of the VN to each physical server, meaning that we minimize the bandwidth loss without providing too sufficient bandwidth in the core area of the SN.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122454969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Multi-layer and Multi-tenant Cloud Assurance Evaluation Methodology
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.85
Aleksandar Hudic, Markus Tauber, T. Lorünser, M. Krotsiani, G. Spanoudakis, A. Mauthe, E. Weippl
Data with high security requirements is being processed and stored in the Cloud with increasing frequency. To guarantee that the data is handled in a secure manner, we investigate the applicability of assurance methodologies. In a typical Cloud environment, the layered setup and the different stakeholders involved determine the security properties of the individual components used to compose Cloud applications. We present a methodology, adapted from Common Criteria, for aggregating information that reflects the security properties of the individual constituent components of Cloud applications. This aggregated information is used to categorise overall application security in terms of Assurance Levels and to provide continuous assurance-level evaluation. It gives the service owner an overview of the security of his service without requiring detailed manual analysis of log files.
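The abstract does not give the aggregation rule itself; a minimal weakest-link convention (a composed service is rated by its least-assured component) is one plausible reading, sketched here with hypothetical component names:

```python
# Minimal sketch of weakest-link assurance aggregation. Rating a composed
# service by its least-assured component is our assumption; the paper's
# actual methodology may weight layers and stakeholders differently.

from typing import Dict

def aggregate_assurance(components: Dict[str, int]) -> int:
    """Map component assurance levels (EAL-like integers) to an overall level."""
    return min(components.values())

app = {"hypervisor": 4, "guest_os": 3, "db_service": 2, "web_frontend": 3}
print("overall assurance level:", aggregate_assurance(app))  # -> 2
```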
{"title":"A Multi-layer and MultiTenant Cloud Assurance Evaluation Methodology","authors":"Aleksandar Hudic, Markus Tauber, T. Lorünser, M. Krotsiani, G. Spanoudakis, A. Mauthe, E. Weippl","doi":"10.1109/CloudCom.2014.85","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.85","url":null,"abstract":"Data with high security requirements is being processed and stored with increasing frequency in the Cloud. To guarantee that the data is being dealt in a secure manner we investigate the applicability of Assurance methodologies. In a typical Cloud environment the setup of multiple layers and different stakeholders determines security properties of individual components that are used to compose Cloud applications. We present a methodology adapted from Common Criteria for aggregating information reflecting the security properties of individual constituent components of Cloud applications. This aggregated information is used to categorise overall application security in terms of Assurance Levels and to provide a continuous assurance level evaluation. It gives the service owner an overview of the security of his service, without requiring detailed manual analyses of log files.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115952234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Security Challenges in Cloud Storages
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.171
F. Yahya, V. Chang, R. Walters, G. Wills
As the cloud becomes the tool of choice for more data storage services, the number of service providers has also increased. With these choices, organisations have a wide selection of services available for moving their data to the cloud. However, the responsibility to maintain the security of sensitive data stored therein remains paramount. This paper discusses some of the challenges of securing cloud storage and puts them into context by reviewing the relevant literature. The challenges associated with the three key security aspects (confidentiality, integrity and availability) are discussed, together with the vulnerabilities linked to them. It is important to examine these challenges because cloud storage is not only a matter of technological evolution but also involves security considerations. We aim to provide insights into these security challenges and their solutions to improve cloud storage implementations.
{"title":"Security Challenges in Cloud Storages","authors":"F. Yahya, V. Chang, R. Walters, G. Wills","doi":"10.1109/CloudCom.2014.171","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.171","url":null,"abstract":"As cloud becomes the tool of choice for more data storage services, the number of service providers has also increased. With these choices, organisations have a wide selection of services available to move their data to the cloud. However, the responsibility to maintain the security of sensitive data stored therein remains paramount. This paper will discuss some of the challenges of securing a cloud storage and putting it into context by reviewing relevant literature. The challenges associated with the three important security aspects (confidentiality, integrity and availability) are discussed together with the vulnerabilities linked to them. It is important to look into these challenges as cloud storage is not only about technological evolution but involves security considerations. We aim to provide insights of security challenges and its solutions to enhance cloud storage implementation.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134015092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Saving the Planet with Bin Packing - Experiences Using 2D and 3D Bin Packing of Virtual Machines for Greener Clouds
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.155
Thomas Hage, Kyrre M. Begnum, A. Yazidi
Greener cloud computing has recently become an extremely pertinent research topic in academia and among practitioners. Despite the abundance of state-of-the-art studies that tackle the problem, the vast majority rely solely on simulation and do not report experience from real settings. Thus, theoretical models may overlook practical details that emerge in real-life scenarios. In this paper, we try to bridge this gap in the literature by devising, and also deploying, power-saving algorithms for real-life cloud environments based on variants of 2D/3D bin packing. The algorithms are tested on a large OpenStack deployment in use by staff and students at Oslo and Akershus University College, Norway. We present three different adaptations of 2D and 3D bin packing, incorporating different aspects of the cloud as constraints. Our real-life experimental results show that although all three algorithms reduce power consumption, they distinctly affect the way the cloud has to be managed. A simple bin packing algorithm provides a useful mechanism for reducing power consumption, while the more sophisticated algorithms not only achieve power savings but also minimize the number of migrations.
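The paper's three algorithms are not reproduced in the abstract; as a baseline for what a 2D bin-packing variant looks like, here is a first-fit-decreasing sketch over two resource dimensions (CPU and memory). The demand values are invented for illustration:

```python
# First-fit-decreasing 2D (vector) bin packing of VMs onto servers, a common
# baseline that variants like the paper's build on; the paper's own three
# algorithms and their cloud-specific constraints are not reproduced here.

def pack_vms(vms, capacity):
    """vms: list of (cpu, mem) demands; capacity: (cpu, mem) per server.
    Returns a list of servers, each a list of VM indices."""
    order = sorted(range(len(vms)), key=lambda i: vms[i][0] + vms[i][1], reverse=True)
    servers, residual = [], []
    for i in order:
        cpu, mem = vms[i]
        for s, (rc, rm) in enumerate(residual):
            if cpu <= rc and mem <= rm:          # first server with room in both dims
                servers[s].append(i)
                residual[s] = (rc - cpu, rm - mem)
                break
        else:                                    # no fit anywhere: open a new server
            servers.append([i])
            residual.append((capacity[0] - cpu, capacity[1] - mem))
    return servers

vms = [(2, 4), (4, 2), (1, 1), (3, 3), (2, 2)]
print(pack_vms(vms, capacity=(4, 4)))  # fewer servers in use => less idle power
```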
{"title":"Saving the Planet with Bin Packing - Experiences Using 2D and 3D Bin Packing of Virtual Machines for Greener Clouds","authors":"Thomas Hage, Kyrre M. Begnum, A. Yazidi","doi":"10.1109/CloudCom.2014.155","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.155","url":null,"abstract":"Greener cloud computing has recently become an extremely pertinent research topic in academy and among practitioners. Despite the abundance of the state of the art studies that tackle the problem, the vast majority of them solely rely on simulation, and do not report real settings experience. Thus, the theoretical models might overlook some of the practical details that might emerge in real life scenarios. In this paper, we try to bridge the aforementioned gap in the literature by devising and also deploying algorithms for saving power in real-life cloud environments based on variants of the 2D/3D bin packing algorithms. The algorithms are tested on a large Open Stack deployment in use by staff and students at Oslo and Akers us University College, Norway. We present three different adoptions of 2D and 3D bin packing, incorporating different aspects of the cloud as constraints. Our real-life experimental results show that although the three algorithms yield a decrease in power consumption, they distinctly affect the way the cloud has to be managed. A simple bin packing algorithm provides useful mechanism to reduce power consumption while more sophisticated algorithms do not merely achieve power savings but also minimize the number of migrations.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131265543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bejo: Behavior Based Job Classification for Resource Consumption Prediction in the Cloud
Pub Date: 2014-12-15 | DOI: 10.1109/CLOUDCOM.2014.48
Lin Xu, Jiannong Cao, Yan Wang, Lei Yang, Jing Li
Resource prediction (e.g. CPU/memory utilization) for cloud computing jobs has attracted a substantial amount of attention. Existing works use regression methods based on historical information about jobs, under the impractical assumption that the job to be predicted belongs to the same class as the historical jobs. To address this problem, we propose to take the category of a job into consideration for effective resource prediction. Existing works on job classification either ignore the temporal variance of resource consumption during job execution or use it in a naive way, resulting in unsatisfactory classification accuracy and/or slow speed. In this paper, we introduce a new and efficient job classification approach called Bejo. Inspired by textual document classification methods, which use the distribution of words to describe and classify a document, Bejo treats a job as a document, assigns each collected resource consumption snapshot to a "resource word", and uses the distribution of these words to describe and classify the job. An ℓ1-norm minimization formulation is used to assign each resource snapshot to a resource word, specifically addressing the unique challenges of high noise and the tight time budget of cloud job classification. We collect a comprehensive dataset for job classification and resource consumption prediction on cloud platforms, and demonstrate the superior quality and efficiency of Bejo over state-of-the-art algorithms. Experiments also show that the relative error of resource consumption prediction can be dramatically reduced by adding an extra job classification step to existing regression methods.
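To make the bag-of-resource-words idea concrete, the sketch below maps each snapshot to its nearest vocabulary word and builds a normalized histogram. We substitute nearest-centroid assignment for the paper's ℓ1-minimization step, and the vocabulary is a toy one:

```python
# Simplified sketch of Bejo's bag-of-resource-words idea: each resource
# snapshot (here just cpu, mem) is mapped to its closest "resource word" and
# a job is described by the resulting word histogram. Nearest-centroid
# assignment stands in for the paper's l1-minimization; the vocabulary is toy.

import numpy as np

vocab = np.array([[0.1, 0.1], [0.8, 0.2], [0.2, 0.8], [0.9, 0.9]])  # (cpu, mem) words

def job_signature(snapshots):
    """snapshots: array-like of shape (T, 2); returns a normalized word histogram."""
    snaps = np.asarray(snapshots, dtype=float)
    # distance of every snapshot to every word, then argmin per snapshot
    dists = np.linalg.norm(snaps[:, None, :] - vocab[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

cpu_bound = [[0.85, 0.15], [0.75, 0.25], [0.9, 0.1]]
print(job_signature(cpu_bound))  # mass concentrates on the high-CPU word
```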
{"title":"Bejo: Behavior Based Job Classification for Resource Consumption Prediction in the Cloud","authors":"Lin Xu, Jiannong Cao, Yan Wang, Lei Yang, Jing Li","doi":"10.1109/CLOUDCOM.2014.48","DOIUrl":"https://doi.org/10.1109/CLOUDCOM.2014.48","url":null,"abstract":"Resource prediction (e.g. CPU/memory utilization) of cloud computing jobs has attracted substantial amount of attention. Existing works use regression methods based on historical information of jobs, with an impractical assumption that the job to be predicted has the same class as the historical jobs. To address this problem, we propose to take the category of the jobs into consideration for effective resource prediction. Existing works on job classification either ignores the temporal variance of resource consumption during job execution or use it in a naive way, resulting in unsatisfactory classification accuracy and/or slow speed. In this paper, we introduce a new and efficient job classification approach, called Bejo. Inspired by the textual document classification methods, which use distribution of text words to describe and classify a document, Bejo treats the job as a document, assigns each collected resource consumption snapshot to a certain \"resource word\", and uses the distribution of the words to describe and classify a job. An ℓ1 norm minimization formulation is used to assign each resource snapshot to a resource word, to especially address the unique challenges of high noise and tight time budget of cloud job classification. We collect a comprehensive dataset for job classification and resource consumption prediction on cloud platforms, and demonstrate superior quality and efficiency of Bejo over state-of-the-art algorithms. Experiments also show the relative error of resource consumption prediction can be dramatically reduced by adding an extra job classification step to the existing regression methods.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121361069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Issues of Social Data Analytics with a New Method for Sentiment Analysis of Social Media Data
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.40
Zhaoxia Wang, Victor Joo Chuan Tong, David Chan
Social media data consists of feedback, critiques and other comments posted online by internet users. Collectively, these comments may reflect sentiments that are not always captured by traditional data collection methods such as survey questionnaires. Thus, social media data offers a rich source of information, provided it can be adequately analyzed and understood. In this paper, we survey the extant research literature on sentiment analysis and discuss various limitations of existing analytical methods. A major limitation of the large majority of existing research is its exclusive focus on social media data in the English language. There is a need to close this research gap by developing effective analytic methods and approaches for sentiment analysis of data in non-English languages. These analyses of non-English data should be integrated with the analysis of English data to better understand sentiments and address people-centric issues, particularly in multilingual societies. In addition, developing a high-accuracy method that does not require customized training datasets remains a challenge in current sentiment analysis. To address these limitations, we propose a method that employs a new sentiment analysis scheme. The scheme derives the dominant valence as well as prominent positive and negative emotions by using an adaptive fuzzy inference method (FIM) with linguistic processors to minimize semantic ambiguity, together with multi-source lexicon integration and development. Our proposed method overcomes the limitations of existing methods by not only improving accuracy but also supporting analysis of non-English languages. Several case studies are included to illustrate the application and utility of the proposed method.
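As a rough illustration of fuzzy valence scoring (not the paper's FIM, whose adaptive rules and linguistic processors are far richer), lexicon scores can be fuzzified into negative/neutral/positive memberships and then aggregated. The triangular memberships and the tiny lexicon below are our assumptions:

```python
# Toy sketch of fuzzy valence scoring in the spirit of fuzzy inference:
# per-token lexicon scores are fuzzified into negative/neutral/positive
# memberships and summed. Membership shapes and lexicon are invented here.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

LEXICON = {"good": 0.7, "great": 0.9, "bad": -0.7, "awful": -0.9, "okay": 0.1}

def valence(text):
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not scores:
        return "neutral"
    neg = sum(tri(s, -1.5, -1.0, 0.0) for s in scores)
    neu = sum(tri(s, -0.4, 0.0, 0.4) for s in scores)
    pos = sum(tri(s, 0.0, 1.0, 1.5) for s in scores)
    return max((neg, "negative"), (neu, "neutral"), (pos, "positive"))[1]

print(valence("the service was great and the staff were good"))  # -> positive
```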
{"title":"Issues of Social Data Analytics with a New Method for Sentiment Analysis of Social Media Data","authors":"Zhaoxia Wang, Victor Joo Chuan Tong, David Chan","doi":"10.1109/CloudCom.2014.40","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.40","url":null,"abstract":"Social media data consists of feedback, critiques and other comments that are posted online by internet users. Collectively, these comments may reflect sentiments that are sometimes not captured in traditional data collection methods such as administering a survey questionnaire. Thus, social media data offers a rich source of information, which can be adequately analyzed and understood. In this paper, we survey the extant research literature on sentiment analysis and discuss various limitations of the existing analytical methods. A major limitation in the large majority of existing research is the exclusive focus on social media data in the English language. There is a need to plug this research gap by developing effective analytic methods and approaches for sentiment analysis of data in non-English languages. These analyses of non-English language data should be integrated with the analysis of data in English language to better understand sentiments and address people-centric issues, particularly in multilingual societies. In addition, developing a high accuracy method, in which the customization of training datasets is not required, is also a challenge in current sentiment analysis. To address these various limitations and issues in current research, we propose a method that employs a new sentiment analysis scheme. The new scheme enables us to derive dominant valence as well as prominent positive and negative emotions by using an adaptive fuzzy inference method (FIM) with linguistics processors to minimize semantic ambiguity as well as multi-source lexicon integration and development. Our proposed method overcomes the limitations of the existing methods by not only improving the accuracy of the algorithm but also having the capability to perform analysis on non-English languages. Several case studies are included in this paper to illustrate the application and utility of our proposed method.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123798644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local Resource Shaper for MapReduce
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.55
Peng Lu, Young Choon Lee, V. Gramoli, Luke M. Leslie, Albert Y. Zomaya
Resource capacity is often over-provisioned primarily to deal with short periods of peak load. Shaping these peaks by shifting them into low-utilization periods (valleys) is referred to as "resource consumption shaping". While originally aimed at the data center level, the resource consumption shaping we consider focuses on local resources, such as CPU and I/O, as we have identified that individual jobs also incur load peaks and valleys on these resources. In this paper, we present Local Resource Shaper (LRS), which deliberately limits fairness in resource sharing between co-located MapReduce tasks. LRS enables Hadoop to maximize resource utilization and minimize resource contention independently of job type. Co-located MapReduce tasks are prone to resource contention (i.e., load peaks) due to their similar resource usage patterns, particularly under traditional fair resource sharing. In essence, LRS differentiates co-located tasks through active and passive slots that serve as containers for interchangeable map or reduce tasks. LRS lets an active slot consume as many resources as possible, while a passive slot makes use of any leftover resources. LRS leverages this slot differentiation with its new scheduler, Interleave. Our results show that LRS always outperforms the best static slot configuration under three Hadoop schedulers in terms of both resource utilization and performance.
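A minimal sketch of our reading of the active/passive slot idea follows; the Interleave scheduler's actual placement policy is not detailed in the abstract, so the fill-active-slots-first rule below is an assumption:

```python
# Minimal sketch of the active/passive slot idea behind LRS: each core gets
# one active slot that may consume as much CPU as it can, plus passive slots
# that only soak up leftover cycles (e.g. by running at low priority). The
# real Interleave policy is not reproduced; this ordering is our assumption.

from collections import namedtuple

Slot = namedtuple("Slot", "core kind")  # kind: "active" or "passive"

def build_slots(num_cores, passive_per_core=1):
    slots = []
    for core in range(num_cores):
        slots.append(Slot(core, "active"))
        slots.extend(Slot(core, "passive") for _ in range(passive_per_core))
    return slots

def assign(tasks, slots):
    """Fill all active slots first, then passive ones, so each core mixes one
    unrestricted task with best-effort tasks instead of equal-share peers."""
    ordered = [s for s in slots if s.kind == "active"] + \
              [s for s in slots if s.kind == "passive"]
    return list(zip(tasks, ordered))

for task, slot in assign(["map1", "map2", "reduce1", "map3"], build_slots(2)):
    print(task, "->", slot)
```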
{"title":"Local Resource Shaper for MapReduce","authors":"Peng Lu, Young Choon Lee, V. Gramoli, Luke M. Leslie, Albert Y. Zomaya","doi":"10.1109/CloudCom.2014.55","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.55","url":null,"abstract":"Resource capacity is often over provisioned to primarily deal with short periods of peak load. Shaping these peaks by shifting them to low utilization periods (valleys) is referred to as \"resource consumption shaping\". While originally aimed at the data center level, the resource consumption shaping we consider focuses on local resources, like CPU or I/O as we have identified that individual jobs also incur load peaks and valleys on these resources. In this paper, we present Local Resource Shaper (LRS), which limits fairness in resource sharing between co-located MapReduce tasks. LRS enables Hadoop to maximize resource utilization and minimize resource contention independently of job type. Co-located MapReduce tasks are often prone to resource contention (i.e., Load peak) due to similar resource usage patterns particularly with traditional fair resource sharing. In essence, LRS differentiates co-located tasks through active and passive slots that serve as containers for interchangeable map or reduce tasks. LRS lets an active slot consume as much resources as possible, and a passive slot make use of any unused resources. LRS leverages such slot differentiation with its new scheduler, Interleave. Our results show that LRS always outperforms the best static slot configuration with three Hadoop schedulers in terms of both resource utilization and performance.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123330265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Verifying Secure Information Flow in Federated Clouds
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.104
W. Zeng, M. Koutny, P. Watson
Federated cloud systems increase the reliability and reduce the cost of the computational support available to an organization. However, the resulting combination of secure private clouds and less secure public clouds affects the security requirements of the overall system. Applications therefore need to be placed across different clouds, which strongly affects the information flow security of the entire system. In this paper, the entities of a federated cloud system, as well as the clouds themselves, are assigned security levels from a given security lattice. A dynamic, flow-sensitive security model for a federated cloud system is then proposed, within which the Bell-La Padula rules and a cloud security rule can be captured. As a result, one can track and verify secure information flow in federated clouds. Moreover, an example is used to explain how Petri nets can represent such a system, making it possible to verify secure information flow in federated clouds using existing Petri net techniques.
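For concreteness, the Bell-La Padula rules over a simple totally ordered lattice look as follows; the placement rule (an entity may only be hosted on a cloud of at least its own level) is our guess at what the abstract's "cloud security rule" captures:

```python
# The classic Bell-La Padula checks ("no read up, no write down") over a
# simple totally ordered security lattice, plus an assumed cloud-placement
# rule: an entity may only run on a cloud of equal or higher security level.

LEVELS = {"public": 0, "internal": 1, "secret": 2}

def can_read(subject, obj):       # no read up
    return LEVELS[subject] >= LEVELS[obj]

def can_write(subject, obj):      # no write down
    return LEVELS[subject] <= LEVELS[obj]

def can_host(cloud, entity):      # placement rule assumed here, not the paper's
    return LEVELS[cloud] >= LEVELS[entity]

print(can_read("secret", "public"))    # True: reading down is allowed
print(can_write("secret", "public"))   # False: writing down would leak
print(can_host("public", "secret"))    # False: secret data on a public cloud
```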
{"title":"Verifying Secure Information Flow in Federated Clouds","authors":"W. Zeng, M. Koutny, P. Watson","doi":"10.1109/CloudCom.2014.104","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.104","url":null,"abstract":"Federated cloud systems increase the reliability and reduce the cost of computational support to an organization. However, the resulting combination of secure private clouds and less secure public clouds impacts on the security requirements of the system. Therefore, applications need to be located within different clouds, which strongly affects the information flow security of the entire system. In this paper, the entities of a federated cloud system as well as the clouds are assigned security levels of a given security lattice. Then a dynamic flow sensitive security model for a federated cloud system is proposed within which the Bell-La Padula rules and cloud security rule can be captured. As a result, one can track and verify the security information flow in federated clouds. Moreover, an example is used to explain how Petri nets could be used to represent such a system, making it possible to verify secure information flow in federated clouds using the existing Petri net techniques.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122714080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Framework for Measuring the Impact and Effectiveness of the NEES Cyberinfrastructure for Earthquake Engineering
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.59
T. Hacker, Alejandra J. Magana
Many cyberinfrastructure and cloud computing systems have been developed and deployed over the past decade. Although usage metrics are collected by many of these systems, there is no clear link between these metrics and the ultimate effectiveness and impact of the systems on science communities. This paper describes a framework we developed that seeks to provide context for usage and impact metrics, to facilitate understanding of how these systems are used and ultimately adopted by science and engineering communities. We use this framework to present metrics of use, impact, and effectiveness collected from the NEES cyberinfrastructure.
{"title":"A Framework for Measuring the Impact and Effectiveness of the NEES Cyberinfrastructure for Earthquake Engineering","authors":"T. Hacker, Alejandra J. Magana","doi":"10.1109/CloudCom.2014.59","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.59","url":null,"abstract":"Many cyber infrastructure and cloud computing systems have been developed and deployed over the past decade. Although use metrics are collected by many of these systems, there is not a clear link from these metrics to the ultimate effectiveness and impact of these systems on science communities. This paper describes a framework we developed that seeks to provide context for use and impact metrics to facilitate understanding of how these systems are used and ultimately adopted by science and engineering communities. We use this framework to present metrics of use, impact, and effectiveness collected from the NEES cyber infrastructure.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124763023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}