STOVE: Strict, Observable, Verifiable Data and Execution Models for Untrusted Applications
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.116
Jiaqi Tan, R. Gandhi, P. Narasimhan
The massive growth in mobile devices is likely to give rise to the leasing out of compute and data resources on mobile devices to third parties, enabling applications to be run across multiple mobile devices. However, users who lease out their mobile devices need to run applications from unknown third parties, and these untrusted applications may harm their devices or access unauthorized personal data. We propose STOVE, a data and execution model for structuring untrusted applications to be secure by construction, achieving strict and verifiable execution isolation and observable access control for data. STOVE uses formal logic to verify that untrusted code meets isolation properties which imply that hosts running the code cannot be harmed, and that untrusted code cannot directly access host data. STOVE performs all data accesses on behalf of untrusted code, allowing all access control decisions to be reliably performed in one place. Thus, users can run untrusted applications structured using the STOVE model on their systems with strong guarantees, based on formal proofs, that these applications will neither harm their systems nor access unauthorized data.
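The abstract contains no code; as a rough illustration of the mediated data-access idea it describes (all accesses performed on behalf of untrusted code, so every access-control decision happens in one place), here is a minimal Python sketch. The DataBroker class, its policy table, and the run_untrusted helper are hypothetical illustrations, not part of STOVE.

```python
# Minimal sketch of mediated data access: untrusted code never touches host
# data directly; every read goes through one broker that checks a policy.
# All names here (DataBroker, policy, run_untrusted) are illustrative only.

class AccessDenied(Exception):
    pass

class DataBroker:
    def __init__(self, store, policy):
        self.store = store      # host-side data, e.g. {"contacts": [...]}
        self.policy = policy    # app_id -> set of readable keys

    def read(self, app_id, key):
        # Single place where every access-control decision is made.
        if key not in self.policy.get(app_id, set()):
            raise AccessDenied(f"{app_id} may not read {key!r}")
        return self.store[key]

def run_untrusted(app_id, app_fn, broker):
    # The untrusted function only receives a narrow read callback,
    # never a reference to the host data itself.
    return app_fn(lambda key: broker.read(app_id, key))

if __name__ == "__main__":
    broker = DataBroker(store={"photos": ["a.jpg"], "contacts": ["alice"]},
                        policy={"app-42": {"photos"}})
    print(run_untrusted("app-42", lambda read: read("photos"), broker))
    try:
        run_untrusted("app-42", lambda read: read("contacts"), broker)
    except AccessDenied as e:
        print("blocked:", e)
```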
{"title":"STOVE: Strict, Observable, Verifiable Data and Execution Models for Untrusted Applications","authors":"Jiaqi Tan, R. Gandhi, P. Narasimhan","doi":"10.1109/CloudCom.2014.116","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.116","url":null,"abstract":"The massive growth in mobile devices is likely to give rise to the leasing out of compute and data resources on mobile devices to third-parties to enable applications to be run across multiple mobile devices. However, users who lease their mobile devices out need to run applications from unknown third-parties, and these untrusted applications may harm their devices or access unauthorized personal data. We propose STOVE, a data and execution model for structuring untrusted applications to be secure by construction, to achieve strict and verifiable execution isolation, and observable access control for data. STOVE uses formal logic to verify that untrusted code meets isolation properties which imply that hosts running the code cannot be harmed, and that untrusted code cannot directly access host data. STOVE performs all data accesses on behalf of untrusted code, allowing all access control decisions to be reliably performed in one place. Thus, users can run untrusted applications structured using the STOVE model on their systems, with strong guarantees, based on formal proofs, that these applications will not harm their system nor access unauthorized data.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126620647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Role of System Modeling for Audit of QoS Provisioning in Cloud Services
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.177
K. Ravindran
Given a cloud-based realization of a distributed system S, QoS auditing enables risk analysis and accounting of SLA violations under the various security threats and resource depletion faced by S. The problem of QoS failures and security infringements arises from third-party control of the underlying cloud resources and components. A major issue here is reasoning about how well the system's internal mechanisms are engineered to offer a required level of service to the application. We employ computational models of S to determine the optimal feasible output trajectory and verify how close the actual behavior of S is to this trajectory. The less-than-100% trust between the various sub-systems of S necessitates our model-based analysis of the service behavior vis-a-vis the SLA negotiated with S. The paper describes modeling techniques to analyze the dependability of such a cloud-based system.
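As a loose illustration of the model-based auditing idea (compare the observed behavior of S against a model-derived reference trajectory and account for SLA violations), here is a small Python sketch; the linear reference model, the tolerance parameter, and the metric names are assumptions for illustration, not taken from the paper.

```python
# Sketch: audit observed QoS samples against a model-predicted optimal
# trajectory and record SLA violations. The linear reference model and the
# 10% tolerance are illustrative assumptions only.

def reference_throughput(t, capacity=100.0, ramp=10.0):
    # Hypothetical model of the best feasible output of S at time t.
    return min(capacity, ramp * t)

def audit(observed, tolerance=0.10):
    """observed: list of (t, measured_throughput) samples."""
    violations = []
    for t, measured in observed:
        expected = reference_throughput(t)
        if measured < (1.0 - tolerance) * expected:
            violations.append((t, measured, expected))
    return violations

if __name__ == "__main__":
    samples = [(1, 9.5), (2, 14.0), (3, 29.1), (12, 70.0)]
    for t, got, want in audit(samples):
        print(f"t={t}: measured {got} below model trajectory {want}")
```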
{"title":"Role of System Modeling for Audit of QoS Provisioning in Cloud Services","authors":"K. Ravindran","doi":"10.1109/CloudCom.2014.177","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.177","url":null,"abstract":"Given cloud-based realization of a distributed system S, QoS auditing enables risk analysis and accounting of SLA violations under various security threats and resource depletion faced by S. The problem of QoS failures and security infringements arises due to third-party control of the underlying cloud resources and components. Here, a major issue is to reason about how well the system internal mechanisms are engineered to offer a required level of service to the application. We employ computational models of S to determine the optimal feasible output trajectory and verify how close is the actual behavior of S to this trajectory. The less-than-100% trust between the various sub-systems of S necessitates our model-based analysis of the service behavior vis-a-vis the SLA negotiated with S. The paper describes the modeling techniques to analyze the dependability of such a cloud-based system.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114469519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dependability Analysis on OpenStack IaaS Cloud: Bug Analysis and Fault Injection
Pub Date: 2014-12-15 | DOI: 10.1109/CLOUDCOM.2014.10
Yuan Xiaoyong, Li Ying, Wu Zhonghai, Liu Tiancheng
This paper presents a comparative study of two methods for assessing cloud dependability -- bug analysis and fault injection -- with respect to the impact of component failures on cloud service availability. We focus on an IaaS cloud built on the open-source platform OpenStack. Actual bug data are analyzed to give numerical examples of dependability assessment. A fault injection tool has also been developed to create component failures and then observe their effects on services. The comparison between the two methods shows that bug analysis offers richer features for analysis but is not as precise as fault injection.
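The authors' fault-injection tool is not described in detail in this abstract; the following Python sketch only illustrates the general pattern of injecting a component failure and observing its effect on service availability. The component names and the probe_service function are hypothetical placeholders, not the authors' tool or real OpenStack service handles.

```python
# Illustrative fault-injection loop: stop one component at a time and probe
# whether the service still responds. All names are placeholders.
import random

COMPONENTS = ["api", "scheduler", "compute-agent", "database"]

def inject_failure(component):
    print(f"[inject] stopping {component}")    # placeholder for a real kill/stop

def restore(component):
    print(f"[restore] restarting {component}")  # placeholder for a real restart

def probe_service():
    # Placeholder availability probe; a real tool would issue API requests.
    return random.random() > 0.3

def campaign(trials_per_component=5):
    results = {}
    for comp in COMPONENTS:
        ok = 0
        for _ in range(trials_per_component):
            inject_failure(comp)
            ok += probe_service()
            restore(comp)
        results[comp] = ok / trials_per_component
    return results

if __name__ == "__main__":
    for comp, availability in campaign().items():
        print(f"{comp}: observed availability {availability:.0%}")
```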
{"title":"Dependability Analysis on Open Stack IaaS Cloud: Bug Anaysis and Fault Injection","authors":"Yuan Xiaoyong, Li Ying, Wu Zhonghai, Liu Tiancheng","doi":"10.1109/CLOUDCOM.2014.10","DOIUrl":"https://doi.org/10.1109/CLOUDCOM.2014.10","url":null,"abstract":"This paper proposes a comparative study of cloud dependability between two methods -- bug analysis and fault injection for assessing the impact of component failure on cloud service availability. We focus on the IaaS cloud with open source platform Open Stack. The actual bug data are analyzed to show numerical examples of dependability assessment. A fault injection tool has also been developed to create failures of components and then observe their effects on services. The comparison analysis between two methods shows that bug analysis method has richer features for analyzing but not as precise as fault injection.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"206 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114575941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demonstration of Self-Described Buffer for Accelerating Packet Forwarding on Multi-core Servers
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.74
Lu Tang, Zhigang Sun, Tao Li, Biao Han, Gaofeng Lv, W. Shi, Hui Yang
Network processing platforms based on multi-core CPUs are becoming increasingly prevalent. Buffer allocation/deallocation operations consume a large number of CPU cycles in the packet I/O process. The problem becomes even worse in packet forwarding scenarios, where buffer allocation/deallocation operations are more frequent than in host-based network applications. We thus propose a novel data structure for packet buffer management on multi-core platforms, named Self-Described Buffer (SDB), which merges the separate descriptor and metadata into the packet buffer. SDB management overhead can be greatly reduced by using this compact data structure, and zero-overhead buffer management can be further achieved by offloading SDB allocation/deallocation operations to the NIC. We have prototyped an SDB-enabled NIC, named BcNIC, on NetFPGA-10G. In the demo, we illustrate the advantages of the SDB scheme by comparing the performance of BcNIC with a traditional NIC on multi-core platforms.
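The abstract gives no structure definition; the Python ctypes sketch below merely illustrates the general idea of co-locating the descriptor and metadata with the packet data in one contiguous buffer, so that no separate descriptor object has to be allocated or freed. The field names and sizes are assumptions, not the actual SDB layout.

```python
# Sketch of a "self-described" packet buffer: descriptor and metadata sit in a
# fixed-size header at the front of the same contiguous buffer as the packet
# data. Field names/sizes are illustrative assumptions, not the real SDB layout.
import ctypes

BUF_SIZE = 2048  # total buffer size, header included

class SDBHeader(ctypes.Structure):
    _fields_ = [
        ("buf_len",  ctypes.c_uint16),   # capacity of the data area
        ("pkt_len",  ctypes.c_uint16),   # length of the packet currently stored
        ("port_in",  ctypes.c_uint8),    # metadata: ingress port
        ("port_out", ctypes.c_uint8),    # metadata: egress port
        ("flags",    ctypes.c_uint16),
    ]

DATA_OFFSET = ctypes.sizeof(SDBHeader)

def write_packet(buf, payload, port_in):
    hdr = SDBHeader.from_buffer(buf)
    hdr.buf_len = BUF_SIZE - DATA_OFFSET
    hdr.pkt_len = len(payload)
    hdr.port_in = port_in
    buf[DATA_OFFSET:DATA_OFFSET + len(payload)] = payload

def read_packet(buf):
    hdr = SDBHeader.from_buffer(buf)
    return bytes(buf[DATA_OFFSET:DATA_OFFSET + hdr.pkt_len])

if __name__ == "__main__":
    buf = bytearray(BUF_SIZE)
    write_packet(buf, b"\x45\x00hello", port_in=1)
    print(read_packet(buf))
```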
{"title":"Demostration of Self-Described Buffer for Accelerating Packet Forwarding on Multi-core Servers","authors":"Lu Tang, Zhigang Sun, Tao Li, Biao Han, Gaofeng Lv, W. Shi, Hui Yang","doi":"10.1109/CloudCom.2014.74","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.74","url":null,"abstract":"Network processing platform based on the multi-core CPU becomes more and more prevailing in nowadays. Buffer allocation/deallocation operations consume a large number of CPU cycles in packet I/O process. The problem becomes even worse in the scenario of packet forwarding, as buffer allocation/deallocation operations are more frequent than the host-based network applications. We thus propose a novel data structure for packet buffer management on multi-cores, named Self-Described Buffer (SDB), which merges the separated descriptor and metadata into packet buffer. SDB management overhead can be greatly reduced by utilizing the compact data structure, and zero-overhead buffer management can be further achieved by offloading SDB allocation/deallocation operations to NIC. We have prototyped SDB enabled NIC, named BcNIC, on NetFPGA-10G. In the demo, we will illustrate the advantages of the SDB scheme by comparing the performance of BcNIC with the traditional NIC on multi-core platforms.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122122844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gate Cloud: An Integration of Gate Monte Carlo Simulation with a Cloud Computing Environment
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.124
B. Rowedder, Hui Wang, Y. Kuang
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform that provides a single code library for simulating specific medical physics applications. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time and significant computational overhead. By accessing the much more powerful computational resources of a cloud computing environment, GATE's run time can be reduced to clinically feasible levels without the sizable investment of a local high-performance cluster. This study investigated reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. A Monte Carlo cloud computing framework for medical physics applications, Gate Cloud, is proposed. Amazon's Elastic Compute Cloud (EC2) was used to launch several nodes equipped with GATE V6.1. The positron emission tomography (PET) benchmark included in the GATE software was repeated for cluster sizes between 1 and 100 nodes in order to establish the ideal cluster size in terms of cost and time efficiency. The study shows that increasing the number of nodes in the cluster decreases the calculation time according to an inverse power model. Simulation results were not affected by the cluster size, indicating that the integrity of a calculation is preserved in a cloud computing environment. As high-performance computing continues to fall in price and become more accessible, implementing Gate Cloud for clinical applications will become increasingly attractive.
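The abstract reports that run time versus node count follows an inverse power model; a minimal sketch of fitting such a model, T(n) = a * n^(-b), is shown below. The sample timings are invented for illustration and numpy/scipy are assumed to be available; the study's actual coefficients are not reproduced here.

```python
# Fit an inverse power model T(n) = a * n**(-b) to run time vs. cluster size.
# The timing numbers below are invented; only the model form comes from the abstract.
import numpy as np
from scipy.optimize import curve_fit

def inverse_power(n, a, b):
    return a * n ** (-b)

nodes = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
runtime_hours = np.array([40.0, 21.0, 8.7, 4.6, 2.5, 1.1, 0.65])  # made-up data

(a, b), _ = curve_fit(inverse_power, nodes, runtime_hours, p0=(40.0, 1.0))
print(f"T(n) ~ {a:.1f} * n^(-{b:.2f})")
print("predicted T(30 nodes) ~", round(inverse_power(30, a, b), 2), "hours")
```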
{"title":"Gate Cloud: An Integration of Gate Monte Carlo Simulation with a Cloud Computing Environment","authors":"B. Rowedder, Hui Wang, Y. Kuang","doi":"10.1109/CloudCom.2014.124","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.124","url":null,"abstract":"The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time and significant computational overhead. By accessing the much more powerful computational resources of a cloud computing environment, GATE's run time can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulation using a commercial cloud computing services. A Monte Carlo cloud computing framework, Gate Cloud, for medical physics applications was proposed. Amazon's Elastic Compute Cloud (EC2) was used to launch several nodes equipped with GATE V6.1. The Positron emission tomography (PET) Benchmark included in the GATE software was repeated for various cluster sizes between 1 and 100 nodes in order to establish the ideal cluster size in terms of cost and time efficiency. The study shows that increasing the number of nodes in the cluster resulted in a decrease in calculation time that could be expressed with an inverse power model. Simulation results were not affected by the cluster size, indicating that integrity of a calculation is preserved in a cloud computing environment. With high power computing continuing to lower in price and accessibility, implementing Gate Cloud for clinical applications will continue to become more attractive.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"685 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115116335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Performance of Mobile Interactive Data-Streaming Applications with Multiple Cloudlets
Pub Date: 2014-12-15 | DOI: 10.1109/CLOUDCOM.2014.122
Weiqing Liu, Jiannong Cao, Xuanjia Qiu, Jing Li
Improving the performance of a mobile application by offloading its computation onto a cloudlet has become a prevalent paradigm. Among mobile applications, the category of interactive data-streaming applications is emerging but has not yet received sufficient attention. During computation offloading, the performance of this category of applications (including response time and throughput) depends on the network latency and bandwidth between the mobile device and the cloudlet. Although a single cloudlet can provide satisfactory network latency, its bandwidth is always the bottleneck for throughput. To address this issue, we propose using multiple cloudlets for computation offloading so as to alleviate the bandwidth bottleneck. In addition, we propose using multiple module instances to complete a module, enabling more fine-grained computation partitioning, since data processing in many modules of data-streaming applications can be highly parallelized. Specifically, we first apply a fine-grained data-flow model to characterize mobile interactive data-streaming applications. We then build a unified optimization framework that maximizes the overall utility of all mobile users, and design an efficient heuristic for the optimization problem that trades off throughput against energy consumption at each mobile device. Finally, we verify our algorithm with extensive simulations. The results show that the overall utility achieved by our heuristic is close to the precise optimum, and that our multiple-cloudlet mechanism significantly outperforms the single-cloudlet mechanism.
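The paper's exact optimization formulation and heuristic are not reproduced here; the Python sketch below only conveys the flavor of the core idea: splitting an offloaded module's input stream across several cloudlets in proportion to the bandwidth each one offers, so that aggregate bandwidth, rather than a single link, bounds throughput. The proportional-split rule and all numbers are assumptions.

```python
# Sketch: split an offloaded module's input stream across multiple cloudlets in
# proportion to per-cloudlet bandwidth. Proportional splitting and the numbers
# below are illustrative assumptions, not the paper's heuristic.

def split_stream(cloudlet_bandwidth_mbps, stream_rate_mbps):
    total = sum(cloudlet_bandwidth_mbps.values())
    shipped = min(stream_rate_mbps, total)   # cannot ship more than the links allow
    shares = {c: shipped * bw / total for c, bw in cloudlet_bandwidth_mbps.items()}
    return shares, shipped

if __name__ == "__main__":
    bandwidth = {"cloudlet-A": 20.0, "cloudlet-B": 35.0, "cloudlet-C": 15.0}
    shares, achieved = split_stream(bandwidth, stream_rate_mbps=60.0)
    for cloudlet, mbps in shares.items():
        print(f"{cloudlet}: {mbps:.1f} Mbps")
    best_single = max(bandwidth.values())
    print(f"achieved {achieved:.1f} Mbps vs {best_single:.1f} Mbps via the best single cloudlet")
```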
{"title":"Improving Performance of Mobile Interactive Data-Streaming Applications with Multiple Cloudlets","authors":"Weiqing Liu, Jiannong Cao, Xuanjia Qiu, Jing Li","doi":"10.1109/CLOUDCOM.2014.122","DOIUrl":"https://doi.org/10.1109/CLOUDCOM.2014.122","url":null,"abstract":"Improving performance of a mobile application by offloading its computation onto a cloudlet has become a prevalent paradigm. Among mobile applications, the category of interactive data-streaming applications is emerging while having not yet received sufficient attention. During computation offloading, the performance of this category of applications (including response time and throughput) depends on network latency and bandwidth between the mobile device and the cloudlet. Although a single cloudlet can provide satisfactory network latency, the bandwidth is always the bottleneck of the throughput. To address this issue, we propose to use multiple cloudlets for computation offloading so as to alleviate the bandwidth bottleneck. In addition, we propose to use multiple module instances to complete a module, enabling more fine-grained computation partitioning, since data processing in many modules of data-streaming applications could be highly parallelized. Specifically, at first we apply a fine-grained data-flow model to characterize mobile interactive data-streaming applications. Then we build a unified optimization framework that achieves maximization of the overall utilities of all mobile users, and design an efficient heuristic for the optimization problem, which is able to make trade-off between throughput and energy consumption at each mobile device. At the end we verify our algorithm with extensive simulation. The results show that the overall utility achieved by our heuristic is close to the precise optimum, and our multiple-cloudlet mechanism significantly outperforms the single-cloudlet mechanism.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115547079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OMTiR: Open Market for Trading Idle Cloud Resources
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.102
M. Karakus, Zengxiang Li, Wentong Cai, T. Duong
Although cloud computing is a thriving technology trend in both industry and academia, resource renting cost remains the main obstacle for users to switch to the cloud. Existing pricing models are not flexible enough for users: the on-demand pricing model does not guarantee resource availability, while the reserved pricing model carries a high risk of resource wasting. In this paper, we propose OMTiR, an Open Market for Trading Idle Cloud Resources, which enables users to sell their unused or underutilized resources at negotiable prices. Consequently, users, whether as resource sellers or buyers, can reduce their resource renting cost. In addition, the cloud provider can increase revenue by taking arbitrage profit in the market and serving more users with the same amount of resources. A comparative study using a real-world workload trace shows the advantages of the open market model over existing pricing models in terms of resource utilization rate and task waiting time.
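OMTiR's actual market mechanism is not specified in this abstract; the sketch below shows a generic double-auction-style matching of sell offers (idle reserved resources) against buy bids, with the provider taking a cut as arbitrage. The fee rate, data structures, and matching rule are all assumptions made for illustration.

```python
# Generic matching of idle-resource sell offers against buy bids, with the
# provider taking an arbitrage cut on each trade. Matching rule and fee rate
# are illustrative assumptions, not OMTiR's actual mechanism.

def match(sell_offers, buy_bids, provider_fee=0.05):
    """sell_offers / buy_bids: lists of (price_per_hour, hours)."""
    sells = sorted(sell_offers)                # cheapest sellers first
    buys = sorted(buy_bids, reverse=True)      # highest bidders first
    trades, provider_revenue = [], 0.0
    while sells and buys and buys[0][0] >= sells[0][0]:
        bid_price, bid_hours = buys[0]
        ask_price, ask_hours = sells[0]
        hours = min(bid_hours, ask_hours)
        clearing = (bid_price + ask_price) / 2  # split the surplus
        provider_revenue += provider_fee * clearing * hours
        trades.append((hours, clearing))
        # shrink or retire the matched orders
        buys[0] = (bid_price, bid_hours - hours)
        sells[0] = (ask_price, ask_hours - hours)
        if buys[0][1] == 0: buys.pop(0)
        if sells[0][1] == 0: sells.pop(0)
    return trades, provider_revenue

if __name__ == "__main__":
    trades, fee = match(sell_offers=[(0.06, 10), (0.09, 5)],
                        buy_bids=[(0.12, 8), (0.07, 6)])
    print("trades (hours, $/hour):", trades)
    print("provider arbitrage revenue: $%.3f" % fee)
```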
{"title":"OMTiR: Open Market for Trading Idle Cloud Resources","authors":"M. Karakus, Zengxiang Li, Wentong Cai, T. Duong","doi":"10.1109/CloudCom.2014.102","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.102","url":null,"abstract":"Although cloud computing is a thriving technology trend in industry and academy, the resource renting cost is still the main obstacle for users to switch to cloud. The existing pricing models are not flexible enough for users. On-demand pricing model does not guarantee resource availability, while reserved pricing model may result in high risk of resource wasting. In this paper, we propose OMTiR: An Open Market for Trading Idle Cloud Resources, enabling users to sell their unused or underutilized resources on negotiable prices. Consequently, users, either as a resource seller or buyer, can reduce the resource renting cost. In addition, the cloud provider can increase revenue by taking arbitrage profit in the market and serving more users using the same amount of resource. A comparative study is conducted using a real world workload trace to show the advantages of the open market model over the existing price models in terms of resource utilization rate and task waiting time.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127672489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Multidimension Metadata Index and Search System for Cloud Data
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.88
Yang Yu, Yongqing Zhu, W. Ng, J. Samsudin
The ever-increasing amounts of digital data being stored in public and private clouds are challenging users' ability to access and manage the data. As the corresponding storage systems reach petabyte or even exabyte scale, metadata access will become a severe performance bottleneck. Hence, this paper proposes an efficient multi-dimensional metadata index and search solution for cloud data. By introducing new mechanisms for K-D-B-tree-based indexing and search and implementing an index partitioning technique, our system achieves optimized performance in terms of memory utilization and search speed. Experiments show that our system performs much better than existing solutions. In addition, our system can safely scale out in a distributed manner with guaranteed performance.
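The paper's K-D-B-tree mechanisms are not reproduced here; as a much simpler stand-in, the sketch below shows the shape of a partitioned multi-dimensional range query over file metadata (hash records into partitions, then filter each partition on several attributes and merge). The attribute names and the hash partitioner are assumptions, and this is not a K-D-B tree implementation.

```python
# Simplified stand-in for partitioned multi-dimensional metadata search:
# records are hashed into partitions, and a range query on several attributes
# is answered per partition and merged. Attribute names are assumptions.
from collections import defaultdict

NUM_PARTITIONS = 4

def partition_of(path):
    return hash(path) % NUM_PARTITIONS

def build_index(records):
    """records: list of dicts with 'path', 'size', 'mtime', 'owner_id'."""
    partitions = defaultdict(list)
    for rec in records:
        partitions[partition_of(rec["path"])].append(rec)
    return partitions

def range_query(partitions, size_range, mtime_range):
    lo_s, hi_s = size_range
    lo_t, hi_t = mtime_range
    hits = []
    for part in partitions.values():           # each partition is independent
        hits.extend(r for r in part
                    if lo_s <= r["size"] <= hi_s and lo_t <= r["mtime"] <= hi_t)
    return hits

if __name__ == "__main__":
    recs = [{"path": f"/data/f{i}", "size": i * 10, "mtime": 1000 + i, "owner_id": 7}
            for i in range(100)]
    idx = build_index(recs)
    print(len(range_query(idx, size_range=(100, 300), mtime_range=(1005, 1035))))
```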
{"title":"An Efficient Multidimension Metadata Index and Search System for Cloud Data","authors":"Yang Yu, Yongqing Zhu, W. Ng, J. Samsudin","doi":"10.1109/CloudCom.2014.88","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.88","url":null,"abstract":"The ever increasing amounts of digital data being stored in public and private clouds are challenging users to access and manage the data. With the corresponding storage system reaches Petabyte-scale, or even Exabyte-scale, metadata access will become a severe performance bottleneck. Hence, this paper proposes an efficient multi-dimensional metadata index and search solution for cloud data. By proposing some new mechanism for K-D-B tree based index/search and implementing index partitioning technique, our system can achieve optimized performance in terms of memory utilization and search speed. Experiments show that our system performs much better as compared with existing solutions. In addition, our system can safely scale out in a distributed manner with guaranteed performance.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127851752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy-Efficient Scheduling of Urgent Bag-of-Tasks Applications in Clouds through DVFS
Pub Date: 2014-12-15 | DOI: 10.1109/CloudCom.2014.20
R. Calheiros, R. Buyya
The broad adoption of cloud services has led to an increasing concentration of servers in a few data centers. Reports estimate the energy consumption of these data centers to be between 1.1% and 1.5% of worldwide electricity consumption. This extensive energy consumption results in massive CO2 emissions, as a significant number of data centers are backed by "brown" power plants. While most researchers have focused on reducing the energy consumption of cloud data centers via server consolidation, we propose an approach for reducing the power required to execute urgent, CPU-intensive Bag-of-Tasks applications on cloud infrastructures. It exploits intelligent scheduling combined with the Dynamic Voltage and Frequency Scaling (DVFS) capability of modern CPUs to keep the CPU operating at the minimum voltage level (and consequently minimum frequency and power consumption) that enables the application to complete before a user-defined deadline. Experiments demonstrate that our approach reduces energy consumption without requiring virtual machines to have knowledge of their underlying physical infrastructure, an assumption made in previous works.
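A minimal sketch of the frequency-selection arithmetic the abstract describes: given a task's estimated cycle count and a deadline, pick the lowest available frequency (and hence lowest voltage and power) that still finishes in time. The frequency list and cycle counts are invented for illustration; the paper's actual scheduler is more involved than this.

```python
# Pick the lowest CPU frequency that still meets the deadline:
# execution_time = cycles / frequency, so choose the smallest f with
# cycles / f <= deadline. Frequencies and cycle counts below are invented.

AVAILABLE_FREQS_GHZ = [1.0, 1.4, 1.8, 2.2, 2.6]   # assumed DVFS levels

def lowest_feasible_freq(task_cycles, deadline_s):
    for f in sorted(AVAILABLE_FREQS_GHZ):          # try lowest frequency first
        if task_cycles / (f * 1e9) <= deadline_s:
            return f
    return None                                    # deadline cannot be met

if __name__ == "__main__":
    cycles = 5.4e12          # roughly a CPU-bound bag-of-tasks item
    deadline = 3600.0        # one hour
    f = lowest_feasible_freq(cycles, deadline)
    print(f"run at {f} GHz -> finishes in {cycles / (f * 1e9) / 60:.1f} minutes")
```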
{"title":"Energy-Efficient Scheduling of Urgent Bag-of-Tasks Applications in Clouds through DVFS","authors":"R. Calheiros, R. Buyya","doi":"10.1109/CloudCom.2014.20","DOIUrl":"https://doi.org/10.1109/CloudCom.2014.20","url":null,"abstract":"The broad adoption of cloud services led to an increasing concentration of servers in a few data centers. Reports estimate the energy consumptions of these data centers to be between 1.1% and 1.5% of the worldwide electricity consumption. This extensive energy consumption precludes massive CO2 emissions, as a significant number of data centers are backed by \"brown\" power plants. While most researchers have focused on reducing energy consumption of cloud data centers via server consolidation, we propose an approach for reducing the power required to execute urgent, CPU-intensive Bag-of-Tasks applications on cloud infrastructures. It exploits intelligent scheduling combined with the Dynamic Voltage and Frequency Scaling (DVFS) capability of modern CPU processors to keep the CPU operating at the minimum voltage level (and consequently minimum frequency and power consumption) that enables the application to complete before a user-defined deadline. Experiments demonstrate that our approach reduces energy consumption with the extra feature of not requiring virtual machines to have knowledge about its underlying physical infrastructure, which is an assumption of previous works.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124608530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experience of Profiling Curricula on Cloud Computing Technologies and Engineering for Different Target Groups
Pub Date: 2014-12-15 | DOI: 10.1109/CLOUDCOM.2014.160
Y. Demchenko, A. Belloum, D. Bernstein, C. D. Laat
This paper presents results and experience gained by the authors from several Cloud Computing courses delivered to different target groups of students, specialists, and trainees. The courses implement the instructional methodology proposed by the authors, which integrates two major concepts of effective learning: Bloom's Taxonomy of cognitive learning processes and Andragogy as the adult learning methodology. The central part of the proposed approach is the Common Body of Knowledge in Cloud Computing (CBK-CC), which defines the professional level of knowledge in the selected domain and allows consistent curricula structuring and profiling. The paper presents the structure of the courses and explains the principles used for developing the course materials, such as Bloom's Taxonomy applied to technical education and the andragogy instructional model for professional education and training. The developed courses are based on a well-defined Cloud Computing architecture, service and operational model, and stakeholder roles/responsibilities. The paper provides a short description of the developed education and training courses on Cloud Computing, illustrating how the proposed CBK-CC and instructional methodologies are used in different learning environments and for different learner groups.
{"title":"Experience of Profiling Curricula on Cloud Computing Technologies and Engineering for Different Target Groups","authors":"Y. Demchenko, A. Belloum, D. Bernstein, C. D. Laat","doi":"10.1109/CLOUDCOM.2014.160","DOIUrl":"https://doi.org/10.1109/CLOUDCOM.2014.160","url":null,"abstract":"This paper presents results and experience by the authors based on the few delivered courses on Cloud Computing for different target groups of students, specialists and trainees. The developed courses implement the proposed by the authors instructional methodology integrating the two major concepts of effective learning: the Bloom's Taxonomy of cognitive learning processes and Andragogy as the adult learning methodology. The central part of the proposed approach is the Common Body of Knowledge in Cloud Computing (CBK-CC) that defines the professional level of knowledge in the selected domain and allows consistent curricula structuring and profiling. The paper presents the structure of the courses and explains the principles used for developing course materials, such as Bloom's Taxonomy applied for technical education, and andragogy instructional model for professional education and training. The developed courses are based on the well-defined Cloud Computing architecture, service and operational model, and stakeholder roles/responsibilities. The paper provides a short description of the developed education and training courses on Cloud Computing that illustrate how the proposed CBK-CC and instructional methodologies are used in different learning environments and for different learners' groups.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117063111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}