Pub Date: 2011-02-16  DOI: 10.1109/GRID.2010.5697986
"On allocation policies for power and performance"
D. Dyachuk, M. Mazzucco
With the increasing popularity of Internet-based services and applications, power efficiency is becoming a major concern for data center operators: high electricity consumption not only increases greenhouse gas emissions, but also drives up the cost of running the server farm itself. In this paper we address the problem of maximizing the revenue of a service provider by means of dynamic allocation policies that run the minimum number of servers necessary to meet users' performance requirements. We describe the results of several experiments driven by Wikipedia traces, showing that the proposed schemes work well even when the workload is non-stationary. Since any resource allocation policy requires a forecasting mechanism, several schemes for compensating errors in the load forecasts are presented and evaluated.
2010 11th IEEE/ACM International Conference on Grid Computing, pp. 313-320.
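A minimal sketch of the kind of allocation rule the abstract describes: run only as many servers as the forecast load requires, and pad the forecast to absorb prediction error. All function names, rates, and the utilization target below are illustrative assumptions, not values from the paper.

```python
import math

def servers_needed(arrival_rate, service_rate, target_utilization=0.7):
    """Smallest number of servers keeping utilization below the target.

    arrival_rate: forecast load (requests/s); service_rate: per-server
    capacity (requests/s). All parameters are hypothetical.
    """
    if arrival_rate <= 0:
        return 0
    return math.ceil(arrival_rate / (service_rate * target_utilization))

def servers_with_headroom(arrival_rate, service_rate, margin=0.2):
    """One simple way to compensate for load-forecast errors:
    pad the forecast by a safety margin before sizing the farm."""
    return servers_needed(arrival_rate * (1 + margin), service_rate)
```

For example, a forecast of 100 req/s on 10 req/s servers at 70% target utilization yields 15 servers, and 18 with a 20% error margin.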
Pub Date: 2010-10-26  DOI: 10.1109/GRID.2010.5698017
"Adaptively detecting changes in Autonomic Grid Computing"
Xiangliang Zhang, C. Germain, M. Sebag
Detecting changes is a common issue in many application fields, because applicative data such as sensor network signals, web logs and grid-running logs have non-stationary distributions. Toward Autonomic Grid Computing, adaptively detecting changes in a grid system can help to flag anomalies, clean noise, and report new patterns. In this paper we propose a self-adaptive change-detection approach based on the Page-Hinkley statistical test. It handles non-stationary distributions without assumptions about the underlying data distribution and without empirical parameter tuning. We validate the approach on EGEE streaming jobs and show that it achieves higher accuracy than other change-detection methods. The change-detection process also helped to discover a device fault that was not reported in the system logs.
2010 11th IEEE/ACM International Conference on Grid Computing, pp. 387-392.
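The Page-Hinkley test underlying the paper's approach is a standard sequential change-detection statistic. The sketch below shows the classic (non-adaptive) form for detecting an upward shift in the mean; the paper's contribution is making the `delta` and `threshold` parameters self-adaptive, which is not reproduced here, and the parameter values used are illustrative.

```python
class PageHinkley:
    """Classic Page-Hinkley test for an upward shift in the mean."""

    def __init__(self, delta=0.005, threshold=50.0):
        self.delta, self.threshold = delta, threshold
        self.mean, self.n = 0.0, 0
        self.cum, self.cum_min = 0.0, 0.0

    def update(self, x):
        """Feed one observation; return True if a change is signaled."""
        self.n += 1
        self.mean += (x - self.mean) / self.n          # running mean
        self.cum += x - self.mean - self.delta         # PH statistic
        self.cum_min = min(self.cum_min, self.cum)     # its running minimum
        return self.cum - self.cum_min > self.threshold
```

On a stream that jumps from a mean of 1.0 to 3.0, the alarm fires shortly after the jump and not before.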
Pub Date: 2010-10-25  DOI: 10.1109/GRID.2010.5697955
"Cost-driven scheduling of grid workflows using Partial Critical Paths"
S. Abrishami, Mahmoud Naghibzadeh, D. Epema
Recently, utility grids have emerged as a new model of service provisioning in heterogeneous distributed systems. In this model, users negotiate with providers on the required Quality of Service (QoS) and on the corresponding price to reach a Service Level Agreement. One of the most challenging problems in utility grids is workflow scheduling, i.e., satisfying users' QoS requirements while minimizing the cost of workflow execution. In this paper, we propose a new QoS-based workflow scheduling algorithm built on a novel concept called the Partial Critical Path. The algorithm recursively schedules the critical path ending at a recently scheduled node, and tries to minimize the cost of workflow execution while meeting a user-defined deadline. Simulation results show that the performance of the algorithm is very promising.
2010 11th IEEE/ACM International Conference on Grid Computing, pp. 81-88.
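The building block of any critical-path-based scheduler is finding the most expensive chain of dependent tasks in the workflow DAG. The sketch below computes only that chain; the paper's algorithm additionally schedules *partial* critical paths under a deadline and a cost model, which is not reproduced. Task names and durations are hypothetical.

```python
def critical_path(tasks, deps):
    """Most expensive root-to-task chain in a workflow DAG.

    tasks: {name: execution_time}; deps: {name: [predecessor names]}.
    Returns (total_time, path).
    """
    memo = {}

    def longest_to(t):
        if t not in memo:
            preds = deps.get(t, [])
            if preds:
                length, path = max(longest_to(p) for p in preds)
                memo[t] = (length + tasks[t], path + [t])
            else:
                memo[t] = (tasks[t], [t])
        return memo[t]

    return max(longest_to(t) for t in tasks)
```

For a diamond-shaped workflow where `c` waits on `a` and `b`, and `d` waits on `c`, the critical path runs through the slower predecessor.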
Pub Date: 2010-10-25  DOI: 10.1109/GRID.2010.5697988
"On the impact of monitoring router energy consumption for greening the Internet"
Arnaud Adelin, P. Owezarski, T. Gayraud
Research in green networking is attracting growing interest, driven in particular by energy-saving goals. The global Internet and its thousands of devices consume an enormous amount of energy and thereby contribute to global warming. Moreover, nobody has a precise idea of how much the Internet, or even a single one of its Autonomous Systems (ASes), consumes. Clearly, designing new routing or management strategies for greening the Internet must rely on an initial study of the energy consumption of network equipment at large, and of routers in particular. We therefore study the power consumption of a router as a function of several factors, such as the traffic rate it has to process and its configuration (in particular, its queue-management policy). This work aims to establish an effective method for measuring and analyzing the power consumption of a router, and to provide data from a real router; it was motivated by the fact that very little data on the power consumption of network devices is available, despite its importance for greening network communication. Based on these first results, we open a discussion on how routing and management strategies and policies in the Internet could be changed to save energy.
2010 11th IEEE/ACM International Conference on Grid Computing, pp. 298-304.
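Once power measurements at several traffic rates are collected, a first-cut analysis is to fit a linear power model. The linear form and all figures below are assumptions for illustration; the paper reports measurements from a real router rather than a fitted model.

```python
def fit_power_model(rates, watts):
    """Least-squares fit of P(rate) = p_idle + k * rate.

    rates: offered traffic rates; watts: measured router power draw.
    Returns (p_idle, k), i.e. the idle floor and the per-unit-rate slope.
    """
    n = len(rates)
    mean_r = sum(rates) / n
    mean_w = sum(watts) / n
    k = (sum((r - mean_r) * (w - mean_w) for r, w in zip(rates, watts))
         / sum((r - mean_r) ** 2 for r in rates))
    return mean_w - k * mean_r, k
```

A fit like this separates the fixed idle cost (often the dominant term for routers) from the traffic-dependent part, which is exactly the distinction energy-aware routing strategies would exploit.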
Pub Date: 2010-10-01  DOI: 10.1109/GRID.2010.5698009
"Performance modeling for runtime kernel adaptation: A case study on infectious disease simulation"
Jiangming Jin, S. Turner, Bu-Sung Lee, S. Kuo, R. Goh, T. Hung
In many large-scale scientific applications, there may be a compute-intensive kernel that largely determines the overall performance of the application. Sometimes algorithmic variations of the kernel are available, and a performance benefit can then be gained by choosing the optimal kernel at runtime. However, it can be difficult to choose the most efficient kernel, as the kernel algorithms perform differently under different execution conditions. This paper shows how to construct a set of performance models to explore and analyze the bottlenecks of an application. Based on these performance models, a theoretical method is proposed to guide kernel adaptation at runtime. A component-based large-scale infectious disease simulation is used to illustrate the method. The performance models of the different kernels are validated by a range of experiments, and the use of runtime kernel adaptation shows a significant performance gain.
2010 11th IEEE/ACM International Conference on Grid Computing, pp. 349-358.
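The runtime-adaptation idea reduces to a dispatch step: evaluate each kernel's performance model under the current execution conditions and pick the predicted winner. The model functions and scaling constants below are hypothetical stand-ins for the paper's fitted models.

```python
def pick_kernel(models, conditions):
    """Return the name of the kernel whose performance model predicts
    the lowest runtime under the given execution conditions.

    models: {kernel_name: predict_fn(conditions) -> seconds}.
    """
    return min(models, key=lambda name: models[name](conditions))

# Hypothetical models: a dense kernel scaling with n^2 and a sparse
# kernel scaling with the number of interactions (nnz).
models = {
    "dense":  lambda c: 1e-6 * c["n"] ** 2,
    "sparse": lambda c: 5e-6 * c["nnz"],
}
```

With 1000 entities and 50,000 interactions the sparse kernel is predicted cheaper; shrink the population and the dense kernel wins, which is the crossover behavior that motivates adapting at runtime.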
Pub Date: 2010-10-01  DOI: 10.1109/GRID.2010.5697950
"An information services algorithm to heuristically summarize IP addresses for a distributed, hierarchical directory service"
M. Portnoi, J. Zurawski, D. M. Swany
A distributed, hierarchical information service for computer networks may use several service instances located in different layers. A distributed directory service, for example, might comprise upper-level listings and local directories, where the upper-level listings contain a compact version of the local directories. Clients wishing to access the information in the local directories first consult the upper-level listings in order to locate the appropriate local instance. One key to the effective operation of such a service is the ability to properly summarize the information maintained in the upper-level directories. We analyze the case of the Lookup Service in the Information Services plane of the perfSONAR distributed performance-monitoring architecture, which implements IP address summarization. We propose an empirical method, or heuristic, based on the PATRICIA tree to perform the summarization, apply the heuristic on a simulated distributed test bed, and examine the results.
2010 11th IEEE/ACM International Conference on Grid Computing, pp. 129-136.
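To make the summarization task concrete: given a set of addresses, produce fewer, broader prefixes that cover them. Python's standard `ipaddress` module provides the lossless baseline; the paper's PATRICIA-tree heuristic goes further by trading precision for compactness, which this sketch does not attempt.

```python
import ipaddress

def summarize(addresses):
    """Collapse IPv4 addresses/prefixes into the minimal exact set of
    covering networks (lossless summarization)."""
    nets = [ipaddress.ip_network(a) for a in addresses]
    return [str(n) for n in ipaddress.collapse_addresses(nets)]
```

Two adjacent /25s collapse into one /24, and two neighboring host addresses into one /31; an upper-level directory would advertise only the collapsed prefixes.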
Pub Date: 2010-10-01  DOI: 10.1109/GRID.2010.5697952
"Using a grid platform for solving large sparse linear systems over GF(2)"
T. Kleinjung, L. Nussbaum, Emmanuel Thomé
In Fall 2009, the final step of the factorization of RSA-768 was carried out on several clusters of the Grid'5000 platform, leading to a new record in integer factorization. This step involves solving a huge sparse linear system defined over the binary field GF(2). This article describes the algorithm used, the difficulties encountered, and the methodology that led to success. In particular, we illustrate how our use of the block Wiedemann algorithm led to a method suitable for a grid platform, with both adaptability to various clusters and error-detection and recovery procedures. While it was not obvious at first, the contribution of the Grid'5000 clusters to this computation eventually turned out to be major.
2010 11th IEEE/ACM International Conference on Grid Computing, pp. 161-168.
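The kernel the block Wiedemann algorithm applies over and over is a sparse matrix-vector product over GF(2), where addition is XOR. The toy sketch below packs the vector into a Python int used as a bitset; the blocking, distribution across clusters, and error-recovery machinery of the actual computation are not shown.

```python
def gf2_matvec(rows, v):
    """Sparse matrix-vector product over GF(2).

    rows: for each matrix row, the column indices holding a 1.
    v: the input vector packed into an int (bit j = coordinate j).
    Returns the product vector, also packed into an int.
    """
    out = 0
    for i, cols in enumerate(rows):
        bit = 0
        for j in cols:
            bit ^= (v >> j) & 1      # XOR = addition in GF(2)
        out |= bit << i
    return out
```

Real implementations pack 64 coordinates per machine word and process whole blocks of vectors at once, which is what makes the per-cluster iteration fast enough for a matrix of the RSA-768 scale.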
Pub Date: 2010-10-01  DOI: 10.1109/GRID.2010.5697962
"Exploiting deadline flexibility in Grid workflow rescheduling"
Wei Chen, A. Fekete, Young Choon Lee
We propose a novel deadline-based strategy for scheduling and rescheduling workflow applications on a heterogeneous Grid system. Instead of minimizing the makespan of a job with a greedy algorithm, our approach schedules tasks so that the overall job meets its deadline. The key innovation is that we allow some tasks to be rescheduled, in light of later job requests, to a different time slot or another resource instance; this can leave enough resource availability for more urgent tasks. In our rescheduling, tasks are rearranged individually within certain time-slot boundaries, so that the temporal constraints of each workflow are kept without having to reconsider the schedules of all other tasks. A performance study shows that more jobs can be finished before their deadlines and that overall resource utilization is improved. The rescheduling algorithm is efficient and scales to large sets of tasks.
2010 11th IEEE/ACM International Conference on Grid Computing, pp. 105-112.
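The "time-slot boundaries" within which a task may safely move can be derived by propagating the deadline backwards through the workflow: a task may start as late as its successors' latest starts allow. The sketch below computes those latest start times; the paper's rescheduler uses such boundaries but its exact bookkeeping differs, and the task names and durations here are hypothetical.

```python
def latest_start_times(tasks, deps, deadline):
    """Latest start time of each task so the workflow meets its deadline.

    tasks: {name: duration}; deps: {name: [successor names]}.
    A task can be rescheduled anywhere up to its latest start without
    violating the workflow's temporal constraints.
    """
    memo = {}

    def lst(t):
        if t not in memo:
            succs = deps.get(t, [])
            latest_finish = min((lst(s) for s in succs), default=deadline)
            memo[t] = latest_finish - tasks[t]
        return memo[t]

    return {t: lst(t) for t in tasks}
```

A task whose latest start lies well after its current slot has slack, and it is exactly these tasks a rescheduler can push aside for more urgent work.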
Pub Date: 2010-10-01  DOI: 10.1109/GRID.2010.5698003
"On the impact of energy-saving strategies in opportunistic grids"
Lesandro Ponciano, F. Brasileiro
Opportunistic grids are distributed computing infrastructures that harvest the idle computing cycles of geographically distributed resources. In these grids the demand for resources is typically bursty: during bursts many grid resources are required, but at other times the resources remain idle for long periods. If the resources are kept powered on even when they are processing neither their owners' workload nor grid jobs, their exploitation is inefficient in terms of energy consumption. One way to reduce the energy consumed during these idle periods is to place the computers that form the grid into a “sleeping” mode that consumes less energy; we evaluate two such strategies, standby and hibernate. Moreover, the resources that comprise an opportunistic grid are normally very heterogeneous and differ enormously in processing power and energy consumption, which opens the possibility of scheduling strategies that take energy efficiency into account. We consider scheduling at two levels: first, how to choose which machine should be woken up when several options are available; second, how to decide which tasks to schedule on the available machines. In summary, our results show a significant reduction in energy consumption, surpassing 80% in a scenario where the number of resources in the grid was high, with only a limited impact on the response time of the applications.
2010 11th IEEE/ACM International Conference on Grid Computing, pp. 282-289.
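The arithmetic behind the savings claim is simple: the energy saved over an idle period is the gap between active-idle power draw and sleep-state draw. The wattages below are illustrative placeholders, not measurements from the paper.

```python
def energy_kwh(idle_hours, p_active=150.0, p_sleep=5.0, use_sleep=True):
    """Energy drawn over an idle period, in kWh.

    p_active: power when left powered on but idle; p_sleep: power in
    standby/hibernate. Both values are hypothetical.
    """
    watts = p_sleep if use_sleep else p_active
    return watts * idle_hours / 1000.0

def savings_fraction(idle_hours):
    """Fraction of idle-period energy saved by sleeping."""
    awake = energy_kwh(idle_hours, use_sleep=False)
    asleep = energy_kwh(idle_hours, use_sleep=True)
    return 1 - asleep / awake
```

With these placeholder figures a machine idle for 10 hours burns 1.5 kWh awake versus 0.05 kWh asleep, a saving of roughly 97%; real savings depend on the actual sleep-state draw and on wake-up costs, which this sketch ignores.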
Pub Date: 2010-10-01  DOI: 10.1109/GRID.2010.5697974
"Trading Service Level Agreements within a Peer-to-Peer market"
I. Petri, G. Silaghi, O. Rana
Peer-to-Peer networks provide an important abstraction for modelling the trade of capabilities within a market environment. We consider a particular instance of such a market, in which the traded objects include directly provisioned services (i.e., those delivered through capabilities directly owned by the provider) and indirectly provisioned services (i.e., those delivered through an alternative provider). As a Service Level Agreement (SLA) represents a contract to deliver capability at some point in the future, we use an SLA as a tradeable object whose value can fluctuate. We describe how a variation in the value of an SLA can influence the overall “welfare” within a Peer-to-Peer system, and how that value depends on the overall demand for services and on the redemption time associated with the SLA.
2010 11th IEEE/ACM International Conference on Grid Computing, pp. 242-251.