Xiulin Li, Li Pan, Jiwei Huang, Shijun Liu, Yuliang Shi, Li-zhen Cui, C. Pu
{"title":"云计算中心使用M/M/c/r排队系统服务并行渲染作业的性能分析","authors":"Xiulin Li, Li Pan, Jiwei Huang, Shijun Liu, Yuliang Shi, Li-zhen Cui, C. Pu","doi":"10.1109/ICDCS.2017.132","DOIUrl":null,"url":null,"abstract":"Performance analysis is crucial to the successful development of cloud computing paradigm. And it is especially important for a cloud computing center serving parallelizable application jobs, for determining a proper degree of parallelism could reduce the mean service response time and thus improve the performance of cloud computing obviously. In this paper, taking the cloud based rendering service platform as an example application, we propose an approximate analytical model for cloud computing centers serving parallelizable jobs using M/M/c/r queuing systems, by modeling the rendering service platform as a multi-station multi-server system. We solve the proposed analytical model to obtain a complete probability distribution of response time, blocking probability and other important performance metrics for given cloud system settings. Thus this model can guide cloud operators to determine a proper setting, such as the number of servers, the buffer size and the degree of parallelism, for achieving specific performance levels. Through extensive simulations based on both synthetic data and real-world workload traces, we show that our proposed analytical model can provide approximate performance prediction results for cloud computing centers serving parallelizable jobs, even those job arrivals follow different distributions.","PeriodicalId":127689,"journal":{"name":"2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Performance Analysis of Cloud Computing Centers Serving Parallelizable Rendering Jobs Using M/M/c/r Queuing Systems\",\"authors\":\"Xiulin Li, Li Pan, Jiwei Huang, Shijun Liu, Yuliang Shi, Li-zhen Cui, C. Pu\",\"doi\":\"10.1109/ICDCS.2017.132\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Performance analysis is crucial to the successful development of cloud computing paradigm. And it is especially important for a cloud computing center serving parallelizable application jobs, for determining a proper degree of parallelism could reduce the mean service response time and thus improve the performance of cloud computing obviously. In this paper, taking the cloud based rendering service platform as an example application, we propose an approximate analytical model for cloud computing centers serving parallelizable jobs using M/M/c/r queuing systems, by modeling the rendering service platform as a multi-station multi-server system. We solve the proposed analytical model to obtain a complete probability distribution of response time, blocking probability and other important performance metrics for given cloud system settings. Thus this model can guide cloud operators to determine a proper setting, such as the number of servers, the buffer size and the degree of parallelism, for achieving specific performance levels. 
Through extensive simulations based on both synthetic data and real-world workload traces, we show that our proposed analytical model can provide approximate performance prediction results for cloud computing centers serving parallelizable jobs, even those job arrivals follow different distributions.\",\"PeriodicalId\":127689,\"journal\":{\"name\":\"2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDCS.2017.132\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDCS.2017.132","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Performance Analysis of Cloud Computing Centers Serving Parallelizable Rendering Jobs Using M/M/c/r Queuing Systems
Performance analysis is crucial to the successful development of the cloud computing paradigm. It is especially important for a cloud computing center serving parallelizable application jobs, because choosing a proper degree of parallelism can reduce the mean service response time and thus noticeably improve the performance of the cloud. In this paper, taking a cloud-based rendering service platform as an example application, we propose an approximate analytical model for cloud computing centers serving parallelizable jobs using M/M/c/r queuing systems, by modeling the rendering service platform as a multi-station, multi-server system. We solve the proposed analytical model to obtain the complete probability distribution of response time, the blocking probability, and other important performance metrics for given cloud system settings. This model can therefore guide cloud operators in determining proper settings, such as the number of servers, the buffer size, and the degree of parallelism, for achieving specific performance levels. Through extensive simulations based on both synthetic data and real-world workload traces, we show that our proposed analytical model can provide approximate performance predictions for cloud computing centers serving parallelizable jobs, even when job arrivals follow different distributions.
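The paper's model derives the full response-time distribution for the multi-station system; as a rough illustration of the building block it rests on, the sketch below computes the standard steady-state metrics of a single M/M/c/r station (blocking probability, mean number in system, and mean response time via Little's law). The function name and the parameter values are illustrative assumptions, not the authors' implementation.

```python
from math import factorial

def mmcr_metrics(lam, mu, c, r):
    """Steady-state metrics for one M/M/c/r station (r = total capacity, r >= c).

    lam : Poisson arrival rate of jobs at the station
    mu  : exponential service rate of a single server
    c   : number of parallel servers
    r   : maximum number of jobs in the system (in service + in buffer)
    """
    a = lam / mu                       # offered load
    rho = a / c                        # per-server utilization
    # Unnormalized state probabilities pi_n for n = 0..r
    pi = [a**n / factorial(n) if n <= c
          else (a**c / factorial(c)) * rho**(n - c)
          for n in range(r + 1)]
    norm = sum(pi)
    p = [x / norm for x in pi]         # normalized steady-state distribution
    p_block = p[r]                     # arrivals finding a full system are rejected
    lam_eff = lam * (1 - p_block)      # effective (accepted) arrival rate
    L = sum(n * p[n] for n in range(r + 1))   # mean number of jobs in the system
    W = L / lam_eff                    # mean response time (Little's law)
    return {"blocking_prob": p_block, "mean_jobs": L, "mean_response_time": W}

if __name__ == "__main__":
    # Hypothetical setting: 8 render servers, capacity for 20 jobs in total
    print(mmcr_metrics(lam=6.0, mu=1.0, c=8, r=20))
```

In the paper's setting, each parallelizable rendering job is split across stations according to the chosen degree of parallelism, so metrics like these would be evaluated per station and then combined; the sketch only covers the single-station case.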