The growth of server consolidation is driven by virtualization technology, which enables multiple servers to run on a single platform. However, virtualization may introduce performance overheads, so predicting virtualization performance is especially important. The contribution of this paper is two-fold. First, we propose a general model to predict the performance of consolidation. Second, we study a load balancing problem that arises in server consolidation, which is to assign a number of workloads to a small number of high-performance target servers such that the workloads on the target servers are balanced. We first model the load balancing problem as an integer linear program. Then, a fully polynomial-time approximation scheme (FPTAS) is provided to obtain a near-optimal solution. That is, for any given $\varepsilon > 0$, our algorithm achieves a $(1+\varepsilon)$-approximation, and its running time is polynomial in both the number of source servers and $1/\varepsilon$ when the number of target servers and the dimensions are constants.
{"title":"Load Balancing in Server Consolidation","authors":"Deshi Ye, Hua Chen, Qinming He","doi":"10.1109/ISPA.2009.56","DOIUrl":"https://doi.org/10.1109/ISPA.2009.56","url":null,"abstract":"The growth of server consolidation is driven by virtualization technology, which enables multiple servers to run on a single platform. However, virtualization may introduce performance overheads, so predicting virtualization performance is especially important. The contribution of this paper is two-fold. First, we propose a general model to predict the performance of consolidation. Second, we study a load balancing problem that arises in server consolidation, which is to assign a number of workloads to a small number of high-performance target servers such that the workloads on the target servers are balanced. We first model the load balancing problem as an integer linear program. Then, a fully polynomial-time approximation scheme (FPTAS) is provided to obtain a near-optimal solution. That is, for any given $\\varepsilon > 0$, our algorithm achieves a $(1+\\varepsilon)$-approximation, and its running time is polynomial in both the number of source servers and $1/\\varepsilon$ when the number of target servers and the dimensions are constants.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132687672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Watt, R. Sinnott, Jipu Jiang, T. Doherty, C. Higgins, Michael Koutroumpas
Collaboration is at the heart of e-Science and, more generally, e-Research. Successful collaborations must address both the needs of the end-user researchers and those of the providers that make resources available. Usability and security are two fundamental requirements demanded by many collaborations, and both concerns must be considered from the researcher and resource-provider perspectives. In this paper we outline tools and methods developed at the National e-Science Centre (NeSC) that provide users with seamless, secure access to distributed resources through security-oriented research environments, whilst also allowing resource providers to define and enforce their own local access and usage policies through intuitive user interfaces. We describe these tools and illustrate their application in the ESRC-funded Data Management through e-Social Science (DAMES) and the JISC-funded SeeGEO projects.
{"title":"Tool Support for Security-Oriented Virtual Research Collaborations","authors":"J. Watt, R. Sinnott, Jipu Jiang, T. Doherty, C. Higgins, Michael Koutroumpas","doi":"10.1109/ISPA.2009.49","DOIUrl":"https://doi.org/10.1109/ISPA.2009.49","url":null,"abstract":"Collaboration is at the heart of e-Science and, more generally, e-Research. Successful collaborations must address both the needs of the end-user researchers and those of the providers that make resources available. Usability and security are two fundamental requirements demanded by many collaborations, and both concerns must be considered from the researcher and resource-provider perspectives. In this paper we outline tools and methods developed at the National e-Science Centre (NeSC) that provide users with seamless, secure access to distributed resources through security-oriented research environments, whilst also allowing resource providers to define and enforce their own local access and usage policies through intuitive user interfaces. We describe these tools and illustrate their application in the ESRC-funded Data Management through e-Social Science (DAMES) and the JISC-funded SeeGEO projects.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132703819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The master/worker pattern is widely used to construct cross-domain, large-scale computing infrastructures. The applications supported by this kind of infrastructure usually feature long-running, speculative execution, etc. A fault-recovery mechanism is significant to them, especially in wide-area network environments, which consist of error-prone components. Inter-node cooperation is essential to make the recovery process more efficient. The traditional log-based rollback recovery mechanism, which features independent recovery, cannot fulfill the global cooperation requirement because exchanging a large amount of logs wastes bandwidth and slows application data transfer. In this paper, we propose a two-phase log-based recovery mechanism that offers merits such as space saving and global optimization, and can be used as a complement to the current log-based rollback recovery approach in some specific situations. We have demonstrated the use of this mechanism in the Drug Discovery Grid environment, which is supported by the China National Grid. Experimental results demonstrate the efficiency of this mechanism.
{"title":"A Two-Phase Log-Based Fault Recovery Mechanism in Master/Worker Based Computing Environment","authors":"Ting Chen, Yongjian Wang, Yuanqiang Huang, Cheng Luo, D. Qian, Zhongzhi Luan","doi":"10.1109/ISPA.2009.53","DOIUrl":"https://doi.org/10.1109/ISPA.2009.53","url":null,"abstract":"The master/worker pattern is widely used to construct cross-domain, large-scale computing infrastructures. The applications supported by this kind of infrastructure usually feature long-running, speculative execution, etc. A fault-recovery mechanism is significant to them, especially in wide-area network environments, which consist of error-prone components. Inter-node cooperation is essential to make the recovery process more efficient. The traditional log-based rollback recovery mechanism, which features independent recovery, cannot fulfill the global cooperation requirement because exchanging a large amount of logs wastes bandwidth and slows application data transfer. In this paper, we propose a two-phase log-based recovery mechanism that offers merits such as space saving and global optimization, and can be used as a complement to the current log-based rollback recovery approach in some specific situations. We have demonstrated the use of this mechanism in the Drug Discovery Grid environment, which is supported by the China National Grid. Experimental results demonstrate the efficiency of this mechanism.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"258 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133995232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel and interesting application of RFID is a banknote with an attached tag, used to determine the authenticity of money and to stop counterfeiting. At Financial Cryptography 2003, Juels and Pappu first proposed a practical RFID banknote protection scheme (RBPS) using both optical and electrical (RFID) contacts. RFID-enabled notes can be tracked by law enforcement agencies and verified by merchants. However, several effective attacks on the Juels-Pappu RBPS were subsequently proposed. In this paper, we enhance the Juels-Pappu RBPS to withstand such attacks.
{"title":"Enhancing Privacy and Security in RFID-Enabled Banknotes","authors":"Ching-Nung Yang, Jie-Ru Chen, Chih-Yang Chiu, Gen-Chin Wu, Chih-Cheng Wu","doi":"10.1109/ISPA.2009.77","DOIUrl":"https://doi.org/10.1109/ISPA.2009.77","url":null,"abstract":"A novel and interesting application of RFID is a banknote with an attached tag, used to determine the authenticity of money and to stop counterfeiting. At Financial Cryptography 2003, Juels and Pappu first proposed a practical RFID banknote protection scheme (RBPS) using both optical and electrical (RFID) contacts. RFID-enabled notes can be tracked by law enforcement agencies and verified by merchants. However, several effective attacks on the Juels-Pappu RBPS were subsequently proposed. In this paper, we enhance the Juels-Pappu RBPS to withstand such attacks.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"449 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122963870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The past five years have witnessed a massive increase in the use of mobile cellular broadband devices such as smartphones. These devices often contain multiple radios permitting WiFi, WCDMA and HSPA access in a single device. Their increased processing power and multimedia capabilities have made them attractive for new services such as mobile Voice over IP (VoIP). We provide an empirical analysis of VoIP performance over the WiFi, WCDMA and HSDPA radio interfaces of a typical high-end smartphone, with measurements of VoIP quality metrics such as end-to-end delay, packet loss and jitter. We observe that the best performance in terms of mean opinion score (MOS) was obtained in WiFi environments, while the poorest was recorded in WCDMA networks. We find that VoIP codec processing delay in these mobile devices is the most significant contributor to end-to-end delay, and that optimization in this area will provide the greatest improvements to mobile VoIP voice quality.
{"title":"VoIP Performance in Multi-radio Mobile Devices","authors":"A. Iwayemi, Chi Zhou","doi":"10.1109/ISPA.2009.97","DOIUrl":"https://doi.org/10.1109/ISPA.2009.97","url":null,"abstract":"The past five years have witnessed a massive increase in the use of mobile cellular broadband devices such as smartphones. These devices often contain multiple radios permitting WiFi, WCDMA and HSPA access in a single device. Their increased processing power and multimedia capabilities have made them attractive for new services such as mobile Voice over IP (VoIP). We provide an empirical analysis of VoIP performance over the WiFi, WCDMA and HSDPA radio interfaces of a typical high-end smartphone, with measurements of VoIP quality metrics such as end-to-end delay, packet loss and jitter. We observe that the best performance in terms of mean opinion score (MOS) was obtained in WiFi environments, while the poorest was recorded in WCDMA networks. We find that VoIP codec processing delay in these mobile devices is the most significant contributor to end-to-end delay, and that optimization in this area will provide the greatest improvements to mobile VoIP voice quality.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125552715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As human culture advances, current problems in science and engineering become more complicated and need more computing power to tackle and analyze. A supercomputer is no longer the only choice for complex problems, thanks to the speed-up of personal computers and networks. Grid technology, which connects a number of personal computers with high-speed networks, can achieve the same computing power as a supercomputer, at a lower cost. However, a grid is a heterogeneous system, and scheduling independent tasks on it is more complicated. In order to fully utilize the power of a grid, we need an efficient job scheduling algorithm to assign jobs to resources. In this paper, we propose an Adaptive Scoring Job Scheduling algorithm (ASJS) for the grid environment. Compared to other methods, it can decrease the completion time of all submitted jobs, which may consist of computing-intensive jobs and data-intensive jobs.
{"title":"Scheduling Jobs in Grids Adaptively","authors":"R. Chang, Chih-Yuan Lin, Chun-Fu Lin","doi":"10.1109/ISPA.2009.75","DOIUrl":"https://doi.org/10.1109/ISPA.2009.75","url":null,"abstract":"As human culture advances, current problems in science and engineering become more complicated and need more computing power to tackle and analyze. A supercomputer is no longer the only choice for complex problems, thanks to the speed-up of personal computers and networks. Grid technology, which connects a number of personal computers with high-speed networks, can achieve the same computing power as a supercomputer, at a lower cost. However, a grid is a heterogeneous system, and scheduling independent tasks on it is more complicated. In order to fully utilize the power of a grid, we need an efficient job scheduling algorithm to assign jobs to resources. In this paper, we propose an Adaptive Scoring Job Scheduling algorithm (ASJS) for the grid environment. Compared to other methods, it can decrease the completion time of all submitted jobs, which may consist of computing-intensive jobs and data-intensive jobs.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132287916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the popularity of Web services, sets of Web services with similar functions but different qualities can often be found. Since services are often composed through a service process model, the way they are composed affects the overall quality of the process model itself. In this paper, a quality optimization method for the service process model is proposed, which takes the structure of the service process into account. Based on the proposed quality model, a genetic algorithm is proposed to optimize the service process model.
{"title":"A Quality Optimization Method for Service Process Model","authors":"Haiyan Zhao, Jian Cao, Xiaohan Sun","doi":"10.1109/ISPA.2009.39","DOIUrl":"https://doi.org/10.1109/ISPA.2009.39","url":null,"abstract":"With the popularity of Web services, sets of Web services with similar functions but different qualities can often be found. Since services are often composed through a service process model, the way they are composed affects the overall quality of the process model itself. In this paper, a quality optimization method for the service process model is proposed, which takes the structure of the service process into account. Based on the proposed quality model, a genetic algorithm is proposed to optimize the service process model.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122822860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In a Wavelength Division Multiplexing (WDM) network, the performance of a virtual topology designed for a pre-specified traffic pattern can be improved by performing virtual topology reconfiguration. At the same time, provisioning survivability in WDM networks is important, because the transmission of huge volumes of data must be protected when a fiber fails. Thus, the combination of survivability and reconfiguration is an important issue in WDM networks. In this paper, the Virtual Topology Reconfiguration Problem (VTRP) on WDM networks with a reconfiguration constraint is studied. Given the physical topology, a dedicated path-protection virtual topology and a new traffic demand matrix, the goal of VTRP is to reconfigure the current virtual topology under the pre-specified reconfiguration constraint ($r$, a positive integer) so that the objective value is minimized. The objective cost of VTRP is the average weighted propagation delay (AWPD). Because designing a polynomial-time algorithm to find the optimal solution of VTRP is impractical when the reconfiguration constraint $r$ is large, two heuristic algorithms are proposed to solve this problem: the Positive Reconfiguration Heuristic Algorithm (PRHA) and the Conservative Reconfiguration Heuristic Algorithm (CRHA). Experimental results for these algorithms are also given.
{"title":"Survivable Virtual Topology Reconfiguration Problem on WDM Networks with Reconfiguration Constraint","authors":"D. Din, Y. Chiu","doi":"10.1109/ISPA.2009.8","DOIUrl":"https://doi.org/10.1109/ISPA.2009.8","url":null,"abstract":"In a Wavelength Division Multiplexing (WDM) network, the performance of a virtual topology designed for a pre-specified traffic pattern can be improved by performing virtual topology reconfiguration. At the same time, provisioning survivability in WDM networks is important, because the transmission of huge volumes of data must be protected when a fiber fails. Thus, the combination of survivability and reconfiguration is an important issue in WDM networks. In this paper, the Virtual Topology Reconfiguration Problem (VTRP) on WDM networks with a reconfiguration constraint is studied. Given the physical topology, a dedicated path-protection virtual topology and a new traffic demand matrix, the goal of VTRP is to reconfigure the current virtual topology under the pre-specified reconfiguration constraint ($r$, a positive integer) so that the objective value is minimized. The objective cost of VTRP is the average weighted propagation delay (AWPD). Because designing a polynomial-time algorithm to find the optimal solution of VTRP is impractical when the reconfiguration constraint $r$ is large, two heuristic algorithms are proposed to solve this problem: the Positive Reconfiguration Heuristic Algorithm (PRHA) and the Conservative Reconfiguration Heuristic Algorithm (CRHA). Experimental results for these algorithms are also given.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116833670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we discuss how to make a Viterbi decoder faster. Our implementation on an Intel CPU with the SSE4 parallel-processing instruction set, together with some other methods, achieves a decoding speed of 47.05 Mbps (up from 0.64 Mbps originally). The DVB-T mode used in Taiwan needs 13.27 Mbps for real-time reception, so our software Viterbi decoder takes only 28% of the CPU load.
{"title":"Software Viterbi Decoder with SSE4 Parallel Processing Instructions for Software DVB-T Receiver","authors":"S. Tseng, Yu-Chin Kuo, Yen-Chih Ku, Yueh-Teng Hsu","doi":"10.1109/ISPA.2009.100","DOIUrl":"https://doi.org/10.1109/ISPA.2009.100","url":null,"abstract":"In this paper, we discuss how to make a Viterbi decoder faster. Our implementation on an Intel CPU with the SSE4 parallel-processing instruction set, together with some other methods, achieves a decoding speed of 47.05 Mbps (up from 0.64 Mbps originally). The DVB-T mode used in Taiwan needs 13.27 Mbps for real-time reception, so our software Viterbi decoder takes only 28% of the CPU load.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115610787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhiheng Zhou, Xiangxue Li, D. Zheng, Kefei Chen, Jianhua Li
The Progressive Edge-Growth (PEG) algorithm is a good candidate for generating Tanner graphs with large girth by establishing edges, or connections between symbol and check nodes, in an edge-by-edge manner. In this paper, we propose an extended PEG algorithm for constructing very-high-rate Low-Density Parity-Check (LDPC) codes given a lower bound on the girth. Simulation results show the bit error rates of the constructed LDPC codes with very high rate or large girth.
{"title":"Extended PEG Algorithm for High Rate LDPC Codes","authors":"Zhiheng Zhou, Xiangxue Li, D. Zheng, Kefei Chen, Jianhua Li","doi":"10.1109/ISPA.2009.80","DOIUrl":"https://doi.org/10.1109/ISPA.2009.80","url":null,"abstract":"The Progressive Edge-Growth (PEG) algorithm is a good candidate for generating Tanner graphs with large girth by establishing edges, or connections between symbol and check nodes, in an edge-by-edge manner. In this paper, we propose an extended PEG algorithm for constructing very-high-rate Low-Density Parity-Check (LDPC) codes given a lower bound on the girth. Simulation results show the bit error rates of the constructed LDPC codes with very high rate or large girth.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123262125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}