"Resilient Virtual Network Service Provision in Network Virtualization Environments" by Yang Chen, Jianxin Li, Tianyu Wo, Chunming Hu, Wantao Liu. DOI: 10.1109/ICPADS.2010.26
Network virtualization has recently emerged to provide scalable, customized, and on-demand virtual network (VN) services over a shared substrate network. Providing VN services with resiliency guarantees against network failures has become a critical issue; at the same time, service resource usage should be minimized under strict constraints such as link bandwidth capacity and the resiliency guarantees themselves. In this paper, we present a resource allocation algorithm that balances the tradeoff between service resource consumption and service resiliency. By exploiting a heuristic VN mapping scheme and a restoration path selection scheme based on intelligent bandwidth sharing, the algorithm simultaneously makes cost-effective use of network resources and protects VN services against network failures. Our evaluations show that the algorithm is near optimal in terms of network resource usage, especially the additional restoration bandwidth reserved for resiliency protection.
{"title":"Resilient Virtual Network Service Provision in Network Virtualization Environments","authors":"Yang Chen, Jianxin Li, Tianyu Wo, Chunming Hu, Wantao Liu","doi":"10.1109/ICPADS.2010.26","DOIUrl":"https://doi.org/10.1109/ICPADS.2010.26","url":null,"abstract":"Network Virtualization has recently emerged to provide scalable, customized and on-demand virtual network services over a shared substrate network. How to provide VN services with resiliency guarantees against network failures has become a critical issue, meanwhile the service resource usages should be minimized under the strict constraints such as link bandwidth capability and service resiliency guarantees etc. In this paper, we present a resource allocation algorithm to balance the tradeoff between service resource consumptions and service resiliency. By exploiting a heuristic VN mapping scheme and a restoration path selection scheme based on intelligent bandwidth sharing, the algorithm simultaneously makes cost-effective usage of network resources and protects VN services against network failures. We perform evaluations and find that the algorithm is near optimal in terms of network resource usage, especially the additional restoration bandwidth cost for resiliency protection.","PeriodicalId":365914,"journal":{"name":"2010 IEEE 16th International Conference on Parallel and Distributed Systems","volume":"79 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130595969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Decentralized Search in Scale-Free P2P Networks" by Praphul Chandra, D. Arora. DOI: 10.1109/ICPADS.2010.73
Search in peer-to-peer networks is a challenging problem due to the absence of any centralized control and the limited information available at each node. When information about the overall structure of the network is available, using it can significantly improve the efficiency of decentralized search algorithms. Many peer-to-peer networks have been shown to exhibit power-law degree distributions. We propose two new decentralized search algorithms for efficient search in networks exhibiting such scale-free structure. Unlike previous work, our algorithms perform efficient search across a large range of power-law coefficients. They are also unique in that they complete decentralized searches efficiently even when the network has disconnected components; as a corollary, they are more resilient to network failure.
"GMH: A Message Passing Toolkit for GPU Clusters" by Jie Chen, W. Watson, W. Mao. DOI: 10.1109/ICPADS.2010.35
Driven by the market demand for high-definition 3D graphics, commodity graphics processing units (GPUs) have evolved into highly parallel, multi-threaded, many-core processors, which are ideal for data-parallel computing. Many applications have been ported to run on a single GPU with tremendous speedups using general C-style programming languages such as CUDA. However, large applications require multiple GPUs and demand explicit message passing. This paper presents a message passing toolkit, called GMH (GPU Message Handler), for NVIDIA GPUs. The toolkit utilizes a data-parallel thread group to map multiple GPUs on a single host to an MPI rank, and introduces a notion of virtual GPUs to bind a thread to a GPU automatically. It provides high-performance, MPI-style point-to-point and collective communication and, more importantly, facilitates event-driven APIs that allow an application to be managed and executed by the toolkit at runtime.
"PinComm: Characterizing Intra-application Communication for the Many-Core Era" by W. Heirman, D. Stroobandt, Narasinga Rao Miniskar, Roel Wuyts, F. Catthoor. DOI: 10.1109/ICPADS.2010.56
As the number of cores in both embedded Multi-Processor Systems-on-Chip and general-purpose processors keeps rising, on-chip communication becomes more and more important. To write efficient programs for these architectures, it is therefore necessary to have a good picture of an application's communication behavior. We present a communication profiler that extracts this behavior from compiled, sequential or parallel C/C++ programs and constructs a dynamic data-flow graph at the level of major functional blocks. In contrast to existing methods of measuring inter-program communication, our tool generates the program's data-flow graph automatically and is less demanding on the developer. It can also be used to view differences between program phases (such as different video frames), which allows both input- and phase-specific optimizations. We also briefly describe how this information can subsequently be used to guide the effort of parallelizing the application, to co-design the software, memory hierarchy, and communication hardware, and to provide new sources of communication-related runtime optimizations.
{"title":"PinComm: Characterizing Intra-application Communication for the Many-Core Era","authors":"W. Heirman, D. Stroobandt, Narasinga Rao Miniskar, Roel Wuyts, F. Catthoor","doi":"10.1109/ICPADS.2010.56","DOIUrl":"https://doi.org/10.1109/ICPADS.2010.56","url":null,"abstract":"As the number of cores in both embedded Multi-Processor Systems-on-Chip and general purpose processors keeps rising, on-chip communication becomes more and more important. In order to write efficient programs for these architectures it is therefore necessary to have a good idea of the communication behavior of an application. We present a communication profiler that extracts this behavior from compiled, sequential or parallel C/C++ programs, and constructs a dynamic data-flow graph at the level of major functional blocks. In contrast to existing methods of measuring inter-program communication, our tool automatically generates the program's data-flow graph and is less demanding for the developer. It can also be used to view differences between program phases (such as different video frames), which allows both input- and phase-specific optimizations to be made. We will also describe briefly how this information can subsequently be used to guide the effort of parallelizing the application, to co-design the software, memory hierarchy and communication hardware, and to provide new sources of communication-related runtime optimizations.","PeriodicalId":365914,"journal":{"name":"2010 IEEE 16th International Conference on Parallel and Distributed Systems","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124202999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Towards a Common Interface for Overlay Network Simulators" by C. Gross, Max Lehn, D. Stingl, A. Kovacevic, A. Buchmann, R. Steinmetz. DOI: 10.1109/ICPADS.2010.33
Simulation has become an important evaluation method in the area of peer-to-peer (P2P) research due to the scalability limitations of evaluation testbeds such as PlanetLab or G-Lab. Current simulators provide various abstraction levels for different underlay models, so that applications can be evaluated at different granularities. However, existing simulators suffer from a lack of interoperability and portability, making the comparison of research results extremely difficult. To overcome this problem, we present an approach for a generic application interface for discrete-event P2P overlay network simulators. It enables an application to be ported once and then run on various simulators as well as in a real network environment, thereby enabling a diverse and extensive evaluation. We establish the feasibility of our approach and show negligible memory and runtime overhead.
{"title":"Towards a Common Interface for Overlay Network Simulators","authors":"C. Gross, Max Lehn, D. Stingl, A. Kovacevic, A. Buchmann, R. Steinmetz","doi":"10.1109/ICPADS.2010.33","DOIUrl":"https://doi.org/10.1109/ICPADS.2010.33","url":null,"abstract":"Simulation has become an important evaluation method in the area of Peer-to-Peer (P2P) research due to the scalability limitations of evaluation test beds such as Planet Lab or G-Lab. Current simulators provide various abstraction levels for different underlay models, such that applications can be evaluated at different granularity. However, existing simulators suffer from a lack of interoperability and portability making the comparison of research results extremely difficult. To overcome this problem, we present an approach for a generic application interface for discrete-event P2P overlay network simulators. It enables porting of the same implementation of a targeted application once and then running it on various simulators as well as in a real network environment, thereby enabling a diverse and extensive evaluation. We established the feasibility of our approach and showed negligible memory and runtime overhead.","PeriodicalId":365914,"journal":{"name":"2010 IEEE 16th International Conference on Parallel and Distributed Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124405182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Data Vitalization: A New Paradigm for Large-Scale Dataset Analysis" by Zhang Xiong, Wuman Luo, Lei Chen, L. Ni. DOI: 10.1109/ICPADS.2010.102
Nowadays, datasets grow enormously in both size and complexity. One of the key issues confronting large-scale dataset analysis is how to adapt systems to new, unprecedented query loads. Existing systems nail down the data organization scheme once and for all at the beginning of the system design, and thus inevitably see performance degrade when user requirements change. In this paper, we propose a new paradigm, Data Vitalization, for large-scale dataset analysis. Our goal is to enable high flexibility such that the system adapts to complex analytical applications. Specifically, data are organized into a group of vitalized cells, each of which is a collection of data coupled with computing power. As user requirements change over time, cells evolve spontaneously to meet potential new query loads. Besides the basic functionality of Data Vitalization, we also explore an envisioned architecture, including possible approaches for query processing and data evolution, as well as its tightly coupled mechanism for data storage and computing.
"Packet Content Matching with packetC Searchsets" by R. Duncan, P. Jungck, Kenneth Ross, S. Tillman. DOI: 10.1109/ICPADS.2010.52
Increasing speeds and volumes push network packet applications to use parallel processing to boost performance. Examining the packet payload (message content) is a key aspect of packet processing. Applications search payloads to find strings that match a pattern described by regular expressions (regexes). Searching for multiple strings that may start anywhere in the payload is a major obstacle to performance. Commercial systems often employ multiple network processors to provide parallel processing in general, and use regex software engines or special regex processors to speed up searching via parallelism. Typically, regex rules are prepared separately from the application program and compiled into a binary image to be read by a regex processor or software engine. Our approach integrates specifying search rules with writing network application code in packetC, a C dialect that hides host-machine specifics, supports coarse-grain parallelism, and supplies high-level data type and operator extensions for packet processing. packetC provides a search set data type, as well as match and find operations, to support payload searching. We show that our search set implementation, using associative memory and regex processors, lets users enjoy the performance benefits of parallel regex technology without learning hardware specifics or using a separate regex toolchain.
"Coordinated Selective Rejuvenation for Distributed Services" by Guanhua Tian, Dan Meng. DOI: 10.1109/ICPADS.2010.10
Service availability and QoS, in terms of customer-affecting performance metrics, are crucial for service systems. However, the increasing complexity of distributed service systems leaves hidden room for software faults, which undermine system availability and lead to failures or even downtime. In this paper, we introduce a composition technique, Coordinated Selective Rejuvenation, to automate the whole process of fault-component identification and rejuvenation arbitration, in order to guarantee a distributed service system's customer-affecting metrics. We evaluate it with fault-injection experiments on RUBiS, which simulates a distributed e-commerce site modeled on eBay.com. The results indicate that our request-path analysis approach and system model are effective for locating faulty components, and that the Bayesian network technique is feasible for fault pinpointing given request-tracing context. Meanwhile, the arbitration scheme effectively guarantees system QoS by identifying and rejuvenating the most likely faulty tier before the degradation of customer-affecting performance metrics becomes severe.
"Hybrid Checkpointing for MPI Jobs in HPC Environments" by Chao Wang, F. Mueller, C. Engelmann, S. Scott. DOI: 10.1109/ICPADS.2010.48
As the core count in high-performance computing systems keeps increasing, faults are becoming commonplace. Checkpointing addresses such faults but captures full process images even though only a subset of the process image changes between checkpoints. We have designed a hybrid checkpointing technique for MPI tasks of high-performance applications. This technique alternates between full and incremental checkpoints: at incremental checkpoints, only data changed since the last checkpoint is captured. Our implementation integrates new BLCR and LAM/MPI features that complement traditional full checkpoints. This results in significantly reduced checkpoint sizes and overheads with only moderate increases in restart overhead. After accounting for cost and savings, benefits due to incremental checkpoints are an order of magnitude larger than overheads on restarts. We further derive qualitative results indicating an optimal balance between full and incremental checkpoints of our novel approach at a ratio of 1:9, which outperforms both always-full and always-incremental checkpointing.
"A Qualitative Analysis of Uncertainty and Correlation Computing for the Business Processes of Enterprise Interoperability" by Xiaofeng Liu, Xiaofei Xu, S. Deng. DOI: 10.1109/ICPADS.2010.100
In the domain of enterprise interoperability, many uncertain factors affect the performance of the whole cross-organizational business process, e.g., uncertain business process execution times and uncertain business logic within a process. Uncertain factors cannot be avoided, but they can be analyzed. In this paper, a model of the Enterprise Interoperability Domain (EID) is given and the main uncertain factors during enterprise interoperability are analyzed. To analyze the correlation between the business processes in an EID, an updated grey correlation analysis method is given to help calculate the grey relational degree between elements with uncertainty in an EID. The simulation shows that the resulting grey correlation degree can be very helpful for further optimization of enterprise interoperability business processes.