Yong Su, Feilong Liu, Zheng Cao, Zhan Wang, Xiaoli Liu, Xuejun An, Ninghui Sun
High-density blade servers provide an attractive solution to the rapidly increasing demand for computing. The degree of parallelism inside a blade enclosure has now reached hundreds of cores, so accelerating communication within the enclosure is essential; commercial products, however, seldom address this optimization in hardware. This paper proposes a hyper-node controller that provides a low-overhead, high-performance interconnect based on PCIe, supporting a global address space, user-level communication, and efficient communication primitives. Efficient sharing of I/O resources is a further goal of the design. A prototype of the hyper-node controller is implemented on an FPGA. Test results show a lowest latency of only 1.242 µs and a highest bandwidth of 3.19 GB/s, almost 99.7% of the theoretical peak bandwidth.
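A quick back-of-the-envelope check (ours, not the paper's) shows how the reported 99.7% efficiency pins down the implied theoretical peak of the link:

```python
# Sanity check: implied theoretical peak from the reported figures.
measured_bw = 3.19          # GB/s, highest measured bandwidth
efficiency = 0.997          # reported fraction of theoretical peak
peak_bw = measured_bw / efficiency
print(f"implied theoretical peak: {peak_bw:.2f} GB/s")   # ~3.20 GB/s
```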
{"title":"cHPP controller: A High Performance Hyper-node Hardware Accelerator","authors":"Yong Su, Feilong Liu, Zheng Cao, Zhan Wang, Xiaoli Liu, Xuejun An, Ninghui Sun","doi":"10.1109/PDCAT.2013.25","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.25","url":null,"abstract":"The high-density blade server provides an attractive solution for the rapid increasing demand on computing. The degree of parallelism inside a blade enclosure nowadays has reach up to hundreds of cores. In such parallelism, it is necessary to accelerate communications inside a blade enclosure. However, commercial products seldom set foot in the optimization based on hardware. A hyper-node controller is proposed to provide a low overhead and high performance interconnection based on PCIe, which supports global address space, user-level communication, and efficient communication primitives. Furthermore, the efficient sharing of I/O resource is another goal of this design. The prototype of the hyper-node controller is implemented in FPGA. The testing results show the lowest latency is only 1.242us and the highest bandwidth is 3.19GB/s, which is almost 99.7% of the theoretic peak bandwidth.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115373713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Processing fingerprint images captured with a mobile phone camera is problematic: such images suffer from extra degrees of freedom, blur, noise, and weak ridge-valley structure. This paper focuses on the problem of blur assessment for fingerprint images. To this end, a dataset of fingerprint images was collected from several subjects in different environments so as to obtain both clear and blurred samples; ground truth for the dataset was assigned by human vision. According to the literature, suitable blur evaluation is based on a no-reference strategy. This paper therefore modifies an existing no-reference blur assessment to make it suitable for fingerprint images. The proposed method achieves promising results and is able to distinguish blurred from clear fingerprint images.
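The paper's modified metric is not given in the abstract; as a hedged illustration of the general no-reference approach, the sketch below uses the common variance-of-Laplacian sharpness measure (the `threshold` value is an invented placeholder that would be calibrated on the labeled dataset):

```python
import numpy as np
from scipy import ndimage

def laplacian_variance(gray):
    """No-reference sharpness score: variance of the Laplacian response.
    Blurred images have weak edges, hence a low score."""
    return ndimage.laplace(gray.astype(np.float64)).var()

def is_blurred(gray, threshold=100.0):
    # threshold is dataset-dependent; it would be tuned against the
    # human-labeled ground truth described above
    return laplacian_variance(gray) < threshold
```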
{"title":"Performance Analysis on the Assessment of Fingerprint Image Based on Blur Measurement","authors":"M. Khalil, F. Kurniawan","doi":"10.1109/PDCAT.2013.58","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.58","url":null,"abstract":"Processing the fingerprint image captured from the mobile phone camera is problematic. The addressed problem is such as degree of freedom, blurry, noisy and weak ridges-valleys images. In this paper, problem of blur assessment on fingerprint image is the main consent. In this regards, dataset of fingerprint image is collected from several subjects and different environment to obtain a clear and blur fingerprint image. The ground truth for this dataset is assigned by human vision. According to literature, the suitable blur evaluation is based on no-reference strategy. Hence, this paper modified existing no-reference blur assessment so it is suitable for fingerprint image. The proposed method achieved promising result and able to detect blur and clear fingerprint image.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127587265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As cloud computing becomes popular, more and more data owners prefer to store their data in the cloud for its flexibility and economic savings. To protect data privacy, sensitive data usually have to be encrypted before outsourcing, which makes effective data utilization a challenging task. Traditional searchable symmetric encryption schemes allow users to securely search over encrypted data through keywords and selectively retrieve files of interest, but without capturing any relevance of data files or search keywords; fuzzy keyword search on encrypted data tolerates minor typos and format inconsistencies; and secure ranked keyword search captures the relevance of data files and returns the results users want most. Each of these techniques works only in isolation, which greatly reduces system usability and efficiency. In this paper, for the first time, we define and solve the problem of privacy-preserving ranked fuzzy keyword search over encrypted cloud data. Ranked fuzzy keyword search greatly enhances system usability and efficiency when exact match fails: it returns the matching files in ranked order with respect to certain relevance criteria (e.g., keyword frequency) based on keyword-similarity semantics. In our solution, we exploit edit distance to quantify keyword similarity and dictionary-based fuzzy-set construction to build the fuzzy keyword sets, which greatly reduces index size and storage and communication costs. We choose the efficient similarity measure of "coordinate matching", i.e., as many matches as possible, to capture the relevance of data files to the search keywords.
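To make the building blocks concrete, here is a minimal sketch (under the standard definitions, not the paper's exact construction) of edit distance for keyword similarity, a dictionary-based fuzzy keyword set, and coordinate-matching relevance:

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum single-character edits turning a into b."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def fuzzy_keyword_set(keyword, dictionary, d=1):
    """Dictionary-based construction: only real words within distance d,
    which keeps the fuzzy set (and hence the index) small."""
    return {w for w in dictionary if edit_distance(keyword, w) <= d}

def coordinate_match_score(file_keywords, query_keywords):
    """'Coordinate matching': relevance = number of matched keywords."""
    return len(file_keywords & query_keywords)
```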
{"title":"Privacy-Preserving Ranked Fuzzy Keyword Search over Encrypted Cloud Data","authors":"Qunqun Xu, Hong Shen, Yingpeng Sang, Hui Tian","doi":"10.1109/PDCAT.2013.44","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.44","url":null,"abstract":"As Cloud Computing becomes popular, more and more data owners prefer to store their data into the cloud for great flexibility and economic savings. In order to protect the data privacy, sensitive data usually have to be encrypted before outsourcing, which makes effective data utilization a challenging task. Although traditional searchable symmetric encryption schemes allow users to securely search over encrypted data through keywords and selectively retrieve files of interest without capturing any relevance of data files or search keywords, and fuzzy keyword search on encrypted data allows minor typos and format inconsistencies, secure ranked keyword search captures the relevance of data files and returns the results that are wanted most by users. These techniques function unilaterally, which greatly reduces the system usability and efficiency. In this paper, for the first time, we define and solve the problem of privacy-preserving ranked fuzzy keyword search over encrypted cloud data. Ranked fuzzy keyword search greatly enhances system usability and efficiency when exact match fails. It returns the matching files in a ranked order with respect to certain relevance criteria (e.g., keyword frequency) based on keyword similarity semantics. In our solution, we exploit the edit distance to quantify keyword similarity and dictionary-based fuzzy set construction to construct fuzzy keyword sets, which greatly reduces the index size, storage and communication costs. We choose the efficient similarity measure of \"coordinate matching\", i.e., as many matches as possible, to obtain the relevance of data files to the search keywords.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133247242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rapid growth of data and users' growing demands on system performance, data availability has become the most important issue in large-scale storage systems. Because they provide space-optimal data redundancy to protect against node failures, erasure codes have seen wide deployment. To ensure data availability, it is crucial to recover failed nodes quickly. In this paper, we propose PHR, a pipelined heterogeneous recovery optimization for failure recovery in RAID-6 coded storage systems. PHR takes into account both I/O parallelism and node heterogeneity in practical storage systems and returns an efficient recovery solution in a timely manner. We parallelize the PHR algorithm in a pipelined fashion to further improve failure-recovery performance. Through quantitative simulation studies and extensive testbed experiments, we show that PHR significantly reduces both recovery time and user response time.
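The abstract does not spell out the PHR algorithm itself; as a rough, purely illustrative sketch of heterogeneity-aware recovery, the greedy routine below chooses, per stripe, the feasible set of surviving nodes that keeps the bottleneck node's estimated read time smallest (node speeds and candidate read sets are invented inputs, not the paper's model):

```python
def assign_recovery_reads(stripes, node_speed):
    """Greedy heterogeneity-aware read assignment.

    stripes: for each stripe, a list of feasible sets of surviving nodes
             that suffice to rebuild the lost block (RAID-6 offers choices).
    node_speed: relative read bandwidth of each surviving node.
    """
    load = {n: 0 for n in node_speed}     # blocks assigned per node
    plan = []
    for candidates in stripes:
        best = min(candidates,
                   key=lambda s: max((load[n] + 1) / node_speed[n] for n in s))
        for n in best:
            load[n] += 1
        plan.append(best)
    return plan, load
```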
{"title":"PHR: A Pipelined Heterogeneous Recovery for RAID6-Coded Storage Systems","authors":"Fang Niu, Yinlong Xu, Yunfeng Zhu, Yan Zhang","doi":"10.1109/PDCAT.2013.8","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.8","url":null,"abstract":"With the rapid growth of data and the growing demand from users on the system performance, data availability has become the most important issue in large-scale storage systems. Due to the ability to provide space-optimal data redundancy to protect against node failures, erasure codes have seen widely deployment. To ensure data availability, it is crucial to recover node failures quickly. In this paper, we propose PHR, a pipelined heterogeneous recovery optimization for failure recovery in RAID-6 coded storage systems. Our PHR takes into account both I/O parallelism and node heterogeneity in practical storage systems, and returns an efficient recovery solution timely. We parallelize our PHR algorithm in a pipelined manner, so as to further improve failure recovery performance. With the quantitative simulation studies and extensive test bed experiments, we show our PHR significantly reduces recovery time and also user response time.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128241588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teaching-learning based optimization (TLBO), inspired by the teaching-learning process in a classroom, is a newly developed population-based algorithm. Apart from population size and the maximum number of iterations, it requires no algorithm-specific parameters. TLBO consists of two search phases: a teacher phase and a learner phase. In this paper, every learner is assigned to at least one group and, instead of learners studying by interacting directly with other learners, a group leader is responsible for raising the members' knowledge, i.e., for exploring toward the optimal solution. The idea is analogous to a group discussion, in which the group leader dominates the discussion's direction and performance. For simplicity, the proposed algorithm is denoted LTLBO. The effectiveness of the method is tested on several benchmark problems with different characteristics, and the results are compared with the original TLBO and particle swarm optimization (PSO).
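For reference, the standard TLBO teacher-phase update (which LTLBO inherits) moves each learner toward the best solution and away from the class mean; a minimal sketch for minimization follows, with the group-leader step hedged as our reading of the abstract rather than the paper's verbatim formula:

```python
import numpy as np

def teacher_phase(pop, fitness, rng):
    """Standard TLBO teacher phase (minimization):
    X_new = X + r * (teacher - TF * mean)."""
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    TF = rng.integers(1, 3)              # teaching factor, 1 or 2
    r = rng.random(pop.shape)
    return pop + r * (teacher - TF * mean)

def leader_phase(pop, leader, rng):
    """Assumed LTLBO-style step: members learn from their group leader
    rather than from a randomly chosen classmate."""
    r = rng.random(pop.shape)
    return pop + r * (leader - pop)
```

Usage would follow the usual evolutionary loop, e.g. `rng = np.random.default_rng(0)` and greedy acceptance of improved candidates after each phase.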
{"title":"Group Leader Dominated Teaching-Learning Based Optimization","authors":"Chang-Huang Chen","doi":"10.1109/PDCAT.2013.54","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.54","url":null,"abstract":"Teaching-learning based optimization (TLBO), inspired from the teaching-learning process in a classroom, is a newly developed population based algorithm. Except population size and maximum number of iteration, it does not require any specific parameters. TLBO consists of two modes of searching phase, teacher and learner phase. In this paper, every learner is assigned to at least one groups and, instead of a learner studied by interacting directly with other learners, group leader is responsible for raising up the member's knowledge, i.e., to explore for optimal solution. The idea is analog to group discussion in which group leader always dominate group discussion direction and performance. For simplicity, the proposed algorithm will be denoted as LTLBO. The effectiveness of the method is tested on many benchmark problems with different characteristics and the results are compared with original TLBO and particle swarm optimization (PSO).","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128651513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work, we focus in particular on the use of free open-source software, so that end users do not need to spend large amounts on software license fees. For cloud computing, virtualization technology delivers numerous benefits in addition to being one of the foundations on which a cloud environment is built. Through virtualization, enterprises can maximize operating efficiency without installing more facilities in the computer room. In this study, we implemented a virtualization environment and performed experiments on it. The main subject is how to use the OpenStack open-source software to build a cloud infrastructure with high availability and a dynamic resource allocation mechanism; this provides a private cloud solution for businesses and organizations and belongs to Infrastructure as a Service (IaaS), one of the three cloud service models. For the user interface, a web interface was used to reduce the complexity of accessing cloud resources. We measured the performance of live migration of virtual machines with different specifications and analyzed the data. Based on the live migration modes, we also developed an algorithm that removes the traditional need to manually determine whether a machine's load is too heavy: the virtual machine load level is detected automatically, achieving automatic dynamic migration that balances resources across servers.
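The abstract describes the automatic migration policy only at a high level; the sketch below shows one plausible threshold-based loop. All helper functions here (get_host_load, pick_least_loaded_vm, pick_target_host, live_migrate) are hypothetical stand-ins for monitoring and OpenStack API calls, not real client methods:

```python
HIGH_WATERMARK = 0.85   # host CPU fraction above which load is "too heavy"
LOW_WATERMARK = 0.50    # a migration target must stay below this afterwards

def rebalance(hosts):
    """One pass of a threshold-based auto-migration policy (illustrative)."""
    for host in hosts:
        if get_host_load(host) > HIGH_WATERMARK:          # overloaded host
            vm = pick_least_loaded_vm(host)               # cheapest VM to move
            target = pick_target_host(hosts, vm, LOW_WATERMARK)
            if target is not None:
                live_migrate(vm, target)                  # trigger live migration
```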
{"title":"Implementation of a Cloud IaaS with Dynamic Resource Allocation Method Using OpenStack","authors":"Chao-Tung Yang, Yu-Tso Liu, Jung-Chun Liu, Chih-Liang Chuang, Fuu-Cheng Jiang","doi":"10.1109/PDCAT.2013.18","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.18","url":null,"abstract":"In this work, we particularly focus on the use of free open-source software, so that end users do not need to spend a huge amount of software license fees. For cloud computing, virtualization technology delivers numerous benefits in addition to being one of the basic roles to build a cloud environment. By virtualization, enterprises can maximize working efficiency without the need to install more facilities in the computer room. In this study, we implemented a virtualization environment and performed experiments on it. The main subject of it is how to use the Open Stack open-source software to build a cloud infrastructure with high availability and a dynamic resource allocation mechanism. It provides a private cloud solution for business and organizations. It belongs to Infrastructure as a Service (IaaS), one of the three service models in the cloud. For the part of the user interface, a web interface was used to reduce the complexity of access to cloud resources for users. We measured the performance of live migration of virtual machines with different specifications and analyzed the data. Also according to live migration modes, we wrote an algorithm to solve the traditional migration problem that needs manually determining whether the machine load is too heavy or not, as a result, the virtual machine load level is automatically detected, and the purpose of automatic dynamic migration to balance resources of servers is achieved.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130548273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Benkner, Yuriy Kaniovskyi, Chris Borckholder, M. Bubak, P. Nowakowski, Darío Ruiz López, Steven Wood
The European VPH-Share project is developing a comprehensive service framework for sharing clinical data, information, models, and workflows focused on the analysis of human physiopathology within the Virtual Physiological Human (VPH) community. The project envisions an extensive and dynamic data infrastructure built on top of a secure hybrid Cloud environment. This paper presents the data service provisioning framework that builds up this infrastructure, focusing on the deployment of data integration services in the hybrid Cloud, the associated mechanism for securing access to patient-specific datasets, and performance results for different deployment scenarios relevant within the scope of the project.
{"title":"A Secure and Flexible Data Infrastructure for the VPH-Share Community","authors":"S. Benkner, Yuriy Kaniovskyi, Chris Borckholder, M. Bubak, P. Nowakowski, Darío Ruiz López, Steven Wood","doi":"10.1109/PDCAT.2013.42","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.42","url":null,"abstract":"The European VPH-Share project develops a comprehensive service framework with the objective of sharing clinical data, information, models and workflows focusing on the analysis of the human physiopathology within the Virtual Physiological Human (VPH) community. The project envisions an extensive and dynamic data infrastructure built on top of a secure hybrid Cloud environment. This paper presents the data service provisioning framework that builds up the data infrastructure, focusing on the deployment of data integration services in the hybrid Cloud, the associated mechanism for securing access to patient-specific datasets, and performance results for different deployment scenarios relevant within the scope of the project.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122010266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data races are one of the major causes of concurrency bugs in multithreaded programs, but they are hard to find due to nondeterministic thread scheduling. Data race detectors are essential tools that help long-suffering programmers locate data races in multithreaded programs. One type of detector detects data races precisely but is sensitive to thread scheduling, whereas another type is less sensitive to thread scheduling but reports a considerable number of false positives. In this paper, we propose a new dynamic data race detector called SimpleLock that accurately detects data races in a scheduling-insensitive manner with low execution overhead. We reduce execution overhead by exploiting two assumptions: first, that most data races are caused by accesses to shared variables made without locks; and second, that the two accesses constituting a data race do not lie far apart in the execution trace. Experiments conducted on the RoadRunner framework confirm that these assumptions are valid and that SimpleLock can efficiently and accurately detect real and potential data races in a single execution trace. The results also indicate that the execution overhead of SimpleLock is not much higher than that of FastTrack, the fastest happens-before race detector.
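The abstract does not detail SimpleLock's algorithm; the fragment below is a classic Eraser-style lockset check, included only to illustrate the first assumption (races mostly arise on shared variables accessed without a common lock):

```python
from collections import defaultdict

candidate_locks = {}            # variable -> intersection of locks held
accessors = defaultdict(set)    # variable -> threads that touched it

def on_access(var, thread_id, locks_held):
    """Record one access; warn when a shared variable's lockset is empty."""
    accessors[var].add(thread_id)
    if var not in candidate_locks:
        candidate_locks[var] = set(locks_held)
    else:
        candidate_locks[var] &= locks_held
    if len(accessors[var]) >= 2 and not candidate_locks[var]:
        print(f"potential data race on {var!r}")
```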
{"title":"SimpleLock: Fast and Accurate Hybrid Data Race Detector","authors":"Misun Yu, Sang-Kyung Yoo, Doo-Hwan Bae","doi":"10.1109/PDCAT.2013.15","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.15","url":null,"abstract":"Data races are one of the major causes of concurrency bugs in multithreaded programs, but they are hard to find due to nondeterministic thread scheduling. Data race detectors are essential tools that help long-suffering programmers to locate data races in multithreaded programs. One type of detectors precisely detects data races but is sensitive to thread scheduling, whereas another type is less sensitive to thread scheduling but reports a considerable number of false positives. In this paper, we propose a new dynamic data race detector called SimpleLock that accurately detects data races in a scheduling insensitive manner with low execution overhead. We reduce execution overhead by using two assumptions. The first is that most data races are caused by the accessing of shared variables without locks. The second is that two accesses that cause a data race have not a long distance between them in an execution trace. The results of experiments conducted on the Road Runner framework confirm that these assumptions are valid and that our SimpleLock detector can efficiently and accurately detect real and potential data races in one execution trace. The results also indicate that the execution overhead of SimpleLock is not much higher than that of FastTrack, the fastest happens-before race detector.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129236575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attribute reduction for big data is an important preprocessing step in pattern recognition, machine learning, and data mining. In this paper, a novel parallel method based on MapReduce for large-scale attribute reduction is proposed. Using this method, several representative heuristic attribute reduction algorithms from rough set theory have been parallelized. Furthermore, each of the parallel algorithms selects the same attribute reduct as its sequential version and therefore yields the same classification accuracy. An extensive experimental evaluation shows that these parallel algorithms are effective for big data.
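As a hedged illustration of why rough-set attribute reduction parallelizes naturally under MapReduce, the sketch below computes the positive region (objects whose condition-attribute signature determines a unique decision) with a map and a reduce step; it mirrors the data-parallel structure, not the paper's specific algorithms:

```python
from collections import defaultdict

def map_phase(records, attrs):
    """Emit (condition-attribute signature, decision) pairs; each mapper
    can process a disjoint chunk of the dataset independently."""
    for row in records:
        yield tuple(row[a] for a in attrs), row["decision"]

def reduce_phase(pairs):
    """Group by signature; classes with a single decision are consistent
    and form the positive region used by heuristic reduct search."""
    decisions = defaultdict(set)
    for sig, dec in pairs:
        decisions[sig].add(dec)
    return {sig for sig, decs in decisions.items() if len(decs) == 1}
```

The dependency degree of an attribute subset, the usual heuristic score in reduct search, is then the fraction of objects that fall in these consistent classes.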
{"title":"PLAR: Parallel Large-Scale Attribute Reduction on Cloud Systems","authors":"Junbo Zhang, Tianrui Li, Yi Pan","doi":"10.1109/PDCAT.2013.36","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.36","url":null,"abstract":"Attribute reduction for big data is viewed as an important preprocessing step in the areas of pattern recognition, machine learning and data mining. In this paper, a novel parallel method based on MapReduce for large-scale attribute reduction is proposed. By using this method, several representative heuristic attribute reduction algorithms in rough set theory have been parallelized. Further, each of the improved parallel algorithms can select the same attribute reduct as its sequential version, therefore, owns the same classification accuracy. An extensive experimental evaluation shows that these parallel algorithms are effective for big data.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116014857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In P2P systems, because the buffer space of each peer is limited, most systems employ cache-and-relay schemes that require each peer to cache the most recent video stream it receives. As long as the initial part of the video stream remains in its buffer, the peer can relay the cached stream to late-arriving peers in a pipelined fashion, reducing the load on the server. In our previous work, we proposed a novel caching scheme for peer-to-peer on-demand streaming, called Dynamic Buffering, which relies on the properties of Multiple Description Coding (MDC) to gradually reduce the number of cached descriptions held by a peer once its buffer is full. In this paper, we examine the service availability of a peer using dynamic buffering for various numbers of kinds of forwarded descriptions, and we analyze in detail how the number of kinds of forwarded descriptions affects a peer's average service availability. In addition, mathematical formulas for the reduction of average service availability under various numbers of kinds of forwarded descriptions are derived. Our experimental results show that the reduction of average service availability is related only to the number of kinds of forwarded descriptions.
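The derived formulas are not reproduced in the abstract; as a toy sketch only, the class below illustrates the dynamic-buffering idea of shedding whole descriptions (rather than evicting the stream prefix) when the buffer fills, so the peer can keep forwarding the descriptions it still holds:

```python
class DynamicBuffer:
    """Toy model of MDC dynamic buffering (illustrative, not the paper's
    exact scheme): on overflow, drop entire descriptions, highest id first."""
    def __init__(self, capacity, num_descriptions):
        self.capacity = capacity
        self.kept = list(range(num_descriptions))   # description ids cached
        self.used = 0

    def cache_segment(self, segment_size):
        # shed descriptions until the next segment of every kept
        # description fits in the remaining buffer space
        while self.kept and self.used + segment_size * len(self.kept) > self.capacity:
            self.kept.pop()
        self.used += segment_size * len(self.kept)
        return list(self.kept)   # descriptions this peer can still serve
```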
{"title":"Service Availability for Various Forwarded Descriptions with Dynamic Buffering on Peer-to-Peer Streaming Networks","authors":"Chow-Sing Lin, Jhe-Wei Lin","doi":"10.1109/PDCAT.2013.32","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.32","url":null,"abstract":"In P2P systems, because the buffer space of each peer is limited, most of P2P systems employ the cache-and-relay schemes that require each peer to cache the most recent video stream it receives. As long as the initial part of the video stream remains in its buffer, the peer can then relay the cached stream to late-arriving peers in a pipelining fashion and then the loading of a server is reduced. In our previous research work, we proposed a novel caching scheme for peer-to-peer on-demand streaming, called Dynamic Buffering, which relies on the feature of Multiple Description Coding to gradually reduce the number of cached descriptions in a peer once the buffer is full. In this paper we discuss service availability of a peer with dynamic buffering for various numbers of kinds of forwarded descriptions, and provide detailed analyses on how the number of kinds of forwarded descriptions affects average service availability of a peer. In addition, the mathematical formulas of the reduction of average service availability for various numbers of kinds of forwarded descriptions is derived. Our experiment results showed that the reduction of average service availability is only related to the number of kinds of forwarded descriptions.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115728840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}