The simplicity of bare bone particle swarm optimization (BPSO) is attractive since no parameter tuning is required. Nevertheless, it also suffers from premature convergence. To remedy this problem, a unified bare bone particle swarm optimization (UBPSO) that integrates global-model and local-model search strategies has recently been proposed, in which the weightings of the global and local search strategies may be constant or randomly varying. In this paper, a variant of UBPSO is proposed that stresses global exploration in the early stages and turns to local exploitation in the later stages of the search for an optimal solution. Numerical results reveal that this variant is competitive with UBPSO and performs better than BPSO and PSO on most of the tested benchmark functions.
{"title":"A Variant of Unified Bare Bone Particle Swarm Optimizer","authors":"Chang-Huang Chen","doi":"10.1109/PDCAT.2013.10","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.10","url":null,"abstract":"The simplicity of bare bone particle swarm optimization (BPSO) is attractive since no parameters tuning is required. Nevertheless, it also encounters the issue of premature convergence. To remedy this problem, by integrated global model and local model search strategies, a unified bare bone particle swarm optimization (UBPSO) is appeared in recently where the weightings of global and local search strategies may be constant or random varying. In this paper, a variant of UBPSO is proposed that stresses on global exploration ability in early stages and turns to local exploitation in later stages for searching optimal solution. Numerical results reveal that this variant is competitive to UBPSO and performs better than BPSO and PSO in most of the tested benchmark functions.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121077089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
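The idea in the abstract above — bare bone sampling with a global weighting that decays so the search moves from exploration to exploitation — can be sketched as follows. This is a minimal illustrative sketch under our own assumptions (linear weight decay, a sphere test function), not the paper's actual algorithm or parameter schedule:

```python
import random

def bbpso_variant(f, dim, n_particles=20, iters=200, seed=1):
    """Bare bone PSO sketch: each coordinate of a particle is resampled from
    a Gaussian whose mean mixes the particle's personal best with the global
    best. The mixing weight w decays over the run, so early iterations lean
    on the global best (exploration) and later ones on the personal best
    (exploitation)."""
    rng = random.Random(seed)
    pbest = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    pval = [f(p) for p in pbest]
    g = min(range(n_particles), key=pval.__getitem__)  # index of global best
    for t in range(iters):
        w = 1.0 - t / iters  # global weighting decays from 1 toward 0 (assumed schedule)
        for i in range(n_particles):
            x = []
            for d in range(dim):
                mu = w * pbest[g][d] + (1.0 - w) * pbest[i][d]
                sigma = abs(pbest[i][d] - pbest[g][d]) + 1e-12
                x.append(rng.gauss(mu, sigma))
            fx = f(x)
            if fx < pval[i]:
                pbest[i], pval[i] = x, fx
                if fx < pval[g]:
                    g = i
    return pbest[g], pval[g]

# Minimize the sphere function as a toy benchmark.
sphere = lambda v: sum(c * c for c in v)
best, val = bbpso_variant(sphere, dim=3)
```

Note how no inertia, cognitive, or social coefficients appear: the only moving part is the sampling distribution, which is what makes bare bone variants parameter-light.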
Deterministic replay is a key technique for debugging multithreaded programs on multicore processors. Software-only implementations of this scheme generally incur large runtime overhead. Hardware-assisted methods can significantly reduce the overhead, but most hardware-based recorders are system oriented: they capture all orderings that occur in the monitored application, the operating system, and other applications. This is inefficient and inconvenient for application programmers debugging their own programs. This paper proposes a hardware-assisted recorder (HRUL) that is lightweight and convenient for application programmers. HRUL uses a hybrid hardware-software method to extract dependencies from the monitored application in a complex execution environment, and compresses the recorded orderings with a combination of online and offline compression algorithms. What's more, it also captures implicit dependencies caused by system calls and operating-system scheduling to make replay faithful. We evaluate the scheme with 16-core runs of PARSEC; our results show that HRUL introduces runtime overhead of less than 3% and can reduce the log size by 81% (with online hardware compression alone).
{"title":"HRUL: A Hardware Assisted Recorder for User-Level Application","authors":"Shibin Tang, Fenglong Song, Lingjun Fan, Yuanchao Xu, Dongrui Fan, Zhiyong Liu","doi":"10.1109/PDCAT.2013.28","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.28","url":null,"abstract":"Deterministic replay is a key technique for debugging simultaneous multithreaded programs on multicore processor. With this scheme, software-only implementations generally incur large runtime overhead. Hardware assisted methods can significantly reduce the overhead, but most hardware based recorders are system oriented. They capture all orders happened in monitored application, Operating System, and other applications. This produces inefficiency and inconvenience for application programmers to debug their programs. This paper proposes a hardware assisted recorder (HRUL), which is lightweight and convenient to application programmers. HRUL uses a hybrid hardware-software method to extract dependencies from monitored application in a complex execution environment, and compresses the orders with a combination of online and offline compression algorithm. What' more, It also captures implicit dependencies caused by system call and scheduling in Operating System to make replay faithful. 
We evaluate the scheme with 16-core runs of PARSEC, our results show that HRUL introduces runtime overhead less than 3% and can reduce log size by 81% (only with online-hardware compression).","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125067215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graphics Processing Units (GPUs) have recently evolved into super multi-core, fully programmable architectures. With the CUDA programming model, programmers can readily implement the parallelism of a task on GPUs. The purpose of this paper is to accelerate Ant Colony Optimization (ACO) for the Traveling Salesman Problem (TSP) with GPUs. We propose a new parallel method, called the Transition Condition Method. Experimental results are extensively compared and evaluated in terms of both performance and solution quality, using TSP instances as a standard benchmark. Our new parallel method achieves a maximum speed-up factor of 4.74 over the previous parallel method, while the quality of its solutions is similar to that of the original sequential ACO algorithm. This shows that solution quality is not sacrificed for the sake of speed-up.
{"title":"Using CUDA GPU to Accelerate the Ant Colony Optimization Algorithm","authors":"K. Wei, Chao-Chin Wu, Chien-Ju Wu","doi":"10.1109/PDCAT.2013.21","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.21","url":null,"abstract":"Graph Processing Units (GPUs) have recently evolved into a super multi-core and a fully programmable architecture. In the CUDA programming model, the programmers can simply implement parallelism ideas of a task on GPUs. The purpose of this paper is to accelerate Ant Colony Optimization (ACO) for Traveling Salesman Problems (TSP) with GPUs. In this paper, we propose a new parallel method, which is called the Transition Condition Method. Experimental results are extensively compared and evaluated on the performance side and the solution quality side. The TSP problems are used as a standard benchmark for our experiments. In terms of experimental results, our new parallel method achieves the maximal speed-up factor of 4.74 than the previous parallel method. On the other hand, the quality of solutions is similar to the original sequential ACO algorithm. It proves that the quality of solutions does not be sacrificed in the cause of speed-up.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126996469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
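The state-transition rule is the part of ACO that GPU methods like the one above parallelize, since each ant's tour construction is independent. A minimal sequential sketch of the classical ant-system transition rule for TSP follows; the parameter values and the 4-city example are our own illustration, not the paper's setup:

```python
import math
import random

def aco_tsp(dist, n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Sequential ant-system sketch for TSP. Each ant builds a tour with the
    probabilistic transition rule p(i,j) ~ tau(i,j)^alpha * (1/d(i,j))^beta;
    pheromone evaporates by factor rho and each ant deposits 1/length on the
    edges of its tour."""
    n = len(dist)
    rng = random.Random(seed)
    tau = [[1.0] * n for _ in range(n)]  # uniform initial pheromone
    best_tour, best_len = None, math.inf
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, seen = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in seen]
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                j = rng.choices(cand, weights=weights)[0]
                tour.append(j)
                seen.add(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):           # evaporation
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:   # deposit
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len

# 4 cities on a unit square; the optimal tour is the perimeter, length 4.0.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.dist(p, q) or 1e-9 for q in pts] for p in pts]
tour, length = aco_tsp(dist)
```

On a GPU, the inner per-ant loop maps naturally onto threads or thread blocks, which is the parallelism the Transition Condition Method exploits.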
We focus on the parallelization of the two-dimensional square packing problem, in which a list of square items must be packed into a minimum number of unit square bins. All square items have side length smaller than or equal to 1, which is also the side length of each unit square bin, and the total area of the items packed into one bin cannot exceed 1. Using the harmonic idea, some squares can be put into the same bin without exceeding the bin's side-length limit of 1. We concurrently pack all the corresponding squares into one bin on a parallel computation system. An algorithm with a worst-case asymptotic error bound of 9/4 and time complexity O(n) is presented. Let OPT(I) and A(I) denote, respectively, the cost of an optimal solution and the cost produced by an approximation algorithm A for an instance I of the square packing problem. The best upper bound for on-line square packing to date is 2.1439, proved by Han et al. [23] using complex weighting functions. Although the upper bound of our parallel algorithm is a little worse than that of Han's algorithm, its analysis is simpler and its time complexity is improved: Han's algorithm needs O(n log n) time, while our method needs only O(n) time.
{"title":"A Parallel Algorithm for 2D Square Packing","authors":"Xiaofan Zhao, Hong Shen","doi":"10.1109/PDCAT.2013.35","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.35","url":null,"abstract":"We focus on the parallelization of two-dimensional square packing problem. In square packing problem, a list of square items need to be packed into a minimum number of unit square bins. All square items have side length smaller than or equal to 1 which is also the side length of each unit square bin. The total area of items that has been packed into one bin cannot exceed 1. Using the idea of harmonic, some squares can be put into the same bin without exceeding the bin limitation of side length 1. We try to concurrently pack all the corresponding squares into one bin by a parallel systerm of computation processing. A 9=4-worst case asymptotic error bound algorithm with time complexity (n) is showed. Let OPT(I) and A(I) denote, respectively, the cost of an optimal solution and the cost produced by an approximation algorithmA for an instance Iof the square packing problem. The best upper bound of on-line square packing to date is 2.1439 proved by Han et al. [23] by using complexity weighting functions. However the upper bound of our parallel algorithm is a litter worse than Han's algorithm, the analysis of our algorithm is more simple and the time complexity is improved. 
Han's algorithm needs O(nlogn) time, while our method only needs (n) time.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114198613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
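The harmonic idea mentioned above can be made concrete: classify a square of side s into type k when s ∈ (1/(k+1), 1/k], so that k×k type-k squares fit in one bin; grouping items by type is then embarrassingly parallel. The sketch below is our own illustration of that classification, not the paper's 9/4-bounded algorithm:

```python
import math
from collections import Counter

def harmonic_type(side, M=4):
    """Return type k (1 <= k < M) when side is in (1/(k+1), 1/k];
    everything with side <= 1/M is lumped into the smallest type M."""
    for k in range(1, M):
        if side > 1.0 / (k + 1):
            return k
    return M

def pack_squares(sides, M=4):
    """Group squares by harmonic type and pack greedily: a bin can hold
    k*k type-k squares, since k squares of side <= 1/k fit per row.
    Returns the number of bins used (an upper bound on the optimum)."""
    counts = Counter(harmonic_type(s, M) for s in sides)
    bins = 0
    for k, c in counts.items():
        bins += math.ceil(c / (k * k))
    return bins
```

For example, two squares of side 0.6 are each type 1 and need a bin apiece, while nine squares of side 0.3 are type 3 and all share one bin. The waste of such type-homogeneous bins is what the worst-case asymptotic error bound quantifies.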
This work presents the Cloud Testing framework, a solution for parallelizing the execution of a test suite over a distributed cloud infrastructure. Using a cloud as the runtime environment for automated software testing provides a more efficient and effective solution than traditional methods with regard to exploiting diversity and heterogeneity for test coverage. The objective of this work is to evaluate the performance gains achieved with the framework, showing that it is possible to improve the software testing process with very little configuration overhead and at low cost.
{"title":"A Framework for Automated Software Testing on the Cloud","authors":"Gustavo Savio De Oliveira, A. Duarte","doi":"10.1109/PDCAT.2013.61","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.61","url":null,"abstract":"This work presents the framework Cloud Testing, a solution to parallelize the execution of a test suite over a distributed cloud infrastructure. The use of a cloud as runtime environment for automated software testing provides a more efficient and effective solution when compared to traditional methods regarding the exploration of diversity and heterogeneity for testing coverage. The objective of this work is evaluate our solution regarding the performance gains achieved with the use of the framework showing that it is possible to improve the software testing process with very little configuration overhead and low costs.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125458931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy efficiency is now an important metric for evaluating a computing system. However, saving energy is a big challenge due to many constraints. For example, in Hadoop, one of the most popular distributed processing frameworks, three replicas of each data block are randomly distributed in order to improve performance and fault tolerance. Such a mechanism, however, limits the number of machines that can be turned off to save energy without affecting data availability. To overcome this limitation, previous research introduced a mechanism called a covering subset, which maintains a set of active nodes to ensure the immediate availability of data even when all other nodes are turned off. This covering-subset-based mechanism works smoothly if no failure happens; however, a node in the covering subset may fail. In this paper, we study energy-efficient failure recovery in Hadoop clusters. Rather than using only replication, as a Hadoop system does by default, we investigate both replication and erasure coding as possible redundancy mechanisms. We develop failure recovery algorithms for both systems and analytically compare their energy efficiency.
{"title":"Energy Analysis of Hadoop Cluster Failure Recovery","authors":"Weiyue Xu, Ying Lu","doi":"10.1109/PDCAT.2013.29","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.29","url":null,"abstract":"Energy efficiency is now used as an important metric for evaluating a computing system. However, saving energy is a big challenge due to many constraints. For example, in one of the most popular distributed processing frameworks, Hadoop, three replicas of each data block are randomly distributed in order to improve performance and fault tolerance. But such a mechanism limits the largest number of machines that can be turned off to save energy without affecting the data availability. To overcome this limitation, previous research introduces a new mechanism called covering subset which maintains a set of active nodes to ensure the immediate availability of data, even when all other nodes are turned off. This covering subset based mechanism works smoothly if no failure happens. However, a node in the covering subset may fail. In this paper, we study the energy-efficient failure recovery in Hadoop clusters. Rather than only using the replication as adopted by a Hadoop system by default, we investigate both replication and erasure coding as possible redundancy mechanisms. 
We develop failure recovery algorithms for both systems and analytically compare their energy efficiency.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134186980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
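The replication-versus-erasure-coding trade-off the paper above analyzes can be illustrated with a back-of-envelope model. The parameter values (3 replicas, a (6,3) code) are our own assumptions, not the paper's configuration:

```python
def storage_overhead(scheme, k=6, m=3, replicas=3):
    """Raw bytes stored per byte of user data: replication stores `replicas`
    full copies; a (k, m) erasure code stores k data + m parity blocks."""
    if scheme == "replication":
        return float(replicas)
    if scheme == "erasure":
        return (k + m) / k
    raise ValueError(scheme)

def recovery_reads(scheme, k=6):
    """Blocks read to rebuild one lost block: replication reads a single
    surviving replica, while erasure coding must read k surviving blocks
    to re-encode -- more I/O, and hence more energy, per failure."""
    return 1 if scheme == "replication" else k
```

So erasure coding halves the storage footprint here (1.5x vs 3x), but each recovery touches six blocks instead of one; which side wins on energy depends on the failure rate, which is the kind of question the paper's analytical comparison addresses.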
The architecture of the interconnection network plays a significant role in the performance and energy consumption of Network-on-Chip (NoC) systems. In this paper we propose a NoC implementation of the Midimew-connected Mesh Network (MMN). MMN is a Minimal Distance Mesh with Wrap-around (midimew) links network built from multiple basic modules, in which the basic modules are 2D-mesh networks that are hierarchically interconnected to form higher-level networks. Implementing all the links of a level-3 MMN requires a minimum of 4 layers, which is feasible with current and future VLSI technologies. With its innovative combination of diagonal and hierarchical structure, MMN possesses several attractive features, including constant node degree, small diameter, low cost, small average distance, and moderate bisection width, compared with other conventional and hierarchical interconnection networks.
{"title":"Network-on-Chip Implementation of Midimew-Connected Mesh Network","authors":"Md. Rabiul Awal, M. Rahman","doi":"10.1109/PDCAT.2013.48","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.48","url":null,"abstract":"Architecture of interconnection network plays a significant role in the performance and energy consumption of Network-on-Chip (NoC) systems. In this paper we propose NoC implementation of Midi mew-connected Mesh Network (MMN). MMN is a Minimal Distance Mesh with Wrap-around (Midi mew) links network of multiple basic modules, in which the basic modules are 2D-mesh networks that are hierarchically interconnected for higher-level networks. For implementing all the links of level-3 MMN, minimum 4 layers are needed which is feasible with current and future VLSI technologies. With innovative combination of diagonal and hierarchical structure, MMN possesses several attractive features including constant node degree, small diameter, low cost, small average distance, and moderate bisection width than that of other conventional and hierarchical interconnection networks.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132810918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chao-Tung Yang, Jung-Chun Liu, Chi-Jui Liao, Chia-Cheng Wu, Fang-Yie Leu
Along with improvements in sanitary conditions and changes in lifestyle, people have begun to pay attention to the modern concepts of health promotion and preventive medicine. We therefore built an intelligent environmental monitoring feedback system that collects data on the physical condition of employees and the air quality of the working environment, displays the collected data in a real-time interface, and sends out warning messages to prevent accidents. We hope that, based on these real-time data, the proposed system can help people make correct and timely decisions and act in time to maintain a beneficial environment in the monitored area.
{"title":"On Construction of an Intelligent Environmental Monitoring System for Healthcare","authors":"Chao-Tung Yang, Jung-Chun Liu, Chi-Jui Liao, Chia-Cheng Wu, Fang-Yie Leu","doi":"10.1109/PDCAT.2013.45","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.45","url":null,"abstract":"Along with the improvement of sanitary conditions and changes of life style, people begin to pay attention to the modern concept of health promotion and preventive medicine. Therefore, we built an intelligent environment monitoring feedback system to collect data of physical conditions of employees and air conditions of the working environment, displayed the collected data with a real time interface, and sent out warming messages to prevent accidents. We hope that based on these real-time data, the proposed system can help people make right and timely decisions, and act on time to maintain a beneficial environment in the monitored area.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131680162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pedro Carvalho Filho, Clodoaldo Brasilino, A. Duarte
BACKGROUND: There is a large body of literature on fault management in grid computing. Despite it being a well-established research area, there are no systematic studies characterizing the kinds of research that have been conducted or identifying well-explored topics and opportunities for further research. OBJECTIVE: This study aims to survey the existing research on fault management in grid computing in order to identify useful approaches and opportunities for future research. METHOD: We conducted a systematic mapping study to collect, classify, and analyze the research literature on fault management in grid computing indexed by the main search engines in the field. RESULTS: Our study selected and classified 257 scientific papers and answered five research questions regarding the distribution of the scientific production over time and space. CONCLUSIONS: The majority of the selected studies focus on fault tolerance, with very few efforts towards fault prevention, prediction, and removal.
{"title":"Ten Years of Research on Fault Management in Grid Computing: A Systematic Mapping Study","authors":"Pedro Carvalho Filho, Clodoaldo Brasilino, A. Duarte","doi":"10.1109/PDCAT.2013.60","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.60","url":null,"abstract":"BACKGROUND: There is a large body of literature on research about fault management in grid computing. Despite being a well-established research area, there are no systematic studies focusing on characterizing the sorts of research that have been conducted, identifying well-explored topics as well as opportunities for further research. OBJECTIVE: This study aims at surveying the existing research on fault management in grid computing in order to identify useful approaches and opportunities for future research. METHOD: We conducted a systematic mapping study to collect, classify and analyze the research literature on fault management in grid computing indexed by the main search engines in the field. RESULTS: Our study selected and classified 257 scientific papers and was able to answer five research questions regarding the distribution of the scientific production over the time and space. CONCLUSIONS: The majority of the selected studies focus on fault tolerance, with very few efforts towards fault prevention, prediction and removal.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114654818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. R. Valêncio, F. Almeida, J. M. Machado, A. Colombini, L. A. Neves, Rogéria Cristiane Gratão de Souza
One way to boost the performance of a Database Management System (DBMS) is to fetch data in advance of its use, a technique known as prefetching. However, depending on the resource being used (file, disk partition, memory, etc.), prefetching may need to be done differently or may not be necessary at all, forcing a DBMS to be aware of the underlying storage system. In this paper we propose a storage system that frees the DBMS of this task by exposing the database through a single interface, no matter what kind of resource hosts it. We have implemented a file resource that recognizes and exploits sequential access patterns that emerge over time in order to prefetch the blocks adjacent to the requested ones. Our approach is speculative because it considers past accesses, but it also considers hints from the upper layers of the DBMS, which must specify the access context in which a read operation takes place. The informed access context is then mapped to one of the available channels in the file resource, which is equipped with a set of internal buffers, one per channel, for managing fetched and prefetched data. Prefetched data are moved to the main cache of the DBMS only if actually requested by the application, which helps to avoid cache pollution. We thus introduce a lightweight two-level cache hierarchy without any intervention in the DBMS kernel.
{"title":"The Storage System for a Multimedia Data Manager Kernel","authors":"C. R. Valêncio, F. Almeida, J. M. Machado, A. Colombini, L. A. Neves, Rogéria Cristiane Gratão de Souza","doi":"10.1109/PDCAT.2013.41","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.41","url":null,"abstract":"One way to boost the performance of a Database Management System (DBMS) is by fetching data in advance of their use, a technique known as prefetching. However, depending on the resource being used (file, disk partition, memory, etc.), the way prefetching is done might be different or even not necessary, forcing a DBMS to be aware of the underlying Storage System. In this paper we propose a Storage System that frees the DBMS of this task by exposing the database through a unique interface, no matter what kind of resource hosts it. We have implemented a file resource that recognizes and exploits sequential access patterns that emerge over time to prefetch adjacent blocks to the requested ones. Our approach is speculative because it considers past accesses, but it also considers hints from the upper layers of the DBMS, which must specify the access context in which a read operation takes place. The informed access context is then mapped to one of the available channels in the file resource, which is equipped with a set of internal buffers, one per channel, for the management of fetched and prefetched data. Prefetched data are moved to the main cache of the DBMS only if really requested by the application, which helps to avoid cache pollution. So, we slightly introduced a two level cache hierarchy without any intervention of the DBMS kernel. 
We ran the tests with different buffer settings and compared the results against the OBL policy, which showed that it is possible to get a read time up to two times faster in a highly concurrent environment without sacrificing the performance when the system is not under intensive workloads.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125942412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
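The per-channel sequential-pattern detection described above can be sketched as follows. This is a simplified model in the spirit of the paper's file resource, with our own names, threshold, and prefetch depth; the real system's buffer management and DBMS hinting are richer:

```python
class ChannelPrefetcher:
    """One channel's read path: detect runs of adjacent block requests and,
    once a run is long enough, prefetch the next `depth` blocks into a
    private buffer so subsequent sequential reads hit without a demand
    miss against the backing store."""

    def __init__(self, read_block, depth=4, run_threshold=2):
        self.read_block = read_block      # callable: block_id -> data
        self.depth = depth                # how far ahead to prefetch
        self.run_threshold = run_threshold
        self.buffer = {}                  # prefetched, not yet consumed
        self.last = None                  # last block id requested
        self.run = 0                      # length of current sequential run
        self.demand_misses = 0            # reads the caller had to wait for

    def read(self, blk):
        hit = blk in self.buffer
        if hit:
            data = self.buffer.pop(blk)
        else:
            self.demand_misses += 1
            data = self.read_block(blk)
        # Track the sequential run and prefetch ahead once it is established.
        self.run = self.run + 1 if self.last is not None and blk == self.last + 1 else 1
        self.last = blk
        if self.run >= self.run_threshold:
            for nxt in range(blk + 1, blk + 1 + self.depth):
                if nxt not in self.buffer:
                    self.buffer[nxt] = self.read_block(nxt)
        return data

# A toy backing store: 32 one-byte blocks read purely sequentially.
store = {i: bytes([i]) for i in range(32)}
pf = ChannelPrefetcher(lambda b: store[b])
for b in range(10):
    pf.read(b)
```

In this run only the first two reads miss; from the third block on, every request is served from the channel buffer, which is the behavior that lets prefetched data stay out of the DBMS main cache until it is actually demanded.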