Cost-Aware Client-Side File Caching for Data-Intensive Applications
Yaning Huang, Hai Jin, Xuanhua Shi, Song Wu, Yong Chen
2013 IEEE 5th International Conference on Cloud Computing Technology and Science, 2 December 2013
DOI: 10.1109/CloudCom.2013.140
Citations: 2
Abstract
Parallel and distributed file systems are widely used to provide high throughput in high-performance computing and Cloud computing systems. To increase parallelism, I/O requests are partitioned into multiple sub-requests (or `flows') and distributed across different data nodes. The performance of the file system degrades severely when data nodes have highly unbalanced response times. Client-side caching offers a promising direction for addressing this issue. However, current work has primarily used client-side memory as a read cache and employed a write-through policy, which requires a synchronous update for every write and significantly under-utilizes the client-side cache when applications are write-intensive. Realizing that the cost of an I/O request is dominated by its straggler sub-requests, we propose a cost-aware client-side file caching (CCFC) strategy that caches the sub-requests with high I/O cost on the client side. This caching policy enables a new trade-off across the dimensions of write performance, consistency guarantees, and cache size. Using the MADbench2 benchmark workload, we evaluate our new cache policy against the conventional write-through policy. We find that the proposed CCFC strategy achieves up to 110% throughput improvement over conventional write-through with the same cache size on an 85-node cluster.
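To make the idea concrete, below is a minimal Python sketch of a cost-aware client-side write-caching decision, assuming each write is split into per-node sub-requests and that the client tracks a moving average of each data node's recent response time. The class, method, and parameter names (CostAwareClientCache, should_cache, cost_threshold, etc.) are illustrative assumptions for this sketch, not the paper's actual implementation or API.

```python
# Sketch: cache "high cost" sub-requests (those targeting straggler nodes)
# in client memory; write the rest through synchronously.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass, field


@dataclass
class SubRequest:
    node_id: int   # data node that will serve this sub-request ("flow")
    offset: int    # byte offset within the file
    data: bytes    # payload of this sub-request


@dataclass
class CostAwareClientCache:
    # Cache a sub-request if its node is this many times slower than the fastest node.
    cost_threshold: float = 2.0
    capacity_bytes: int = 64 * 1024 * 1024
    used_bytes: int = 0
    cache: dict = field(default_factory=dict)        # (node_id, offset) -> data
    avg_latency: dict = field(default_factory=dict)  # node_id -> moving-average latency (s)

    def record_latency(self, node_id: int, seconds: float, alpha: float = 0.2) -> None:
        """Update an exponential moving average of a node's response time."""
        if node_id not in self.avg_latency:
            self.avg_latency[node_id] = seconds
        else:
            self.avg_latency[node_id] = (1 - alpha) * self.avg_latency[node_id] + alpha * seconds

    def should_cache(self, sub: SubRequest) -> bool:
        """A sub-request is high-cost when its node is a straggler relative to the fastest node."""
        if sub.node_id not in self.avg_latency or not self.avg_latency:
            return False
        fastest = min(self.avg_latency.values())
        is_straggler = self.avg_latency[sub.node_id] >= self.cost_threshold * fastest
        fits = self.used_bytes + len(sub.data) <= self.capacity_bytes
        return is_straggler and fits

    def write(self, sub: SubRequest) -> str:
        """Cache high-cost sub-requests locally; write the rest through."""
        if self.should_cache(sub):
            self.cache[(sub.node_id, sub.offset)] = sub.data
            self.used_bytes += len(sub.data)
            return "cached"        # flushed to the data node later, in the background
        return "write-through"     # sent synchronously to the data node


if __name__ == "__main__":
    cc = CostAwareClientCache()
    cc.record_latency(node_id=0, seconds=0.005)  # fast node
    cc.record_latency(node_id=1, seconds=0.050)  # straggler node
    print(cc.write(SubRequest(node_id=0, offset=0, data=b"x" * 4096)))  # -> write-through
    print(cc.write(SubRequest(node_id=1, offset=0, data=b"x" * 4096)))  # -> cached
```

The key design point mirrored here is that the caching decision is made per sub-request based on its estimated cost (the target node's observed latency) rather than per file or per request, so only the straggler flows consume client cache space while fast flows keep the write-through consistency behavior.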