Title: Introducing SSDs to the Hadoop MapReduce Framework
Authors: Sangwhan Moon, J. Lee, Yang-Suk Kee
Published in: 2014 IEEE 7th International Conference on Cloud Computing, 2014-06-27
DOI: 10.1109/CLOUD.2014.45
Citations: 44
Abstract
Solid State Drive (SSD) cost-per-bit continues to decrease. Consequently, system architects increasingly consider replacing Hard Disk Drives (HDDs) with SSDs to accelerate Hadoop MapReduce processing. When attempting this, system architects usually realize that SSD characteristics and today's Hadoop framework exhibit mismatches that impede indiscriminate SSD integration. Hence, cost-effective SSD utilization has proved challenging within many Hadoop environments. This paper compares SSD performance to HDD performance within a Hadoop MapReduce framework. It identifies extensible best practices that can exploit SSD benefits within Hadoop frameworks when combined with high network bandwidth and increased parallel storage access. Terasort benchmark results demonstrate that SSDs presently deliver significant cost-effectiveness when they store intermediate Hadoop data, leaving HDDs to store Hadoop Distributed File System (HDFS) source data.
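The abstract's recommendation — store intermediate MapReduce (shuffle) data on SSDs while keeping HDFS source data on HDDs — corresponds to standard Hadoop directory configuration. A minimal sketch, assuming SSDs mounted at /mnt/ssd1 and /mnt/ssd2 and HDDs at /mnt/hdd1 and /mnt/hdd2 (mount paths are illustrative, not from the paper):

```xml
<!-- mapred-site.xml: place intermediate map output / shuffle spill data on SSD mounts -->
<property>
  <name>mapreduce.cluster.local.dir</name>
  <value>/mnt/ssd1/mapred/local,/mnt/ssd2/mapred/local</value>
</property>

<!-- hdfs-site.xml: keep HDFS block storage (source and output data) on HDD mounts -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/mnt/hdd1/hdfs/data,/mnt/hdd2/hdfs/data</value>
</property>
```

Listing multiple directories in each value also gives the increased parallel storage access the abstract identifies as a prerequisite for exploiting SSD bandwidth.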