{"title":"检查点间隔选择对Hadoop框架调度性能的影响","authors":"Yassir Samadi, M. Zbakh, Najlae Kasmi","doi":"10.1109/ICMCS.2018.8525971","DOIUrl":null,"url":null,"abstract":"MapReduce is one of the most popular paradigm for processing a huge volume of data (big data) in distributed manner. In addition, Hadoop is considred as one of the most well-known implemention of MapReduce for processing MapReduce programs. The scheduler in Hadoop manages and monitors the scheduling of tasks. In addition, if a failure takes place, Hadoop reschedules the failed tasks. This makes fault tolerance a critical issue for the efficient operation of any application running on Hadoop in order to ensure the quality of service (QoS) and to meet the end-users expectations. Among the well-used techniques for providing fault tolerance in distributed systems, there is the checkpointing technique. The idea behind checkpointing for MapReduce tasks is to use checkpoints to save intermediate results at some points in time. Once a task fails, it can restart from the checkpointed state. However, selecting an appropriate checkpointing interval is not a trivial task. Unnecessary frequent checkpointing may degrade the system performance. Consequently, the checkpointing interval must be selected taking into account the failure probability, as well as the nature of the workload. Towards this direction, we have analyzed the performance of Hadoop with presence of different types of failures (task failure, TaskTracker failure and NameNode failure). We then investigate via simulation the impact of checkpointing interval selection on the performance of Hadoop under various failure probabilities. This paper also discusses our findings and draws attention on how to improve the checkpointing interval selection on Hadoop.","PeriodicalId":386031,"journal":{"name":"International Conference on Multimedia Computing and Systems","volume":"61 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"The impact of checkpointing interval selection on the scheduling performance of Hadoop framework\",\"authors\":\"Yassir Samadi, M. Zbakh, Najlae Kasmi\",\"doi\":\"10.1109/ICMCS.2018.8525971\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"MapReduce is one of the most popular paradigm for processing a huge volume of data (big data) in distributed manner. In addition, Hadoop is considred as one of the most well-known implemention of MapReduce for processing MapReduce programs. The scheduler in Hadoop manages and monitors the scheduling of tasks. In addition, if a failure takes place, Hadoop reschedules the failed tasks. This makes fault tolerance a critical issue for the efficient operation of any application running on Hadoop in order to ensure the quality of service (QoS) and to meet the end-users expectations. Among the well-used techniques for providing fault tolerance in distributed systems, there is the checkpointing technique. The idea behind checkpointing for MapReduce tasks is to use checkpoints to save intermediate results at some points in time. Once a task fails, it can restart from the checkpointed state. However, selecting an appropriate checkpointing interval is not a trivial task. Unnecessary frequent checkpointing may degrade the system performance. Consequently, the checkpointing interval must be selected taking into account the failure probability, as well as the nature of the workload. 
Towards this direction, we have analyzed the performance of Hadoop with presence of different types of failures (task failure, TaskTracker failure and NameNode failure). We then investigate via simulation the impact of checkpointing interval selection on the performance of Hadoop under various failure probabilities. This paper also discusses our findings and draws attention on how to improve the checkpointing interval selection on Hadoop.\",\"PeriodicalId\":386031,\"journal\":{\"name\":\"International Conference on Multimedia Computing and Systems\",\"volume\":\"61 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Conference on Multimedia Computing and Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMCS.2018.8525971\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Multimedia Computing and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMCS.2018.8525971","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: MapReduce is one of the most popular paradigms for processing huge volumes of data (big data) in a distributed manner, and Hadoop is one of the best-known implementations of MapReduce. The scheduler in Hadoop manages and monitors the scheduling of tasks, and if a failure takes place, Hadoop reschedules the failed tasks. This makes fault tolerance a critical issue for the efficient operation of any application running on Hadoop, in order to ensure quality of service (QoS) and to meet end-users' expectations. Among the widely used techniques for providing fault tolerance in distributed systems is checkpointing. The idea behind checkpointing for MapReduce tasks is to save intermediate results at certain points in time, so that once a task fails, it can restart from the checkpointed state. However, selecting an appropriate checkpointing interval is not a trivial task: unnecessarily frequent checkpointing may degrade system performance. Consequently, the checkpointing interval must be selected taking into account the failure probability as well as the nature of the workload. Toward this goal, we analyze the performance of Hadoop in the presence of different types of failures (task failure, TaskTracker failure, and NameNode failure). We then investigate via simulation the impact of checkpointing interval selection on the performance of Hadoop under various failure probabilities. This paper also discusses our findings and draws attention to how checkpointing interval selection in Hadoop can be improved.
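As a rough illustration of how failure probability enters this trade-off, the sketch below computes a first-order checkpoint interval using Young's classical approximation, T_opt ≈ sqrt(2 · C · MTBF), where C is the time to write one checkpoint and MTBF is the mean time between failures. This is a generic back-of-the-envelope model, not the selection strategy evaluated in the paper, and the checkpoint cost and MTBF figures used here are hypothetical.

```java
/**
 * Minimal sketch: first-order checkpoint interval via Young's approximation,
 * T_opt ≈ sqrt(2 * C * MTBF), where C is the time to persist one checkpoint
 * and MTBF is the mean time between failures of a worker (e.g. a TaskTracker).
 * Illustrative only -- the values below are hypothetical, not taken from the paper.
 */
public class CheckpointInterval {

    /** Approximate optimal interval (seconds) between two consecutive checkpoints. */
    static double youngInterval(double checkpointCostSec, double mtbfSec) {
        return Math.sqrt(2.0 * checkpointCostSec * mtbfSec);
    }

    public static void main(String[] args) {
        double checkpointCostSec = 10.0;               // hypothetical: 10 s to write intermediate results
        double[] mtbfValuesSec = {3600, 7200, 86400};  // hypothetical MTBFs: 1 h, 2 h, 24 h

        for (double mtbf : mtbfValuesSec) {
            System.out.printf("MTBF = %.0f s -> checkpoint every ~%.0f s%n",
                    mtbf, youngInterval(checkpointCostSec, mtbf));
        }
    }
}
```

The qualitative behaviour matches the point made in the abstract: as failures become more likely (smaller MTBF), checkpoints should be taken more often, whereas on a reliable cluster the same checkpointing frequency would only add overhead.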