Genetic optimized data deduplication for distributed big data storage systems

N. Kumar, Shobha Antwal, Ganesh Samarthyam, S. Jain
{"title":"Genetic optimized data deduplication for distributed big data storage systems","authors":"N. Kumar, Shobha Antwal, Ganesh Samarthyam, S. Jain","doi":"10.1109/ISPCC.2017.8269581","DOIUrl":null,"url":null,"abstract":"Content-Defined Chunking (CDC) detect maximum redundancy in data deduplication systems in the past years. In this research work, we focus on optimizing the deduplication system by adjusting the pertinent factors in content defined chunking (CDC) to identify as the key ingredients by declaring chunk cut-points and efficient fingerprint lookup using bucket based index partitioning. For efficient chunking, we propose Genetic Evolution (GE) algorithm based approach which is optimized Two Thresholds Two Divisors (TTTD-P) CDC algorithm where we significantly reduce the number of computing operations by using single dynamic optimal parameter divisor D with optimal threshold value exploiting the multi-operations nature of TTTD. To reduce the chunk-size variance, TTTD algorithm introduces an additional backup divisor D' that has a higher probability of finding cut-points. However, adding an additional divisor decreases the chunking throughput, meaning that TTTD algorithm aggravates Rabin's CDC performance bottleneck. To this end, Asymmetric Extremum (AE) significantly improves chunking throughput while providing comparable deduplication efficiency by using the local extreme value in a variable-sized asymmetric window to overcome the Rabin, MAXP and TTTD boundaries-shift problem. FAST CDC in the year 2016 is about 10 times faster than unimodal Rabin CDC and about 3 times faster than Gear and Asymmetric Extremum (AE) CDC, while achieving nearby the same deduplication ratio (DR). Therefore, we propose GE based TTTD-P optimized chunking to maximize chunking throughput with increased DR; and bucket indexing approach reduces hash values judgement time to identify and declare redundant chunk about 16 times than unimodal baseline Rabin CDC, 5 times than AE CDC, 1.6 times than FAST CDC. Our experimental results comparative analysis reveals that TTTD-P using fast BUZ rolling hash function with bucket indexing on Hadoop Distributed File System (HDFS) provide a comparatively maximum redundancy detection with higher throughput, higher deduplication ratio, lesser computation time and very low hash values comparison time as being best data deduplication for distributed big data storage systems.","PeriodicalId":142166,"journal":{"name":"2017 4th International Conference on Signal Processing, Computing and Control (ISPCC)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 4th International Conference on Signal Processing, Computing and Control (ISPCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISPCC.2017.8269581","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Content-Defined Chunking (CDC) has detected the most redundancy in data deduplication systems in recent years. In this work, we optimize the deduplication system by adjusting the pertinent factors of CDC that we identify as its key ingredients: the declaration of chunk cut-points, and efficient fingerprint lookup using bucket-based index partitioning. For efficient chunking, we propose a Genetic Evolution (GE) algorithm based approach, an optimized Two Thresholds Two Divisors CDC algorithm (TTTD-P), in which we significantly reduce the number of computing operations by exploiting the multi-operation nature of TTTD with a single dynamically optimized divisor D and an optimal threshold value. To reduce chunk-size variance, the original TTTD algorithm introduces an additional backup divisor D' that has a higher probability of finding cut-points; the extra divisor, however, decreases chunking throughput, so TTTD aggravates the performance bottleneck of Rabin-based CDC. Asymmetric Extremum (AE) chunking significantly improves chunking throughput while providing comparable deduplication efficiency, using the local extreme value in a variable-sized asymmetric window to overcome the boundary-shift problem of Rabin, MAXP, and TTTD. FastCDC (2016) is about 10 times faster than unimodal Rabin CDC and about 3 times faster than Gear and AE CDC, while achieving nearly the same deduplication ratio (DR). We therefore propose GE-optimized TTTD-P chunking to maximize chunking throughput with an increased DR, together with a bucket-indexing approach that reduces the hash-comparison time needed to identify and declare redundant chunks by about 16 times relative to the unimodal Rabin CDC baseline, 5 times relative to AE CDC, and 1.6 times relative to FastCDC. A comparative analysis of our experimental results reveals that TTTD-P, using the fast BUZ rolling hash with bucket indexing on the Hadoop Distributed File System (HDFS), provides the highest redundancy detection with higher throughput, a higher deduplication ratio, less computation time, and very low hash-comparison time, making it the best data deduplication choice among those compared for distributed big data storage systems.
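To make the chunking logic concrete, below is a minimal Python sketch of the baseline TTTD cut-point rule driven by a BUZ-style (cyclic polynomial) rolling hash. It shows the two thresholds (t_min, t_max) and two divisors (d, d_backup) the abstract refers to; the paper's TTTD-P variant instead tunes a single divisor with the GE algorithm, which is not reproduced here. The window size, threshold and divisor values, and the hash-table seed are illustrative assumptions, not the authors' tuned settings.

```python
import random

WIN = 48                      # rolling-hash window size (assumption)
MASK32 = 0xFFFFFFFF

# 256-entry random substitution table for the BUZ (cyclic polynomial) hash.
random.seed(1)
TABLE = [random.getrandbits(32) for _ in range(256)]

def rotl(x, n):
    """Rotate a 32-bit value left by n bits."""
    n %= 32
    return ((x << n) | (x >> (32 - n))) & MASK32

def tttd_chunks(data, t_min=2048, t_max=8192, d=1024, d_backup=256):
    """Split `data` into variable-size chunks with TTTD cut-point logic.

    t_min/t_max are the two thresholds (minimum/maximum chunk size);
    d is the main divisor and d_backup the easier-to-hit backup divisor,
    used only when d finds no cut-point before t_max is reached.
    """
    chunks, start, n = [], 0, len(data)
    while start < n:
        h, backup = 0, -1
        cut = min(start + t_max, n) - 1   # forced cut at t_max (or at end of data)
        i = start
        while i <= cut:
            # BUZ rolling update: rotate, add the entering byte,
            # then drop the byte that just left the window.
            h = rotl(h, 1) ^ TABLE[data[i]]
            if i - start >= WIN:
                h ^= rotl(TABLE[data[i - WIN]], WIN)
            if i - start + 1 >= t_min:
                if h % d == d - 1:        # main divisor fired: cut here
                    cut = i
                    break
                if h % d_backup == d_backup - 1:
                    backup = i            # remember a fallback cut-point
            i += 1
        else:
            if backup != -1:              # max threshold hit: fall back to D'
                cut = backup
        chunks.append(data[start:cut + 1])
        start = cut + 1
    return chunks

# Usage: chunk a random blob and verify the chunks cover it exactly.
random.seed(7)
blob = bytes(random.getrandbits(8) for _ in range(200_000))
parts = tttd_chunks(blob)
assert b"".join(parts) == blob
print(f"{len(parts)} chunks, mean size {len(blob) // len(parts)} bytes")
```

With a main divisor of 1024 and a minimum size of 2048, the expected chunk size is roughly 3 KB; the forced cut at t_max is what the backup divisor exists to soften, since a cut-point it remembered is content-defined while a forced cut is not.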
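The bucket-based fingerprint lookup can be sketched in the same spirit: each chunk's fingerprint is routed to one small partition of the index, so a duplicate check compares against a single bucket instead of the full fingerprint store. The SHA-1 fingerprint, the first-byte routing rule, and the bucket count below are assumptions for illustration, not choices prescribed by the paper; the usage example reuses the `parts` list produced by the chunking sketch above.

```python
import hashlib

class BucketIndex:
    """Fingerprint index partitioned into buckets (illustrative sketch)."""

    def __init__(self, n_buckets=256):
        # One small dict per bucket; a lookup touches exactly one of them.
        self.buckets = [dict() for _ in range(n_buckets)]
        self.n_buckets = n_buckets

    def _route(self, fp):
        # First fingerprint byte selects the bucket (assumed routing rule).
        return fp[0] % self.n_buckets

    def is_duplicate(self, chunk):
        """Return True if `chunk` was seen before; otherwise index it."""
        fp = hashlib.sha1(chunk).digest()   # assumed fingerprint function
        bucket = self.buckets[self._route(fp)]
        if fp in bucket:
            return True                     # redundant chunk: keep a reference only
        bucket[fp] = len(chunk)             # unique chunk: record its fingerprint
        return False

# Deduplicate the chunk stream produced by tttd_chunks above.
index = BucketIndex()
duplicates = sum(index.is_duplicate(c) for c in parts)
print(f"{duplicates} of {len(parts)} chunks were redundant")
```

Partitioning bounds the comparison work per lookup to one bucket, which is the effect behind the hash-comparison-time reductions the abstract reports.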