{"title":"Performance of parallel two-pass MDL context tree algorithm","authors":"Nikhil Krishnan, D. Baron","doi":"10.1109/GlobalSIP.2014.7032133","DOIUrl":null,"url":null,"abstract":"Computing problems that handle large amounts of data necessitate the use of lossless data compression for efficient storage and transmission. We present numerical results that showcase the advantages of a novel lossless universal data compression algorithm that uses parallel computational units to increase the throughput with minimal degradation in the compression quality. Our approach is to divide the data into blocks, estimate the minimum description length (MDL) context tree source underlying the entire input, and compress each block in parallel based on the MDL source. Numerical results from a prototype implementation suggest that our algorithm offers a better trade-off between compression and throughput than competing universal data compression algorithms.","PeriodicalId":362306,"journal":{"name":"2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GlobalSIP.2014.7032133","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Computing problems that handle large amounts of data require lossless data compression for efficient storage and transmission. We present numerical results that demonstrate the advantages of a novel lossless universal data compression algorithm that uses parallel computational units to increase throughput with minimal degradation in compression quality. Our approach divides the data into blocks, estimates the minimum description length (MDL) context tree source underlying the entire input, and compresses each block in parallel based on that MDL source. Numerical results from a prototype implementation suggest that the algorithm offers a better compression-throughput trade-off than competing universal data compression algorithms.
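The two-pass structure described in the abstract can be illustrated with a small sketch. The Python code below is not the authors' implementation; it is a minimal sketch under several assumptions: a binary alphabet, a hypothetical depth limit MAX_DEPTH, a Krichevsky-Trofimov (KT) estimator at each context node, a one-bit-per-node MDL penalty for describing the tree shape (a common CTW-style cost; the paper's exact penalty may differ), and ideal codelengths (sums of -log2 probabilities) in place of a real arithmetic coder. Pass 1 estimates and prunes a context tree over the entire input; pass 2 scores each block in parallel against the frozen model.

```python
import math
from concurrent.futures import ProcessPoolExecutor


def kt_cost(n0, n1):
    """Ideal KT codelength in bits for a binary sequence with n0 zeros, n1 ones."""
    if n0 + n1 == 0:
        return 0.0
    log_num = (math.lgamma(n0 + 0.5) + math.lgamma(n1 + 0.5)
               - 2.0 * math.lgamma(0.5))
    log_den = math.lgamma(n0 + n1 + 1.0)
    return (log_den - log_num) / math.log(2.0)


def gather_counts(bits, max_depth):
    """Pass 1a: count (zeros, ones) at every context node up to max_depth.

    Contexts are strings with the most recent past symbol first, so the
    children of node ctx are ctx+'0' and ctx+'1' (one symbol further back).
    """
    counts = {}
    for t in range(len(bits)):
        ctx = ""
        for d in range(max_depth + 1):
            node = counts.setdefault(ctx, [0, 0])
            node[bits[t]] += 1
            if t - 1 - d < 0:
                break
            ctx += str(bits[t - 1 - d])
    return counts


def prune(counts, ctx, max_depth):
    """Pass 1b: MDL pruning; keep a node as a leaf unless splitting saves bits.

    Charges one bit per non-maximal-depth node for the tree shape (assumed
    MDL penalty). Returns (cost in bits, set of leaf contexts).
    """
    n0, n1 = counts.get(ctx, [0, 0])
    leaf_cost = kt_cost(n0, n1)
    if len(ctx) == max_depth:
        return leaf_cost, {ctx}
    c0, leaves0 = prune(counts, ctx + "0", max_depth)
    c1, leaves1 = prune(counts, ctx + "1", max_depth)
    if leaf_cost <= c0 + c1:
        return leaf_cost + 1.0, {ctx}
    return c0 + c1 + 1.0, leaves0 | leaves1


def block_codelength(args):
    """Pass 2: ideal codelength of one block under the frozen MDL model."""
    bits, start, end, model, max_depth = args
    total = 0.0
    for t in range(start, end):
        ctx = ""
        while ctx not in model and len(ctx) < max_depth and t - 1 - len(ctx) >= 0:
            ctx += str(bits[t - 1 - len(ctx)])
        p1 = model.get(ctx, 0.5)  # fall back to 1/2 near the sequence start
        total -= math.log2(p1 if bits[t] else 1.0 - p1)
    return total


if __name__ == "__main__":
    import random

    random.seed(0)
    # Toy order-1 Markov source: P(1 | prev=0) = 0.1, P(1 | prev=1) = 0.8.
    bits, prev = [], 0
    for _ in range(1 << 16):
        prev = 1 if random.random() < (0.8 if prev else 0.1) else 0
        bits.append(prev)

    MAX_DEPTH, NUM_BLOCKS = 4, 8
    counts = gather_counts(bits, MAX_DEPTH)          # pass 1 over whole input
    _, leaves = prune(counts, "", MAX_DEPTH)
    model = {}                                       # frozen leaf probabilities
    for ctx in leaves:
        n0, n1 = counts.get(ctx, [0, 0])
        model[ctx] = (n1 + 0.5) / (n0 + n1 + 1.0)

    step = len(bits) // NUM_BLOCKS
    jobs = [(bits, i * step, (i + 1) * step, model, MAX_DEPTH)
            for i in range(NUM_BLOCKS)]
    with ProcessPoolExecutor() as pool:              # pass 2: blocks in parallel
        total_bits = sum(pool.map(block_codelength, jobs))
    print("leaf contexts:", sorted(leaves))
    print("bits/symbol: %.4f" % (total_bits / len(bits)))
```

Because the model is frozen after pass 1, the per-block work in pass 2 is embarrassingly parallel, which is where the throughput gain in this sketch comes from; any compression loss relative to a fully sequential adaptive coder stems from freezing the leaf probabilities rather than updating them symbol by symbol.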