{"title":"面向大文本数据分析加速的内容感知部分压缩","authors":"Dapeng Dong, J. Herbert","doi":"10.1109/CloudCom.2014.76","DOIUrl":null,"url":null,"abstract":"Analysing text-based data has become increasingly important due to the importance of text from sources such as social media, web contents, web searches. The growing volume of such data creates challenges for data analysis including efficient and scalable algorithm, effective computing platforms and energy efficiency. Compression is a standard method for reducing data size but current standard compression algorithms are destructive to the organisation of data contents. This work introduces Content-aware, Partial Compression (CaPC) for text using a dictionary-based approach. We simply use shorter codes to replace strings while maintaining the original data format and structure, so that the compressed contents can be directly consumed by analytic platforms. We evaluate our approach with a set of real-world datasets and several classical MapReduce jobs on Hadoop. We also provide a supplementary utility library for Hadoop, hence, existing MapReduce programs can be used directly on the compressed datasets with little or no modification. In evaluation, we demonstrate that CaPC works well with a wide variety of data analysis scenarios, experimental results show ~30% average data size reduction, and up to ~32% performance increase on some I/O intensive jobs on an in-house Hadoop cluster. While the gains may seem modest, the point is that these gains are 'for free' and act as supplementary to all other optimizations.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"os-14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Content-Aware Partial Compression for Big Textual Data Analysis Acceleration\",\"authors\":\"Dapeng Dong, J. Herbert\",\"doi\":\"10.1109/CloudCom.2014.76\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Analysing text-based data has become increasingly important due to the importance of text from sources such as social media, web contents, web searches. The growing volume of such data creates challenges for data analysis including efficient and scalable algorithm, effective computing platforms and energy efficiency. Compression is a standard method for reducing data size but current standard compression algorithms are destructive to the organisation of data contents. This work introduces Content-aware, Partial Compression (CaPC) for text using a dictionary-based approach. We simply use shorter codes to replace strings while maintaining the original data format and structure, so that the compressed contents can be directly consumed by analytic platforms. We evaluate our approach with a set of real-world datasets and several classical MapReduce jobs on Hadoop. We also provide a supplementary utility library for Hadoop, hence, existing MapReduce programs can be used directly on the compressed datasets with little or no modification. In evaluation, we demonstrate that CaPC works well with a wide variety of data analysis scenarios, experimental results show ~30% average data size reduction, and up to ~32% performance increase on some I/O intensive jobs on an in-house Hadoop cluster. 
While the gains may seem modest, the point is that these gains are 'for free' and act as supplementary to all other optimizations.\",\"PeriodicalId\":249306,\"journal\":{\"name\":\"2014 IEEE 6th International Conference on Cloud Computing Technology and Science\",\"volume\":\"os-14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-12-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 IEEE 6th International Conference on Cloud Computing Technology and Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CloudCom.2014.76\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CloudCom.2014.76","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Content-Aware Partial Compression for Big Textual Data Analysis Acceleration

Dapeng Dong, J. Herbert
2014 IEEE 6th International Conference on Cloud Computing Technology and Science, 15 December 2014
DOI: 10.1109/CloudCom.2014.76
Analysing text-based data has become increasingly important given the prominence of text from sources such as social media, web content, and web searches. The growing volume of such data creates challenges for data analysis, including the need for efficient and scalable algorithms, effective computing platforms, and energy efficiency. Compression is a standard method for reducing data size, but current standard compression algorithms destroy the organisation of the data contents. This work introduces Content-aware Partial Compression (CaPC) for text, using a dictionary-based approach. We simply replace strings with shorter codes while maintaining the original data format and structure, so that the compressed contents can be consumed directly by analytic platforms. We evaluate our approach with a set of real-world datasets and several classical MapReduce jobs on Hadoop. We also provide a supplementary utility library for Hadoop, so existing MapReduce programs can run directly on the compressed datasets with little or no modification. In evaluation, we demonstrate that CaPC works well across a wide variety of data analysis scenarios: experimental results show ~30% average data size reduction, and up to ~32% performance increase on some I/O-intensive jobs on an in-house Hadoop cluster. While the gains may seem modest, the point is that they come 'for free' and act as a supplement to all other optimizations.
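To make the dictionary-substitution idea concrete, here is a minimal sketch in Python. It is illustrative only: the marker byte, the code alphabet, and the dictionary construction below are assumptions made for the sketch, not the paper's actual CaPC encoding scheme.

```python
from collections import Counter
from itertools import product

# Hypothetical marker byte that keeps codes unambiguous against ordinary
# words; an assumption for this sketch, not the paper's code alphabet.
MARKER = "\x01"


def build_dictionary(text, max_entries=256):
    """Map frequent words to short, marker-prefixed codes.

    A word is only encoded when its code is strictly shorter, so every
    substitution shrinks the data.
    """
    alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
    codes = (MARKER + "".join(chars)
             for n in (1, 2)
             for chars in product(alphabet, repeat=n))
    dictionary = {}
    for word, _ in Counter(text.split()).most_common(max_entries):
        code = next(codes)
        if len(code) < len(word):
            dictionary[word] = code
    return dictionary


def compress(text, dictionary):
    """Substitute codes token by token, leaving spaces and newlines
    intact so that line-oriented tools (e.g. Hadoop's TextInputFormat)
    split and parse the compressed file exactly as the original."""
    return "\n".join(
        " ".join(dictionary.get(tok, tok) for tok in line.split(" "))
        for line in text.split("\n"))


def decompress(text, dictionary):
    """Invert the substitution with the reversed dictionary."""
    reverse = {code: word for word, code in dictionary.items()}
    return "\n".join(
        " ".join(reverse.get(tok, tok) for tok in line.split(" "))
        for line in text.split("\n"))


if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog\nthe dog sleeps"
    d = build_dictionary(sample)
    packed = compress(sample, d)
    assert decompress(packed, d) == sample           # lossless round trip
    assert packed.count("\n") == sample.count("\n")  # line structure kept
```

The key property the sketch preserves, and which the abstract attributes to CaPC, is that whitespace and record boundaries are untouched: input splits and field parsing remain valid on the compressed file, which is what lets existing MapReduce programs consume the data with little or no modification.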