Big data compression using SPIHT in Hadoop: A case study in multi-lead ECG signals

G. Jati, Ilham Kusuma, M. Hilman, W. Jatmiko
{"title":"Big data compression using spiht in Hadoop: A case study in multi-lead ECG signals","authors":"G. Jati, Ilham Kusuma, M. Hilman, W. Jatmiko","doi":"10.1109/IWBIS.2016.7872902","DOIUrl":null,"url":null,"abstract":"Compression still become main concern in big data framework. The performance of big data depend on speed of data transfer. Compressed data can speed up transfer data between network. It also save more space for storage. Several compression method is provide by Hadoop as a most common big data framework. That method mostly for general purpose. But the performance still have to optimize especially for Biomedical record like ECG data. We propose Set Partitioning in Hierarchical Tree (SPIHT) for big data compression with study case ECG signal data. In this paper compression will run in Hadoop Framework. The proposed method has stages such as input signal, map input signal, spiht coding, and reduce bit-stream. The compression produce compressed data for intermediate (Map) output and final (reduce) output. The experiment using ECG data to measure compression performance. The proposed method gets Percentage Root-mean-square difference (PRD) is about 1.0. Compare to existing method, the proposed method get better Compression Ratio (CR) with competitive longer compression time. So proposed method gets better performance compare to other method especially for ECG dataset.","PeriodicalId":193821,"journal":{"name":"2016 International Workshop on Big Data and Information Security (IWBIS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 International Workshop on Big Data and Information Security (IWBIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IWBIS.2016.7872902","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Compression remains a central concern in big data frameworks. Big data performance depends on the speed of data transfer: compressed data moves faster across the network and also saves storage space. Hadoop, the most common big data framework, provides several compression methods, but they are mostly general-purpose, and their performance still needs optimization for biomedical records such as ECG data. We propose Set Partitioning in Hierarchical Trees (SPIHT) for big data compression, using multi-lead ECG signal data as a case study. In this paper, compression runs in the Hadoop framework. The proposed method consists of four stages: input signal, map input signal, SPIHT coding, and reduce bit-stream. Compression produces compressed data at both the intermediate (map) output and the final (reduce) output. The experiments use ECG data to measure compression performance. The proposed method achieves a Percentage Root-mean-square Difference (PRD) of about 1.0. Compared with existing methods, it achieves a better Compression Ratio (CR) at the cost of a slightly longer, though still competitive, compression time. The proposed method therefore outperforms the other methods, particularly on the ECG dataset.
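For reference, the two metrics quoted above have standard definitions in the ECG compression literature (the abstract does not spell them out; these are the usual forms, with $x_i$ the original samples, $\tilde{x}_i$ the reconstructed samples, $N$ the signal length, and $L$ a size in bits):

```latex
\mathrm{PRD} = 100 \times \sqrt{\frac{\sum_{i=1}^{N}\left(x_i - \tilde{x}_i\right)^2}{\sum_{i=1}^{N} x_i^2}}
\qquad
\mathrm{CR} = \frac{L_{\mathrm{original}}}{L_{\mathrm{compressed}}}
```

Under these definitions, a PRD near 1.0 means the reconstructed signal deviates from the original by roughly 1% in root-mean-square terms, which is generally regarded as very good reconstruction quality for ECG compression.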
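The map / SPIHT-coding / reduce stages fit directly onto Hadoop's MapReduce API. The sketch below is an illustration of that pipeline shape, not the authors' implementation: the class names, the `leadId,samples` input line format, and the stubbed `spihtEncode` method are all assumptions, and a real job would replace the stub with an actual SPIHT coder (wavelet transform plus significance-tree coding).

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SpihtEcgCompression {

    // Map stage: each input line is assumed to hold one ECG segment as
    // "leadId,sample1 sample2 ...". The mapper encodes the samples and
    // emits the bit-stream as the intermediate (map) compressed output.
    public static class EncodeMapper
            extends Mapper<LongWritable, Text, Text, BytesWritable> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] parts = line.toString().split(",", 2);
            if (parts.length < 2) return;          // skip malformed lines
            byte[] bitstream = spihtEncode(parseSamples(parts[1]));
            ctx.write(new Text(parts[0]), new BytesWritable(bitstream));
        }
    }

    // Reduce stage: concatenate all segment bit-streams belonging to one
    // lead into the final (reduce) compressed output.
    public static class ConcatReducer
            extends Reducer<Text, BytesWritable, Text, BytesWritable> {
        @Override
        protected void reduce(Text leadId, Iterable<BytesWritable> streams,
                              Context ctx)
                throws IOException, InterruptedException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            for (BytesWritable bw : streams) {
                out.write(bw.copyBytes());
            }
            ctx.write(leadId, new BytesWritable(out.toByteArray()));
        }
    }

    static double[] parseSamples(String csv) {
        String[] tokens = csv.trim().split("\\s+");
        double[] samples = new double[tokens.length];
        for (int i = 0; i < tokens.length; i++) {
            samples[i] = Double.parseDouble(tokens[i]);
        }
        return samples;
    }

    // Placeholder for the SPIHT coder: the real algorithm wavelet-transforms
    // the signal and emits significance bits tree by tree. Stubbed here so
    // the pipeline skeleton compiles and runs end to end.
    static byte[] spihtEncode(double[] samples) {
        byte[] buf = new byte[samples.length];     // stand-in bit-stream
        for (int i = 0; i < samples.length; i++) {
            buf[i] = (byte) Math.round(samples[i]);
        }
        return buf;
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "spiht-ecg-compression");
        job.setJarByClass(SpihtEcgCompression.class);
        job.setMapperClass(EncodeMapper.class);
        job.setReducerClass(ConcatReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Grouping by lead identifier in the reducer is one plausible reading of the "reduce bit-stream" stage; the paper may partition the signal differently, but the map-encode / reduce-concatenate division is the part the abstract makes explicit.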