
Latest publications from the 2009 Data Compression Conference

Guaranteed Synchronization of Huffman Codes with Known Position of Decoder
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.18
M. Biskup, Wojciech Plandowski
In Huffman-encoded data, a single bit error may propagate arbitrarily far. This paper introduces a method for limiting such error propagation to at most $L$ bits, $L$ being a parameter. It is required that the decoder know the bit number currently being decoded. The method exploits the inherent tendency of Huffman codes to resynchronize spontaneously and introduces no redundancy if such a resynchronization takes place. The method is applied to parallel decoding of Huffman data and is tested on JPEG compression.
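The spontaneous resynchronization the paper exploits is easy to observe. Below is a minimal sketch, assuming an illustrative code table and message (not the paper's): it builds a Huffman code, flips one bit in the encoded stream, and finds the first bit position after the error where the corrupted decoder's codeword boundaries realign with the original's, i.e., where decoding falls back into step.

```python
# Minimal demonstration of Huffman self-synchronization after a bit error.
# Code table and message are illustrative; the paper's method bounds the
# propagation to L bits by exploiting exactly this tendency.
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman code {symbol: bitstring} from a frequency map."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)  # unique tiebreaker so dicts are never compared
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        code = {s: "0" + b for s, b in c1.items()}
        code.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, [w1 + w2, tie, code])
        tie += 1
    return heap[0][2]

def decode_boundaries(bits, inv):
    """Greedy-decode a bitstream; return the bit positions at codeword boundaries."""
    buf, bounds = "", set()
    for i, b in enumerate(bits):
        buf += b
        if buf in inv:          # prefix-free, so greedy matching is exact
            bounds.add(i + 1)
            buf = ""
    return bounds

msg = "abracadabra" * 20
code = huffman_code(Counter(msg))
inv = {v: k for k, v in code.items()}
bits = "".join(code[s] for s in msg)

corrupt = bits[:10] + ("1" if bits[10] == "0" else "0") + bits[11:]  # flip bit 10
good = decode_boundaries(bits, inv)
bad = decode_boundaries(corrupt, inv)
# First shared boundary after the error: from here on, decoding is identical.
resync = min((p for p in bad if p > 10 and p in good), default=None)
print("decoder back in sync at bit:", resync)  # typically only a few bits later
```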
Citations: 5
Implementation of an Incremental MDL-Based Two Part Compression Algorithm for Model Inference
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.66
T. S. Markham, S. Evans, J. Impson, E. Steinbrecher
We describe the implementation and performance of a compression-based model inference engine, MDLcompress. The MDL-based compression produces a two-part code of the training data, with the model portion of the code used to compress and classify test data. We present pseudo-code for the model-generation algorithms and explore the conflicting requirements of minimizing grammar size and minimizing descriptive cost. We show results of an MDL model-based classification system for network traffic anomaly detection.
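The two-part principle underlying the approach is that the total description length is L(model) + L(data | model). MDLcompress infers grammar rules; as a hedged stand-in, the sketch below scores a simple order-0 symbol model, which is enough to show how model size trades off against descriptive cost (all names and the model-cost accounting are illustrative assumptions).

```python
# Sketch of the two-part MDL cost: total = L(model) + L(data | model).
# An order-0 model stands in for MDLcompress's grammar; the accounting of
# model bits here is an illustrative assumption, not the paper's scheme.
import math
from collections import Counter

def two_part_cost(data, alphabet_bits=8):
    counts = Counter(data)
    n = len(data)
    # Part 1: the model -- one (symbol, count) pair per distinct symbol.
    model_bits = len(counts) * (alphabet_bits + math.ceil(math.log2(n + 1)))
    # Part 2: the data under the model's probabilities (ideal code lengths).
    data_bits = -sum(c * math.log2(c / n) for c in counts.values())
    return model_bits, data_bits

model_bits, data_bits = two_part_cost(b"abababababcdcdcdcdcd")
print(f"L(model)={model_bits} bits, L(data|model)={data_bits:.1f} bits")
```

A richer model (e.g., an inferred grammar) shrinks the second term at the expense of the first; minimizing the sum is the inference criterion.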
Citations: 3
Low-Complexity Joint Source/Channel Turbo Decoding of Arithmetic Codes with Image Transmission Application
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.31
Amin Zribi, S. Zaibi, R. Pyndiah, A. Bouallègue
This paper presents a novel joint source-channel (JSC) decoding technique. The proposed approach enables iterative decoding of serially concatenated arithmetic codes and convolutional codes, with iterations performed between Soft-In Soft-Out (SISO) component decoders. For arithmetic decoding, we propose a low-complexity trellis-search technique to estimate the best transmitted codewords and generate soft outputs. Performance of the presented system is evaluated in terms of PER for transmission over the AWGN channel. Simulation results show that the proposed iterative JSC scheme yields significant gains over traditional separate decoding. Finally, the practical relevance of the proposed technique is validated in an image transmission system using the SPIHT codec.
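The soft information exchanged by the SISO decoders enters the loop as channel log-likelihood ratios. A minimal sketch of that channel-side computation for BPSK over AWGN (the channel model named in the abstract) follows; the SNR convention and function names are assumptions for illustration, and the paper's trellis-search arithmetic decoder itself is not reproduced.

```python
# Channel LLRs for BPSK over AWGN, the soft inputs a SISO decoder consumes.
# For unit-energy symbols and noise variance sigma^2, LLR(y) = 2*y/sigma^2.
import math
import random

def bpsk_awgn_llrs(bits, snr_db):
    """Map bits to +/-1, add Gaussian noise, return per-bit LLRs."""
    sigma2 = 10 ** (-snr_db / 10)   # noise variance (assumed SNR convention)
    sigma = math.sqrt(sigma2)
    llrs = []
    for b in bits:
        x = 1.0 if b == 0 else -1.0  # convention: bit 0 -> +1
        y = x + random.gauss(0, sigma)
        llrs.append(2 * y / sigma2)  # positive LLR favours bit 0
    return llrs

llrs = bpsk_awgn_llrs([0, 1, 1, 0], snr_db=2.0)
print(["0" if l > 0 else "1" for l in llrs], [round(l, 2) for l in llrs])
```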
Citations: 6
Multi Level Multiple Descriptions
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.49
T. A. Beery, R. Zamir
Multiple Description (MD) source coding is a method to overcome unexpected information loss in a diversity system such as the Internet or a wireless network. While classic MD coding handles the situation where the rate on some channels temporarily drops to zero, causing unexpected packet loss, it fails to accommodate more subtle changes in link rate, such as a rate reduction. In such a case, a classic scheme cannot use the link capacity that remains for information transfer, so even a minor rate reduction must be treated as a link failure. To accommodate this frequent situation, we propose a more modular design for transmitting over a diversity system that can handle an unexpected reduction in a link's rate by downgrading the original description into a coarser description that fits the new link rate. The method is analyzed theoretically, and performance results are presented.
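The downgrading idea can be made concrete with a toy example, a minimal sketch assuming a uniform scalar quantizer and made-up rates (none of this is the paper's construction): a fine description is a high-rate quantizer index, and a coarser one is obtained by discarding its low-order bits so it still fits a link whose rate has dropped.

```python
# Illustrative downgrade of a description: keep only the top bits of a
# fine quantizer index when the link rate drops.  Rates/source are made up.
def fine_index(x, rate_bits=8):
    """Uniform quantizer on [0, 1) with 2**rate_bits levels."""
    return min(int(x * (1 << rate_bits)), (1 << rate_bits) - 1)

def downgrade(index, from_bits=8, to_bits=3):
    """Coarsen a description: drop the (from_bits - to_bits) low-order bits."""
    return index >> (from_bits - to_bits)

def reconstruct(index, rate_bits):
    """Midpoint reconstruction of a quantizer cell."""
    return (index + 0.5) / (1 << rate_bits)

x = 0.61803
i8 = fine_index(x)   # full-rate description
i3 = downgrade(i8)   # what survives a rate reduction
print(reconstruct(i8, 8), reconstruct(i3, 3))  # fine vs. coarse reconstruction
```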
Citations: 2
On Compression of Data Encrypted with Block Ciphers
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.71
D. Klinc, Carmit Hazay, A. Jagmohan, H. Krawczyk, T. Rabin
This paper investigates compression of encrypted data. It was previously shown that data encrypted with Vernam's scheme, also known as the one-time pad, can be compressed without knowledge of the secret key, a result that carries over to the stream ciphers used in practice. However, it was not known how to compress data encrypted with non-stream ciphers. In this paper, we address the problem of compressing data encrypted with block ciphers, such as the Advanced Encryption Standard (AES) used in conjunction with one of the commonly employed chaining modes. We show that such data can be feasibly compressed without knowledge of the key, and we present performance results for practical code constructions used to compress binary sources.
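Why this is surprising: to a conventional compressor, ciphertext is indistinguishable from random noise, so ordinary tools gain nothing; the paper's contribution is that compression is nevertheless possible without the key using distributed-source-coding constructions. The sketch below, a hedged demonstration only, uses a SHA-256 counter-mode keystream as a stand-in cipher (the Python standard library has no AES) and shows zlib failing on the ciphertext.

```python
# Conventional compressors see ciphertext as incompressible.  A SHA-256
# counter-mode keystream stands in for a real cipher (stdlib has no AES);
# this is a demonstration of the obstacle, not the paper's coding scheme.
import hashlib
import zlib

def keystream_xor(data, key):
    """XOR data with a hash-derived keystream, 32 bytes at a time."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

plaintext = b"a highly redundant message " * 100
ciphertext = keystream_xor(plaintext, b"secret key")

print(len(plaintext), len(zlib.compress(plaintext)))    # large gain
print(len(ciphertext), len(zlib.compress(ciphertext)))  # essentially no gain
```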
Citations: 86
Flexible Predictions Selection for Multi-view Video Coding
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.35
F. Zhao, Guizhong Liu, Feifei Ren, N. Zhang
Although the fixed HHI (Fraunhofer Heinrich-Hertz-Institute) scheme for multi-view video coding achieves very good performance by fully utilizing predictions in both the temporal and view directions, the complexity of this inter-prediction is very high. This paper presents techniques that reduce the complexity while maintaining the coding performance.
Citations: 2
Fast 15x15 Transform for Image and Video Coding Applications
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.81
Y. Reznik, R. Chivukula
We derive a factorization of the size-15 DCT-II that requires only 14 multiplications, 67 additions, and 3 multiplications by rational dyadic constants (implementable by shifts). This transform is significantly less complex than the DCT-II of the nearest dyadic size (16), and we suggest considering it for future image and video coding applications that can benefit from block sizes larger than 8x8.
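For reference, the transform being factorized is the standard DCT-II, X_k = sum_n x_n cos(pi (2n+1) k / (2N)) with N = 15. A naive O(N^2) implementation follows as a correctness baseline; the paper's 14-multiplication factorization (which exploits the prime-factor structure 15 = 3 x 5) is not reproduced here.

```python
# Naive O(N^2) DCT-II of size N=15 -- the reference result the paper's
# fast 14-multiplication factorization must reproduce (unscaled form).
import math

def dct2(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
            for k in range(N)]

coeffs = dct2(list(range(15)))
print([round(c, 3) for c in coeffs[:4]])
```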
Citations: 4
On Minimum-Redundancy Fix-Free Codes
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.39
S. Savari
Fix-free codes are variable-length codes in which no codeword is the prefix or suffix of another codeword. They are used in video compression standards because their property of efficient decoding in both the forward and backward directions assists with error resilience. This property also potentially halves the average search time for a string in a compressed file relative to unidirectional variable-length codes. Relatively little is known about minimum-redundancy fix-free codes, and we describe some characteristics of and observations about such codes. We introduce a new heuristic, influenced by these ideas, to produce fix-free codes. The design of minimum-redundancy fix-free codes is an example of a constraint-processing problem, and we offer the first approach to constructing them, together with a variation that has an additional symmetry requirement.
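The defining constraint is easy to state and check mechanically. A minimal checker follows; the example codeword sets are illustrative, not the paper's constructions.

```python
# A code is fix-free iff no codeword is a prefix OR a suffix of another.
# Brute-force checker over a candidate set of binary codewords.
from itertools import permutations

def is_fix_free(codewords):
    for a, b in permutations(codewords, 2):
        if b.startswith(a) or b.endswith(a):
            return False
    return True

print(is_fix_free(["00", "11", "010", "101"]))  # True: fix-free
print(is_fix_free(["0", "01"]))                 # False: "0" is a prefix of "01"
```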
Citations: 15
Universal Refinable Trellis Coded Quantization
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.16
S. Steger, T. Richter
We introduce a novel universal refinable trellis coded quantization scheme (URTCQ) suitable for bitplane coding with many reconstruction stages. Existing refinable trellis quantizers either require excessive codebook training and are outperformed by scalar quantization beyond two stages (MS-TCQ, E-TCQ), impose a huge computational burden (SR-TCQ), or achieve good rate-distortion performance only in the last stage (UTCQ). The presented quantization technique is a mixture of a scalar quantizer and an improved version of the E-TCQ. For all supported sources, only one-time training on an i.i.d. uniform source is required, and the incremental bitrate is at most 1 bps per stage. The complexity is proportional to the number of stages and the number of trellis states. We compare the rate-distortion performance of our work on generalized Gaussian i.i.d. sources with the quantizers deployed in JPEG2000 (USDZQ, UTCQ). It is at no stage worse than the scalar quantizer and usually outperforms the UTCQ except in the last stage.
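For readers unfamiliar with the machinery being refined: plain TCQ partitions a scalar codebook into subsets attached to trellis branches and runs a Viterbi search to pick the minimum-distortion codeword sequence. The sketch below is a minimal single-stage 4-state TCQ under assumed conventions (the codebook, subset labelling, and state map are one common textbook choice, not the paper's); URTCQ's refinement stages and training are not reproduced.

```python
# Minimal 4-state TCQ: Viterbi search over partitioned scalar codebooks.
# Codebook/subset/state conventions here are illustrative assumptions.
LEVELS = [-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]
SUBSET = [[LEVELS[j], LEVELS[j + 4]] for j in range(4)]  # D0..D3

def tcq_encode(x):
    INF = float("inf")
    cost = [0.0, INF, INF, INF]            # start in state 0
    paths = [[], None, None, None]         # reconstruction sequence per state
    for sample in x:
        new_cost = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if cost[s] == INF:
                continue
            for b in (0, 1):
                ns = ((s << 1) | b) & 3            # assumed state transition
                subset = SUBSET[2 * b + (s & 1)]    # assumed subset labelling
                c = min(subset, key=lambda v: (sample - v) ** 2)
                d = cost[s] + (sample - c) ** 2
                if d < new_cost[ns]:
                    new_cost[ns] = d
                    new_paths[ns] = paths[s] + [c]
        cost, paths = new_cost, new_paths
    best = min(range(4), key=lambda s: cost[s])
    return paths[best], cost[best]

recon, total_sq_err = tcq_encode([0.3, -1.2, 2.7, 0.9, -0.1])
print(recon, round(total_sq_err, 4))
```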
Citations: 7
Wavelet Image Two-Line Coder for Wireless Sensor Node with Extremely Little RAM
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.30
Stephan Rein, Stephan Lehmann, C. Gühmann
This paper presents a novel wavelet image two-line (Wi2l) coder designed to meet the memory constraints of a typical wireless sensor node. The algorithm operates line-wise on picture data stored on the sensor's flash memory card and requires approximately 1.5 kBytes of RAM to compress a 256x256-byte monochrome picture. The achieved compression rates are the same as those of the set partitioning in hierarchical trees (SPIHT) algorithm. The coder works recursively on two lines of a wavelet subband, storing intermediate data from these lines to backward-encode the wavelet trees; it therefore needs no lists, only three small buffers of fixed dimension. Compression performance is evaluated with a C implementation on a PC, while time measurements are conducted on a typical wireless sensor node using a modified version of the PC code.
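The building block that makes line-wise processing possible is a wavelet transform computable from a few neighbouring lines at a time. A hedged sketch of one such block follows, the reversible integer 5/3 (LeGall) lifting step on a single line, with simple boundary mirroring; it is an assumption for illustration, and the Wi2l coder's two-line scheduling and backward tree coding are not reproduced.

```python
# One level of the reversible integer 5/3 (LeGall) lifting transform on a
# line: predict odd samples from even neighbours, then update the evens.
def lift53_line(row):
    n = len(row)
    d = [0] * (n // 2)  # high-pass (detail) coefficients
    s = [0] * (n // 2)  # low-pass (smooth) coefficients
    for i in range(n // 2):  # predict step
        left = row[2 * i]
        right = row[2 * i + 2] if 2 * i + 2 < n else row[2 * i]  # mirror edge
        d[i] = row[2 * i + 1] - (left + right) // 2
    for i in range(n // 2):  # update step
        dl = d[i - 1] if i > 0 else d[i]                         # mirror edge
        s[i] = row[2 * i] + (dl + d[i] + 2) // 4
    return s, d

s, d = lift53_line([10, 12, 14, 13, 11, 9, 8, 8])
print(s, d)  # low-pass: half-resolution line; high-pass: small residuals
```

Applied horizontally per line and vertically across pairs of lines, this is what lets a full subband decomposition stream through a 1.5 kByte working set.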
Citations: 14