Guaranteed Synchronization of Huffman Codes with Known Position of Decoder
M. Biskup, Wojciech Plandowski. DOI: 10.1109/DCC.2009.18
In Huffman-encoded data a single bit error may propagate arbitrarily far. This paper introduces a method for limiting such error propagation to at most L bits, where L is a parameter. The method requires that the decoder know the position of the bit currently being decoded. It exploits the inherent tendency of Huffman codes to resynchronize spontaneously and introduces no redundancy when such a resynchronization takes place. The method is applied to parallel decoding of Huffman data and is tested on JPEG compression.

Implementation of an Incremental MDL-Based Two Part Compression Algorithm for Model Inference
T. S. Markham, S. Evans, J. Impson, E. Steinbrecher. DOI: 10.1109/DCC.2009.66
We describe the implementation and performance of a compression-based model inference engine, MDLcompress. The MDL-based compression produces a two-part code of the training data, with the model portion of the code being used to compress and classify test data. We present pseudo-code of the model-generation algorithms and explore the conflicting requirements of minimizing grammar size and minimizing descriptive cost. We show results of an MDL model-based classification system for network traffic anomaly detection.

Low-Complexity Joint Source/Channel Turbo Decoding of Arithmetic Codes with Image Transmission Application
Amin Zribi, S. Zaibi, R. Pyndiah, A. Bouallègue. DOI: 10.1109/DCC.2009.31
This paper presents a novel joint source-channel (JSC) decoding technique. The proposed approach enables iterative decoding of serially concatenated arithmetic codes and convolutional codes, with iterations performed between soft-in soft-out (SISO) component decoders. For arithmetic decoding, we propose a low-complexity trellis search technique to estimate the transmitted codewords and generate soft outputs. The performance of the system is evaluated in terms of packet error rate (PER) for transmission over an AWGN channel. Simulation results show that the proposed iterative JSC scheme yields significant gains over traditional separate decoding. Finally, the practical relevance of the proposed technique is validated in an image transmission system using the SPIHT codec.

Multi Level Multiple Descriptions
T. A. Beery, R. Zamir. DOI: 10.1109/DCC.2009.49
Multiple description (MD) source coding is a method for overcoming unexpected information loss in a diversity system such as the Internet or a wireless network. While classic MD coding handles the situation where the rate on some channels temporarily drops to zero, causing unexpected packet loss, it fails to accommodate more subtle changes in link rate, such as rate reduction. In such a case a classic scheme cannot use the remaining link capacity for information transfer, so even a minor rate reduction must be treated as a link failure. To accommodate this frequent situation, we propose a more modular design for transmitting over a diversity system that can handle an unexpected reduction in a link's rate by downgrading the original description into a coarser one that fits the new link rate. The method is analyzed theoretically, and performance results are presented.

On Compression of Data Encrypted with Block Ciphers
D. Klinc, Carmit Hazay, A. Jagmohan, H. Krawczyk, T. Rabin. DOI: 10.1109/DCC.2009.71
This paper investigates compression of encrypted data. It has previously been shown that data encrypted with Vernam's scheme, also known as the one-time pad, can be compressed without knowledge of the secret key, a result that also applies to stream ciphers used in practice. However, it was not known how to compress data encrypted with non-stream ciphers. In this paper we address the problem of compressing data encrypted with block ciphers, such as the Advanced Encryption Standard (AES) used in conjunction with one of the commonly employed chaining modes. We show that such data can be feasibly compressed without knowledge of the key, and we present performance results for practical code constructions used to compress binary sources.

Flexible Predictions Selection for Multi-view Video Coding
F. Zhao, Guizhong Liu, Feifei Ren, N. Zhang. DOI: 10.1109/DCC.2009.35
Although the fixed HHI (Fraunhofer Heinrich Hertz Institute) scheme for multi-view video coding achieves very good performance by fully exploiting predictions in both the temporal and view directions, the complexity of this inter-prediction is very high. This paper presents techniques that reduce the complexity while maintaining the coding performance.

Fast 15x15 Transform for Image and Video Coding Applications
Y. Reznik, R. Chivukula. DOI: 10.1109/DCC.2009.81
We derive a factorization of the size-15 DCT-II that requires only 14 multiplications, 67 additions, and 3 multiplications by rational dyadic constants (implementable by shifts). This transform is significantly less complex than the DCT-II of the nearest dyadic size (16), and we suggest considering it for future image and video coding applications that can benefit from block sizes larger than 8x8.

On Minimum-Redundancy Fix-Free Codes
S. Savari. DOI: 10.1109/DCC.2009.39
Fix-free codes are variable-length codes in which no codeword is the prefix or suffix of another codeword. They are used in video compression standards because efficient decoding in both the forward and backward directions assists with error resilience; this property also potentially halves the average time to search for a string in a compressed file relative to unidirectional variable-length codes. Relatively little is known about minimum-redundancy fix-free codes, and we describe some characteristics of and observations about such codes. We introduce a new heuristic for producing fix-free codes that is informed by these ideas. The design of minimum-redundancy fix-free codes is an example of a constraint processing problem, and we offer the first approach to constructing them, along with a variation that adds a symmetry requirement.

Universal Refinable Trellis Coded Quantization
S. Steger, T. Richter. DOI: 10.1109/DCC.2009.16
We introduce a novel universal refinable trellis coded quantization scheme (URTCQ) that is suitable for bitplane coding with many reconstruction stages. Existing refinable trellis quantizers either require excessive codebook training and are outperformed by scalar quantization beyond two stages (MS-TCQ, E-TCQ), impose a huge computational burden (SR-TCQ), or achieve good rate-distortion performance only in the last stage (UTCQ). The presented technique combines a scalar quantizer with an improved version of the E-TCQ. For all supported sources, a single training on an i.i.d. uniform source suffices, and the incremental bit rate is at most 1 bit per sample per stage. The complexity is proportional to the number of stages and the number of trellis states. We compare the rate-distortion performance of our scheme on generalized Gaussian i.i.d. sources with the quantizers deployed in JPEG2000 (USDZQ, UTCQ). At no stage is it worse than the scalar quantizer, and it usually outperforms the UTCQ except in the last stage.

Wavelet Image Two-Line Coder for Wireless Sensor Node with Extremely Little RAM
Stephan Rein, Stephan Lehmann, C. Gühmann. DOI: 10.1109/DCC.2009.30
This paper presents a novel wavelet image two-line (Wi2l) coder designed to meet the memory constraints of a typical wireless sensor node. The algorithm operates line by line on picture data stored on the sensor's flash memory card and requires approximately 1.5 kBytes of RAM to compress a 256x256-byte monochrome picture. The achieved compression rates are the same as with the set partitioning in hierarchical trees (SPIHT) algorithm. The coder works recursively on two lines of a wavelet subband, storing intermediate data for these lines so that the wavelet trees can be encoded backwards; it therefore needs no lists, only three small buffers of fixed size. The compression performance is evaluated with a PC implementation in C, while time measurements are conducted on a typical wireless sensor node using a modified version of the PC code.
