This paper presents and evaluates gFPC, a self-tuning implementation of the FPC compression algorithm for double-precision floating-point data. gFPC uses a genetic algorithm to repeatedly reconfigure four hash-function parameters, which enables it to adapt to changes in the data during compression. Self-tuning increases the harmonic-mean compression ratio on thirteen scientific datasets from 22% to 28% with sixteen-kilobyte hash tables and from 36% to 43% with one-megabyte hash tables. Individual datasets compress up to 1.72 times better. The self-tuning overhead reduces the compression speed by a factor of four but makes decompression faster because of the higher compression ratio. On a 2.93 GHz Xeon processor, gFPC compresses at a throughput of almost one gigabit per second and decompresses at over seven gigabits per second.
{"title":"gFPC: A Self-Tuning Compression Algorithm","authors":"Martin Burtscher, P. Ratanaworabhan","doi":"10.1109/DCC.2010.42","DOIUrl":"https://doi.org/10.1109/DCC.2010.42","url":null,"abstract":"This paper presents and evaluates gFPC, a self-tuning implementation of the FPC compression algorithm for double-precision floating-point data. gFPC uses a genetic algorithm to repeatedly reconfigure four hash-function parameters, which enables it to adapt to changes in the data during compression. Self tuning increases the harmonic-mean compression ratio on thirteen scientific datasets from 22% to 28% with sixteen kilobyte hash tables and from 36% to 43% with one megabyte hash tables. Individual datasets compress up to 1.72 times better. The self-tuning overhead reduces the compression speed by a factor of four but makes decompression faster because of the higher compression ratio. On a 2.93 GHz Xeon processor, gFPC compresses at a throughput of almost one gigabit per second and decompresses at over seven gigabits per second.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130825837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To perform fast image matching against large databases, a Vocabulary Tree (VT) uses an inverted index that maps from each tree node to database images which have visited that node. The inverted index can require gigabytes of memory, which significantly slows down the database server. In this paper, we design, develop, and compare techniques for inverted index compression for image-based retrieval. We show that these techniques significantly reduce memory usage, by as much as 5x, without loss in recognition accuracy. Our work includes fast decoding methods, an offline database reordering scheme that exploits the similarity between images for additional memory savings, and a generalized coding scheme for soft-binned feature descriptor histograms. We also show that reduced index memory permits memory-intensive image matching techniques that boost recognition accuracy.
{"title":"Inverted Index Compression for Scalable Image Matching","authors":"David M. Chen, Sam S. Tsai, V. Chandrasekhar, Gabriel Takacs, Ramakrishna Vedantham, R. Grzeszczuk, B. Girod","doi":"10.1109/DCC.2010.53","DOIUrl":"https://doi.org/10.1109/DCC.2010.53","url":null,"abstract":"To perform fast image matching against large databases, a Vocabulary Tree (VT) uses an inverted index that maps from each tree node to database images which have visited that node. The inverted index can require gigabytes of memory, which significantly slows down the database server. In this paper, we design, develop, and compare techniques for inverted index compression for image-based retrieval. We show that these techniques significantly reduce memory usage, by as much as 5x, without loss in recognition accuracy. Our work includes fast decoding methods, an offline database reordering scheme that exploits the similarity between images for additional memory savings, and a generalized coding scheme for soft-binned feature descriptor histograms. We also show that reduced index memory permits memory-intensive image matching techniques that boost recognition accuracy.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132816349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pattern matching on text data has been a fundamental field of Computer Science for nearly 40 years. Databases supporting full-text indexing functionality on text data are now widely used by biologists. In the theoretical literature, the most popular internal-memory index structures are the suffix trees and the suffix arrays, and the most popular external-memory index structure is the string B-tree. However, the practical applicability of these indexes has been limited mainly because of their space consumption and I/O issues. These structures use a lot more space (almost 20 to 50 times more) than the original text data and are often disk-resident. Ferragina and Manzini (2005) and Grossi and Vitter (2005) gave the first compressed text indexes with efficient query times in the internal-memory model. Recently, Chien et al. (2008) presented a compact text index in the external memory based on the concept of the Geometric Burrows-Wheeler Transform. They also presented lower bounds which suggested that it may be hard to obtain a good index structure in the external memory. In this paper, we investigate this issue from a practical point of view. On the positive side, we show an external-memory text indexing structure (based on R-trees and KD-trees) that saves space by about an order of magnitude as compared to the standard String B-tree. While saving space, these structures also maintain I/O efficiency comparable to that of the String B-tree. We also show various space vs. I/O efficiency trade-offs for our structures.
{"title":"I/O-Efficient Compressed Text Indexes: From Theory to Practice","authors":"Sheng-Yuan Chiu, W. Hon, R. Shah, J. Vitter","doi":"10.1109/DCC.2010.45","DOIUrl":"https://doi.org/10.1109/DCC.2010.45","url":null,"abstract":"Pattern matching on text data has been a fundamental field ofComputer Science for nearly 40 years. Databases supporting full-textindexing functionality on text data are now widely used by biologists.In the theoretical literature, the most popular internal-memory index structures are thesuffix trees and the suffix arrays, and the most popular external-memory index structureis the string B-tree. However, the practical applicabilityof these indexes has been limited mainly because of their spaceconsumption and I/O issues. These structures use a lot more space(almost 20 to 50 times more) than the original text dataand are often disk-resident.Ferragina and Manzini (2005) and Grossi and Vitter (2005)gave the first compressed text indexes with efficient query times inthe internal-memory model. Recently, Chien et al (2008) presenteda compact text index in the external memory based on theconcept of Geometric Burrows-Wheeler Transform.They also presented lower bounds which suggested that it may be hardto obtain a good index structure in the external memory.In this paper, we investigate this issue from a practical point of view.On the positive side we show an external-memory text indexingstructure (based on R-trees and KD-trees) that saves space by aboutan order of magnitude as compared to the standard String B-tree.While saving space, these structures also maintain a comparable I/O efficiency to thatof String B-tree. We also show various space vs I/O efficiency trade-offsfor our structures.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132750833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce a family of binary prefix condition codes in which each codeword is required to have a Hamming weight which is a multiple of w for some integer w ≥ 2. Such codes have intrinsic error resilience and are a special case of codes with codewords constrained to belong to a language accepted by a deterministic finite automaton. For a given source over n symbols and parameter w we offer an algorithm to construct a minimum-redundancy code among this class of prefix condition codes which has a running time of O(n^{w+2}).
{"title":"When Huffman Meets Hamming: A Class of Optimal Variable-Length Error Correcting Codes","authors":"S. Savari, J. Kliewer","doi":"10.1109/DCC.2010.35","DOIUrl":"https://doi.org/10.1109/DCC.2010.35","url":null,"abstract":"We introduce a family of binary prefix condition codes in which each codeword is required to have a Hamming weight which is a multiple of w for some integer w≫=2. Such codes have intrinsic error resilience and are a special case of codes with codewords constrained to belong to a language accepted by a deterministic finite automaton. For a given source over n symbols and parameter w we offer an algorithm to construct a minimum-redundancy code among this class of prefix condition codes which has a running time of O(n^{w+2}).","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133196419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper focuses on optimal analog mappings for zero-delay, distributed source-channel coding. The objective is to obtain the optimal vector transformations that map between m-dimensional source spaces and k-dimensional channel spaces, subject to a prescribed power constraint and assuming the mean square error distortion measure. Closed-form necessary conditions for optimality of encoding and decoding mappings are derived. An iterative design algorithm is proposed, which updates encoder and decoder mappings by sequentially enforcing the complementary optimality conditions at each iteration. The obtained encoding functions are shown to be a continuous relative of, and in fact subsume as a special case, the Wyner-Ziv mappings encountered in digital distributed source coding systems, by mapping multiple source intervals to the same channel interval. Example mappings and performance results are presented for Gaussian sources and channels.
{"title":"Optimized Analog Mappings for Distributed Source-Channel Coding","authors":"E. Akyol, K. Rose, T. Ramstad","doi":"10.1109/DCC.2010.92","DOIUrl":"https://doi.org/10.1109/DCC.2010.92","url":null,"abstract":"This paper focuses on optimal analog mappings for zero-delay, distributed source-channel coding. The objective is to obtain the optimal vector trans- formations that map between m-dimensional source spaces and k-dimensional channel spaces, subject to a prescribed power constraint and assuming the mean square error distortion measure. Closed-form necessary conditions for optimality of encoding and decoding mappings are derived. An iterative de- sign algorithm is proposed, which updates encoder and decoder mappings by sequentially enforcing the complementary optimality conditions at each itera- tion. The obtained encoding functions are shown to be a continuous relative of, and in fact subsume as a special case, the Wyner-Ziv mappings encountered in digital distributed source coding systems, by mapping multiple source intervals to the same channel interval. Example mappings and performance results are presented for Gaussian sources and channels.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114612764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Three elementary coding tools -- a progressive order prediction tool, a quantized order prediction tool, and an adaptive and sub-frame-based coding tool for separation parameters -- have been devised to enhance the compression performance of the prediction residual. These are intended for the lossless coding of G.711 log PCM symbols used in packet-based network applications such as VoIP. All tools are shown to be effective for reducing the average code length without any significant increase in computational complexity. As a result, all have been adopted in the mapped-domain predictive coding part of the ITU-T G.711.0 standard.
{"title":"Enhanced Lossless Coding Tools of LPC Residual for ITU-T G.711.0","authors":"T. Moriya, Y. Kamamoto, N. Harada","doi":"10.1109/DCC.2010.71","DOIUrl":"https://doi.org/10.1109/DCC.2010.71","url":null,"abstract":"Three elementary coding tools -- a progressive order prediction tool, quantized order prediction tool, and adaptive and sub-frame base coding tool for separation parameters -- have been devised to enhance the compression performance of the prediction residual. These are intended for the lossless coding of G.711 log PCM symbols used in packet-based network application such as VoIP. All tools are shown to be effective for reducing the average code length without any significant increase of computational complexity. As a result, all have been adopted in the mapped domain predictive coding part of the ITU-T G.711.0 standard.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127083039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fractional pixel motion compensation is an area of video compression that can provide significant gains in coding efficacy, but this improvement comes at the associated cost of high computational complexity. This additional complexity arises from two aspects: fractional pixel motion estimation (FPME) and fractional pixel interpolation (FPI). Unlike current fast algorithms, we exploit the internal link between FPME and FPI and optimize them jointly rather than attempting to speed them up separately. To coordinate FPME and FPI, our proposed algorithm estimates fractional motion vectors and interpolates fractional pixels in the same order, which satisfies the criterion of cost/performance efficiency. Compared with FFPS+XFPI (the FPI method in x264), the proposed algorithm reduces the computation time by 60% without coding loss. Furthermore, the proposed algorithm also achieves a much higher speed and better R-D performance than other fast algorithms, e.g., CBFPS+XFPI. This integrated algorithm therefore improves the overall video coding speed by a significant measure, and its idea of jointly optimizing the computational cost and the R-D performance can be extended to speeding up even finer fractional motion compensation, such as 1/8 pixel, and to designing new interpolation filters for H.265.
{"title":"An Integrated Algorithm for Fractional Pixel Interpolation and Motion Estimation of H.264","authors":"Jiyuan Lu, Peizhao Zhang, Hongyang Chao, P. Fisher","doi":"10.1109/DCC.2010.101","DOIUrl":"https://doi.org/10.1109/DCC.2010.101","url":null,"abstract":"Fractional pixel motion compensation technology is an area in video image compression that can provide significant gains for coding efficacy, but this improvement comes the associated cost of high computational complexity. This additional complexity arises from two aspects: fractional pixel motion estimation (FPME) and fractional pixel interpolation (FPI). Different from current fast algorithms, we use the internal link between FPME and FPI as a factor in considering optimization by integrally manipulating them rather than attempting to speed them up separately. To coordinate with FPME and FPI, our proposed algorithm estimates fractional motion vectors and interpolates fractional pixels in the same order, which will satisfy the criteria of cost/performance efficiency. Compared with the FFPS+XFPI (the FPI method in X264), the proposed algorithm has already reduced the speed by a factor of 60% without coding loss. Furthermore, the proposed algorithm also achieves a much higher speed and better R-D performance than other fast algorithms e.g. CBFPS+XFPI. This integrated algorithm, therefore, improves the overall video coding speed by a significant measure and its idea of jointly optimizing the computational cost and the R-D performance can be extended to speeding up an even finer fractional motion compensation, such as 1/8 pixel, and to designing new interpolation filters for H.265.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131041213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce Xampling, a design methodology for analog compressed sensing in which we sample analog bandlimited signals at rates far lower than Nyquist, without loss of information. This allows compression together with the sampling stage. The main principles underlying this framework are the ability to capture a broad signal model, a low sampling rate, efficient analog and digital implementation, and low-rate baseband processing. In order to break through the Nyquist barrier so as to compress the signals in the sampling process, one has to combine classic methods from sampling theory with recent developments in compressed sensing. We show that previous attempts at sub-Nyquist sampling suffer from analog implementation issues, impose large computational loads, and have no baseband processing capabilities. We then introduce the modulated wideband converter, which can satisfy all the Xampling desiderata. We also demonstrate a board implementation of our converter which exhibits sub-Nyquist sampling in practice.
{"title":"Xampling: Analog Data Compression","authors":"M. Mishali, Yonina C. Eldar","doi":"10.1109/DCC.2010.39","DOIUrl":"https://doi.org/10.1109/DCC.2010.39","url":null,"abstract":"We introduce Xampling, a design methodology for analog compressed sensing in which we sample analog bandlimited signals at rates far lower than Nyquist, without loss of information. This allows compression together with the sampling stage. The main principles underlying this framework are the ability to capture a broad signal model, low sampling rate, efficient analog and digital implementation and lowrate baseband processing. In order to break through the Nyquist barrier so as to compress the signals in the sampling process, one has to combine classic methods from sampling theory together with recent developments in compressed sensing. We show that previous attempts at sub-Nyquist sampling suffer from analog implementation issues, large computational loads, and have no baseband processing capabilities. We then introduce the modulated wideband converter which can satisfy all the Xampling desiderata. We also demonstrate a board implementation of our converter which exhibits sub-Nyquist sampling in practice.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122358990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-definition (HD) video has entered people's lives, from movie theaters to HDTV. However, the compression of HD video is a challenging problem due to flicker noise caused by film grain. The flicker noise significantly limits the applicability of motion estimation (ME), which is a key factor in efficient video compression for block-based coding standards. Due to the flicker noise, it is difficult to obtain a perfect match between a current block and a reference block. In block-based video coding standards, including H.264, a given block is encoded by either inter-frame or intra-frame prediction. We propose a new coding scheme called Two-Step Coding (TSC) that utilizes both for each block. TSC first reduces the resolution of each frame by replacing each block with the DC coefficient of the DCT of its original color values. The flicker noise is greatly reduced in the obtained lower-resolution frame, which we call the DC frame. The key benefit is that ME becomes very efficient on DC frames, and consequently, the DC frame can be efficiently inter-frame coded. The difference between the original frame and the DC frame is described by the AC coefficients of the DCT of the original frame. We utilize the existing H.264 tools to combine the intra-frame and inter-frame coded parts of blocks on both the encoder and decoder sides. The key benefit of the proposed TSC in comparison to the most popular standards, in particular H.264, lies in better utilization of inter-frame coding. Due to flicker noise, H.264 mostly employs intra block coding on HD videos. However, it is well known that inter-frame coding significantly outperforms intra coding in video compression rate if the temporal correlation is correctly utilized. By reducing each frame to its DC frame, TSC makes it possible to apply inter-frame coding. We provide experimental data and analysis to illustrate this fact.
{"title":"Two-Step Coding for High Definition Video Compression","authors":"Wenfei Jiang, Wenyu Liu, Longin Jan Latecki, Hui Liang, Changqing Wang, Bin Feng","doi":"10.1109/DCC.2010.54","DOIUrl":"https://doi.org/10.1109/DCC.2010.54","url":null,"abstract":"High definition (HD) video has come into people’s life from movie theaters to HDTV. However, the compression of HD videos is a challenging problem due to flicker noise, caused by film grain. The flicker noise significantly limits the applicability of motion estimation (ME), which is a key factor of the efficient video compression in block-based coding standards. Due to the flicker noise, it is difficult to obtain a perfect match between a current block and a reference block. In block-based video coding standards including H.264 a given block is either encoded by inter-frame or intra-frame prediction. We propose a new coding scheme called Two-Step Coding (TSC) that utilizes both for each block. TSC first reduces the resolution of each frame by replacing each block with its DC coefficient of the DCT to the original color values. The flicker noise is greatly reduced in the obtained lower resolution frame, which we call DC frame. The key benefit is that ME becomes very efficient on DC frames, and consequently, the DC frame can be efficiently inter-frame coded. The difference between the original frame and DC frame is actually described by the AC coefficients of the DCT of the original frame. We utilize the existing H.264 tools to combine the intra-frame and inter-frame coded parts of blocks both on the encoder and decoder sides. The key benefit of the proposed TSC in comparison to the most popular standards, in particular, in comparison to H.264 lies in better utilization of inter-frame coding.. Due to flicker noise, H.264 mostly employs intra block coding on HD videos. However, it is well-known that inter-frame coding significantly outperforms intra coding in video compression rate if the temporal correllation is correctly utilized. By reducing each frame to DC frame, TSC makes it possible to apply inter-frame coding. We provide experimental data and analysis to illustrate this fact.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131972946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantization plays a central role in data compression. In speech systems, vector quantizers are used to compress speech parameters. In video systems, scalar quantizers are used to reduce variability in transform coefficients. More generally, quantizers are used to compress all forms of data. In most cases, the quantizers are based on some form of staircase function. Deriving an analytical expression for a uniform midrise quantizer is well known and straightforward. In this paper, we present an alternative method of deriving such an analytical expression, with the hope that the steps involved will be useful in understanding quantization and its various applications.
{"title":"Modeling the Quantization Staircase Function","authors":"S. Aslam, A. Bobick, C. Barnes","doi":"10.1109/DCC.2010.89","DOIUrl":"https://doi.org/10.1109/DCC.2010.89","url":null,"abstract":"Quantization plays a central role in data compression. In speech systems, vector quantizers are used to compress speech parameters. In video systems, scalar quantizers are used to reduce variability in transform coefficients. More generally, quantizers are used to compress all forms of data. In most cases, the quantizers are based on some form of staircase function. Deriving an analytical expression for a uniform midrise quantizer is well known and straightforward. In this paper, we create an alternate method of deriving such an analytical expression with the hope that the steps involved will be useful in understanding quantization and its various applications.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126440151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}