
Latest publications from the 2013 Data Compression Conference

Compressing Huffman Models on Large Alphabets
Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.46
G. Navarro, Alberto Ordóñez Pereira
A naive storage of a Huffman model on a text of length n over an alphabet of size σ requires O(σ log n) bits. This can be reduced to σ log σ + O(σ) bits using canonical codes. This overhead over the entropy can be significant when σ is comparable to n, and it also dictates the amount of main memory required to compress or decompress. We design an encoding scheme that requires σ log log n + O(σ + log² n) bits in the worst case, and typically less, while supporting encoding and decoding of symbols in O(log log n) time. We show that our technique reduces the storage size of the model of state-of-the-art techniques to around 15% in various real-life sequences over large alphabets, while still offering reasonable compression/decompression times.
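The σ log σ + O(σ)-bit canonical representation works because storing only each symbol's code length determines the whole code. A minimal sketch of that reconstruction step (variable names are illustrative, not from the paper):

```python
def canonical_codes(lengths):
    """Rebuild canonical Huffman codewords from code lengths alone.

    `lengths` maps symbol -> code length in bits; the per-symbol lengths
    are the only model data stored, which is what makes the canonical
    representation compact.
    """
    # Canonical order: shorter codes first, ties broken by symbol.
    order = sorted(lengths, key=lambda s: (lengths[s], s))
    codes, code, prev_len = {}, 0, 0
    for sym in order:
        code <<= lengths[sym] - prev_len   # left-align to the new length
        codes[sym] = format(code, "0{}b".format(lengths[sym]))
        code += 1
        prev_len = lengths[sym]
    return codes

# Example: lengths {a:1, b:2, c:3, d:3} give codes 0, 10, 110, 111.
print(canonical_codes({"a": 1, "b": 2, "c": 3, "d": 3}))
```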
Citations: 9
Quadratic Similarity Queries on Compressed Data
Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.52
A. Ingber, T. Courtade, T. Weissman
The problem of performing similarity queries on compressed data is considered. We study the fundamental tradeoff between compression rate, sequence length, and reliability of queries performed on compressed data. For a Gaussian source and quadratic similarity criterion, we show that queries can be answered reliably if and only if the compression rate exceeds a given threshold - the identification rate - which we explicitly characterize. When compression is performed at a rate greater than the identification rate, responses to queries on the compressed data can be made exponentially reliable. We give a complete characterization of this exponent, which is analogous to the error and excess-distortion exponents in channel and source coding, respectively. For a general source, we prove that the identification rate is at most that of a Gaussian source with the same variance. Therefore, as with classical compression, the Gaussian source requires the largest compression rate. Moreover, a scheme is described that attains this maximal rate for any source distribution.
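As a toy Monte Carlo illustration of the setting only (the uniform quantizer and the threshold slack below are placeholder assumptions, not the paper's scheme), one can answer the quadratic query ||x − y||² ≤ nD from a quantized signature of x:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, step = 1000, 0.5, 0.25          # length, threshold, quantizer step (assumed)

x = rng.standard_normal(n)             # Gaussian source sequence
x_hat = step * np.round(x / step)      # crude "compressed" signature of x
y = x + 0.3 * rng.standard_normal(n)   # a query sequence correlated with x

# Quadratic similarity criterion, answered once exactly and once from the
# signature alone; step**2/4 loosely bounds the per-sample squared error.
exact = np.sum((x - y) ** 2) <= n * D
from_signature = np.sum((x_hat - y) ** 2) <= n * (D + step ** 2 / 4)
print(exact, from_signature)
```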
Citations: 7
Ultra Fast H.264/AVC to HEVC Transcoder
Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.32
Tong Shen, Yao Lu, Ziyu Wen, Linxi Zou, Yucong Chen, Jiangtao Wen
The emerging High Efficiency Video Coding (HEVC) standard achieves significant performance improvement over the H.264/AVC standard at the cost of much higher complexity. In this paper, we propose an ultra fast H.264/AVC to HEVC transcoder for multi-core processors implementing Wavefront Parallel Processing (WPP) and SIMD acceleration, along with expedited motion estimation (ME) and mode decision (MD) that utilize information extracted from the input H.264/AVC stream. Experiments using standard HEVC test bit streams show that the proposed transcoder achieves a 70x speed-up over the HEVC HM 8.1 reference software (including H.264 encoding) at very small rate-distortion (RD) performance loss.
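A sketch of the reuse idea for expedited motion estimation (a hypothetical helper, not the authors' transcoder): instead of a full-range search, refine within a small window around a motion vector decoded from the input H.264/AVC stream.

```python
import numpy as np

def refine_mv(cur, ref, x, y, bs, seed_mv, radius=2):
    """Refine a seed motion vector (taken from the decoded H.264 stream)
    by SAD search in a small window instead of a full-range search.
    Block size `bs` and `radius` are illustrative choices."""
    block = cur[y:y + bs, x:x + bs].astype(int)
    best_mv, best_cost = seed_mv, None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ry, rx = y + seed_mv[1] + dy, x + seed_mv[0] + dx
            if 0 <= ry <= ref.shape[0] - bs and 0 <= rx <= ref.shape[1] - bs:
                cost = np.abs(block - ref[ry:ry + bs, rx:rx + bs].astype(int)).sum()
                if best_cost is None or cost < best_cost:
                    best_mv, best_cost = (rx - x, ry - y), cost
    return best_mv, best_cost
```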
Citations: 39
High Compression Rate and Ratio Using Predefined Huffman Dictionaries
Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.119
Amit Golander, S. Tahar, Lior Glass, G. Biran, Sagi Manole
Current Huffman coding modes are optimal for a single metric: compression ratio (quality) or rate (performance). We recognize that real-life data can usually be classified into families of data types, and thus the Huffman dictionary can be reused instead of recalculated. In this paper, we show how to balance the trade-off between compression ratio and rate without modifying existing standards and legacy decompression implementations.
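A sketch of the reuse idea (the training data and helper are illustrative assumptions, not the authors' implementation): build Huffman code lengths once from a sample representative of a data-type family, then apply the same predefined table to every message of that family, skipping the per-message frequency pass and tree build.

```python
import heapq
from collections import Counter

def huffman_lengths(freqs):
    """Huffman code lengths for a frequency table (standard heap build)."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        # Merging two subtrees pushes every contained symbol one level down.
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, tie, merged))
        tie += 1
    return heap[0][2]

# Train once on data representative of the family (here: HTTP-like text);
# the predefined table must cover the family's whole alphabet.
training = Counter(b"GET /index.html HTTP/1.1 Host: example.com ")
lengths = huffman_lengths(training)
# Reuse `lengths` for every message of the family instead of recalculating.
```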
Citations: 0
Analog Joint Source Channel Coding over Non-Linear Channels
Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.75
Mohamed Hassanin, J. Garcia-Frías
We investigate the performance of analog joint source channel coding systems based on the use of spiral-like space filling curves for the transmission of Gaussian sources over non-linear channels. The non-linearity proceeds from a non-linear power amplifier in the transmitter that exhibits saturation effects at the extremes and also near the origin (see Figure below). Then, the output of the amplifier is sent through an AWGN channel which introduces attenuation that depends on the distance between the transmitter and the receiver. This means that the attenuation cannot be subsumed into the noise variance.
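A toy 1:2 bandwidth-expansion sketch with an Archimedean double spiral (the stretch function and the pitch DELTA are assumptions; the paper's actual mapping and the non-linear amplifier model are not reproduced here):

```python
import numpy as np

DELTA = 0.5  # spiral pitch (assumed design parameter)

def encode(s):
    """Map a scalar source sample onto a double Archimedean spiral."""
    theta = 2 * np.pi * np.sqrt(abs(s))          # assumed stretch function
    r = DELTA * theta / (2 * np.pi)
    return np.sign(s) * np.array([r * np.cos(theta), r * np.sin(theta)])

def decode(z, grid=np.linspace(-4, 4, 20001)):
    """Minimum-distance decoding by brute-force search along the curve."""
    pts = np.array([encode(s) for s in grid])
    return grid[np.argmin(np.sum((pts - z) ** 2, axis=1))]

s = 1.3
z = encode(s) + np.random.default_rng(1).normal(0.0, 0.01, 2)  # AWGN channel
print(decode(z))  # close to 1.3 for small noise
```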
Citations: 2
Near in Place Linear Time Minimum Redundancy Coding
Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.49
Juha Kärkkäinen, German Tischler
In this paper we discuss data structures and algorithms for linear time encoding and decoding of minimum redundancy codes. We show that a text of length n over an alphabet of cardinality σ can be encoded to a minimum redundancy code, and decoded from one, in O(n) time, using only O(σ) additional words (O(σ log n) bits) for the auxiliary data structures. The encoding process can replace the given block code by the corresponding minimum redundancy code in place. The decoding process can replace the minimum redundancy code by the corresponding block code, given sufficient space to store the block code.
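The O(σ)-word auxiliary structure of a canonical (minimum redundancy) code boils down to small per-length tables; a sketch of table-driven decoding under that assumption (the table layout is illustrative, not taken from the paper):

```python
def decode_one(bits, first_code, count, first_sym, symbols):
    """Decode one codeword of a canonical code using per-length tables.

    first_code[l]: numeric value of the smallest codeword of length l
    count[l]:      number of codewords of length l
    first_sym[l]:  index in `symbols` of that smallest codeword's symbol
    Together these tables take O(sigma) words, as in the abstract.
    """
    code = length = 0
    for b in bits:
        code = (code << 1) | b
        length += 1
        # Valid length-l codes form one contiguous numeric range.
        if length in first_code and 0 <= code - first_code[length] < count[length]:
            return symbols[first_sym[length] + code - first_code[length]]
    raise ValueError("truncated or invalid codeword")

# Tables for the canonical code a->0, b->10, c->110, d->111:
tables = ({1: 0, 2: 2, 3: 6}, {1: 1, 2: 1, 3: 2}, {1: 0, 2: 1, 3: 2}, "abcd")
print(decode_one([1, 1, 0], *tables))  # -> 'c'
```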
Citations: 4
Variable-to-Fixed-Length Encoding for Large Texts Using Re-Pair Algorithm with Shared Dictionaries
Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.97
Kei Sekine, Hirohito Sasakawa, S. Yoshida, T. Kida
The Re-Pair algorithm proposed by Larsson and Moffat in 1999 is a simple grammar-based compression method that achieves an extremely high compression ratio. However, Re-Pair is an offline and very space-consuming algorithm. Thus, to apply it to a very large text, we need to divide the text into smaller blocks. Consequently, if we share a part of the dictionary among all blocks, we expect the compression speed and ratio of the algorithm to improve. In this paper, we implemented our method exploiting variable-to-fixed-length codes, and we empirically show how the compression speed and ratio of the method vary as three parameters are adjusted: block size, dictionary size, and size of the shared dictionary. Finally, we discuss the tendencies of compression speed and ratio with respect to the three parameters.
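For reference, the core Re-Pair loop in a deliberately naive form (the real algorithm runs in linear time using priority queues and pair lists; this quadratic toy only shows the grammar being built, and block splitting or dictionary sharing is not modeled):

```python
from collections import Counter

def repair(seq):
    """Naive Re-Pair: repeatedly replace the most frequent adjacent pair
    with a fresh nonterminal until no pair occurs twice."""
    seq, rules, next_id = list(seq), {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        nt = "N%d" % next_id
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):               # left-to-right, non-overlapping
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

# Builds rules such as N0 -> (a, b) and N1 -> (N0, c) for repeated "abc".
print(repair("abcabcabc"))
```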
Citations: 2
Simplified HEVC FME Interpolation Unit Targeting a Low Cost and High Throughput Hardware Design
Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.55
Vladimir Afonso, Henrique Maich, L. Agostini, Denis Franco
Summary form only given. The new demands of high resolution digital video applications are pushing the development of new techniques in the video coding area. This paper presents a simplified version of the Fractional Motion Estimation (FME) algorithm defined by the emerging HEVC video coding standard, targeting a low cost and high throughput hardware design. Based on evaluations using the HEVC Model (HM) reference software, a simplification strategy was defined for the hardware design, drastically reducing the HEVC complexity at the cost of some loss in compression rate and quality. The strategy considers only the most frequently used PU size in the motion estimation process, avoiding the evaluation of the 24 PU sizes defined in HEVC and also avoiding the RDO decision process. This markedly reduces the ME complexity, with a bit-rate loss below 13.18% and a quality loss below 0.45 dB. Even with the proposed simplification, the solution is fully compliant with the current version of the HEVC standard. The FME interpolation was also simplified for the hardware design through algebraic manipulations, converting multiplications into shift-adds and sharing sub-expressions. The simplified FME interpolator was designed in hardware; the results show low use of hardware resources and a processing rate high enough to process QFHD videos (3840x2160 pixels) in real time.
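The shift-add conversion replaces each constant multiplication in the interpolation filter by additions of shifted copies. A sketch for an 8-tap half-sample luma filter; the tap values (-1, 4, -11, 40, 40, -11, 4, -1) are quoted here from the HEVC half-pel luma filter and should be verified against the specification:

```python
def mul40(x):   # 40*x = 32*x + 8*x
    return (x << 5) + (x << 3)

def mul11(x):   # 11*x = 8*x + 2*x + x
    return (x << 3) + (x << 1) + x

def mul4(x):    # 4*x = x shifted left by 2
    return x << 2

def halfpel(p):
    """Multiplier-free HEVC-style half-sample luma interpolation.

    `p` holds 8 consecutive integer samples; the taps
    (-1, 4, -11, 40, 40, -11, 4, -1) sum to 64, hence the >> 6.
    """
    acc = (-p[0] + mul4(p[1]) - mul11(p[2]) + mul40(p[3])
           + mul40(p[4]) - mul11(p[5]) + mul4(p[6]) - p[7])
    return acc >> 6

print(halfpel([10, 10, 10, 10, 10, 10, 10, 10]))  # flat signal -> 10
```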
Citations: 5
A High Throughput Multi Symbol CABAC Framework for Hybrid Video Codecs
Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.94
K. Rapaka, E. Yang
Summary form only given. This paper proposes a Multi-Symbol Context Adaptive Binary Arithmetic Coding (CABAC) framework for hybrid video coding. Advanced CABAC techniques are employed in popular video coding technologies such as H.264/AVC and HEVC. The proposed framework aims at extending these techniques by providing symbol-level scalability: one or more symbols can be coded at a time without changing the existing framework. Such coding not only exploits higher-order statistical dependencies at the syntax-element level but also reduces the number of coded bins. New syntax elements and their probability modeling are proposed as extensions to achieve multi-symbol coding. An example variant of this framework, which codes at most two symbols at a time for quantized coefficient indices, was implemented on top of the JM18.3 H.264 CABAC. When tested on HEVC test sequences, this example extension shows significant throughput improvement (i.e., a significant reduction in the number of bins to be coded) while also reducing bit-rate significantly. The framework can be seamlessly extended to code more than two symbols at a time.
Citations: 1
Lossless Compression of Rotated Maskless Lithography Images
Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.80
S. T. Klein, Dana Shapira, Gal Shelef
A new lossless image compression algorithm is presented, aimed at maskless lithography systems with mostly right-angled regular structures. Since these images often appear in slightly rotated form, an algorithm dealing with this special case is suggested, which improves performance relative to state-of-the-art alternatives.
Citations: 1