
2013 Data Compression Conference: Latest Publications

Ultra Fast H.264/AVC to HEVC Transcoder
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.32
Tong Shen, Yao Lu, Ziyu Wen, Linxi Zou, Yucong Chen, Jiangtao Wen
The emerging High Efficiency Video Coding (HEVC) standard achieves significant performance improvement over the H.264/AVC standard at the cost of much higher complexity. In this paper, we propose an ultra fast H.264/AVC to HEVC transcoder for multi-core processors that implements Wavefront Parallel Processing (WPP) and SIMD acceleration, along with expedited motion estimation (ME) and mode decision (MD) that utilize information extracted from the input H.264/AVC stream. Experiments using standard HEVC test bit streams show that the proposed transcoder achieves a 70x speedup over the HEVC HM 8.1 reference software (including H.264 encoding) at very small rate-distortion (RD) performance loss.
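As a rough illustration of the ME reuse idea, the sketch below seeds an HEVC-style block search with a motion vector taken from the decoded H.264/AVC stream and only refines it within a small window. The block size, window radius, and SAD metric are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def refine_motion_vector(cur, ref, x, y, mv_h264, block=16, radius=2):
    """Refine an H.264-derived motion vector with a small local search.

    cur, ref : 2-D numpy arrays (current and reference luma frames)
    (x, y)   : top-left corner of the current block
    mv_h264  : (dx, dy) motion vector reused from the input H.264 stream
    Only a (2*radius+1)^2 neighbourhood around the reused MV is searched,
    instead of a full-range HEVC motion search.
    """
    h, w = ref.shape
    cur_blk = cur[y:y + block, x:x + block]
    best_mv, best_cost = mv_h264, None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            rx, ry = x + mv_h264[0] + dx, y + mv_h264[1] + dy
            if rx < 0 or ry < 0 or rx + block > w or ry + block > h:
                continue
            cost = sad(cur_blk, ref[ry:ry + block, rx:rx + block])
            if best_cost is None or cost < best_cost:
                best_cost = cost
                best_mv = (mv_h264[0] + dx, mv_h264[1] + dy)
    return best_mv, best_cost
```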
Citations: 39
STOL: Spatio-Temporal Online Dictionary Learning for Low Bit-Rate Video Coding
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.101
Xin Tang, H. Xiong
To speed up the convergence rate of dictionary learning in low bit-rate video coding, this paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to improve on the original adaptive regularized dictionary learning with K-SVD, which involves high computational complexity and interferes with coding efficiency. Considering that the intrinsic dimensionality of the primitives used in training each series of 2-D sub-dictionaries is low, a 3-D low-frequency and high-frequency dictionary pair is formed by online dictionary learning to update the atoms for optimal sparse representation and convergence. Instead of classical first-order stochastic gradient descent on the constraint set, as in K-SVD, the online algorithm exploits the structure of sparse coding in the design of an optimization procedure based on stochastic approximations. It has low memory consumption and lower computational cost, without the need for explicit learning-rate tuning. By drawing a cube from i.i.d. samples of a distribution in each inner loop and alternating classical sparse coding steps to compute the decomposition coefficients of the cube over the previous dictionary, the dictionary update problem is converted into minimizing the expected cost instead of the empirical cost. For dynamic training data arriving over time, online dictionary learning converges faster than second-order batch alternatives such as K-SVD. Experiments show that super-resolution reconstruction based on STOL reduces the computational complexity to 40% to 50% of that of the K-SVD learning-based schemes with guaranteed accuracy.
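A minimal sketch of a first-order online dictionary update of the kind the abstract alludes to is shown below: a few ISTA iterations for the sparse coding step, then a stochastic gradient step on the dictionary. This is a generic stand-in rather than the authors' STOL procedure, and the step sizes and sparsity weight are illustrative assumptions.

```python
import numpy as np

def sparse_code_ista(D, x, lam=0.1, n_iter=50):
    """Approximate sparse code of x over dictionary D via ISTA (soft thresholding)."""
    alpha = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - x)
        alpha = alpha - grad / L
        alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - lam / L, 0.0)
    return alpha

def online_dictionary_step(D, x, lam=0.1, eta=0.01):
    """One online update: sparse-code the new sample, then nudge the atoms."""
    alpha = sparse_code_ista(D, x, lam)
    residual = x - D @ alpha
    D = D + eta * np.outer(residual, alpha)              # stochastic gradient step on D
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)    # keep atoms unit-norm
    return D, alpha
```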
Citations: 1
A Compression Algorithm for Fluctuant Data in Smart Grid Database Systems
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.67
Chi-Cheng Chuang, Y. Chiu, Zhi-Hung Chen, Hao-Ping Kang, Che-Rung Lee
In this paper, we present a lossless compression algorithm for fluctuant data, which can be integrated into database systems and allows regular database insertions and queries. The algorithm is based on the observation that fluctuant data, although they vary violently over small time intervals, show similar patterns over time. The algorithm first partitions every k consecutive records into a segment. The segments are normalized and treated as vectors in k-dimensional space. Classification algorithms are then applied to find representative vectors for the normalized vectors. The classification criterion is that every normalized segment can find at least one representative vector whose distance to it is less than a given threshold. The representative vectors, called codes, are stored in a codebook. The codebook can be generated offline from a small training dataset and reused. The online compression algorithm searches for the nearest code to an input segment and stores only the ID of the code and their difference. Since the difference is small, it can be compressed with Rice coding or Golomb coding.
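The sketch below illustrates this codebook idea on a toy scale: each segment of k samples is matched to its nearest codebook vector, and only the code ID plus the Rice-coded integer residuals are kept. Normalization is omitted for brevity, and the segment length, codebook, and Rice parameter are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def rice_encode(value, k):
    """Rice code of a non-negative integer: unary quotient + k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0{}b".format(k))

def zigzag(v):
    """Map signed integers to non-negative ones: 0, -1, 1, -2, ... -> 0, 1, 2, 3, ..."""
    return 2 * v if v >= 0 else -2 * v - 1

def encode_segment(segment, codebook, rice_k=2):
    """Encode one k-sample segment as (code ID, Rice-coded integer residuals)."""
    seg = np.asarray(segment, dtype=np.int64)
    dists = np.linalg.norm(codebook - seg, axis=1)
    code_id = int(np.argmin(dists))               # nearest representative vector
    residuals = seg - codebook[code_id]           # small because the code is close
    bits = "".join(rice_encode(zigzag(int(r)), rice_k) for r in residuals)
    return code_id, bits

# toy usage: a codebook of two representative 4-sample patterns
codebook = np.array([[10, 12, 11, 10],
                     [50, 55, 53, 51]], dtype=np.int64)
print(encode_segment([11, 12, 10, 10], codebook))
```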
Citations: 4
Natural Language Compression Optimized for Large Set of Files
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.93
P. Procházka, J. Holub
Summary form only given. Web search engines store web pages in raw text form to build so-called snippets (short text surrounding the searched pattern) or to compute so-called positional ranking functions. We address the problem of compressing a large collection of text files distributed over a cluster of computers, where single files need to be randomly accessed in very short time. The compression algorithm Set-of-Files Semi-Adaptive Two Byte Dense Code (SF-STBDC) is based on the word-based approach and on the idea of combining two statistical models: a global model (common to all the files of the set) and a local model. The latter is built as the set of changes that transform the global model into the proper model of the single compressed file. Besides a very good compression ratio, the compression method allows fast searching on the compressed text, which is an attractive property especially for search engines. Exactly the same problem (compression of a set of files using byte codes) was first stated in earlier work. Our algorithm SF-STBDC outperforms the algorithm based on (s,c)-Dense Code in compression ratio and at the same time keeps very good searching and decompression speed. The key idea behind this result is the use of a Semi-Adaptive Two Byte Dense Code, which codes small portions of the text more effectively and still allows exact setting of the number of stoppers and continuers.
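To make the stopper/continuer idea concrete, here is a minimal sketch of a two-byte dense code: word ranks below the number of stoppers s get a single stopper byte, and all other ranks get one continuer byte followed by one stopper byte. The split s + c = 256 and the example ranks are assumptions for illustration, and this is the generic dense-code scheme, not the SF-STBDC global/local model itself.

```python
def two_byte_dense_encode(rank, s=192):
    """Encode a word's frequency rank with at most two bytes.

    Byte values [0, s) are stoppers (a codeword ends there),
    values [s, 256) are continuers (another byte follows).
    Ranks 0..s-1 take one byte; the next c*s ranks take two bytes.
    """
    c = 256 - s
    if rank < s:
        return bytes([rank])                       # one-byte codeword
    rank -= s
    if rank < c * s:
        return bytes([s + rank // s, rank % s])    # continuer + stopper
    raise ValueError("rank too large for a two-byte code")

def two_byte_dense_decode(data, s=192):
    """Decode a concatenation of two-byte dense codewords back to ranks."""
    ranks, i = [], 0
    while i < len(data):
        b = data[i]
        if b < s:                                  # stopper: one-byte codeword
            ranks.append(b)
            i += 1
        else:                                      # continuer, then stopper
            ranks.append(s + (b - s) * s + data[i + 1])
            i += 2
    return ranks

# round-trip check on a few ranks
codes = b"".join(two_byte_dense_encode(r) for r in [0, 5, 191, 192, 500])
assert two_byte_dense_decode(codes) == [0, 5, 191, 192, 500]
```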
Citations: 2
Analog Joint Source Channel Coding over Non-Linear Channels
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.75
Mohamed Hassanin, J. Garcia-Frías
We investigate the performance of analog joint source channel coding systems based on spiral-like space-filling curves for the transmission of Gaussian sources over non-linear channels. The non-linearity arises from a non-linear power amplifier in the transmitter that exhibits saturation effects at the extremes and also near the origin. The output of the amplifier is then sent through an AWGN channel, which introduces an attenuation that depends on the distance between the transmitter and the receiver. This means that the attenuation cannot be subsumed into the noise variance.
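A small sketch of the kind of channel model described here: a memoryless amplifier with a dead zone near the origin and hard saturation at the extremes, followed by distance-dependent attenuation and AWGN. The dead-zone width, saturation level, and path-loss exponent are assumptions for illustration, not the paper's model.

```python
import numpy as np

def nonlinear_amplifier(v, dead_zone=0.05, v_sat=1.0):
    """Memoryless amplifier: flat near the origin, linear in between, clipped at +/- v_sat."""
    out = np.where(np.abs(v) < dead_zone, 0.0, v)
    return np.clip(out, -v_sat, v_sat)

def channel(v, distance, noise_std=0.1, path_loss_exp=2.0):
    """Distance-dependent attenuation followed by AWGN.

    Because the attenuation scales the (already non-linearly distorted) signal,
    it cannot simply be folded into an equivalent noise variance.
    """
    attenuation = distance ** (-path_loss_exp / 2.0)
    return attenuation * v + np.random.normal(0.0, noise_std, size=np.shape(v))

# toy usage: pass a block of Gaussian source samples through amplifier and channel
x = np.random.normal(0.0, 1.0, 8)
y = channel(nonlinear_amplifier(x), distance=2.0)
```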
Citations: 2
Near in Place Linear Time Minimum Redundancy Coding
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.49
Juha Kärkkäinen, German Tischler
In this paper we discuss data structures and algorithms for linear-time encoding and decoding of minimum redundancy codes. We show that a text of length n over an alphabet of cardinality σ can be encoded to a minimum redundancy code and decoded from a minimum redundancy code in O(n) time, using only an additional space of O(σ) words (O(σ log n) bits) for the auxiliary data structures. The encoding process can replace the given block code by the corresponding minimum redundancy code in place. The decoding process can replace a minimum redundancy code, stored in space sufficient to hold the block code, by the corresponding block code.
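As a reminder of what a minimum redundancy code over a small alphabet looks like, the sketch below derives Huffman codeword lengths from symbol frequencies and assigns canonical codewords. It only illustrates ordinary canonical Huffman coding; it does not reproduce the in-place, O(σ)-extra-space construction of the paper.

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Codeword length per symbol for a minimum redundancy (Huffman) code."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol alphabet
        return {next(iter(freqs)): 1}
    lengths = {s: 0 for s in freqs}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:                    # every merged symbol gets one bit deeper
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, tie, s1 + s2))
        tie += 1
    return lengths

def canonical_codes(lengths):
    """Assign canonical codewords: shorter codes first, ties broken by symbol."""
    code, prev_len, out = 0, 0, {}
    for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (ln - prev_len)
        out[sym] = format(code, "0{}b".format(ln))
        code += 1
        prev_len = ln
    return out

text = "minimum redundancy coding"
print(canonical_codes(huffman_code_lengths(Counter(text))))
```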
Citations: 4
Variable-to-Fixed-Length Encoding for Large Texts Using Re-Pair Algorithm with Shared Dictionaries
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.97
Kei Sekine, Hirohito Sasakawa, S. Yoshida, T. Kida
The Re-Pair algorithm proposed by Larsson and Moffat in 1999 is a simple grammar-based compression method that achieves an extremely high compression ratio. However, Re-Pair is an offline and very space-consuming algorithm. Thus, to apply it to a very large text, we need to divide the text into smaller blocks. If we then share a part of the dictionary among all blocks, we expect the compression speed and ratio of the algorithm to improve. In this paper, we implement our method using variable-to-fixed-length codes and empirically show how the compression speed and ratio vary as three parameters are adjusted: block size, dictionary size, and size of the shared dictionary. Finally, we discuss the tendencies of compression speed and ratio with respect to these three parameters.
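For reference, the core Re-Pair step the abstract builds on is shown in the small sketch below: repeatedly find the most frequent adjacent pair of symbols and replace it with a fresh non-terminal, recording the rule in a dictionary. The block splitting and dictionary sharing of the paper are not modeled here, and the quadratic implementation is only for illustration.

```python
from collections import Counter

def repair(seq, min_freq=2):
    """Naive Re-Pair: replace the most frequent adjacent pair until none repeats.

    Returns the reduced sequence and the dictionary of rules
    (new symbol -> pair it replaces). The real algorithm uses priority
    queues and linked structures to stay (near) linear time.
    """
    seq = list(seq)
    rules = {}
    next_symbol = 256                      # non-terminals start above byte values
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < min_freq:
            break
        rules[next_symbol] = pair
        out, i = [], 0
        while i < len(seq):                # non-overlapping left-to-right replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_symbol)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_symbol += 1
    return seq, rules

compressed, rules = repair(b"abracadabra abracadabra")
```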
Citations: 2
Simplified HEVC FME Interpolation Unit Targeting a Low Cost and High Throughput Hardware Design
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.55
Vladimir Afonso, Henrique Maich, L. Agostini, Denis Franco
Summary form only given. The new demands of high-resolution digital video applications are pushing the development of new techniques in the video coding area. This paper presents a simplified version of the original Fractional Motion Estimation (FME) algorithm defined by the emerging HEVC video coding standard, targeting a low-cost and high-throughput hardware design. Based on evaluations using the HEVC Model (HM), the HEVC reference software, a simplification strategy was defined for the hardware design, drastically reducing the HEVC complexity at the price of some losses in compression rate and quality. The strategy considers using only the most frequently used PU size in the motion estimation process, avoiding the evaluation of all 24 PU sizes defined in HEVC as well as the RDO decision process. This significantly reduces the ME complexity and causes a bit-rate loss lower than 13.18% and a quality loss lower than 0.45 dB. Even with the proposed simplification, the proposed solution is fully compliant with the current version of the HEVC standard. The FME interpolation was also simplified for the hardware design through some algebraic manipulations, converting multiplications into shift-adds and sharing sub-expressions. The simplified FME interpolator was designed in hardware, and the results show low use of hardware resources and a processing rate high enough to process QFHD videos (3840x2160 pixels) in real time.
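The shift-add idea mentioned at the end can be illustrated with the HEVC half-pel 8-tap luma filter {-1, 4, -11, 40, 40, -11, 4, -1}: every coefficient magnitude decomposes into a few powers of two, so the filter can be applied with shifts and additions only. The code below is a scalar sketch of that decomposition, not the hardware datapath or sub-expression sharing of the paper.

```python
def hevc_half_pel_luma(p):
    """Half-pel HEVC luma filter {-1,4,-11,40,40,-11,4,-1} using shifts and adds only.

    p is a sequence of 8 integer luma samples around the interpolated position.
    Multiplications are decomposed into shift-adds (4 = 1<<2, 11 = 8+2+1, 40 = 32+8),
    mirroring the kind of strength reduction used in the hardware design.
    """
    x0, x1, x2, x3, x4, x5, x6, x7 = p
    acc = -x0
    acc += x1 << 2
    acc -= (x2 << 3) + (x2 << 1) + x2
    acc += (x3 << 5) + (x3 << 3)
    acc += (x4 << 5) + (x4 << 3)
    acc -= (x5 << 3) + (x5 << 1) + x5
    acc += x6 << 2
    acc -= x7
    return acc  # caller would normally round and normalize by the filter gain (>> 6)

# matches the direct multiply-accumulate form
coeffs = [-1, 4, -11, 40, 40, -11, 4, -1]
samples = [10, 12, 15, 20, 22, 18, 14, 11]
assert hevc_half_pel_luma(samples) == sum(c * s for c, s in zip(coeffs, samples))
```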
Citations: 5
A High Throughput Multi Symbol CABAC Framework for Hybrid Video Codecs
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.94
K. Rapaka, E. Yang
Summary form only given. This paper proposes a Multi-Symbol Context Adaptive Binary Arithmetic Coding (CABAC) framework for hybrid video coding. Advanced CABAC techniques are employed in popular video coding technologies such as H.264/AVC and HEVC. The proposed framework aims at extending these techniques by providing symbol-level scalability, i.e., the ability to code one or more symbols at a time without changing the existing framework. Such coding can not only exploit higher-order statistical dependencies at the syntax-element level but also reduce the number of coded bins. New syntax elements and their probability modeling are proposed as extensions to achieve multi-symbol coding. An example variant of this framework, coding at most two symbols at a time for quantized coefficient indices, was implemented on top of the JM18.3 H.264 CABAC. When tested on HEVC test sequences, this example extension shows significant throughput improvement (i.e., a significant reduction in the number of bins to be coded) and at the same time significantly reduces the bit rate. The framework can be seamlessly extended to code more than two symbols at a time.
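A toy illustration of the pair-coding idea follows: instead of coding two binary symbols with two separate adaptive binary models, one adaptive model over the four joint outcomes is maintained and could drive a multi-symbol arithmetic coder. The counting-based probability update below is a generic adaptive model, not the CABAC state machine or the authors' context design.

```python
class JointPairModel:
    """Adaptive probability model over pairs of binary symbols (00, 01, 10, 11).

    Coding one 4-ary symbol per pair halves the number of coded events
    compared with coding each bin separately, which is the throughput
    motivation behind multi-symbol coding.
    """

    def __init__(self):
        self.counts = [1, 1, 1, 1]          # Laplace-smoothed occurrence counts

    def probabilities(self):
        total = sum(self.counts)
        return [c / total for c in self.counts]

    def update(self, first_bin, second_bin):
        self.counts[(first_bin << 1) | second_bin] += 1

# toy usage: feed a correlated bin stream and watch the joint model adapt
model = JointPairModel()
bins = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
for i in range(0, len(bins) - 1, 2):
    model.update(bins[i], bins[i + 1])
print(model.probabilities())              # pairs 00 and 11 dominate
```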
Citations: 1
Lossless Compression of Rotated Maskless Lithography Images
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.80
S. T. Klein, Dana Shapira, Gal Shelef
A new lossless image compression algorithm is presented, aimed at maskless lithography systems with mostly right-angled regular structures. Since these images often appear in slightly rotated form, an algorithm dealing with this special case is suggested, which improves performance relative to state-of-the-art alternatives.
Citations: 1