
Latest Publications from the 2010 Data Compression Conference

Lossless Compression of Mapped Domain Linear Prediction Residual for ITU-T Recommendation G.711.0
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.69
N. Harada, Y. Kamamoto, T. Moriya
ITU-T Rec. G.711 is widely used for narrowband speech communication. ITU-T has recently established a very low-complexity, efficient lossless coding standard for G.711, called G.711.0 (Lossless compression of G.711 pulse code modulation). This paper introduces coding technologies newly proposed for and applied to the G.711.0 codec, such as plus-minus zero mapping for mapped-domain linear predictive coding, and escaped Huffman coding combined with adaptive recursive Rice coding for lossless compression of the prediction residual. Performance test results for these coding tools are compared against those of the conventional technology. Performance is measured by a figure of merit (FoM) that reflects the trade-off between compression performance and computational complexity. The proposed tools improve compression performance by 0.16% in total while keeping the computational complexity of the encoder/decoder pair low (about 1.0 WMOPS on average and 1.667 WMOPS in the worst case).
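The abstract does not spell out the residual coder's mechanics, but the core idea of Rice coding with an escape for outliers can be illustrated compactly. The sketch below is not the G.711.0 bitstream format: the zigzag mapping, the adaptation rule, and the escape threshold are illustrative assumptions standing in for the standard's escaped-Huffman and recursive Rice mechanisms.

```python
def zigzag(x: int) -> int:
    # Map a signed residual to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4
    return (x << 1) if x >= 0 else (-x << 1) - 1

def rice_encode(v: int, k: int, escape_q: int = 16) -> str:
    """Rice-code a non-negative integer v with parameter k.
    Quotient in unary, remainder in k raw bits; quotients at or above
    escape_q fall back to an escape marker plus 16 raw bits (an assumed,
    simplified stand-in for G.711.0's escape mechanism)."""
    q, r = v >> k, v & ((1 << k) - 1)
    if q >= escape_q:
        return "1" * escape_q + "0" + format(v, "016b")
    rem = format(r, "0{}b".format(k)) if k > 0 else ""
    return "1" * q + "0" + rem

def encode_residuals(residuals):
    """Encode a block, adapting k to a running mean of the mapped values."""
    bits, mean = [], 1.0
    for x in residuals:
        v = zigzag(x)
        k = max(0, int(mean).bit_length() - 1)   # roughly log2 of the mean
        bits.append(rice_encode(v, k))
        mean = 0.9 * mean + 0.1 * v              # simple exponential update
    return "".join(bits)
```

For example, `encode_residuals([0, -1, 3, -2])` produces a bit string whose per-sample length tracks the local residual statistics, which is the property the standard's adaptive residual coder exploits.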
Citations: 9
A SAT-Based Scheme to Determine Optimal Fix-Free Codes
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.22
Navid Abedini, S. Khatri, S. Savari
Fix-free (or reversible variable-length) codes are prefix-condition codes that can also be decoded in the reverse direction. They have attracted attention from several communities and are used in video standards. Two variations of fix-free codes (with additional constraints) have also been considered for joint source-channel coding: 1) "symmetric" fix-free codes, which require the codewords to be palindromes; and 2) fix-free codes with distance constraints on pairs of codewords. We propose a new approach to determining, for each of the three variations of the problem, whether a fix-free code exists with a given set of codeword lengths. We also describe a branch-and-bound algorithm for finding the collections of optimal asymmetric and symmetric fix-free codes.
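As a concrete illustration of the definitions, the brute-force check below verifies the prefix and suffix conditions directly; the codeword set in the example is hypothetical, and the paper's SAT-based and branch-and-bound machinery is of course far more scalable than this quadratic scan.

```python
def is_fix_free(codewords) -> bool:
    """A code is fix-free if no codeword is a prefix or a suffix of another."""
    for a in codewords:
        for b in codewords:
            if a != b and (b.startswith(a) or b.endswith(a)):
                return False
    return True

def is_symmetric(codewords) -> bool:
    """The 'symmetric' variation additionally requires every codeword
    to be a palindrome."""
    return all(w == w[::-1] for w in codewords)

# Example: {01, 10, 000, 111} is fix-free (its Kraft sum is 3/4),
# but not symmetric, since "01" and "10" are not palindromes.
print(is_fix_free({"01", "10", "000", "111"}))   # True
print(is_symmetric({"01", "10", "000", "111"}))  # False
```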
Citations: 15
Low-Complexity PARCOR Coefficient Quantizer and Prediction Order Estimator for G.711.0 (Lossless Speech Coding)
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.49
Y. Kamamoto, T. Moriya, N. Harada
This paper presents two low-complexity tools used in the new ITU-T Recommendation G.711.0, the standard for lossless compression of G.711 (A-law/Mu-law logarithmic PCM) speech data. One is an algorithm for quantizing the PARCOR/reflection coefficients; the other is an estimation method for the optimal prediction order. Both tools are based on a criterion that minimizes the entropy of the prediction residual signal, and both can be implemented as fixed-point, low-complexity algorithms. With these practical tools, G.711.0 can be deployed widely, since it losslessly reduces the data rate of G.711, the prevailing speech-coding technology.
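The abstract leaves the relation between the linear predictor and the PARCOR/reflection coefficients implicit. As background, the sketch below derives reflection coefficients from frame autocorrelation values via the standard Levinson-Durbin recursion in floating point; the paper's quantizer and its entropy-minimizing, fixed-point formulation are not reproduced here.

```python
def parcor_coefficients(r):
    """Levinson-Durbin recursion.
    r: autocorrelation values r[0..p] of the input frame.
    Returns the reflection (PARCOR) coefficients k_1..k_p, each lying
    in (-1, 1) for a valid autocorrelation sequence."""
    p = len(r) - 1
    a = [0.0] * (p + 1)      # predictor coefficients, built up order by order
    err = r[0]               # prediction error energy
    ks = []
    for i in range(1, p + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        ks.append(k)
        prev = a[:]
        a[i] = k
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        err *= (1.0 - k * k)  # error shrinks as the order grows
    return ks
```

For instance, `parcor_coefficients([1.0, 0.5])` returns `[-0.5]`, the single reflection coefficient of a first-order predictor for a signal with lag-1 correlation 0.5.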
Citations: 4
LDPC Codes for Information Embedding and Lossy Distributed Source Coding
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.87
Mina Sartipi
Inspired by our recent work on lossy distributed source coding with side information available at the decoder, we propose a practical scheme for an information embedding system with side information available at the encoder. The proposed scheme is based on sending parity bits using LDPC codes. We provide a design procedure for the LDPC code that guarantees performance close to the Gelfand-Pinsker and Wyner-Ziv limits. Simulation results show that the proposed method performs close to both theoretical limits even for short code lengths.
Citations: 2
TreeZip: A New Algorithm for Compressing Large Collections of Evolutionary Trees
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.64
Suzanne J. Matthews, Seung-Jin Sul, T. Williams
Evolutionary trees are family trees that represent the relationships among a group of organisms. Phylogenetic analyses often produce thousands of hypothetical trees that may represent the true phylogeny, and these large collections of trees are costly to store. We introduce TreeZip, a novel algorithm designed to losslessly compress phylogenetic trees. The advantage of TreeZip is its ability to store the information shared among trees only once and thereby compress the relationships effectively. We evaluate our approach on fourteen tree collections ranging from 2,505 to 150,000 trees (0.6 MB to 434 MB of storage). Our results demonstrate that TreeZip effectively compresses phylogenetic trees, typically to 2% or less of the original file size. When coupled with 7zip, TreeZip can compress a file to less than 1% of its original size; on our largest dataset, TreeZip+7zip compressed the input file to 0.008% of its original size. These results strongly suggest that TreeZip is an ideal approach for compressing phylogenetic trees.
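To make the "shared information" idea concrete, here is a minimal sketch that stores each distinct bipartition once across a tree collection and records, per tree, only indices into that table. Representing a bipartition as a frozenset of taxon names, and the function name itself, are assumptions for illustration; TreeZip's actual encoding of bipartition bitstrings is considerably richer.

```python
def deduplicate_bipartitions(trees):
    """trees: list of collections of bipartitions, each bipartition given
    as a frozenset of the taxon names on one side of an edge.
    Returns the table of unique bipartitions plus per-tree index lists,
    so a bipartition shared by many trees is stored only once."""
    index = {}                        # bipartition -> integer id
    encoded_trees = []
    for tree in trees:
        ids = []
        for bp in tree:
            if bp not in index:
                index[bp] = len(index)
            ids.append(index[bp])
        encoded_trees.append(sorted(ids))
    unique = [bp for bp, _ in sorted(index.items(), key=lambda kv: kv[1])]
    return unique, encoded_trees
```

Because hypothetical trees from one analysis tend to agree on most edges, the index lists are highly repetitive and far smaller than restating every bipartition in every tree.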
Citations: 3
Optimum String Match Choices in LZSS
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.67
G. Little, J. Diamond
The LZ77 and LZ78 compression algorithms make a greedy choice when looking for the next string of input symbols to match: the longest string found in the current dictionary is chosen as the next match. Many variations of LZ77 and LZ78 have been proposed; some attempt to improve compression by occasionally choosing a non-maximal string, when such a choice might improve the overall compression ratio. These approaches decide based on local criteria, attempting to minimize the number of strings matched. In this paper we present an algorithm that computes a set of matches designed to minimize the number of bits output, not necessarily the number of strings matched.
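The bit-minimizing idea can be made concrete with a small dynamic program over parse positions. The cost model below (9 bits per literal, 17 bits per match, and the window and length limits) is an assumed toy model, not the paper's; the essential point is the recurrence cost[i] = min over available choices of (choice bits + cost of the remaining suffix).

```python
def optimal_lzss_parse(data: bytes, window=4096, min_len=3, max_len=18,
                       lit_bits=9, match_bits=17):
    """Compute a parse of `data` minimizing total output bits under a toy
    LZSS cost model: each literal costs lit_bits, each match costs
    match_bits regardless of length.  O(n * window * max_len): fine for
    illustration, far too slow for production use."""
    n = len(data)
    cost = [0] * (n + 1)               # cost[i] = min bits to encode data[i:]
    choice = [None] * n
    for i in range(n - 1, -1, -1):
        cost[i] = lit_bits + cost[i + 1]
        choice[i] = ("literal", data[i])
        for j in range(max(0, i - window), i):
            length = 0
            while (length < max_len and i + length < n
                   and data[j + length] == data[i + length]):
                length += 1
            for m in range(min_len, length + 1):   # every usable match length
                c = match_bits + cost[i + m]
                if c < cost[i]:
                    cost[i] = c
                    choice[i] = ("match", i - j, m)
    return cost[0], choice
```

A greedy parser always takes the longest match; the dynamic program sometimes prefers a shorter match, or even a literal, when that leaves a cheaper encoding for the remainder of the input.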
Citations: 1
Enhanced Adaptive Interpolation Filters for Video Coding
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.46
Yan Ye, G. Motta, M. Karczewicz
H.264/AVC uses motion-compensated prediction with fractional-pixel precision to reduce temporal redundancy in the input video signal. It has been shown that the Adaptive Interpolation Filter (AIF) framework [3] can significantly improve the accuracy of motion-compensated prediction. In this paper we present the Enhanced Adaptive Interpolation Filters (E-AIF) scheme, which extends the AIF framework with a number of useful features aimed at both improving performance and reducing complexity: a full-pixel position filter with filter offset, radial-shaped 12-position filter support, and rate-distortion-based filter selection. Simulations show that E-AIF achieves up to 20% bit-rate reduction compared to H.264/AVC; compared to the other AIF schemes it further reduces the bit rate by up to 6%, and it consistently demonstrates the highest performance.
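Filters in AIF-style schemes are typically obtained by least-squares minimization of the motion-compensated prediction error, with the filter offset entering as a constant term in the same fit. The sketch below shows that estimation step for a single sub-pixel position under assumed array shapes; it is a generic illustration, not the E-AIF syntax or its rate-distortion-based selection.

```python
import numpy as np

def estimate_filter(ref_patches: np.ndarray, targets: np.ndarray):
    """Least-squares estimate of one sub-pixel interpolation filter.
    ref_patches: (N, taps) integer-pixel samples around each predicted
                 pixel (taps = 12 for a radial 12-position support);
    targets:     (N,) actual pixel values the filter should predict.
    An extra all-ones column models the DC filter offset."""
    A = np.hstack([ref_patches, np.ones((ref_patches.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return coef[:-1], coef[-1]   # (filter weights, offset)
```

The encoder would collect `(ref_patches, targets)` pairs per sub-pixel position over a frame, fit each filter this way, and signal the quantized coefficients to the decoder.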
Citations: 16
Packet Dropping for Widely Varying Bit Reduction Rates Using a Network-Based Packet Loss Visibility Model
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.47
Ting-Lan Lin, Jihyun Shin, P. Cosman
We propose a packet dropping algorithm that operates over a wide range of packet loss rates. A network-based packet loss visibility model is used to evaluate the visual importance of each H.264 packet inside the network. During network congestion, based on the estimated loss visibility of each packet, we drop the least visible frames and/or the least visible packets until the required bit reduction rate is achieved. Measured by a computable, perceptually based metric, our algorithm outperforms an existing approach (dropping B packets or frames).
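Given per-packet visibility estimates, the dropping step itself reduces to a greedy selection. The sketch below captures only that by-visibility greedy idea under an assumed packet tuple layout; the paper's algorithm additionally distinguishes whole-frame drops from individual packet drops.

```python
def select_drops(packets, bits_to_remove):
    """packets: iterable of (packet_id, size_bits, visibility) tuples,
    where lower visibility means the loss is less noticeable to viewers.
    Drop the least visible packets first until the bit budget is met."""
    removed, dropped = 0, []
    for pid, size, visibility in sorted(packets, key=lambda p: p[2]):
        if removed >= bits_to_remove:
            break
        dropped.append(pid)
        removed += size
    return dropped
```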
Citations: 12
Lossless Compression of Maps, Charts, and Graphs via Color Separation
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.102
S. alZahir, Arber Borici
In this paper, we present a fast lossless compression scheme for digital map, chart, and graph images in raster format. This work makes two main contributions. The first centers on the creation of a codebook based on symbol entropy. The second is the introduction of a new row-column reduction coding algorithm. The scheme determines the number of distinct colors in the given image and creates a separate bi-level data layer for each color, in which one level marks pixels of that color and the other marks the background. The bi-level layers are then individually compressed using the proposed method, which combines symbol entropy with our row-column reduction coding algorithm. Our experimental results show that the scheme achieves an average of 0.035 bpp on map images and 0.03 bpp on charts and graphs. These results are better than most reported in the literature. Moreover, the scheme is simple and fast.
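The color-separation step can be illustrated directly: each distinct color gets its own bi-level layer, with ones where the image has that color and zeros for the background. The sketch below, with assumed names and a plain list-of-lists image, shows that decomposition only; the codebook construction and the row-column reduction coding are not reproduced.

```python
def split_into_color_layers(image):
    """image: 2-D list of color values (e.g., palette indices).
    Returns {color: bi-level layer}, where each layer holds 1 at pixels
    of that color and 0 at background pixels.  Each layer can then be
    compressed independently as a bi-level image."""
    colors = {px for row in image for px in row}
    return {
        c: [[1 if px == c else 0 for px in row] for row in image]
        for c in colors
    }

# Example: a 2x3 map with two colors yields two complementary layers.
layers = split_into_color_layers([[0, 0, 1], [1, 0, 0]])
```

Maps, charts, and graphs use few colors with large uniform regions, so each bi-level layer is highly compressible.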
Citations: 0
Dual Contribution of JPEG 2000 Images for Unidirectional Links
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.81
J. M. Barbero, Eugenio Santos, Abraham Gutierrez
The production of broadcast content generates large video and audio files that must be transmitted among different production centers. In certain circumstances this material is contributed over satellite links, which usually have a relatively high error probability. In addition, the image suffers degradation during conversion to baseband video, transcoding between compression systems, transmission errors, and drops in the link. To overcome these limitations, we present a transmission system, based on a patent, that ensures the quality of JPEG 2000 professional images.
Citations: 1