
2013 Data Compression Conference: Latest Publications

Genome Sequence Compression with Distributed Source Coding
Pub Date: 2013-03-20, DOI: 10.1109/DCC.2013.104
Shuang Wang, Xiaoqian Jiang, Lijuan Cui, Wenrui Dai, N. Deligiannis, Pinghao Li, H. Xiong, Samuel Cheng, L. Ohno-Machado
In this paper, we develop a novel genome compression framework based on distributed source coding (DSC) [3], specially tailored to the needs of miniaturized devices. At the encoder side, subsequences with adaptive code length can be compressed flexibly through either low-complexity DSC-based syndrome coding or hash coding; the choice is determined by whether variations between source and reference exist, as reported through decoder feedback. Moreover, to tackle these variations between source and reference at the decoder, we carefully designed a factor-graph-based low-density parity-check (LDPC) decoder that automatically detects insertions, deletions, and substitutions.
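The syndrome-coding side of this scheme can be illustrated compactly. The sketch below (a toy construction of our own, not the paper's codec) transmits only the syndrome s = Hx mod 2 of a binary block and lets the decoder recover the block from the syndrome plus a correlated reference; the paper replaces the brute-force search with LDPC belief propagation on the factor graph mentioned above.

```python
import numpy as np

# Columns of H are the 4-bit binary expansions of 1..8: nonzero and distinct,
# so any single-position source/reference mismatch decodes uniquely.
H = np.array([[(c >> r) & 1 for c in range(1, 9)] for r in range(4)])

x = np.array([0, 1, 1, 0, 1, 0, 0, 1])  # source subsequence (arbitrary example)
s = H @ x % 2                           # syndrome: all the encoder transmits

y = x.copy()
y[3] ^= 1                               # reference differs in one position

# Decoder: among all length-8 vectors with syndrome s, return the one closest
# to the side information y. Brute force stands in for the paper's LDPC
# belief-propagation decoder, which additionally handles indels.
best, x_hat = 9, None
for i in range(2 ** 8):
    v = np.array([(i >> j) & 1 for j in range(8)])
    if np.array_equal(H @ v % 2, s):
        d = int(np.count_nonzero(v != y))
        if d < best:
            best, x_hat = d, v

assert np.array_equal(x_hat, x)
print("recovered from", len(s), "syndrome bits instead of", len(x), "source bits")
```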
Citations: 2
Mode Duplication Based Multiview Multiple Description Video Coding
Pub Date: 2013-03-20, DOI: 10.1109/DCC.2013.106
Xiaolan Wang, C. Cai
Compression efficiency is the foremost concern for multiview video (MVV) transmission systems because of the massive amount of data involved. To improve coding efficiency, the Joint Video Team (JVT) standardization body developed the joint multiview video coding model (JMVC), in which both intra-view and inter-view prediction techniques are exploited to yield a better coding gain. As a consequence, preventing error propagation has become a critical issue in multiview video coding (MVC). Error concealment methods for MVC have been widely studied in recent years, but little research has been conducted on error resilience for MVC. Multiple description coding (MDC) provides a promising solution for robust data transmission over error-prone channels and has found many applications in monoview video communications. However, existing MDC frameworks are not applicable to MVC because its prediction structure involves inter-view prediction. To develop an efficient and robust MVC scheme, a novel MDC algorithm for JMVC based on a mode duplication strategy is proposed in this paper. The input MVV sequence is first subsampled in both the horizontal and vertical directions, forming four subsequences: X1p, X1d, X2p, and X2d. X1p and X1d are then paired to form description 1, and X2p and X2d are grouped to form description 2. Next, X1p and X2p are directly encoded by separate JMVC encoders, while X1d/X2d adopts the best modes and prediction vectors (PVs) of X1p/X2p at the corresponding (same spatial) locations to perform prediction coding. Consequently, neither bits for the best modes and PVs nor time for mode decision are needed while coding X1d and X2d; only the prediction errors must be coded. Because the subsequences within a description closely resemble each other, the extra prediction error introduced by reusing the best modes and PVs is negligible, so the bit rate and computational cost of coding X1d and X2d are greatly reduced. The proposed algorithm has been integrated into JMVC 6.0 and tested on multiple MVV test sequences. The experimental results show that it outperforms state-of-the-art MDC schemes for MVV and stereoscopic video, achieving gains of 0.5-3 dB in central decoding and 0.5-3.5 dB in side decoding at the same bit rate over a wide range from 500 kbps to 6000 kbps. Compared with the original JMVC, the proposed algorithm saves about 40% of the encoding time on average.
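A minimal sketch of the polyphase split described above, assuming a 2x2 subsampling lattice; which phase maps to X1p/X1d/X2p/X2d is our guess, and the subsequence names are taken from the abstract.

```python
import numpy as np

def split_descriptions(frame: np.ndarray):
    """Polyphase 2x2 subsampling of a frame into the four subsequences named
    in the abstract, then pairing into two descriptions. Which phase maps to
    which subsequence is our assumption; the paper fixes its own pairing."""
    x1p = frame[0::2, 0::2]  # predictively coded part of description 1
    x1d = frame[1::2, 1::2]  # mode-duplication part of description 1
    x2p = frame[0::2, 1::2]  # predictively coded part of description 2
    x2d = frame[1::2, 0::2]  # mode-duplication part of description 2
    return (x1p, x1d), (x2p, x2d)

frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
description1, description2 = split_descriptions(frame)
# X1d reuses X1p's best modes and prediction vectors at the co-located
# position, so only its prediction residual has to be entropy coded.
```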
Citations: 5
Image Coding Using Nonlinear Evolutionary Transforms
Pub Date: 2013-03-20, DOI: 10.1109/DCC.2013.100
Seishi Takamura, A. Shimizu
The transform is one of the most important tools in image/video coding technology. In this paper, novel nonlinear transform generation based on genetic programming is proposed and implemented in the H.264/AVC and HEVC reference software to enhance coding performance. The transform procedure itself is coded and transmitted. Despite this overhead, coding gains of 0.590% (vs. JM18.0) and 1.711% (vs. HM5.0) were observed in our preliminary experiments.
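As a rough illustration of the idea only (not the paper's actual system, which evolves transforms inside the H.264/AVC and HEVC codecs and scores real coding performance), genetic programming searches a space of expression trees and keeps the fittest; the primitives and the sparsity-proxy fitness below are our own assumptions.

```python
import random

# Candidate "transforms" are expression trees over two toy primitives.
OPS = {"add": lambda a, b: a + b,
       "cmul": lambda a, b: max(-255, min(255, a * b))}  # clipped product

def evaluate(prog, x, y):
    if prog == "x":
        return x
    if prog == "y":
        return y
    op, left, right = prog
    return OPS[op](evaluate(left, x, y), evaluate(right, x, y))

def random_prog(depth=2):
    if depth == 0:
        return random.choice(["x", "y"])
    return (random.choice(list(OPS)), random_prog(depth - 1), random_prog(depth - 1))

def fitness(prog, samples):
    # Smaller output magnitudes stand in for better energy compaction.
    return -sum(abs(evaluate(prog, x, y)) for x, y in samples)

random.seed(1)
samples = [(i, 10 - i) for i in range(10)]
population = [random_prog() for _ in range(20)]
best = max(population, key=lambda p: fitness(p, samples))
```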
Citations: 2
Low Complexity Embedded Quantization Scheme Compatible with Bitplane Image Coding
Pub Date: 2013-03-20, DOI: 10.1109/DCC.2013.35
Francesc Auli-Llinas
Embedded quantization is a mechanism through which image coding systems provide quality progressivity. Although the most common embedded quantization approach is to use uniform scalar dead-zone quantization (USDQ) together with bitplane coding (BPC), recent work suggested that coding performance similar to that of USDQ+BPC can be obtained with a general embedded quantization (GEQ) scheme that performs fewer quantization stages. Unfortunately, practical GEQ approaches cannot be implemented in bitplane coding engines without substantially modifying their structure. This work overcomes this drawback by introducing a 2-step scalar dead-zone quantization (2SDQ) scheme that is compatible with bitplane image coding and provides the same advantages as practical GEQ approaches. Herein, 2SDQ is introduced in the framework of JPEG2000 to demonstrate its viability and efficiency.
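For reference, USDQ maps a coefficient c to the index sign(c) * floor(|c|/Δ), with a dead zone around zero. The sketch below contrasts that with a two-stage variant, a coarse dead-zone pass followed by a single refinement pass; this is a minimal reading of a two-step scheme under our own assumptions, not the paper's exact 2SDQ definition.

```python
import numpy as np

def usdq(c: np.ndarray, delta: float) -> np.ndarray:
    """Uniform scalar dead-zone quantization: index = sign(c) * floor(|c|/delta);
    the dead zone (-delta, delta) maps to index 0."""
    return (np.sign(c) * (np.abs(c) // delta)).astype(int)

def two_step(c: np.ndarray, coarse: float, fine: float):
    """Our minimal reading of a two-stage scheme (not the paper's exact 2SDQ):
    one coarse dead-zone pass, then a single refinement pass that subdivides
    each coarse cell, instead of one bitplane pass per magnitude bit."""
    q1 = usdq(c, coarse)
    q2 = usdq(c - q1 * coarse, fine)   # quantize what the coarse pass left over
    return q1, q2

coeffs = np.array([-7.3, -0.4, 0.9, 3.6, 12.2])
q1, q2 = two_step(coeffs, coarse=4.0, fine=1.0)
recon = q1 * 4.0 + q2 * 1.0            # reconstruction without mid-cell offset
```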
Citations: 1
Compact Data Structures for Temporal Graphs
Pub Date: 2013-03-20, DOI: 10.1109/DCC.2013.59
Guillermo de Bernardo, N. Brisaboa, Diego Caro, Michael A. Rodriguez
Summary form only given. In this paper we propose three compact data structures to answer queries on temporal graphs. We define a temporal graph as a graph whose edges appear or disappear over time. Possible queries relate to adjacency along time, for example, retrieving the neighbors of a node at a given time point or interval. A naive representation consists of a time-ordered sequence of graphs, each valid at a particular time instant. The main issue with this representation is the unnecessary use of space when many nodes and their connections remain unchanged over a long period of time. The work in this paper proposes to store only what changes at each time instant. The ttk2-tree is conceptually a dynamic k2-tree in which each leaf and internal node contains a change list of the time instants at which its bit value changed. All the change lists are stored consecutively in a dynamic sequence. During query processing, the change lists are used to expand only valid regions in the dynamic k2-tree. It supports updates of the current or past states of the graph. The ltg-index is a set of snapshots and logs of changes between consecutive snapshots. The structure keeps a log for each node, storing the edge and the time at which a change was produced. To retrieve the direct neighbors of a node, the previous snapshot is queried, and then the log is traversed, adding edges to or removing edges from the result. The differential k2-tree stores snapshots of some time instants in k2-trees. For the other time instants, a k2-tree is also built, but these are differential (they store the edges that differ from the last snapshot). To perform a query, it accesses the k2-tree of the given time and the previous full snapshot. The edges that appear in exactly one of these two k2-trees form the final result. We test our proposals using synthetic and real datasets. Our results show that the ltg-index generally obtains the smallest space. We also measure times for direct and reverse neighbor queries at a time instant or over a time interval. For all these queries, the times of our best proposal range from tens of μs to several ms, depending on the size of the dataset and the number of results returned. The ltg-index is the fastest for direct queries (almost as fast as accessing a snapshot), but it is 5-20 times slower for reverse queries. The differential k2-tree is very fast for time-instant queries, but slower for time-interval queries. The ttk2-tree obtains similar times for direct and reverse queries and for different time intervals, and is the fastest in some reverse interval queries. It also has the advantage of being dynamic.
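The snapshot-plus-log idea behind the ltg-index can be sketched in a few lines; the structure below is uncompressed and keeps a single snapshot, whereas the real index compresses both components and maintains several snapshots.

```python
from collections import defaultdict

class SnapshotLogGraph:
    """A minimal, uncompressed sketch of the ltg-index idea: a full adjacency
    snapshot at time 0 plus, per node, a log of (time, neighbor) toggle
    events that flip an edge's presence."""
    def __init__(self):
        self.snapshot = defaultdict(set)   # adjacency at time 0
        self.logs = defaultdict(list)      # node -> [(time, neighbor)]

    def toggle_edge(self, t, u, v):
        self.logs[u].append((t, v))        # an event flips the edge's presence

    def neighbors(self, u, t):
        result = set(self.snapshot[u])
        for et, v in self.logs[u]:         # replay the log up to time t
            if et <= t:
                result ^= {v}
        return result

g = SnapshotLogGraph()
g.toggle_edge(3, "a", "b")   # edge (a, b) appears at t=3
g.toggle_edge(7, "a", "b")   # ... and disappears at t=7
assert g.neighbors("a", 5) == {"b"}
assert g.neighbors("a", 8) == set()
```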
Citations: 20
Effective Variable-Length-to-Fixed-Length Coding via a Re-Pair Algorithm
Pub Date: 2013-03-20, DOI: 10.1109/DCC.2013.111
S. Yoshida, T. Kida
Summary form only given. We address the problem of improving variable-length-to-fixed-length codes (VF codes). A VF code is an encoding scheme that uses a fixed-length code, so the compressed data can be accessed easily. However, conventional VF codes usually have an inferior compression ratio to that of variable-length codes. Although a method proposed by T. Uemura et al. in 2010 achieves a compression ratio comparable to that of gzip, it is very time-consuming. In this study, we propose a new VF coding method that applies a fixed-length code to the set of rules extracted by the Re-Pair algorithm, proposed by N. J. Larsson and A. Moffat in 1999. The Re-Pair algorithm is a simple off-line grammar-based compression method that achieves a good compression ratio at a moderate compression speed. Moreover, we present several experimental results showing that the proposed coding is superior to existing VF coding.
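The Re-Pair step can be sketched directly. The toy pass below repeatedly replaces the most frequent adjacent pair with a fresh nonterminal until no pair repeats; a VF code along the lines proposed here would then assign fixed-length codewords to the rules and remaining terminals.

```python
from collections import Counter

def re_pair(seq):
    """One-shot Re-Pair (Larsson & Moffat, 1999) in toy form: repeatedly
    replace the most frequent adjacent symbol pair with a fresh nonterminal
    until no pair occurs twice. Returns the reduced sequence plus the rules."""
    seq, rules = list(seq), {}
    next_sym = max(seq) + 1
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            return seq, rules
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            return seq, rules
        rules[next_sym] = pair
        out, i = [], 0
        while i < len(seq):                         # left-to-right replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, next_sym = out, next_sym + 1

symbols, rules = re_pair([ord(c) for c in "abracadabra"])
print(symbols, rules)   # reduced sequence and the extracted grammar rules
```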
Citations: 6
Algorithms for Compressed Inputs
Pub Date: 2013-03-20, DOI: 10.1109/DCC.2013.60
Nathan Brunelle, G. Robins, Abhi Shelat
We study compression-aware algorithms, i.e., algorithms that exploit regularity in their input data by operating directly on the compressed representation. While popular for string algorithms, we consider this idea for algorithms operating on numeric sequences and graphs compressed using a variety of schemes, including LZ77, grammar-based compression, a graph interpretation of Re-Pair, and the method presented by Boldi and Vigna in the WebGraph framework. In all cases, we discover algorithms outperforming the trivial approach of decompressing the input and running a standard algorithm. We aim to develop an algorithmic toolkit for performing basic tasks on a variety of compressed inputs.
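A small example of operating on compressed data directly: summing a numeric sequence from a straight-line grammar (such as Re-Pair output) in time proportional to the grammar size rather than the decompressed length. The rule format here is our assumption.

```python
from functools import lru_cache

# Toy straight-line grammar: leaves are plain integers, nonterminals are
# strings mapping to a pair of symbols.
rules = {
    "A": (1, 2),        # A -> 1 2
    "B": ("A", "A"),    # B -> A A
    "S": ("B", "B"),    # S -> B B  ... expands to 1 2 1 2 1 2 1 2
}

@lru_cache(maxsize=None)
def total(symbol):
    """Sum of the sequence a symbol expands to; each rule is visited once."""
    if isinstance(symbol, int):
        return symbol                      # terminal: its own value
    left, right = rules[symbol]
    return total(left) + total(right)      # nonterminal: sum of both halves

assert total("S") == 12                    # eight values summed, four rules visited
```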
Citations: 3
Efficient Parallelization of Different HEVC Decoding Stages
Pub Date: 2013-03-20, DOI: 10.1109/DCC.2013.82
A. Kotra, M. Raulet, O. Déforges
Summary form only given. In this paper, we present efficient parallelization implementations for different stages of the HEVC decoder: LCU decoding, deblocking filtering, and SAO filtering. Each stage is parallelized in a separate pass. LCU decoding is parallelized using Wavefront Parallel Processing (WPP). Deblocking and SAO filtering are parallelized by segmenting each picture into separate regions of consecutive LCU rows and processing the regions concurrently. On a 6-core machine with 6 threads running concurrently, experiments showed average speedup factors of 4.6, 5, and 5.35 for the LCU decoding stage; 4.5, 4.9, and 5 for the deblocking filtering stage; and 4, 4.5, and 5 for the SAO filtering stage on HD, 1600p, and 2160p sequences, respectively.
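The wavefront dependency pattern can be sketched as follows: LCU (r, c) may start once its left neighbor is done and the row above has advanced two LCUs, since the CABAC context is inherited from the second LCU of the previous row. The threading below only simulates the schedule; the actual decoding work is a stand-in.

```python
import threading

ROWS, COLS = 4, 6   # toy LCU grid
done = [[threading.Event() for _ in range(COLS)] for _ in range(ROWS)]

def decode_row(r):
    for c in range(COLS):
        if r > 0:
            # Wait until the row above is two LCUs ahead (CABAC context is
            # inherited from the second LCU of the previous row).
            done[r - 1][min(c + 1, COLS - 1)].wait()
        # The left neighbor (r, c-1) finished earlier in this same loop.
        print(f"decoded LCU ({r},{c})")             # stand-in for real decoding
        done[r][c].set()

threads = [threading.Thread(target=decode_row, args=(r,)) for r in range(ROWS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```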
Citations: 4
Random Extraction from Compressed Data - A Practical Study
Pub Date: 2013-03-20, DOI: 10.1109/DCC.2013.65
C. Constantinescu, Joseph S. Glider, D. Simha, D. Chambliss
Modern primary storage systems support, or intend to add support for, real-time compression, usually based on some flavor of the LZ77 and/or Huffman algorithms. There is a fundamental tradeoff in adding real-time (adaptive) compression to such a system: for good compression, the independently compressed block should be large; for quick reads from random locations, the blocks should be small. One idea is to let the independently compressed blocks be large but allow decompression of the needed part of a block to start from a random location inside it. We explore this idea and compare it with a few alternatives, experimenting with the zlib code base.
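One concrete way to realize "decompress from a random location inside the block" with zlib is to insert Z_FULL_FLUSH points during compression: a full flush byte-aligns the stream and resets the dictionary, so inflation can restart at any recorded flush boundary. This sketch is in the spirit of the paper's zlib experiments, not necessarily the exact variant it evaluates; block size and data are arbitrary.

```python
import zlib

BLOCK = 4096
data = bytes(range(256)) * 64                 # 16 KiB of sample data

comp = zlib.compressobj(wbits=-15)            # raw deflate, no zlib header
chunks, offsets, pos = [], [], 0
for i in range(0, len(data), BLOCK):
    offsets.append(pos)                       # restart point for block i
    piece = comp.compress(data[i:i + BLOCK]) + comp.flush(zlib.Z_FULL_FLUSH)
    chunks.append(piece)
    pos += len(piece)
stream = b"".join(chunks) + comp.flush(zlib.Z_FINISH)

# Random read of block 2 alone: begin inflating at its recorded offset
# instead of decompressing the whole stream from the start.
d = zlib.decompressobj(wbits=-15)
block2 = d.decompress(stream[offsets[2]:], BLOCK)
assert block2 == data[2 * BLOCK:3 * BLOCK]
```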
Citations: 0
Scalable Video Coding Extension for HEVC
Pub Date: 2013-03-20, DOI: 10.1109/DCC.2013.27
Jianle Chen, K. Rapaka, Xiang Li, V. Seregin, Liwei Guo, M. Karczewicz, G. V. D. Auwera, J. Solé, Xianglin Wang, Chengjie Tu, Ying Chen, R. Joshi
This paper describes a scalable video codec submitted in response to the joint call for proposals issued by ISO/IEC MPEG and ITU-T VCEG on the HEVC scalable extension. The proposed codec uses a multi-loop decoding structure. Several inter-layer texture prediction methods are employed to remove inter-layer redundancy. Inter-layer prediction is also used when coding enhancement-layer syntax elements, such as motion parameters and intra prediction modes, to further reduce bit overhead. Additionally, alternative transforms as well as adaptive coefficient scanning are used to code the prediction residues more efficiently. Experimental results demonstrate the effectiveness of the proposed scheme: compared to HEVC single-layer coding, the additional rate overhead of the proposed scalable extension is 1.2% to 6.4% to achieve two layers of SNR and spatial scalability.
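Inter-layer texture prediction can be illustrated with a minimal sketch: upsample the reconstructed base layer and code only the enhancement-layer difference. Nearest-neighbor upsampling below stands in for the normative interpolation filters; all names are illustrative.

```python
import numpy as np

def inter_layer_residual(enh: np.ndarray, base: np.ndarray) -> np.ndarray:
    """Minimal inter-layer texture prediction for 2x spatial scalability:
    upsample the reconstructed base layer, then form the enhancement-layer
    residual that actually gets transformed and coded."""
    up = base.repeat(2, axis=0).repeat(2, axis=1)   # 2x nearest-neighbor upsample
    return enh.astype(np.int16) - up.astype(np.int16)

base = np.full((4, 4), 100, dtype=np.uint8)   # reconstructed base-layer block
enh = np.full((8, 8), 103, dtype=np.uint8)    # enhancement-layer source block
res = inter_layer_residual(enh, base)         # small residual -> few bits
assert (res == 3).all()
```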
Citations: 3