
Latest Publications from the 2013 Data Compression Conference

Color Gamut Scalable Video Coding
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.29
L. Kerofsky, C. A. Segall, Seung-Hwan Kim
This paper describes a scalable extension of the High Efficiency Video Coding (HEVC) standard that supports different color gamuts in the enhancement and base layers. Here, the emphasis is on scenarios with a BT.2020 color gamut in the enhancement layer and a BT.709 color gamut in the base layer. This is motivated by the need to provide content for both high definition and ultra-high definition devices in the near future. The paper describes a method for predicting the enhancement layer samples from a decoded base layer using a series of multiplies and adds to account for both color gamut and bit-depth changes. Results show an improvement in coding efficiency of 65% to 84% for luma (57% to 85% for chroma) compared to simulcast in quality (SNR) scalable coding.
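The prediction the abstract describes can be pictured as a per-sample affine map from decoded base-layer values to enhancement-layer values. Below is a minimal numpy sketch of that idea; the gain and offset coefficients are illustrative assumptions (real coefficients would model the BT.709-to-BT.2020 mapping and be signaled in the bitstream), not values from the paper.

```python
import numpy as np

# Hypothetical per-channel gains and offsets; real coefficients would be
# derived from the BT.709 -> BT.2020 mapping and signaled in the bitstream.
GAIN = np.array([1.02, 0.98, 1.01])     # assumed values, illustration only
OFFSET = np.array([-8.0, 4.0, -2.0])    # assumed values, illustration only

def predict_enhancement(base_yuv, base_bits=8, enh_bits=10):
    """Predict enhancement-layer samples from decoded base-layer samples
    using multiplies and adds that account for both the color-gamut and
    the bit-depth change."""
    pred = base_yuv.astype(np.float64) * GAIN + OFFSET  # gamut mapping
    pred = pred * (1 << (enh_bits - base_bits))         # bit-depth scaling
    return np.clip(np.rint(pred), 0, (1 << enh_bits) - 1).astype(np.uint16)

# Example: predict a 2x2 block of (Y, Cb, Cr) triplets.
block = np.array([[[64, 128, 128], [80, 120, 130]],
                  [[70, 126, 129], [90, 110, 140]]], dtype=np.uint8)
print(predict_enhancement(block))
```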
Citations: 8
Multi-Level Dictionary Used in Code Compression for Embedded Systems
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.69
W. R. A. Dias, E. Moreno
This paper presents an innovative and efficient approach to code compression. Our method reduces code size by up to 32.6% and 31.9% (including all extra costs) for ARM and MIPS processors, respectively, and improves on the traditional Huffman method by almost 7%. We performed simulations and analyses using the applications from the MiBench benchmark. Beyond these experiments, our method is orthogonal to approaches that exploit the particularities of a given instruction set architecture, making it independent of any specific architecture.
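The abstract does not detail the dictionary construction, so the sketch below shows a generic two-level dictionary scheme of the kind the title suggests: frequent instruction words get short level-1 codes, less frequent ones get level-2 codes, and everything else is escaped as a raw word. All names and sizes here are hypothetical.

```python
from collections import Counter

def build_dictionaries(instructions, l1_size=16, l2_size=240):
    """Most frequent instruction words go into a small level-1 dictionary
    (short codes); the next most frequent go into a larger level-2 one."""
    ranked = [w for w, _ in Counter(instructions).most_common(l1_size + l2_size)]
    return ranked[:l1_size], ranked[l1_size:]

def encode(instructions, l1, l2):
    """Emit (level, payload) symbols: a level-1 index, a level-2 index,
    or an escaped raw instruction word."""
    symbols = []
    for ins in instructions:
        if ins in l1:
            symbols.append((1, l1.index(ins)))  # short code
        elif ins in l2:
            symbols.append((2, l2.index(ins)))  # longer code
        else:
            symbols.append((0, ins))            # escape: raw word
    return symbols

program = [0x1A2B, 0x1A2B, 0x0000, 0x1A2B, 0xFFFF, 0x0000]
l1, l2 = build_dictionaries(program)
print(encode(program, l1, l2))
```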
Citations: 1
Domain-Specific XML Compression
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.90
John P. T. Moore, Antonio D. Kheirkhahzadeh, Jiva N. Bagale
Our compression technique is an abstraction of Packed Encoding Rules and has been implemented in the Packed Objects structured-data compression tool. Rather than trying to support a complex standard, we describe a very simple technique that allows us to implement a very lightweight encoder capable of compressing structured data represented in XML. We call this work Integer Encoding Rules (IER). The technique is based on a simple mapping of data values, belonging to a set of data types, to a series of integer values. The data values come from XML data and the data types come from XML Schema.
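A minimal sketch of the IER idea, assuming a hypothetical schema with three field types: each XML value is mapped to an integer according to its XML Schema type (an unsigned integer passes through, an enumeration becomes its index in the value set, a boolean becomes 0 or 1). The field names and type encodings are illustrative, not the tool's actual wire format.

```python
# Schema-known data types let each XML value be mapped to a small integer.
SCHEMA = {                       # hypothetical XML Schema-derived types
    "age":    ("uint", None),
    "color":  ("enum", ["red", "green", "blue"]),
    "active": ("bool", None),
}

def encode_record(record):
    """Map each (field, value) pair to an integer using its schema type."""
    out = []
    for field, value in record.items():
        kind, domain = SCHEMA[field]
        if kind == "uint":
            out.append(int(value))
        elif kind == "enum":
            out.append(domain.index(value))   # enum -> index in value set
        elif kind == "bool":
            out.append(1 if value in ("true", "1") else 0)
    return out

# <person><age>42</age><color>blue</color><active>true</active></person>
print(encode_record({"age": "42", "color": "blue", "active": "true"}))  # [42, 2, 1]
```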
Citations: 5
3D Wavelet Encoder for Depth Map Data Compression
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.88
M. Martínez-Rach, O. López, P. Piñol, Manuel P. Malumbres
Depth Image Based Rendering (DIBR) is an effective approach for 3D-TV; however, the quality and temporal consistency of the depth map are open problems in this field. Our intermediate solution between Intra and Inter encoders is able to cope with the quality and temporal consistency of the captured depth-map information. Our encoder achieves the same visual quality as H.264/AVC and x264 in Intra mode while reducing coding delays.
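As a rough illustration of a 3D (temporal plus spatial) wavelet decomposition on depth maps, the sketch below applies one level of a Haar transform along the time axis of a group of frames; the paper's actual filters and decomposition structure are not given in the abstract, so this is a generic stand-in.

```python
import numpy as np

def temporal_haar(frames):
    """One level of a Haar wavelet transform along the time axis of a
    group of depth maps: pairs of frames become an average (low-pass)
    and a difference (high-pass) frame. The low-pass frames would then
    be transformed spatially in a full 3D wavelet encoder."""
    f = frames.astype(np.float64)
    low = (f[0::2] + f[1::2]) / 2.0    # temporal average frames
    high = (f[0::2] - f[1::2]) / 2.0   # temporal detail frames
    return low, high

def inverse_temporal_haar(low, high):
    """Perfectly reconstruct the original frame stack."""
    frames = np.empty((2 * low.shape[0],) + low.shape[1:], dtype=np.float64)
    frames[0::2] = low + high
    frames[1::2] = low - high
    return frames

gop = np.random.randint(0, 256, size=(4, 8, 8))   # 4 depth frames, 8x8
low, high = temporal_haar(gop)
assert np.allclose(inverse_temporal_haar(low, high), gop)
```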
Citations: 1
Visually Lossless JPEG 2000 Decoder
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.25
Leandro Jimenez-Rodriguez, Francesc Aulí Llinàs, M. Marcellin, J. Serra-Sagristà
Visually lossless coding is a method through which an image is coded with numerical losses that are not noticeable by visual inspection. Contrary to numerically lossless coding, visually lossless coding can achieve high compression ratios. In general, visually lossless coding is approached from the point of view of the encoder, i.e., as a procedure devised to generate a compressed code stream from an original image. If an image has already been encoded to a very high fidelity (higher than visually lossless - perhaps even numerically lossless), it is not straightforward to create a just visually lossless version without fully re-encoding the image. However, for large repositories, re-encoding may not be a suitable option. A visually lossless decoder might be useful to decode, or to parse and transmit, only the data needed for visually lossless reconstruction. This work introduces a decoder for JPEG 2000 code streams that identifies and decodes the minimum amount of information needed to produce a visually lossless image. The main insights behind the proposed method are to estimate the variance of the code blocks before the decoding procedure, and to determine the visibility thresholds employing a well-known model from the literature. The main advantages are faster decoding and the possibility to transmit visually lossless images employing minimal bit rates.
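A conceptual sketch of the decoder-side decision: estimate a visibility threshold from each code block's variance and decode only as many coding passes as needed to push the remaining distortion below it. The threshold model below is a placeholder with made-up constants, not the specific perceptual model the paper employs.

```python
import numpy as np

def visibility_threshold(block_variance, base=2.0, slope=0.5):
    """Hypothetical variance-dependent visibility threshold: busier
    (higher-variance) blocks mask more distortion."""
    return base + slope * np.sqrt(block_variance)

def passes_to_decode(pass_distortions, threshold):
    """Given the distortion remaining after each successive coding pass,
    return how many passes are needed before distortion is invisible."""
    for k, d in enumerate(pass_distortions, start=1):
        if d <= threshold:
            return k
    return len(pass_distortions)   # decode everything if never below

dist = [40.0, 18.0, 7.5, 2.1, 0.4]        # remaining distortion per pass
thr = visibility_threshold(block_variance=120.0)
print(thr, passes_to_decode(dist, thr))   # decode only the needed passes
```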
Citations: 3
Efficient Coding of Signal Distances Using Universal Quantized Embeddings
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.33
P. Boufounos, S. Rane
Traditional rate-distortion theory is focused on how to best encode a signal using as few bits as possible while incurring as low a distortion as possible. However, very often the goal of transmission is to extract specific information from the signal at the receiving end, and the distortion should be measured on that extracted information. In this paper we examine the problem of encoding signals such that sufficient information is preserved about their pairwise distances. For that goal, we consider randomized embeddings as an encoding mechanism and provide a framework to analyze their performance. We also propose the recently developed universal quantized embeddings as a solution to that problem and experimentally demonstrate that, in image retrieval experiments, universal embeddings can achieve up to a 25% rate reduction over the state of the art.
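A small sketch of a 1-bit universal quantized embedding, following the standard construction (random projection, random dither, scaled quantization, keep the parity of the cell): the Hamming distance between embeddings tracks the distance between nearby signals and saturates for distant ones. The dimensions, step size, and noise level below are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def universal_embedding(x, A, w, delta):
    """1-bit universal quantized embedding: random projection, random
    dither, scale by the step delta, keep the parity of the cell."""
    return np.floor((A @ x + w) / delta).astype(np.int64) & 1

d, m, delta = 128, 512, 0.5                # illustrative sizes and step
A = rng.standard_normal((m, d))            # random projection matrix
w = rng.uniform(0.0, delta, size=m)        # random dither
x = rng.standard_normal(d)
near = x + 0.01 * rng.standard_normal(d)   # a nearby signal
far = rng.standard_normal(d)               # an unrelated signal

hx = universal_embedding(x, A, w, delta)
# Hamming distance tracks signal distance locally and saturates near 0.5.
print(np.mean(hx != universal_embedding(near, A, w, delta)))  # small
print(np.mean(hx != universal_embedding(far, A, w, delta)))   # about 0.5
```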
Citations: 26
Inter-view Reference Frame Selection in Multi-view Video Coding
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.113
Guang Y. Zhang, Abdelrahman Abdelazim, S. Mein, M. Varley, D. Ait-Boudaoud
Summary form only given. Multiple video cameras capture the same scene simultaneously to acquire multi-view video data; clearly, this enlarged volume of data affects coding efficiency. Because the video data are acquired from the same scene, the inter-view similarities between adjacent camera views can be exploited for efficient compression. Generally, the same objects appear from different viewpoints in adjacent views. On the other hand, a scene contains objects at different depth planes, so perfect correlation over the entire image area never occurs. Additionally, scene complexity and the differences in brightness and color between the videos of the individual cameras affect whether the current block finds its best match in the inter-view reference picture. Consequently, the temporal reference picture is referenced more frequently. To gain compression efficiency, the core of the method is to disable unnecessary inter-view references. The idea of this paper is to exploit phase correlation to estimate the dependency between the inter-view reference and the current picture: if the two frames have low correlation, the inter-view reference frame is disabled. In addition, this approach works only on non-anchor pictures. Experimental results show that the proposed algorithm saves 16% of the computational complexity on average, with negligible loss of quality and bit rate; the phase-correlation step takes up only 0.1% of the whole process.
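The selection criterion can be sketched with standard FFT-based phase correlation: a high peak in the phase-correlation surface indicates a strong dependency between the current picture and the inter-view reference, while a low peak suggests disabling that reference. The threshold below is a hypothetical value; the paper's actual decision rule is not given in the abstract.

```python
import numpy as np

def phase_correlation_peak(f1, f2):
    """Peak of the phase-correlation surface between two frames; a high
    peak indicates strong dependency (standard FFT-based formulation)."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    cross = F1 * np.conj(F2)
    r = cross / (np.abs(cross) + 1e-12)        # keep phase only
    return np.real(np.fft.ifft2(r)).max()

def use_interview_reference(current, interview_ref, threshold=0.1):
    """Disable the inter-view reference when correlation is low."""
    return phase_correlation_peak(current, interview_ref) >= threshold

rng = np.random.default_rng(1)
cur = rng.standard_normal((64, 64))
shifted = np.roll(cur, (3, 5), axis=(0, 1))    # same content, displaced
print(use_interview_reference(cur, shifted))                         # True
print(use_interview_reference(cur, rng.standard_normal((64, 64))))   # likely False
```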
Citations: 1
Computing Convolution on Grammar-Compressed Text
Pub Date : 2013-03-15 DOI: 10.1109/DCC.2013.53
Toshiya Tanaka, T. I., Shunsuke Inenaga, H. Bannai, M. Takeda
The convolution between a text string S of length N and a pattern string P of length m can be computed in O(N log m) time by FFT. It is known that various types of approximate string matching problems are reducible to convolution. In this paper, we assume that the input text string is given in a compressed form, as a straight-line program (SLP): a context-free grammar in Chomsky normal form that derives a single string. Given an SLP of size n describing a text S of length N, and an uncompressed pattern P of length m, we present a simple O(nm log m)-time algorithm to compute the convolution between S and P. We then show that this can be improved to O(min{nm, N - α} log m) time, where α ≥ 0 is a value that represents the amount of redundancy the SLP captures with respect to the length-m substrings. The key to the improvement is our new algorithm that computes the convolution between a trie of size r and a pattern string P of length m in O(r log m) time.
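For reference, the O(N log m) primitive the paper builds on is plain FFT-based convolution between the (numeric) text and the pattern. The sketch below computes it with a single length-padded FFT for brevity, rather than in chunks of length about 2m as the textbook O(N log m) variant does.

```python
import numpy as np

def convolve_fft(text, pattern):
    """FFT-based convolution between a numeric text and a pattern,
    rounded back to integers."""
    t = np.asarray(text, dtype=np.float64)
    p = np.asarray(pattern, dtype=np.float64)
    n = len(t) + len(p) - 1
    size = 1 << (n - 1).bit_length()            # next power of two
    conv = np.fft.irfft(np.fft.rfft(t, size) * np.fft.rfft(p, size), size)
    return np.rint(conv[:n]).astype(np.int64)

# Matching scores of the pattern against every text alignment are a
# convolution with the reversed pattern.
text, pattern = [1, 3, 2, 1, 3], [3, 2]
print(convolve_fft(text, pattern[::-1]))        # [2 9 13 8 9 9]
```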
Citations: 13
Image Blocking Artifacts Reduction via Patch Clustering and Low-Rank Minimization
Pub Date : 2013-03-01 DOI: 10.1109/dcc.2013.95
Jie Ren, Jiaying Liu, Mading Li, Wei Bai, Zongming Guo
{"title":"Image Blocking Artifacts Reduction via Patch Clustering and Low-Rank Minimization","authors":"Jie Ren, Jiaying Liu, Mading Li, Wei Bai, Zongming Guo","doi":"10.1109/dcc.2013.95","DOIUrl":"https://doi.org/10.1109/dcc.2013.95","url":null,"abstract":"","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"21 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120856662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
Compression of Optimal Value Functions for Markov Decision Processes
Pub Date : 2013-03-01 DOI: 10.1109/DCC.2013.81
Mykel J. Kochenderfer, Nicholas Monath
Summary form only given. A Markov decision process (MDP) is defined by a state space, action space, transition model, and reward model. The objective is to maximize the accumulation of reward over time. Solutions can be found through dynamic programming, which generally involves discretization, resulting in significant memory and computational requirements. Although computer clusters can be used to solve large problems, many applications require that solutions be executed on less capable hardware. We explored a general method for compressing solutions in a way that preserves fast random-access lookups. The method was applied to an MDP for an aircraft collision avoidance system. In our problem, S consists of aircraft positions and velocities and A consists of resolution advisories provided by the collision avoidance system, with |S| > 1.5 x 10^6 and |A| = 10. The solution to an MDP can be represented by an |S| x |A| matrix specifying Q*(s,a), the expected return of the optimal strategy from s after executing action a. Since, on average, only 6.6 actions are available from every state in our problem, it is more efficient to use a sparse representation consisting of an array of the permissible values of Q*, organized into variable-length blocks with one block per state. An index provides offsets into this Q* array corresponding to the block boundaries, and an action array lists the actions available from each state. The values of Q* are stored using a 32-bit floating-point representation, resulting in 534 MB for the three arrays associated with the sparse representation. Our method first converts to a 16-bit half-precision representation, sorts the state-action values within each block, adjusts the action array appropriately, and then removes redundant blocks. Although LZMA has a better compression ratio, it does not support real-time random-access decompression. The behavior of the proposed method was demonstrated in simulation with negligible impact on safety and operational performance metrics. Although this compression methodology was demonstrated on related MDPs with similar compression ratios, further work will apply this technique to other domains.
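The described layout can be sketched directly: variable-length per-state blocks of half-precision Q* values, sorted within each block with the action array kept aligned, an index of (offset, length) pairs, and reuse of identical blocks. The sketch below uses toy values and a simple dict for de-duplication; it illustrates the layout, not the authors' implementation.

```python
import numpy as np

def compress(q_blocks, a_blocks):
    """Build shared float16 value/action arrays, a per-state index of
    (offset, length) pairs, and reuse of redundant (identical) blocks."""
    values, actions = [], []
    index = []                        # per state: (offset, length)
    seen = {}                         # dedup: block contents -> offset
    for q, a in zip(q_blocks, a_blocks):
        q16 = np.asarray(q, dtype=np.float16)
        order = np.argsort(q16)                      # sort values in block
        q16, a = q16[order], [a[i] for i in order]   # keep actions aligned
        key = (q16.tobytes(), tuple(a))
        if key not in seen:                          # new block: append it
            seen[key] = len(values)
            values.extend(q16.tolist())
            actions.extend(a)
        index.append((seen[key], len(q16)))
    return np.array(values, dtype=np.float16), actions, index

def lookup(values, actions, index, state, action):
    """Random-access lookup of Q*(s, a) without decompressing anything."""
    off, n = index[state]
    for i in range(off, off + n):
        if actions[i] == action:
            return float(values[i])
    return None  # action not permissible in this state

q_blocks = [[1.5, -0.2, 3.0], [1.5, -0.2, 3.0], [0.0, 2.0]]  # toy Q* values
a_blocks = [[0, 1, 2],        [0, 1, 2],        [1, 3]]      # toy actions
V, A, idx = compress(q_blocks, a_blocks)
print(lookup(V, A, idx, state=1, action=2))  # 3.0 (block shared with state 0)
```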
Citations: 10