
Proceedings DCC '97. Data Compression Conference: Latest Publications

Compression comparisons for multiview stereo
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582103
D.K. Jones, M.W. Maier
Summary form only given. Multiview stereo imaging uses arrays of cameras to capture scenes from multiple perspectives. This form of imagery is used in systems that allow the user to survey the scene, for example by head motion. Very little work has been reported on compression schemes for multiview images. Multiview image sets tend to be very large because they may contain several hundred views, but there is considerable redundancy among the views, which makes them highly compressible. This paper compares methods for compressing large multiview stereo image sets. There is an obvious similarity between multiview image sets and video sequences. As a baseline we compressed a set of multiview stereo images with JPEG applied to each image individually and MPEG-1 applied to the whole set. The average bits per pixel were reduced by roughly a factor of two over individual frame compression, at constant mean square error (MSE). Stereo-specific perceptual distortions can be viewed in anaglyph representations of the data set. Another method, unique to this data type, is based on residual coding with respect to a synthetic "panoramic still" containing information from all of the images in the set. In this method we synthesize a single panoramic image from all of the members of a registered set, code the panoramic image, and then code the residual images formed by subtracting the individual images from the corresponding position on the panorama. Initial results with this method appear to give an MSE rate-distortion curve similar to that of the MPEG-based techniques. However, the panoramic still method is inherently random access.
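As a rough illustration of the panoramic-still idea, the sketch below forms residuals between each registered view and its window on the panorama. It is a minimal Python sketch assuming NumPy arrays, pre-registered views, and known offsets; the function and variable names are hypothetical and not the authors' implementation.

```python
import numpy as np

def panoramic_residuals(views, offsets, panorama):
    """Residual coding against a synthetic panoramic still.

    views    : list of H x W uint8 arrays (the individual camera images)
    offsets  : list of (row, col) positions of each view on the panorama
    panorama : 2-D array, the synthesized panoramic still covering all views

    Returns one residual per view: the panorama window at the view's position
    minus the view itself (sign convention follows the abstract); a decoder
    would recover the view as window - residual.
    """
    residuals = []
    for img, (r, c) in zip(views, offsets):
        h, w = img.shape
        window = panorama[r:r + h, c:c + w].astype(np.int16)
        residuals.append(window - img.astype(np.int16))
    return residuals
```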
Citations: 0
Enhancements to the JPEG implementation of block smoothing method
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582108
G. Lakhani
Summary form only given. This paper proposes several enhancements to the AC prediction approach, adapted by the Joint Photographic Expert Group (JPEG), for reducing blocking artifacts. Our decoder uses the values of reconstructed pixels from the already decoded part of the image, instead of their DCT components. The major contribution of the paper is that we divide the prediction of DCT coefficients into two parts. For the low frequency coefficients, we solve a minimization problem. Its objective is to reduce the block boundary edge variance (BEV). The problem is solved analytically and its solution predicts the DCT coefficients of a block in terms of the first four coefficients of the four adjacent blocks. In this process, we also determine an optimal solution to the minimization of the mean squared difference of slopes (MSDS) considered for the same problem, computed using a quadratic programming method. For the mid-range frequency coefficients, we follow the interpolation method and interpolate image segments by ternary polynomials (JPEG uses quadratic polynomials). The smallest possible 9×9 pixel image segments are considered for the prediction of coefficients of 8×8 blocks (JPEG considers 24×24 pixel segments). The paper presents a complete formulation of the prediction equations, not provided by the JPEG standard. The paper also proposes three new statistical criteria to measure block boundary discontinuities. All enhancements have been added to a JPEG software implementation. Results of several experiments using this software are given to compare the performance of different implementations of the AC prediction approach.
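To make the block boundary edge variance objective concrete, here is a small illustrative measure of the discontinuity across the shared edge of two adjacent 8×8 blocks; the exact BEV definition used in the paper may differ, so treat this only as a sketch of the idea.

```python
import numpy as np

def boundary_edge_variance(left_block, right_block):
    """Variance of the pixel differences across the vertical boundary between
    two horizontally adjacent 8x8 blocks. Smoother transitions across the
    boundary give smaller values; AC prediction aims to choose low-frequency
    coefficients that keep this quantity small."""
    edge_diff = right_block[:, 0].astype(float) - left_block[:, -1].astype(float)
    return float(np.var(edge_diff))
```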
Citations: 0
An experimental comparison of several lossless image coders for medical images
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582091
K. Denecker, J. Van Overloop, I. Lemahieu
Summary form only given. The output of medical imaging devices is increasingly digital and both storage space and transmission time of the images profit from compression. The introduction of PACS systems into the hospital environment fortifies this need. Since any loss of diagnostic information is to be avoided, lossless compression techniques are preferable. We present an experimental comparison of several lossless coders and investigate their compression efficiency and speed for different types of medical images. The coders are: five image coders (LJPEG, BTPC, FELICS, S+P, CALIC), and two general-purpose coders (GnuZIP, STAT). The medical imaging techniques are: CT, MRI, X-ray, angiography, mammography, PET and echography. Lossless JPEG (LJPEG), the current lossless compression standard, combines simple linear prediction with Huffman coding. Binary tree predictive coding (BTPC) is a multi-resolution technique which decomposes the image into a binary tree. The fast and efficient lossless image compression system (FELICS) conditions the pixel data on the values of the two nearest neighbours. Compression with reversible embedded wavelets (S+P) uses a lossless wavelet transform. The context-based, adaptive, lossless/nearly-lossless coding scheme for continuous-tone images (CALIC) combines non-linear prediction with advanced statistical error modelling techniques. GnuZIP uses LZ77, a form of sliding window compression. STAT is a PPM-like general-purpose compression technique. We give combined compression ratio vs. speed results for the different compression methods as an average over the different image types.
Citations: 19
Lossy/lossless coding of bi-level images
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582116
Bo Martins, Soren Forchhammer
Summary form only given. We present improvements to a general type of lossless, lossy, and refinement coding of bi-level images (Martins and Forchhammer, 1996). Loss is introduced by flipping pixels. The pixels are coded using arithmetic coding of conditional probabilities obtained using a template as is known from JBIG and proposed in JBIG-2 (Martins and Forchhammer). Our new state-of-the-art results are obtained using the more general free tree instead of a template. Also we introduce multiple refinement template coding. The lossy algorithm is analogous to the greedy 'rate-distortion'-algorithm of Martins and Forchhammer but is based on the free tree.
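A minimal sketch of template-based context formation for arithmetic coding of bi-level pixels, in the spirit of JBIG; the 4-pixel causal template and the function name are illustrative only (JBIG and JBIG-2 use larger templates, and the paper's free tree generalizes this fixed-template conditioning).

```python
def template_context(img, r, c,
                     template=((-1, -1), (-1, 0), (-1, 1), (0, -1))):
    """Pack the already-decoded neighbour pixels named by the causal template
    into one integer context. The arithmetic coder keeps an adaptive
    probability estimate per context value."""
    ctx = 0
    for dr, dc in template:
        rr, cc = r + dr, c + dc
        inside = 0 <= rr < len(img) and 0 <= cc < len(img[0])
        bit = img[rr][cc] if inside else 0   # pixels outside the image act as 0
        ctx = (ctx << 1) | (bit & 1)
    return ctx
```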
Citations: 85
Motion-adapted content-based temporal scalability in very low bitrate video coding
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582087
C. Chu, D. Anastassiou, Shih-Fu Chang
Summary form only given. Because of stringent bandwidth requirements, very low bitrate video coding usually uses lower frame rates in order to comply with the bitrate constraint. With a reasonably low frame rate, it can preserve the basic visual information of an image sequence. However, on special occasions or for specific human understanding purposes, it can barely provide enough temporal resolution. In these cases, we would apply content-based temporal scalability to enhance temporal resolution for desired objects/areas in an image, with a reasonable increase of bitrate. We propose a motion-adapted encoding scheme for content-based temporal scalability in very low bitrate video coding. This coding scheme selectively encodes desired objects and makes proper adjustment to the rest of the scene. Content-based scalability and temporal scalability are achieved via two separate coding steps. This coding scheme is efficient for image sequences with hierarchical structure, such as sequences with background motion and sequences with a single moving object. Simulations are done on videotelephony sequences and selected MPEG4 test sequences.
Citations: 0
The search accuracy of tree-structured VQ
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582112
J. Lin
Summary form only given. It is well-known that tree-structured vector quantization may sacrifice performance for reduced computation. The performance loss can be attributed to two separate sources, the design approximation and the search inaccuracy. To measure the search performance, we define the search accuracy as the percentage of input vectors that are quantized with minimum distortion. Our studies show that low search accuracy is the main cause of performance loss for some of the best current tree-structured vector quantizers. Although the design approximation and search performance can be analyzed separately, we observe that the result of design may actually affect the search accuracy. Most of the current design techniques seek to minimize the distortion in the design without any consideration of their effect on the search. The tree search accuracy as a result of these designs could be as low as 50 percent. In order to improve the overall performance, the tree design should not be optimized without consideration of tree search accuracy. The difficulty is that it is not possible to measure the search accuracy at the design stage. We develop a design algorithm that incorporates the search accuracy and produces a tree structure that improves the search accuracy significantly. Experimental results in image compression show that the strategy works surprisingly well in improving the tree search accuracy from a low of 50% to over 80% and 90%.
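The search-accuracy measure defined above can be computed directly by comparing the tree-search result against a full search of the leaf codebook. The sketch below assumes NumPy arrays and squared-error distortion; the names are hypothetical.

```python
import numpy as np

def search_accuracy(tree_choices, codebook, data):
    """Percentage of input vectors for which the tree search found the
    minimum-distortion codeword.

    tree_choices : length-N array of codeword indices chosen by the tree search
    codebook     : K x d array of leaf codewords
    data         : N x d array of input vectors
    """
    # full search: squared distance from every vector to every codeword
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    best = d2.argmin(axis=1)
    return 100.0 * float(np.mean(best == np.asarray(tree_choices)))
```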
Citations: 0
Study of Japanese text compression
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582134
N. Satoh, T. Morihara, Y. Okada, S. Yoshida
Summary form only given. The Japanese language has several thousand distinct characters, and the character code length is 16 bits. In such documents the 16-bit units are interrelated. Conventional text compression employs 8-bit sampling because the compressed object is usually English text. We investigated compression schemes based on 16-bit sampling, expecting it to improve the compression performance. In Japanese text, where words are short, statistical schemes with a PPM model provide better compression ratios than sliding-dictionary schemes. So we investigated 16-bit sampling based on statistical schemes with a PPM model. We show that the 16-bit sampling scheme provides good compression ratios on short documents under several tens of kilobytes, such as office reports. The processing speed is also better.
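The difference between conventional 8-bit sampling and the 16-bit sampling studied here is simply the unit fed to the statistical model. A minimal sketch (the function name is hypothetical):

```python
def sample_symbols(data: bytes, unit: int = 2):
    """Group a byte stream into fixed-size units before statistical modelling.
    unit=1 is conventional 8-bit sampling; unit=2 treats each two-byte
    Japanese character code as a single symbol."""
    return [int.from_bytes(data[i:i + unit], "big")
            for i in range(0, len(data) - unit + 1, unit)]

# A PPM-style model would then be trained on these symbols instead of raw bytes.
```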
Citations: 2
"Universal" transform image coding based on joint adaptation of filter banks, tree structures and quantizers
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582130
V. Pavlovic, K. Ramchandran, P. Moulin
Summary form only given. Transform coding has become the de facto standard for image and video compression. The design of adaptive signal transforms for image coding usually follows one of the two approaches: adaptive tree/quantizer design with fixed subband filter banks and adaptive subband filter bank design with fixed quantizers and tree topology. The main objective of our work is to integrate these two paradigms in an image coder in which subband filter banks, tree structures and quantizers are all adapted. We design a codebook for the filters, tree and quantizers. The codebook design algorithm uses a training set made of images that are assumed to be representative of the broad class of images of interest. We first design the filters and then the quantizers. In the filter design phase, we visit nodes in a top-down fashion and design a filter codebook for each tree node. The optimal filter codebook for each node is designed so as to minimize the theoretical coding gain-based rate. The design of the quantizers and the weights for the splitting decisions is done jointly using a greedy iterative algorithm based on the single tree algorithm of Ramchandran et al. (1993). The actual coding algorithm finds, based on the codebook design, the optimized filter banks, tree structure, and quantizer choices for each node of the tree. In our experimental setup we used a training set of 20 images representative of four image classes.
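As one way to picture the coding gain-based filter selection at a tree node, the sketch below scores each candidate filter bank by the classical subband coding gain (arithmetic over geometric mean of subband variances) and keeps the best; the per-candidate variances are assumed to be precomputed from the training set, and this is only an illustrative stand-in for the paper's design criterion, not the authors' algorithm.

```python
import numpy as np

def coding_gain(subband_variances):
    """Classical subband coding gain: arithmetic mean of the subband
    variances divided by their geometric mean."""
    v = np.asarray(subband_variances, dtype=float)
    return v.mean() / np.exp(np.log(v).mean())

def pick_filter_for_node(candidate_variances):
    """Choose, for one tree node, the filter bank from the codebook whose
    subband variances (one list per candidate) give the largest gain."""
    gains = [coding_gain(v) for v in candidate_variances]
    best = int(np.argmax(gains))
    return best, gains[best]
```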
Citations: 0
Some entropic bounds for Lempel-Ziv algorithms
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582106
S. Rao Kosaraju, G. Manzini
Summary form only given, as follows. We initiate a study of parsing-based compression algorithms such as LZ77 and LZ78 by considering the empirical entropy of the input string. For any string s, we define the k-th order entropy H_k(s) by looking at the number of occurrences of each symbol following each k-length substring inside s. The value H_k(s) is a lower bound to the compression ratio of a statistical modeling algorithm which predicts the probability of the next symbol by looking at the k most recently seen characters. Therefore, our analysis provides a means for comparing Lempel-Ziv methods with the more powerful, but slower, PPM algorithms. Our main contribution is a comparison of the compression ratio of Lempel-Ziv algorithms with the zeroth order entropy H_0. First we show that for low entropy strings the LZ78 compression ratio can be much higher than H_0. Then, we present a modified algorithm which combines LZ78 with run length encoding and is also able to compress low entropy strings efficiently.
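The k-th order empirical entropy used in the comparison can be computed directly from the definition above: for every k-length context, take the zeroth-order entropy of the symbols that follow it and weight by the context's frequency. A minimal sketch follows; normalizing by the number of predicted positions is an assumption on my part, since the abstract does not fix the normalization.

```python
from collections import Counter, defaultdict
from math import log2

def empirical_entropy_k(s: str, k: int) -> float:
    """k-th order empirical entropy H_k(s), in bits per symbol."""
    n = len(s)
    if k == 0:
        counts = Counter(s)
        return -sum(c / n * log2(c / n) for c in counts.values())
    followers = defaultdict(Counter)
    for i in range(n - k):
        followers[s[i:i + k]][s[i + k]] += 1   # symbol following each k-length context
    total = n - k
    bits = 0.0
    for ctx_counts in followers.values():
        m = sum(ctx_counts.values())
        # m * (zeroth-order entropy of the symbols seen after this context)
        bits += -sum(c * log2(c / m) for c in ctx_counts.values())
    return bits / total
```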
Citations: 14
Intraframe low bit rate video coding robust to packet erasure
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582088
V.J. Crump, T. Fischer
Summary form only given. A real-time, low bit rate, intraframe video codec that is robust to packet erasure is developed for coding QCIF gray-scale video sequences. The system combines subband image coding, entropy coded scalar quantization, subband trees of wavelet coefficients, runlength coding, and Huffman coding. The encoded bit stream is encapsulated in independent variable-length packets. Isolation of spatially related trees of subband coefficients makes the system robust to packet erasure. The design objectives are to use a small encoding rate, recover gracefully from erasures in which packets of data have been erased, and have a software-only implementation that runs in real time. The packet loss rate can be as large as 30%. We investigate several passive methods of recovering from packet erasures, including (i) replace missing pixels by the subband mean, (ii) replace missing pixels with the most recently received subband pixels in previous frames, (iii) explicit error control coding of the lower frequency subbands, and (iv) the use of interpolation and a one-frame delay to estimate the erased subband pixels.
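A minimal sketch of two of the passive recovery methods listed above, replacing erased subband pixels either with the subband mean or with the co-located pixels from the previous frame; the array names and NumPy-based interface are assumptions, not the authors' codec.

```python
import numpy as np

def conceal_erasure(subband, lost_mask, prev_subband=None):
    """Fill in subband pixels whose packets were erased.

    subband      : 2-D array of received coefficients (erased entries arbitrary)
    lost_mask    : boolean array, True where data was erased
    prev_subband : the same subband from the previous frame, if available
    """
    out = subband.copy()
    if prev_subband is not None:
        out[lost_mask] = prev_subband[lost_mask]      # method (ii): temporal copy
    else:
        received = subband[~lost_mask]
        fill = received.mean() if received.size else 0.0
        out[lost_mask] = fill                         # method (i): subband mean
    return out
```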
Citations: 12