
Proceedings DCC '97. Data Compression Conference: Latest Publications

Quadtree based variable rate oriented mean shape-gain vector quantization
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582056
R. Hamzaoui, Bertram Ganz, D. Saupe
Mean shape-gain vector quantization (MSGVQ) is extended to include negative gains and square isometries. Square isometries together with a classification technique based on average block intensities enable us to enlarge the MSGVQ codebook size without any additional storage requirements while keeping the complexity of both the codebook generation and the encoding manageable. Variable rate codes are obtained with a quadtree segmentation based on a rate-distortion criterion. Experimental results show that our scheme performs favorably when compared to previous product code techniques or quadtree based VQ methods.
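The variable-rate ingredient here is the quadtree segmentation driven by a rate-distortion criterion. A minimal sketch of such a split rule (not the authors' coder) follows: a block is split into four children only when the children's total Lagrangian cost D + λR, plus a bit of side information signalling the split, is lower than the cost of coding the block whole. The toy block coder `toy_cost`, the minimum block size and the value of λ are illustrative assumptions.

```python
import numpy as np

def lagrangian_cost(block, lam, encode_cost):
    """Cost D + lambda*R of coding `block` as a single unit.

    `encode_cost(block)` is a placeholder returning (distortion, rate_bits)
    for whatever block coder is in use (e.g., a shape-gain VQ stage).
    """
    distortion, rate = encode_cost(block)
    return distortion + lam * rate

def quadtree_partition(block, lam, encode_cost, min_size=4):
    """Return (top, left, size) leaves chosen by the rate-distortion criterion."""
    def recurse(b, top, left):
        size = b.shape[0]
        whole = lagrangian_cost(b, lam, encode_cost)
        if size <= min_size:
            return [(top, left, size)], whole
        half = size // 2
        leaves, split_cost = [], lam * 1.0   # ~1 bit of side info to signal the split
        for dy in (0, half):
            for dx in (0, half):
                sub, cost = recurse(b[dy:dy + half, dx:dx + half], top + dy, left + dx)
                leaves += sub
                split_cost += cost
        # Split only if it lowers the Lagrangian cost of this block.
        if split_cost < whole:
            return leaves, split_cost
        return [(top, left, size)], whole

    return recurse(block, 0, 0)[0]

def toy_cost(b):
    """Toy block coder: code the mean in 8 bits, residual at 1 bit/pixel."""
    residual = b - b.mean()
    return float((residual ** 2).sum()), 8.0 + b.size

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (32, 32)).astype(float)
    leaves = quadtree_partition(img, lam=2.0, encode_cost=toy_cost)
    print(len(leaves), "leaves, e.g.", leaves[:3])
```

In the paper's setting the block coder plugged into this decision would be the oriented mean shape-gain VQ stage itself.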
Citations: 5
A pipelined architecture algorithm for image compression
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582080
S. Bhattacharjee, S. Das, Y. Chowdhury, P. P. Chaudhuri
Summary form only given. The article reports a pipelined architecture that can support on-line compression/decompression of image data. Spatial and spectral redundancy of an image data file are detected and removed with a simple and elegant scheme that can be easily implemented on pipelined hardware. The scheme provides the user with the facility of trading off image quality against compression ratio. The basic theory of byte error correcting codes (ECC) is employed to compress a pixel row with reference to its adjacent row. A simple scheme is developed to encode the pixel rows of an image, both monochrome and colour. The compression ratio and quality obtained by this new technique have been compared with JPEG, showing a comparable compression ratio with acceptable quality. The scheme is hardware based for both colour and monochrome image compression and can match a high-speed communication link, thereby supporting on-line applications.
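The summary leaves the byte-ECC construction itself open; as a rough stand-in for the underlying idea of coding a pixel row with reference to its adjacent row, the sketch below predicts each row from the one above and entropy-codes only the residual (zlib is used as a placeholder entropy coder). This shows the general row-differencing principle only, not the paper's ECC-based scheme or its pipelined hardware mapping.

```python
import zlib
import numpy as np

def compress_rows(image):
    """Code row 0 directly, then every later row as its byte-wise difference
    (mod 256) from the row above, and entropy-code the result with zlib."""
    image = np.asarray(image, dtype=np.uint8)
    residual = image.copy()
    residual[1:] = (image[1:].astype(np.int16) - image[:-1].astype(np.int16)) % 256
    return zlib.compress(residual.tobytes(), level=9), image.shape

def decompress_rows(payload, shape):
    residual = np.frombuffer(zlib.decompress(payload), dtype=np.uint8).reshape(shape)
    image = residual.astype(np.int16).copy()
    for r in range(1, shape[0]):               # undo the row prediction, top to bottom
        image[r] = (image[r] + image[r - 1]) % 256
    return image.astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Smooth synthetic image with strong row-to-row correlation.
    img = (np.cumsum(rng.integers(-2, 3, (256, 256)), axis=0) % 256).astype(np.uint8)
    payload, shape = compress_rows(img)
    assert np.array_equal(decompress_rows(payload, shape), img)
    print(f"{img.size} bytes -> {len(payload)} bytes")
```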
Citations: 16
Towards understanding and improving escape probabilities in PPM
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.581954
J. Åberg, Y. Shtarkov, B. Smeets
The choice of expressions for the coding probabilities in general, and the escape probability in particular, is of great importance in the family of prediction by partial matching (PPM) algorithms. We present a parameterized version of the escape probability estimator which, together with a "compactness" criterion, provides guidelines for the estimator design given a "representative" set of files. This parameterization also makes it possible to adapt the expression of the escape probability during one-pass coding. Finally, we present results for one such compression scheme that illustrates the usefulness of our approach.
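The estimator family studied in the paper is not reproduced here, but the following sketch shows what a parameterized escape estimate looks like in a PPM context. With n symbols seen in a context and u of them distinct, it uses P_esc = (a·u + b)/(n + a·u + b): a = 0, b = 1 gives a PPMA-style escape and a = 1, b = 0 a PPMC-style one, while other values of the (assumed) parameters a and b interpolate between more and less aggressive escapes.

```python
from collections import Counter

def escape_probability(counts, a=0.5, b=0.0):
    """Parameterized escape estimate for one PPM context.

    counts : Counter mapping symbol -> occurrence count in this context
    a, b   : free parameters of the estimator family (illustrative);
             a=0, b=1 gives a PPMA-style escape, a=1, b=0 a PPMC-style one.
    Returns (p_escape, {symbol: p_symbol}); the probabilities sum to 1.
    """
    n = sum(counts.values())            # total symbols seen in this context
    u = len(counts)                     # number of distinct symbols seen
    esc_mass = a * u + b
    total = n + esc_mass
    p_escape = esc_mass / total
    p_symbols = {s: c / total for s, c in counts.items()}
    return p_escape, p_symbols

if __name__ == "__main__":
    ctx = Counter("abracadabra")        # toy context statistics: n=11, u=5
    for a, b in ((0.0, 1.0), (1.0, 0.0), (0.5, 0.5)):
        p_esc, _ = escape_probability(ctx, a, b)
        print(f"a={a}, b={b}: P(escape) = {p_esc:.3f}")
```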
Citations: 10
Noncausal image prediction and reconstruction
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582114
J. Marchand, H. Rhody
Summary form only given. Prediction of the value of the pixels in an image is often used in image compression. The residual image, the difference between the image and its predicted value, can usually be coded with fewer bits than the original image. In linear prediction, the value of each pixel of an image is estimated from the values of surrounding pixels using a predictor P. In noncausal prediction, pixels on all sides of the pixel to be predicted are used; in causal prediction, only "earlier" pixels are used. Noncausal prediction usually offers better prediction than causal prediction because all pixels surrounding the pixel to be predicted are considered. However, reconstructing the image from the residual is more difficult after noncausal prediction than after causal prediction. This paper explores two methods of reconstruction for noncausal prediction: iterative reconstruction and direct reconstruction. As an example, the effect of quantizing the residual on the reconstructed image is considered. The results show improved image quality with the noncausal predictor.
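As a concrete illustration of the asymmetry described above, the sketch below uses a simple noncausal predictor (the mean of the four nearest neighbours), quantizes the residual, and recovers the image by fixed-point iteration; because every pixel depends on neighbours that in turn depend on it, the reconstruction cannot simply be run in raster order as in the causal case. The predictor weights, quantizer step, border handling and iteration count are assumptions for the example, not the authors' choices.

```python
import numpy as np

def noncausal_predict(img):
    """Predict each interior pixel as the mean of its four neighbours (noncausal)."""
    pred = img.astype(float).copy()
    pred[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                        img[1:-1, :-2] + img[1:-1, 2:]) / 4.0
    return pred

def iterative_reconstruct(residual, border, step, n_iter=5000):
    """Recover the image from the quantized noncausal residual.

    The reconstruction is implicit: each pixel depends on neighbours that
    themselves depend on it, so we iterate x <- predict(x) + step*residual
    (a Jacobi-style fixed-point iteration) instead of a single causal pass.
    """
    x = border.astype(float).copy()     # border pixels are sent as-is in this toy setup
    for _ in range(n_iter):
        x[1:-1, 1:-1] = noncausal_predict(x)[1:-1, 1:-1] + step * residual[1:-1, 1:-1]
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = np.cumsum(np.cumsum(rng.normal(0, 1, (32, 32)), 0), 1)   # smooth test image
    step = 0.5                                                     # quantizer step size
    residual_q = np.round((img - noncausal_predict(img)) / step)   # quantized residual
    border = np.zeros_like(img)
    border[0, :], border[-1, :] = img[0, :], img[-1, :]
    border[:, 0], border[:, -1] = img[:, 0], img[:, -1]
    rec = iterative_reconstruct(residual_q, border, step)
    print("RMS reconstruction error:", float(np.sqrt(((rec - img) ** 2).mean())))
```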
Citations: 1
Selective resolution for surveillance video compression
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582136
I. Schiller, Chun-Ksiung Chuang, S.M. King, J. Storer
This paper describes selective resolution (SR), an image compression method which allows efficient use of available bandwidth with selective preservation of detail. SR applies perceptually lossless compression to the central part of the image while compressing the periphery more heavily. The central part of the image thus retains higher-quality imagery for details, while the periphery efficiently cues the viewer to interesting sites. SR is especially valuable in video with a reduced frame rate, because successive images then have much less of the correlation needed for effective interframe algorithms. In fact, SR, which takes advantage of human vision habits, may be viewed as an alternative to interframe compression. We have implemented SR with a motion-compensated VQ algorithm.
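The sketch below is a crude stand-in for the idea, not the motion-compensated VQ implementation reported in the paper: the central window of a frame is kept at full resolution while the periphery is replaced by block averages, so the periphery costs roughly 1/f² as many samples as full-rate coding would. The window fraction and block size are assumptions for the example.

```python
import numpy as np

def selective_resolution(frame, center_frac=0.5, periphery_factor=4):
    """Keep a centred window at full resolution; represent the periphery by
    block averages of size `periphery_factor` (a crude stand-in for coding
    the periphery at a much higher compression ratio)."""
    h, w = frame.shape
    ch, cw = int(h * center_frac), int(w * center_frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    f = periphery_factor

    out = frame.astype(float).copy()
    # Block-average the whole frame (ignoring any ragged edge) ...
    hh, ww = h - h % f, w - w % f
    coarse = frame[:hh, :ww].reshape(hh // f, f, ww // f, f).mean(axis=(1, 3))
    out[:hh, :ww] = np.kron(coarse, np.ones((f, f)))
    # ... then paste the full-resolution centre back on top.
    out[top:top + ch, left:left + cw] = frame[top:top + ch, left:left + cw]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    frame = rng.integers(0, 256, (240, 320)).astype(np.uint8)
    sr = selective_resolution(frame)
    centre_ok = np.array_equal(sr[60:180, 80:240], frame[60:180, 80:240])
    print("centre preserved exactly:", bool(centre_ok),
          "| periphery kept at ~1/%d of the samples" % (4 * 4))
```

In an actual codec the same region map would instead select between a perceptually lossless mode and a higher-compression mode rather than literal downsampling.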
Citations: 0
Multimode image coding for noisy channels
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.581974
S. Regunathan, K. Rose, S. Gadkari
We attack the problem of robust and efficient image compression for transmission over noisy channels. To achieve the dual goals of high compression efficiency and low sensitivity to channel noise we introduce a multimode coding framework. Multimode coders are quasi-fixed length in nature, and allow optimization of the tradeoff between the compression capability of variable-length coding and the robustness to channel errors of fixed length coding. We apply our framework to develop multimode image coding (MIC) schemes for noisy channels, based on the adaptive DCT. The robustness of the proposed MIC is further enhanced by the incorporation of a channel protection scheme suitable for the constraints on complexity and delay. To demonstrate the power of the technique we develop two specific image coding algorithms optimized for the binary symmetric channel. The first, MIC1, incorporates channel optimized quantizers and the second, MIC2, uses rate compatible punctured convolutional codes within the multimode framework. Simulations demonstrate that the multimode coders obtain significant performance gains of up to 6 dB over conventional fixed length coding techniques.
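The trade-off that motivates quasi-fixed-length coding can be seen in a toy experiment: over a binary symmetric channel, one flipped bit in a variable-length (Huffman-style) stream desynchronizes the decoder and corrupts many later symbols, whereas with fixed-length codewords the damage stays local, at the price of a higher rate. The 4-symbol codes, symbol statistics and crossover probability below are illustrative only and are not the MIC1/MIC2 designs.

```python
import random

# Toy 4-symbol alphabet: a prefix-free variable-length code and a 2-bit fixed-length code.
VLC = {"a": "0", "b": "10", "c": "110", "d": "111"}
FLC = {"a": "00", "b": "01", "c": "10", "d": "11"}

def encode(msg, code):
    return "".join(code[s] for s in msg)

def decode(bits, code):
    inv, out, cur = {v: k for k, v in code.items()}, [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return out

def bsc(bits, p, rng):
    """Flip each bit independently with probability p (binary symmetric channel)."""
    return "".join(b if rng.random() > p else ("1" if b == "0" else "0") for b in bits)

def symbol_errors(msg, decoded):
    """Positional symbol mismatches plus any length difference."""
    return sum(x != y for x, y in zip(msg, decoded)) + abs(len(msg) - len(decoded))

if __name__ == "__main__":
    rng = random.Random(0)
    msg = rng.choices("abcd", weights=[8, 4, 2, 2], k=10000)
    p = 1e-3                                   # BSC crossover probability (illustrative)
    for name, code in (("variable-length", VLC), ("fixed-length", FLC)):
        noisy = bsc(encode(msg, code), p, rng)
        print(f"{name:15s} rate={len(encode(msg, code)) / len(msg):.2f} b/sym  "
              f"symbol errors={symbol_errors(msg, decode(noisy, code))}")
```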
Citations: 4
Text compression by context tree weighting
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582062
J. Åberg, Y. Shtarkov
The results of an experimental study of different modifications of the context tree weighting algorithm are described. In particular, the combination of this algorithm with the well-known PPM approach is studied. For one of the considered modifications, the average coding rate on the Calgary Corpus decreases by 0.091 bits compared with PPMD.
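For reference, the core of context tree weighting on a binary alphabet is the recursion P_w = ½·P_e + ½·P_w(child 0)·P_w(child 1) over the tree of past contexts, with P_e the Krichevsky-Trofimov estimate at each node. The sketch below computes the resulting weighted code length of a bit string for a small depth; it is the textbook recursion (in log domain), not the modified estimators or the PPM combination studied in the paper, and the test source and depth are arbitrary.

```python
import math
import random
from collections import defaultdict

def kt_log2(zeros, ones):
    """log2 of the Krichevsky-Trofimov block probability for given 0/1 counts."""
    lp = 0.0
    for i in range(zeros):
        lp += math.log2(i + 0.5)
    for j in range(ones):
        lp += math.log2(j + 0.5)
    for k in range(zeros + ones):
        lp -= math.log2(k + 1)
    return lp

def log2_mix(a, b):
    """log2(0.5*2**a + 0.5*2**b), computed stably."""
    m, n = max(a, b), min(a, b)
    return m + math.log2(1.0 + 2.0 ** (n - m)) - 1.0

def ctw_code_length(bits, depth=3):
    """Weighted code length (in bits) of a 0/1 list under a depth-`depth` CTW model.

    The first `depth` bits serve as initial context and are not coded.
    Contexts are tuples of past bits, most recent bit first.
    """
    counts = defaultdict(lambda: [0, 0])
    for t in range(depth, len(bits)):
        for d in range(depth + 1):
            ctx = tuple(bits[t - d:t][::-1])
            counts[ctx][bits[t]] += 1

    def log2_pw(ctx):
        z, o = counts[ctx]
        le = kt_log2(z, o)                      # local KT estimate at this node
        if len(ctx) == depth:
            return le                           # leaf node: no further mixing
        return log2_mix(le, log2_pw(ctx + (0,)) + log2_pw(ctx + (1,)))

    return -log2_pw(())

if __name__ == "__main__":
    rng = random.Random(4)
    bits = [0]
    for _ in range(2000):                       # sticky binary Markov source
        bits.append(bits[-1] if rng.random() < 0.9 else 1 - bits[-1])
    print(f"CTW weighted code length: {ctw_code_length(bits):.1f} bits "
          f"for {len(bits)} source bits")
```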
Citations: 15
An analytical treatment of channel-induced distortion in run length coded subbands
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.581965
J. Garcia-Frías, J. Villasenor
We present an analytical framework for describing the distortion in an image communication system that includes wavelet transformation, uniform scalar quantization, run length coding, entropy coding, forward error control, and transmission over a binary symmetric channel. Simulations performed using ideal source models as well as real image subbands confirm the accuracy of the distortion description. The resulting equations can be used to choose channel code rates in an unequal error protection scheme in which subbands are protected according to their importance.
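A much-simplified version of the same exercise, offered only as an illustration of matching an analytical distortion expression against simulation: a uniform scalar quantizer whose natural-binary index is sent over a binary symmetric channel. Assuming roughly uniform, independent index bits, the channel-induced MSE is approximately Δ²·p·(4^B − 1)/3 for step Δ, B index bits and crossover probability p; the sketch compares this to a Monte Carlo estimate. The paper's full subband/run-length analysis is not attempted here.

```python
import numpy as np

def channel_mse_analytic(step, n_bits, p):
    """Approximate channel-induced MSE for a uniform scalar quantizer whose
    natural-binary index is sent over a BSC with crossover probability p.
    Assumes (roughly) uniform, independent index bits: flipping bit k moves
    the reconstruction by 2**k quantizer steps, and cross terms average out."""
    return step ** 2 * p * (4 ** n_bits - 1) / 3.0

def channel_mse_monte_carlo(step, n_bits, p, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    levels = 2 ** n_bits
    x = rng.uniform(0, levels * step, n)                  # source uniform on the quantizer range
    idx = np.minimum((x / step).astype(int), levels - 1)  # quantizer index
    flips = rng.random((n, n_bits)) < p                   # independent bit flips on the BSC
    mask = (flips * (2 ** np.arange(n_bits))).sum(axis=1).astype(int)
    noisy_idx = idx ^ mask                                # XOR applies exactly those flips
    return step ** 2 * float(((noisy_idx - idx) ** 2).mean())

if __name__ == "__main__":
    step, n_bits = 1.0, 4
    for p in (1e-3, 1e-2, 5e-2):
        print(f"p={p:g}:  Monte Carlo {channel_mse_monte_carlo(step, n_bits, p):7.4f}"
              f"   analytic {channel_mse_analytic(step, n_bits, p):7.4f}")
```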
Citations: 14
Library-based coding: a representation for efficient video compression and retrieval
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.581989
N. Vasconcelos, A. Lippman
The ubiquity of networking and computational capacity associated with the new communications media unveils a universe of new requirements for image representation. Among these requirements is the ability of the representation used for coding to support higher-level tasks such as content-based retrieval. We explore the relationships between probabilistic modeling and data compression to introduce a representation, library-based coding, which, by enabling retrieval in the compressed domain, satisfies this requirement. Because it contains an embedded probabilistic description of the source, this new representation allows the construction of good inference models without compromising compression efficiency, leads to very efficient procedures for query and retrieval, and provides a framework for higher-level tasks such as the analysis and classification of video shots.
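A toy sketch of the general flavour, not the authors' probabilistic representation: frames are coded as indices into a learned library of block prototypes, and retrieval is then performed directly on those indices (here by comparing index histograms), i.e. in the compressed domain. The k-means library, histogram-intersection score and synthetic data are all assumptions for the example.

```python
import numpy as np

def build_library(blocks, n_entries=16, n_iter=10, seed=0):
    """Toy 'library': k-means prototypes of image blocks (a stand-in for a
    learned probabilistic library of block sources)."""
    rng = np.random.default_rng(seed)
    lib = blocks[rng.choice(len(blocks), n_entries, replace=False)].astype(float)
    for _ in range(n_iter):
        d = ((blocks[:, None, :] - lib[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for k in range(n_entries):
            members = blocks[assign == k]
            if len(members):
                lib[k] = members.mean(0)
    return lib

def encode_indices(blocks, lib):
    """Compressed-domain representation: nearest library entry per block."""
    d = ((blocks[:, None, :] - lib[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

def retrieve(query_idx, database_idx, n_entries):
    """Rank database items by histogram intersection of library indices,
    i.e. retrieval directly on the compressed representation."""
    qh = np.bincount(query_idx, minlength=n_entries).astype(float)
    qh /= qh.sum()
    scores = []
    for idx in database_idx:
        h = np.bincount(idx, minlength=n_entries).astype(float)
        h /= h.sum()
        scores.append(float(np.minimum(qh, h).sum()))
    return np.argsort(scores)[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    # Three synthetic 'shots', each a set of 4x4 blocks with its own statistics.
    shots = [rng.normal(m, 10, (256, 16)) for m in (0, 60, 120)]
    lib = build_library(np.concatenate(shots))
    db = [encode_indices(s, lib) for s in shots]
    query = encode_indices(rng.normal(60, 10, (256, 16)), lib)   # resembles shot 1
    print("ranking (best first):", retrieve(query, db, 16))
```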
Citations: 39
Capturing global redundancy to improve compression of large images
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.581967
B. L. Kess, S. Reichenbach
A Source Specific Model for Global Earth Data (SSM-GED) is a lossless compression method for large images that captures global redundancy in the data and achieves a significant improvement over CALIC and DCXT-BT/CARP, two leading lossless compression schemes. The Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) data, which contains 662 Megabytes (MB) per band, is an example of a large data set that requires decompression of regions of the data. For this reason, SSM-GED compresses the AVHRR data as a collection of subwindows. This approach defines the statistical parameters for the model prior to compression. Unlike universal models that assume no a priori knowledge of the data, SSM-GED captures global redundancy that exists among all of the subwindows of data. The overlap in parameters among subwindows of data enables SSM-GED to improve the compression rate by increasing the number of parameters and maintaining a small model cost for each subwindow of data.
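The trade-off the abstract describes, paying a model cost per subwindow versus sharing parameters across subwindows, can be illustrated with a toy order-0 calculation: total cost = ideal code length of the data under the model plus the bits needed to transmit the model. The per-symbol model precision, the smoothing, and the synthetic subwindows below are assumptions; SSM-GED's actual source-specific modelling is considerably richer.

```python
import numpy as np

def code_length_bits(window, probs, eps=1e-12):
    """Ideal code length of `window` under an order-0 symbol distribution `probs`."""
    counts = np.bincount(window, minlength=len(probs))
    return float(-(counts * np.log2(probs + eps)).sum())

def model_cost_bits(n_symbols, precision_bits=8):
    """Naive cost of transmitting one order-0 model: one fixed-point
    probability per symbol (an illustrative accounting only)."""
    return n_symbols * precision_bits

def compare(windows, n_symbols=256):
    # Strategy 1: an independent model per subwindow (model cost paid each time).
    per_window = 0.0
    for w in windows:
        p = (np.bincount(w, minlength=n_symbols) + 0.5) / (len(w) + 0.5 * n_symbols)
        per_window += code_length_bits(w, p) + model_cost_bits(n_symbols)
    # Strategy 2: one shared model estimated from all subwindows (cost paid once).
    all_data = np.concatenate(windows)
    p = (np.bincount(all_data, minlength=n_symbols) + 0.5) / (len(all_data) + 0.5 * n_symbols)
    shared = sum(code_length_bits(w, p) for w in windows) + model_cost_bits(n_symbols)
    return per_window, shared

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # Subwindows drawn from the same skewed source: plenty of shared structure.
    windows = [rng.geometric(0.05, 4096).clip(1, 256) - 1 for _ in range(64)]
    pw, sh = compare(windows)
    print(f"independent models: {pw / 8 / 1024:.1f} KiB   shared model: {sh / 8 / 1024:.1f} KiB")
```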
Citations: 2