
Data Compression Conference, 1992: Latest Publications

Perceptually based coding of monochrome and color still images
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227467
T. Reed, V. Algazi, G. Ford, I. Hussain
The authors describe a perceptually based approach to still-image coding and report results. The approach is based on differential quantization, in which a smooth approximation is subtracted from the image prior to quantization. They consider two such approximations. The first is a spline approximation obtained from a sparse, fixed subsampled array of the image. The second segments the image into piecewise-constant regions on the basis of local image activity. Both approximations yield remainder (residual) images in which large errors are localized in high-activity portions of the image. Because of visual masking, the remainder image can be coarsely quantized without visible impairment to the reconstructed image. The coarsely quantized remainder is then encoded losslessly. In such a perceptually based encoding method, the mean square error depends on the activity of the image.
Citations: 7
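A minimal 1-D sketch of the differential-quantization idea the abstract describes: subtract a smooth approximation from the signal, coarsely quantize only the remainder, and reconstruct. The piecewise-linear approximation and the specific step sizes here are illustrative assumptions, not the paper's spline scheme.

```python
def smooth_approximation(signal, step):
    """Piecewise-linear approximation from a sparse, fixed subsampled grid."""
    approx = []
    for i in range(len(signal)):
        lo = (i // step) * step
        hi = min(lo + step, len(signal) - 1)
        if hi == lo:
            approx.append(float(signal[lo]))
        else:
            t = (i - lo) / (hi - lo)
            approx.append(signal[lo] * (1 - t) + signal[hi] * t)
    return approx

def quantize(value, step_size):
    """Coarse uniform quantizer applied to the remainder only."""
    return round(value / step_size) * step_size

def encode(signal, grid=4, qstep=8):
    approx = smooth_approximation(signal, grid)
    remainder = [s - a for s, a in zip(signal, approx)]
    return approx, [quantize(r, qstep) for r in remainder]

def decode(approx, quantized_remainder):
    return [a + r for a, r in zip(approx, quantized_remainder)]

signal = [10, 12, 15, 30, 80, 82, 81, 79, 20, 18, 17, 16]
approx, qrem = encode(signal)
recon = decode(approx, qrem)
# Reconstruction error is bounded by half the quantizer step (here 4).
max_err = max(abs(s - r) for s, r in zip(signal, recon))
```

Note how the large remainder values land at the sharp transitions (high-activity regions), which is exactly where visual masking permits coarse quantization.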
Experiments using minimal-length encoding to solve machine learning problems
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227445
A. Gammerman, T. Bellotti
Describes a system called Emily, designed to implement the minimal-length encoding principle for induction, and a series of experiments that were carried out with some success on that system. Emily is based on the principle that concepts (i.e., theories or explanations) over a set of data can be formulated by minimally encoding that data. Thus, a learning problem can be solved by minimising its description.
Citations: 0
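A toy illustration of the minimal-length encoding principle (not Emily's actual encoding): between the hypothesis "the data is literal" and the hypothesis "the data is a unit repeated n times", prefer whichever gives the shorter total description. The bit costs below are simplifying assumptions.

```python
import math

def literal_cost(data):
    """Description length if we transmit the data verbatim (8 bits/char)."""
    return 8 * len(data)

def repeat_rule_cost(unit, count):
    """Description length of the theory 'unit repeated count times':
    the unit itself plus an integer for the count."""
    return 8 * len(unit) + math.ceil(math.log2(count + 1))

def induce(data):
    """Minimal-length induction: return the hypothesis with the
    shortest total description, as ('literal', data) or
    ('repeat', unit, count), together with its cost in bits."""
    best, best_cost = ('literal', data), literal_cost(data)
    for ulen in range(1, len(data) // 2 + 1):
        unit = data[:ulen]
        count, rem = divmod(len(data), ulen)
        if rem == 0 and unit * count == data:
            cost = repeat_rule_cost(unit, count)
            if cost < best_cost:
                best, best_cost = ('repeat', unit, count), cost
    return best, best_cost
```

On highly regular data the repetition "theory" wins by a wide margin; on patternless data the literal hypothesis remains minimal, mirroring the idea that learning succeeds exactly when the data compresses.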
Coding for compression in full-text retrieval systems
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227474
Alistair Moffat, J. Zobel
Witten, Bell and Nevill (see ibid., p.23, 1991) have described compression models for use in full-text retrieval systems. The authors discuss other coding methods for use with the same models, and give results showing that their scheme yields virtually identical compression while decoding more than forty times faster. One of the main features of their implementation is the complete absence of arithmetic coding; this is, in part, the reason for the high speed. The implementation is also particularly suited to slow devices such as CD-ROM, in that answering a query requires one disk access for each term in the query and one disk access for each answer. All words and numbers are indexed, and there are no stop words. They have built two compressed databases.
Citations: 42
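The abstract does not spell out the codes used. As a flavour of the kind of fast, arithmetic-coding-free static coding common in full-text index compression, here is Elias gamma coding of the gaps between sorted document numbers; this is a representative technique, not necessarily the authors' scheme.

```python
def gamma_encode(n):
    """Elias gamma code for a positive integer:
    unary prefix giving the bit length, then the binary digits."""
    b = bin(n)[2:]
    return '0' * (len(b) - 1) + b

def gamma_decode(bits, pos=0):
    """Decode one gamma codeword starting at bit position pos."""
    zeros = 0
    while bits[pos] == '0':
        zeros += 1
        pos += 1
    n = int(bits[pos:pos + zeros + 1], 2)
    return n, pos + zeros + 1

def encode_postings(doc_ids):
    """Store gaps between sorted doc ids (ids start at 1);
    frequent terms have small gaps and so get short codes."""
    gaps = [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]
    return ''.join(gamma_encode(g) for g in gaps)

def decode_postings(bits):
    ids, pos, last = [], 0, 0
    while pos < len(bits):
        g, pos = gamma_decode(bits, pos)
        last += g
        ids.append(last)
    return ids
```

Decoding is pure bit shuffling with no multiplications or range maintenance, which is the general reason such static codes decode far faster than arithmetic coding.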
Efficient two-dimensional compressed matching
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227453
A. Amir, Gary Benson
Digitized images are known to be extremely space-consuming. However, regularities in the images can often be exploited to reduce the necessary storage, so many systems store images in compressed form. The authors propose that compression be used as a time-saving tool, in addition to its traditional role of saving space. They introduce a new pattern-matching paradigm, compressed matching: a text array T and a pattern array P are given in compressed forms c(T) and c(P), and all appearances of P in T are sought without decompressing T. This achieves a search time that is sublinear in the size |T| of the uncompressed text. They show that for two-dimensional run-length compression there is an O(|c(T)| log|P| + |P|), i.e. almost optimal, algorithm. The algorithm uses a novel multidimensional pattern-matching technique, two-dimensional periodicity analysis.
Citations: 176
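A one-dimensional sketch of the compressed-matching idea: count occurrences of a pattern in a run-length compressed text by comparing runs, never expanding the text. This naive O(|c(T)|·|c(P)|) scan only conveys the paradigm; it is far simpler than the paper's almost-optimal two-dimensional algorithm.

```python
def rle(s):
    """Run-length encode a string into (char, count) runs."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return [(c, n) for c, n in runs]

def compressed_count(text_runs, pat_runs):
    """Count occurrences of the pattern in the text, both in RLE form,
    without materializing the uncompressed text."""
    k = len(pat_runs)
    if k == 1:  # single-run pattern can slide inside any long-enough run
        c, n = pat_runs[0]
        return sum(m - n + 1 for ch, m in text_runs if ch == c and m >= n)
    (fc, fn), (lc, ln) = pat_runs[0], pat_runs[-1]
    total = 0
    for i in range(len(text_runs) - k + 1):
        # First pattern run must be a suffix of text run i, the last a
        # prefix of text run i+k-1, and the middle runs must match exactly.
        if (text_runs[i][0] == fc and text_runs[i][1] >= fn
                and text_runs[i + k - 1][0] == lc
                and text_runs[i + k - 1][1] >= ln
                and all(text_runs[i + j] == pat_runs[j]
                        for j in range(1, k - 1))):
            total += 1
    return total
```

The running time depends on the number of runs, not on the decompressed length, which is the source of the sublinear search time claimed in the abstract.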
Random access in Huffman-coded files
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227444
G. Jacobson
Presents a technique for building an index into a Huffman-coded file that permits efficient random access to the encoded data. The technique makes it possible to find the starting position of the jth symbol of the uncompressed file, within an n-bit compressed file, using O(log n) bit-examinations of the compressed file plus its index. Furthermore, the size of the index is o(n) bits; in other words, the ratio of the space occupied by the index to the space occupied by the data approaches zero as the length of the data file grows without bound.
Citations: 9
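A simple sampled-index sketch of random access into prefix-coded data: record the bit offset of every kth symbol, then to reach symbol j jump to the nearest sample and decode at most k-1 extra symbols. The code table and sampling scheme are illustrative assumptions, not the paper's o(n)-bit construction.

```python
# A hypothetical prefix-free code table, for illustration only.
CODE = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
DECODE = {v: k for k, v in CODE.items()}

def compress_with_index(text, k=4):
    """Encode text as a bit string and record the bit offset
    of every kth symbol."""
    pieces, index, pos = [], [], 0
    for i, ch in enumerate(text):
        if i % k == 0:
            index.append(pos)
        cw = CODE[ch]
        pieces.append(cw)
        pos += len(cw)
    return ''.join(pieces), index

def symbol_at(bits, index, j, k=4):
    """Random access: jump to the sampled offset for block j//k,
    then decode j % k + 1 symbols to reach symbol j."""
    pos = index[j // k]
    ch = None
    for _ in range(j % k + 1):
        cw = ''
        while cw not in DECODE:  # extend until a full codeword is read
            cw += bits[pos]
            pos += 1
        ch = DECODE[cw]
    return ch
```

Storing only every kth offset keeps the index small relative to the data while bounding the amount of sequential decoding per access, which is the trade-off the abstract's o(n)-bit index formalizes.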
Improving search for tree-structured vector quantization
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227447
Jianhua Lin, J. Storer
The authors analyze the approximate performance of tree search and provide tight upper bounds on the error resulting from tree search for a single input vector. These bounds are not encouraging, but fortunately the performance of tree-structured VQ in practice does not seem to be this bad. From the analysis they derive a simple heuristic to improve the approximation of tree search. The strategy is to identify, for each code vector, some of its closest neighboring code vectors as determined by the partition. After a code vector is found for an input vector by tree search, the closest neighboring code vectors are then searched for a better match. Unfortunately, the average number of neighboring code vectors of a given code vector can be as large as the total number of code vectors, so the performance improvement depends on how many code vectors are searched. Experimental results show that searching a number of neighbors logarithmic in the size of the codebook provides significant performance gain while preserving the asymptotic search-time complexity.
Citations: 2
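A small sketch of the heuristic: greedy tree descent can commit to the wrong branch, and re-checking the leaf's precomputed neighbors recovers the true nearest code vector. The tiny 1-D codebook and tree below are hypothetical, chosen so the greedy descent visibly misses.

```python
def dist2(u, v):
    """Squared Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def tree_search(node, x):
    """Greedy descent: at each level follow the nearer child centroid."""
    while 'children' in node:
        node = min(node['children'], key=lambda c: dist2(c['centroid'], x))
    return node  # a leaf holding one code vector

def refined_search(tree, neighbors, x):
    """Tree search, then re-check the leaf's precomputed nearest
    neighboring code vectors (the heuristic from the abstract)."""
    leaf = tree_search(tree, x)
    candidates = [leaf['centroid']] + neighbors[tuple(leaf['centroid'])]
    return min(candidates, key=lambda c: dist2(c, x))

def leaf(v):
    return {'centroid': v}

# Hypothetical codebook {0, 3, 4, 10} arranged as a two-level tree.
tree = {'children': [
    {'centroid': [1.5], 'children': [leaf([0.0]), leaf([3.0])]},
    {'centroid': [7.0], 'children': [leaf([4.0]), leaf([10.0])]},
]}
# Precomputed neighbor lists (here simply the adjacent code vectors).
neighbors = {(0.0,): [[3.0]], (3.0,): [[0.0], [4.0]],
             (4.0,): [[3.0], [10.0]], (10.0,): [[4.0]]}

x = [3.6]
greedy = tree_search(tree, x)['centroid']   # descent commits to the left branch
best = refined_search(tree, neighbors, x)   # neighbor check finds the true nearest
```

For x = 3.6 the root sends the search left (toward centroid 1.5), yielding code vector 3, even though 4 is closer; the neighbor list of 3 contains 4, so the refined search corrects the result at the cost of a few extra distance computations.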
Lossless interframe compression of medical images
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227456
Xiaolin Wu, Yonggang Fang
A sequence of medical images is converted to a four-dimensional volume of bits using Gray code. An interframe-sequential, intraframe-interlacing prediction scheme is then used for reversible image-sequence compression. Interframe decorrelation yields higher compression ratios than current intraframe compression methods.
Citations: 2
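The first step of the abstract, converting pixel intensities to Gray code and splitting them into bit planes, can be sketched directly. Gray coding ensures that consecutive intensity values differ in exactly one bit, so smooth gradients disturb only one plane at a time and each plane stays highly compressible.

```python
def to_gray(n):
    """Binary-reflected Gray code of a non-negative integer."""
    return n ^ (n >> 1)

def from_gray(g):
    """Inverse transform: prefix XOR of the bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def bit_planes(gray_pixels, bits=8):
    """Split Gray-coded pixels into `bits` binary planes (plane 0 = LSB).
    Stacking these planes across frames gives the 4-D volume of bits."""
    return [[(p >> b) & 1 for p in gray_pixels] for b in range(bits)]

row = [100, 101, 102, 103, 104]          # a smooth intensity ramp
gray = [to_gray(p) for p in row]
planes = bit_planes(gray)
```

With plain binary, the step from 103 to 104 flips four bits at once (01100111 to 01101000); under Gray code every step flips exactly one, which is what makes the bit-plane prediction scheme effective.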
Constructing word-based text compression algorithms
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227475
R. Horspool, G. Cormack
Text compression algorithms are normally defined in terms of a source alphabet Sigma of 8-bit ASCII codes. The authors consider instead choosing Sigma to be an alphabet whose symbols are the words of English or, more generally, alternating maximal strings of alphanumeric and non-alphanumeric characters. Such a compression algorithm can exploit longer-range correlations between words and thus achieve better compression. The large size of Sigma leads to some implementation problems, but these are overcome in constructing word-based LZW, word-based adaptive Huffman, and word-based context-modelling compression algorithms.
Citations: 96
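The word alphabet the abstract defines, alternating maximal alphanumeric and non-alphanumeric strings, is easy to make concrete. The split is lossless: concatenating the tokens reproduces the text exactly, which is what lets any of the word-based back ends (LZW, adaptive Huffman, context modelling) stay reversible.

```python
import re

# One symbol of Sigma is either a maximal run of alphanumerics
# or a maximal run of everything else; the two kinds alternate.
TOKEN = re.compile(r'[A-Za-z0-9]+|[^A-Za-z0-9]+')

def tokenize(text):
    """Split text into the alternating word/non-word symbols of Sigma."""
    return TOKEN.findall(text)
```

A back-end model then assigns codes to these tokens instead of to single bytes, capturing correlations such as "the" being followed by a single space far more often than by any other separator.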
Possible harmonic-wavelet hybrids in image compression
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227462
Michael Rollins, F. Carden
The paper describes image compression research at New Mexico State University exploring the possibility of combining multi-resolution and harmonic analysis in signal decomposition. This reflects the fact that most images contain both local and global characteristics, and that an ideal compression system should be able to handle both types of feature. A hybrid is proposed and discussed.
Citations: 2
Transpose coding on the systolic array
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227465
L. M. Stauffer, D. Hirschberg
The authors present systolic-array implementations of transpose coding, which uses an alternative self-organizing-list strategy but is otherwise similar to move-to-front coding. They present implementations for fixed-length word lists that provide improved system bandwidth by accelerating transpose coding.
Citations: 3
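The two list-update rules can be contrasted in a few lines (a sequential sketch; the paper's contribution is the systolic-array hardware, not these rules themselves). Move-to-front promotes an accessed symbol to the head of the list; the transpose heuristic swaps it only one position forward, adapting more conservatively.

```python
def mtf_ranks(symbols, alphabet):
    """Move-to-front: the accessed symbol jumps to the head of the list.
    Emits the symbol's position before the update."""
    lst = list(alphabet)
    out = []
    for s in symbols:
        i = lst.index(s)
        out.append(i)
        lst.insert(0, lst.pop(i))
    return out

def transpose_ranks(symbols, alphabet):
    """Transpose heuristic: the accessed symbol swaps one place forward,
    so a single access moves it only slightly toward the front."""
    lst = list(alphabet)
    out = []
    for s in symbols:
        i = lst.index(s)
        out.append(i)
        if i > 0:
            lst[i - 1], lst[i] = lst[i], lst[i - 1]
    return out
```

On the input "ccc" over alphabet "abc", move-to-front emits ranks 2, 0, 0 while transpose emits 2, 1, 0: the single-position swaps take longer to reward a burst, but they are also a purely local operation, which is what makes the strategy attractive for a systolic array.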