
Data Compression Conference, 1992: latest publications

Parallel algorithms for optimal compression using dictionaries with the prefix property
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227476
S. Agostino, J. Storer
The authors study parallel algorithms for lossless data compression via textual substitution. Dynamic dictionary compression is known to be P-complete; however, if the dictionary is given in advance, they show that compression can be efficiently parallelized, and a computational advantage is obtained when the dictionary has the prefix property. The approach generalizes to the sliding-window method, where the dictionary is a window that passes continuously from left to right over the input string.
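As a rough illustration of the parsing step the paper parallelizes, here is a minimal sequential sketch. The dictionary below is a hypothetical toy example (not from the paper), chosen to be prefix-closed; the paper's actual contribution is the parallel evaluation, which this sketch does not attempt.

```python
def greedy_parse(text, dictionary):
    """Greedy longest-match parse of `text` against a static dictionary.

    When the dictionary has the prefix property (every prefix of an
    entry is itself an entry), a greedy longest-match parse is an
    optimal parse -- the structural fact that lets the compression
    step be split efficiently across processors.
    """
    phrases = []
    i = 0
    while i < len(text):
        j = i + 1  # assumes every single symbol is in the dictionary
        # extend the match while the longer prefix is still an entry
        while j < len(text) and text[i:j + 1] in dictionary:
            j += 1
        phrases.append(text[i:j])
        i = j
    return phrases

# toy prefix-closed dictionary, for illustration only
D = {"a", "b", "ab", "aba", "abab"}
print(greedy_parse("ababb", D))  # ['abab', 'b']
```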
Citations: 32
Real time implementation of pruned tree search vector quantization
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227466
A. Madisetti, R. Jain, R. Baker
The paper discusses the design of a CMOS integrated circuit for real-time vector quantization (VQ) of images at MPEG rates. The chip is designed as a slave processor that can implement binary, non-binary, and pruned tree search VQ algorithms. Inputs include the image source vectors, the VQ codevectors, and external control signals that direct the search. The chip outputs the index of the codevector that best approximates the input in a mean-square-error sense. The layout was generated using a 1.2 µm CMOS library and measures 5.76 × 6.6 mm². Critical-path simulation with SPICE indicates a maximum clock rate of 40 MHz.
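The tree-search idea the chip implements can be sketched in software. The two-level tree below is a hypothetical toy codebook (not from the paper); at each level the search compares the input against the two child test vectors in the mean-square-error sense and descends toward the closer one, so a codebook of N leaves needs only O(log N) distance computations.

```python
import numpy as np

def tree_search_vq(x, node):
    """Descend a binary search tree of test vectors, choosing at each
    level the child closer to `x` in squared error, and return the
    codevector index stored at the leaf."""
    while "children" in node:
        left, right = node["children"]
        d_left = float(np.sum((x - left["vector"]) ** 2))
        d_right = float(np.sum((x - right["vector"]) ** 2))
        node = left if d_left <= d_right else right
    return node["index"]

# hypothetical two-level tree-structured codebook, for illustration
tree = {"children": (
    {"vector": np.array([0.0, 0.0]), "children": (
        {"vector": np.array([0.0, 0.0]), "index": 0},
        {"vector": np.array([0.0, 1.0]), "index": 1})},
    {"vector": np.array([1.0, 1.0]), "children": (
        {"vector": np.array([1.0, 0.0]), "index": 2},
        {"vector": np.array([1.0, 1.0]), "index": 3})},
)}
print(tree_search_vq(np.array([0.9, 0.8]), tree))  # 3
```

A pruned tree simply omits subtrees whose test vectors are rarely the closest, trading a small distortion increase for a smaller codebook.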
Citations: 1
A forward-mapping realization of the inverse discrete cosine transform
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227459
L. McMillan, L. Westover
The paper presents a new realization of the inverse discrete cosine transform (IDCT). It exploits both the decorrelation properties of the discrete cosine transform (DCT) and the quantization process that is frequently applied to the DCT's resultant coefficients. This formulation has several advantages over previous approaches, including the elimination of multiplies from the central loop of the algorithm and its adaptability to incremental evaluation. The technique significantly reduces the computational requirements of the IDCT, enabling a software-based implementation to run at rates previously achievable only with dedicated hardware.
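The forward-mapping idea can be illustrated as follows: instead of evaluating the 2-D IDCT sum at every output pixel, scatter each nonzero quantized coefficient's precomputed basis image into the output block, so the work scales with the number of nonzero coefficients rather than the number of pixels. This is only a minimal numeric sketch of that principle, not the paper's multiply-free incremental formulation.

```python
import numpy as np

def idct_basis(u, v, n=8):
    """Precompute the (u, v) DCT-II basis image for an n x n block."""
    def c(k):
        return np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
    xs = np.arange(n)
    col = c(u) * np.cos((2 * xs + 1) * u * np.pi / (2 * n))
    row = c(v) * np.cos((2 * xs + 1) * v * np.pi / (2 * n))
    return np.outer(col, row)

def forward_mapping_idct(coeffs):
    """Reconstruct a block by accumulating a scaled basis image per
    nonzero coefficient -- cheap when quantization has zeroed most
    of the coefficients, as it typically has."""
    n = coeffs.shape[0]
    out = np.zeros_like(coeffs, dtype=float)
    for u, v in zip(*np.nonzero(coeffs)):
        out += coeffs[u, v] * idct_basis(u, v, n)
    return out

# a DC-only block reconstructs to a constant image
C = np.zeros((8, 8))
C[0, 0] = 8.0
print(forward_mapping_idct(C)[0, 0])  # 1.0
```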
Citations: 50
Model based concordance compression
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227473
A. Bookstein, S. T. Klein, T. Raita
The authors discuss concordance compression using the framework now customary in compression theory. They begin by creating a mathematical model of concordance generation, and then use optimal compression engines, such as Huffman or arithmetic coding, to do the actual compression. It should be noted that in the context of a static information retrieval system, compression and decompression are not symmetrical tasks. Compression is done only once, while building the system, whereas decompression is needed during the processing of every query and directly affects the response time. One may thus use extensive and costly preprocessing for compression, provided reasonably fast decompression methods are possible. Moreover, compression is applied to the full files (text, concordance, etc.), but decompression is needed only for (possibly many) short pieces, which may be accessed at random by means of pointers to their exact locations. Therefore the use of adaptive methods based on tables that systematically change from the beginning to the end of the file is ruled out. However, their concern is less the speed of encoding or decoding than relating concordance compression conceptually to the modern approach of data compression, and testing the effectiveness of their models.
Citations: 15
The use of fractal theory in a video compression system
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227455
Maaruf Ali, C. Papadopoulos, T. Clarkson
The paper describes how fractal coding theory may be applied to compress video images using an image resampling sequencer (IRS) in a video compression system on a modular image processing system. It describes the background theory of image coding using a form of fractal equation known as iterated function system (IFS) codes. The second part deals with the modular image processing system on which these operations are implemented. It briefly covers how IFS codes may be calculated. It is shown how the IRS and 2nd-order geometric transformations may be used to describe inter-frame changes to compress motion video.
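To see why a handful of IFS codes can stand in for a detailed image, consider the classic chaos-game rendering: repeatedly applying randomly chosen contractive affine maps to a point traces out the attractor the codes describe. The Sierpinski-triangle maps below are the standard textbook example, not codes from the paper.

```python
import random

def chaos_game(ifs_maps, n_points=10000, seed=0):
    """Render an IFS attractor: each code (a, b, c, d, e, f) is the
    affine map (x, y) -> (a*x + b*y + e, c*x + d*y + f); iterating a
    random choice of map converges onto the attractor."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n_points):
        a, b, c, d, e, f = rng.choice(ifs_maps)
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts

# three contractive maps whose attractor is the Sierpinski triangle
SIERPINSKI = [
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),
]
points = chaos_game(SIERPINSKI, n_points=2000)
```

Eighteen numbers thus encode an image of arbitrary resolution, which is the compression leverage fractal coding seeks; the hard part, as the paper notes, is computing IFS codes for a given image.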
Citations: 9
Optical techniques for image compression
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227478
J. Reif, A. Yoshida
Optical computing has recently become a very active research field. The advantage of optics is its capability of providing highly parallel operations in three-dimensional space. The authors propose optical architectures to execute various image compression techniques, optically implementing transform coding, vector quantization, and interframe coding. They show that many commonly used transform coding methods, for example the cosine transform, can be implemented by a simple optical system, and that the transform coding can be carried out in constant time. Most of the paper concerns a sophisticated optical system for vector quantization using holographic associative matching. Holographic associative matching provided by multiple-exposure holograms can offer advantageous techniques for vector-quantization-based compression schemes. Photorefractive crystals, which provide high-density recording in real time, are used as the holographic media. The reconstruction alphabet can be dynamically constructed through training or stored in the photorefractive crystal in advance. Encoding a new vector can be carried out by holographic associative matching in constant time. An extension to interframe coding is also discussed.
Citations: 9
Image reconstruction for hybrid video coding systems
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227458
Qin-Fan Zhu, Yao Wang, Leonard Shaw
The paper presents a new technique for image reconstruction from partially received information in hybrid video coding systems using DCT and motion-compensated prediction and interpolation. The technique exploits the smoothness of typical video signals by requiring that the reconstructed samples connect smoothly with their adjacent samples, both spatially and temporally. This is achieved by minimizing the differences between neighboring pixels in the current as well as adjacent frames. The optimal solution is obtained through three linear transformations. This approach can yield more satisfactory results than existing algorithms, especially for images with large motions or scene changes.
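The smoothness criterion can be illustrated in its spatial-only form: minimizing the sum of squared differences between a lost pixel and its four neighbours is equivalent to making each lost pixel the mean of its neighbours, which a simple iteration reaches. This sketch ignores the temporal terms and the closed-form linear-transformation solution the paper derives.

```python
import numpy as np

def reconstruct_smooth(img, mask, iters=500):
    """Fill masked (lost) pixels by iteratively replacing each with
    the mean of its four neighbours, minimising spatial squared
    differences; received pixels are left untouched."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()  # crude initialisation
    for _ in range(iters):
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]      # update only the lost pixels
    return out

# a lost pixel inside a flat region is recovered exactly
img = np.full((5, 5), 7.0)
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
print(reconstruct_smooth(img, mask)[2, 2])  # 7.0
```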
Citations: 15
On the JPEG model for lossless image compression
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227464
G. Langdon, A. Gulati, E. Seiler
The JPEG lossless arithmetic coding algorithm and a predecessor algorithm called Sunset both employ adaptive arithmetic coding with the context model and parameter-reduction approach of Todd et al. The authors compare the Sunset and JPEG context models for the lossless compression of gray-scale images and derive new algorithms based on the strengths of each. The context-model and binarization-tree variations are compared in terms of their speed (the number of binary encodings required per test image) and their compression gain. In this study, the Bostelmann (1974) technique is studied for use at all resolutions, whereas in arithmetic-coded JPEG lossless the technique is applied only at the 16-bit-per-pixel resolution.
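For orientation on what such context models operate over: JPEG lossless first predicts each pixel from causal neighbours (one standard predictor is a + b - c, using the left, above, and above-left pixels), and the context-modelled arithmetic coder then codes the prediction residuals. The sketch below computes only that residual image, with zeros assumed outside the border; it does not implement the Sunset or JPEG context models themselves.

```python
import numpy as np

def lossless_residuals(img):
    """Prediction residuals under the a + b - c neighbour predictor
    (a = left, b = above, c = above-left; pixels outside the image
    are taken as 0).  Residuals cluster near zero on smooth images,
    which is what the adaptive arithmetic coder exploits."""
    img = img.astype(int)
    a = np.zeros_like(img); a[:, 1:] = img[:, :-1]      # left
    b = np.zeros_like(img); b[1:, :] = img[:-1, :]      # above
    c = np.zeros_like(img); c[1:, 1:] = img[:-1, :-1]   # above-left
    return img - (a + b - c)

# on a constant image, every residual except the first pixel is zero
flat = np.full((4, 4), 5)
print(np.count_nonzero(lossless_residuals(flat)))  # 1
```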
Citations: 44
Arithmetic coding for memoryless cost channels
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227472
S. Savari, R. Gallager
The authors analyze the expected delay for infinite-precision arithmetic codes and suggest a practical implementation that concentrates on the issue of delay.
Citations: 1
Textual image compression
Pub Date : 1992-03-24 DOI: 10.1109/DCC.1992.227477
I. Witten, T. Bell, M. Harrison, Mark L. James, Alistair Moffat
The authors describe a method for lossless compression of images that contain predominantly typed or typeset text, which they call textual images. An increasingly popular application is document archiving, where documents are scanned by a computer and stored electronically for later retrieval. The project was motivated by such an application: Trinity College in Dublin, Ireland, is archiving its 1872 printed library catalogues onto disk, and in order to preserve the exact form of the original document, pages are being stored as scanned images rather than being converted to text. The test images are taken from this catalogue. These typeset documents have a rather old-fashioned look and contain a wide variety of symbols from several different typefaces; the five test images used contain text in English, Flemish, Latin, and Greek, and include italics and small capitals as well as roman letters. The catalogue also contains Hebrew, Syriac, and Russian text.
Citations: 11