
Proceedings DCC '95 Data Compression Conference: Latest Publications

Video coding using 3 dimensional DCT and dynamic code selection
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515561
M. Bauer, K. Sayood
Summary only given. We address the quality issue, and present a method for improved coding of the 3D DCT coefficients. Performance gain is achieved through the use of dynamically selected multiple coding algorithms. The resulting performance is excellent, giving a compression ratio of greater than 100:1 for image reproduction. The process consists of stacking 8 frames and breaking the data into 8×8×8 pixel cubes. The three-dimensional DCT is applied to each cube. Each cube is then scanned in each dimension to determine if significant energy exists beyond the first two coefficients. Significance is determined with separate thresholds for each dimension. A single bit of side information is transmitted for each dimension of each cube to indicate whether more than two coefficients will be transmitted. The remaining coefficients of all cubes are reordered into a linear array such that the elements with the highest expected energies appear first and those with lower expected energies appear last. This tends to group coefficients with similar statistical properties for the most efficient coding. Eight different encoding methods are used to convert the coefficients into bits for transmission. The Viterbi algorithm is used to select the best coding method. The cost function is the number of bits that need to be sent. Each of the eight coding methods is optimized for a different range of values.
Citations: 5
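The cube-forming and per-dimension significance steps described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' coder: `frames_to_cubes`, `transform_cube`, and `dimension_flags` are hypothetical names, and the thresholds are placeholders.

```python
import numpy as np
from scipy.fft import dctn

def frames_to_cubes(frames):
    # Stack of 8 frames split into 8x8x8 pixel cubes; edge padding omitted.
    t, h, w = frames.shape
    assert t == 8 and h % 8 == 0 and w % 8 == 0
    return [frames[:, y:y+8, x:x+8]
            for y in range(0, h, 8) for x in range(0, w, 8)]

def transform_cube(cube):
    # Separable type-II DCT applied along all three axes (the 3D DCT).
    return dctn(cube, type=2, norm="ortho")

def dimension_flags(coeffs, thresholds=(1.0, 1.0, 1.0)):
    # One bit of side information per dimension: is there significant
    # energy beyond the first two coefficients along that axis?
    return [bool((np.delete(coeffs, [0, 1], axis=ax) ** 2).sum() > thr)
            for ax, thr in enumerate(thresholds)]

rng = np.random.default_rng(0)
frames = rng.standard_normal((8, 16, 16))   # toy 16x16 video, 8 frames
cubes = frames_to_cubes(frames)             # yields 4 cubes
coeffs = [transform_cube(c) for c in cubes]
flags = [dimension_flags(c) for c in coeffs]
```

Because the DCT here is orthonormal, each transformed cube preserves the energy of its pixel cube, which is what makes per-axis energy thresholds meaningful.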
Algorithm evaluation for the synchronous data compression standards
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515596
M. Maier
In association with an industry standardization effort, we have developed an evaluation procedure for compression algorithms for communication networks. The Synchronous Data Compression Consortium is a group of data transmission equipment makers who are promoting an interoperable standard for link layer compression. The target market is synchronous interconnection of routers and bridges for internetworking over the public digital transmission network. Compression is desirable for such links to better match their speed to that of the interconnected local area networks. But achievable performance is affected by the interaction of the algorithm, the networking protocols, and implementation details. The compression environment differs from traditional file compression in inducing a tradeoff between compression ratio, compression time, and the performance metric (network throughput). In addition, other parameters and behaviors are introduced, including robustness to data retransmission and multiple interleaved streams. Specifically, we have evaluated the following issues through both synchronous queuing and direct network simulation:
Citations: 0
Correction of fixed pattern background and restoration of JPEG compressed CCD images
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515534
M. Datcu, G. Schwarz, K. Schmidt, C. Reck
Summary form only given; substantially as follows. The present paper addresses the problem of the removal of the sensor background patterns, dark current and responsivity, from CCD images, when the uncorrected image was transmitted through a JPEG-like block transform coding system. The work is of particular interest for imaging systems that operate under severe hardware restrictions and require high accuracy, e.g. deep space cameras. The complexity of the problem comes from the aliasing of the image signal and CCD background patterns during the quantization in the transformed domain. The authors investigated several solutions and selected the optimal one based on three objectives: the radiometric accuracy, the visual quality, and the computational complexity. The solution selected for the background pattern removal and image restoration uses a combination of different methods: correction in the space domain and iterative regularization in both the space and DCT domains.
Citations: 0
Multiple-dictionary compression using partial matching
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515517
Dzung T. Hoang, Philip M. Long, J. Vitter
Motivated by the desire to find text compressors that compress better than existing dictionary methods, but run faster than PPM implementations, we describe methods for text compression using multiple dictionaries, one for each context of preceding characters, where the contexts have varying lengths. The context to be used is determined using an escape mechanism similar to that of PPM methods. We describe modifications of three popular dictionary coders along these lines and experiments evaluating their efficacy using the text files in the Calgary corpus. Our results suggest that modifying LZ77 along these lines yields an improvement in compression of about 4%, that modifying LZFG yields a compression improvement of about 8%, and that modifying LZW in this manner yields an average improvement on the order of 12%.
Citations: 6
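The idea of keeping one dictionary per preceding-character context can be illustrated with a toy LZW-style parse. This is a hypothetical sketch, not the paper's coders: no escape mechanism or bit output is modelled, and it only counts emitted phrases, where fewer phrases loosely corresponds to better compression.

```python
from collections import defaultdict

def lzw_phrases(text):
    # Plain LZW parse: return the number of phrases emitted.
    dictionary = {chr(i): i for i in range(256)}
    phrases, w = 0, ""
    for ch in text:
        if w + ch in dictionary:
            w += ch
        else:
            phrases += 1
            dictionary[w + ch] = len(dictionary)
            w = ch
    return phrases + (1 if w else 0)

def context_lzw_phrases(text):
    # Toy multi-dictionary variant: a separate LZW dictionary per
    # preceding character (order-1 context), in the spirit of the paper.
    dicts = defaultdict(lambda: {chr(i): i for i in range(256)})
    phrases, w, ctx = 0, "", ""
    for ch in text:
        d = dicts[ctx]              # dictionary chosen by the character
        if w + ch in d:             # preceding the current phrase
            w += ch
        else:
            phrases += 1
            d[w + ch] = len(d)
            ctx = w[-1] if w else ctx
            w = ch
    return phrases + (1 if w else 0)
```

For example, `lzw_phrases("abababab")` parses the string into 5 phrases; the context variant learns the `ba`/`bab` phrases inside the `'a'`-context dictionary instead of a single shared one.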
Coding gain of intra/inter-frame subband systems
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515559
G. Galvagno, G. Mian, R. Rinaldo
Summary form only given. Typical image sequence coders use motion compensation techniques in connection with coding of the motion compensated difference images (interframe coding). Moreover, the difference loop is initialized from time to time by intraframe coding of images. It is therefore important to have a procedure that allows one to evaluate the performance of a particular coding scheme: coding gain and rate-distortion figures are used in this work for this purpose. We present an explicit procedure to compute the coding gain for two-dimensional separable subband systems, both in the case of a uniform and a pyramid subband decomposition, and for the case of interframe coding. The technique operates in the signal domain and requires knowledge of the autocorrelation function of the input process. In the case of a separable subband system and image spectrum, the coding gain can be computed by combining the results relative to appropriately defined one-dimensional filtering schemes, thus making the technique very attractive in terms of computational complexity. We consider both the case of a uniform subband decomposition and of a pyramid decomposition. The developed procedure is applied to compute the subband coding gain for motion compensated signals in the case of images modeled as separable Markov processes: different filter banks are compared to each other and to transform coding. To gauge the effectiveness of motion compensation, we also compute the coding gain for intraframe images. We show that the results for the image models are in very good agreement with those obtained with real-world data.
Citations: 1
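The basic one-dimensional figure such procedures build on is the classic subband coding gain: the ratio of the arithmetic to the geometric mean of the subband variances, for a uniform decomposition with equal-bandwidth subbands. A minimal sketch of that textbook quantity, not the paper's full 2D/interframe procedure:

```python
import numpy as np

def subband_coding_gain(variances):
    # Coding gain over PCM for a uniform M-band split (Jayant & Noll):
    # arithmetic mean of subband variances / geometric mean.
    v = np.asarray(variances, dtype=float)
    return v.mean() / np.exp(np.log(v).mean())

# Equal variances: the split buys nothing, gain is exactly 1.
print(subband_coding_gain([2.0, 2.0, 2.0, 2.0]))
# Skewed variances: bits can be shifted to high-energy bands, gain > 1.
print(subband_coding_gain([8.0, 2.0, 1.0, 0.5]))
```

The more the input spectrum (and hence the subband variance profile) is skewed, the larger the gain, which is why the autocorrelation function of the input process is the key ingredient in the paper's computation.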
FFT based fast architecture & algorithm for discrete wavelet transforms
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515550
A. Sri-Krishna, C. Chu, M. Bayoumi
Summary form only given. A non-recursive (unlike classical dyadic decomposition) and fast Fourier transform based architecture for computing discrete wavelet transforms (DWT) of a one dimensional sequence is presented. The DWT coefficients at all resolutions can be generated simultaneously without waiting for generation of coefficients at a lower octave level. This architecture is faster than architectures proposed so far for DWT decomposition (which are implementations based on recursion) and can be fully pipelined. The complexity of the control circuits for this architecture is much lower as compared to implementations of recursive methods. Consider the computation of the DWT (four octaves) of a sequence. Recursive dyadic decomposition can be converted to a non-recursive method as shown. We can move all the decimators shown to the extreme right (towards the output end) and have a single filter and a single decimator in each path. We note that a decimator (of factor k) when so moved across a filter of length L will increase the length of the filter by a factor of k. Thus we will get first octave DWT coefficients by convolving the input sequence with a filter of length L and decimating the output by a factor of 2.
Citations: 0
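Moving a decimator across a filter is the standard "noble identity": decimating by k and then filtering with h equals filtering with h upsampled by k (the length grows roughly k-fold, as the abstract notes) and then decimating. A quick numerical check of that identity, assuming zero-padded (full) convolution:

```python
import numpy as np

def upsample_filter(h, k):
    # Insert k-1 zeros between taps: h_up[k*n] = h[n], so the filter
    # length grows from L to k*(L-1)+1 (the factor-k growth in the text).
    h_up = np.zeros(k * (len(h) - 1) + 1)
    h_up[::k] = h
    return h_up

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
h = np.array([0.25, 0.5, 0.25])   # illustrative lowpass taps
k = 2

# Path A: decimate by k first, then filter with h.
a = np.convolve(x[::k], h)
# Path B: filter with the k-upsampled filter, then decimate by k.
b = np.convolve(x, upsample_filter(h, k))[::k]
```

Both paths produce identical outputs, which is what lets the recursive dyadic tree be flattened into one (longer) filter plus one decimator per octave path.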
Accelerating fractal image compression by multi-dimensional nearest neighbor search
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515512
D. Saupe
In fractal image compression the encoding step is computationally expensive. A large number of sequential searches through a list of domains (portions of the image) are carried out while trying to find the best match for another image portion. Our theory developed here shows that this basic procedure of fractal image compression is equivalent to multi-dimensional nearest neighbor search. This result is useful for accelerating the encoding procedure in fractal image compression. The traditional sequential search takes linear time whereas the nearest neighbor search can be organized to require only logarithmic time. The fast search has been integrated into an existing state-of-the-art classification method thereby accelerating the searches carried out in the individual domain classes. In this case we record acceleration factors from 1.3 up to 11.5 depending on image and domain pool size with negligible or minor degradation in both image quality and compression ratio. Furthermore, as compared to plain classification our method is demonstrated to be able to search through larger portions of the domain pool without increasing the computation time.
Citations: 132
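The equivalence to multi-dimensional nearest neighbor search means the linear domain scan can be replaced by a space-partitioning structure with logarithmic expected query time. A sketch using a k-d tree; scipy's `cKDTree` stands in for whatever structure an implementation would choose, and the 8-dimensional vectors are placeholders for preprocessed domain/range blocks:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
domains = rng.standard_normal((5000, 8))   # stand-ins for domain blocks
queries = rng.standard_normal((100, 8))    # stand-ins for range blocks

def linear_nn(q, pool):
    # The traditional sequential search: O(N) per query.
    d = np.sum((pool - q) ** 2, axis=1)
    i = int(np.argmin(d))
    return i, float(np.sqrt(d[i]))

tree = cKDTree(domains)            # built once over the domain pool
dists, idxs = tree.query(queries)  # logarithmic expected time per query
```

Both searches return the same nearest domains; only the time to find them differs, which is the source of the reported 1.3x to 11.5x speedups.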
Wireless video coding system demonstration
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515558
J. Villasenor, R. Jain, B. Belzer, W. Boring, C. Chien, C. Jones, J. Liao, S. Molloy, S. Nazareth, B. Schoner, J. Short
Summary form only given. We have developed and present here a prototype point-to-point wireless video system that has been implemented using a combination of commercial components and custom hardware. The coding algorithm being used consists of subband decomposition using low-complexity, integer-coefficient filters, scalar quantization, and run-length and entropy coding. The prototype system consists of the following major components: spread spectrum radio with interface card and driver, compression board, and an NEC laptop and docking station which provide the PC bus slots and control. The compression algorithms are implemented on a board with a single 10000-gate FPGA. Prior to implementing the algorithms in hardware, a study was performed to resolve issues of word length and scaling, and to select quantization and run length parameters. It was determined that 16-bit precision in the wavelet transform stage is sufficient to prevent underflow and overflow provided that rescaling of data is correctly performed. After processing by the FPGA, the compressed video is transferred to the PC for transmission over the radio. A commercial serial card (PI Card) provides a synchronous serial interface to the radio. The serial controller chip used by this card supports several serial protocols, and thus the effect of these protocols on the data in a wireless environment can be tested.
Citations: 0
Modeling word occurrences for the compression of concordances
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515572
A. Bookstein, S. T. Klein, T. Raita
Summary form only given. Effective compression of a text-based information retrieval system involves compressing not only the text itself but also the concordance by which one accesses that text, which occupies an amount of storage comparable to the text itself. The concordance can be a rather complicated data structure, especially if it permits hierarchical access to the database. But one or more components of the hierarchy can usually be conceptualized as a bit-map. We conceptualize our bit-map as being generated as follows. At any bit-map site we are in one of two states: a cluster state (C), or a between-cluster state (B). In a given state, we generate a bit-map value of zero or one and, governed by the transition probabilities of the model, enter a new state as we move to the next bit-map site. Such a model has been referred to as a hidden Markov model in the literature. Unfortunately, this model is analytically difficult to use. To approximate it, we introduce several traditional Markov models with four states each, B and C as above, and two transitional states. We present the models, show how they are connected, and state the formal compression algorithm based on these models. We also include some experimental results.
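The generative model described above (a cluster state C emitting mostly ones, a between-cluster state B emitting mostly zeros, with Markov transitions between them) can be sketched directly; all probabilities below are illustrative, not taken from the paper:

```python
import random

def markov_bitmap(n, p_stay_c=0.9, p_stay_b=0.95,
                  p_one_c=0.8, p_one_b=0.05, seed=0):
    # Two-state hidden-Markov-style bit-map generator: in state C ones
    # are likely (word occurrences cluster); in state B they are rare.
    rng = random.Random(seed)
    state = "B"
    bits = []
    for _ in range(n):
        p_one = p_one_c if state == "C" else p_one_b
        bits.append(1 if rng.random() < p_one else 0)
        p_stay = p_stay_c if state == "C" else p_stay_b
        if rng.random() >= p_stay:          # Markov transition
            state = "C" if state == "B" else "B"
    return bits

bm = markov_bitmap(1000)
```

Bit-maps drawn from such a model show runs of ones inside clusters and long runs of zeros between them, which is the structure the paper's four-state approximations are built to exploit.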
Citations: 24
The Implementation of Data Compression in the Cassini RPWS Dedicated Compression Processor
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515601
I. Willis, L. Woolliscroft, T. Averkamp, D. Gurnett, R. Johnson, D. Kirchner, W. Kurth, W. Robison
The Radio and Plasma Wave Science (RPWS) instrument is a part of the scientific payload of the NASA/ESA Cassini mission, due to be launched to study the planet Saturn in 1997. Such instruments are capable of generating vastly more data than the data systems of the spacecraft and the link to the Earth can handle, so data selection and data compression are important. Within RPWS some data compression is performed in a dedicated compression processor, the DCP. This processor is based on an HS-80C85 processor and includes several algorithms which have been tested for their efficacy in the compression of plasma wave data. Criteria have been derived for the acceptable data distortion that will not adversely affect the scientific value of the data. The main algorithms installed in the DCP are the Rice algorithm and a Walsh transform. These are complemented with simple bit stripping and packing algorithms. The hardware of the DCP is described. A discussion of the software structure is given together with performance statistics on the software as implemented in the engineering model of RPWS. The software structure in the DCP makes it a suitable host for further scientific software. One such is an algorithm to detect dust impacts, and this will also be installed in the engineering model.
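The Rice algorithm mentioned above belongs to the Golomb-Rice family of codes: a non-negative sample is split into a quotient, sent in unary, and a k-bit binary remainder. A minimal sketch follows; this is an illustration of the general technique, not the Cassini flight code, whose parameter selection is not described here.

```python
def rice_encode(n, k):
    """Golomb-Rice code: quotient n >> k in unary (q ones, then a zero),
    followed by the low k bits of n in binary."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    """Inverse of rice_encode; returns the decoded value."""
    q = bits.index("0")                         # length of the unary run
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

For example, rice_encode(9, 2) yields "11001" (quotient 2 in unary, remainder 01); choosing k near log2 of the mean sample magnitude keeps codewords short, which is what makes the code attractive for low-power onboard processors.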
Citations: 0
Journal: Proceedings DCC '95 Data Compression Conference