Deming Zhai, Xianming Liu, Debin Zhao, Hong Chang, Wen Gao
In this paper, we propose a unified framework for progressive image restoration based on hybrid graph Laplacian regularized regression. We first construct a multi-scale representation of the target image using a Laplacian pyramid, then progressively recover the degraded image in scale space from coarse to fine so that sharp edges and texture are eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by an implicit kernel is learned that simultaneously minimizes the least-squares error on the measured samples and preserves the geometrical structure of the image data space by exploiting non-local self-similarity. In this procedure, the intrinsic manifold structure is captured by using both measured and unmeasured samples. On the other hand, between two scales, the proposed model is extended to a parametric form through explicit kernel mapping to model the inter-scale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art image restoration algorithms.
{"title":"Progressive Image Restoration through Hybrid Graph Laplacian Regularization","authors":"Deming Zhai, Xianming Liu, Debin Zhao, Hong Chang, Wen Gao","doi":"10.1109/DCC.2013.18","DOIUrl":"https://doi.org/10.1109/DCC.2013.18","url":null,"abstract":"In this paper, we propose a unified framework to perform progressive image restoration based on hybrid graph Laplacian regularized regression. We first construct a multi-scale representation of the target image by Laplacian pyramid, then progressively recover the degraded image in the scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by implicit kernel is learned which simultaneously minimizes the least square error on the measured samples and preserves the geometrical structure of the image data space by exploring non-local self-similarity. In this procedure, the intrinsic manifold structure is considered by using both measured and unmeasured samples. On the other hand, between two scales, the proposed model is extended to the parametric manner through explicit kernel mapping to model the inter-scale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art image restoration algorithms.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116324178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Miguel Hernández-Cabronero, Victor Sanchez, M. Marcellin, J. Serra-Sagristà
DNA microarrays are state-of-the-art tools in biological and medical research. In this work, we discuss the suitability of lossy compression for DNA microarray images and highlight the need for a distortion metric that assesses the loss of relevant information. We also propose one possible metric that considers the basic image features employed by most DNA microarray analysis techniques. Experimental results indicate that the proposed metric can identify and differentiate important and unimportant changes in DNA microarray images.
{"title":"A Distortion Metric for the Lossy Compression of DNA Microarray Images","authors":"Miguel Hernández-Cabronero, Victor Sanchez, M. Marcellin, J. Serra-Sagristà","doi":"10.1109/DCC.2013.26","DOIUrl":"https://doi.org/10.1109/DCC.2013.26","url":null,"abstract":"DNA micro arrays are state-of-the-art tools in biological and medical research. In this work, we discuss the suitability of lossy compression for DNA micro array images and highlight the necessity for a distortion metric to assess the loss of relevant information. We also propose one possible metric that considers the basic image features employed by most DNA micro array analysis techniques. Experimental results indicate that the proposed metric can identify and differentiate important and unimportant changes in DNA micro array images.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"159 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122063506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. In this paper, we take a different approach from the coding community. Instead of taking the usual route of quantization plus Slepian-Wolf coding, we do not perform any Slepian-Wolf coding on the transmitter side. We simply perform quantization on the sensor readings, compress the quantization indexes with conventional entropy coding, and send the compressed indexes to the receiver. On the decoder side, we simply perform entropy decoding and Gaussian process regression to reconstruct the joint source. To reduce the sum rate over all sensors, some sensors are censored and do not transmit anything to the decoder.
{"title":"Multiterminal Source Coding for Many Sensors with Entropy Coding and Gaussian Process Regression","authors":"Samuel Cheng","doi":"10.1109/DCC.2013.62","DOIUrl":"https://doi.org/10.1109/DCC.2013.62","url":null,"abstract":"Summary form only given. In this paper, we take a different approach from the coding community. Instead of taking the usual route of quantization plus Slepian-Wolf coding, we do not perform any Slepian-Wolf coding on the transmitter side. We simply perform quantization on the sensor readings, compress the quantization indexes with conventional entropy coding, and send the compressed indexes to the receiver. On the decoder side, we simply perform entropy decoding and Gaussian process regression to reconstruct the joint source. To reduce the sum rate over all sensors, some sensors are censored and do not transmit anything to the decoder.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"159 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122195954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The paper describes a highly efficient coding scheme based on the Complete Oscillator Method (COM). The COM has a number of powerful theoretical properties that enable it to provide very compact models for a wide range of deterministic signals. Several of these properties are studied here. The theoretical COM is shown to model and synthesize exactly all types of stationary signals, irrespective of the dimension of the system that generated them, using only one model parameter. This exact representation property, independent of data dimension, extends to certain amplitude-variable signals as well. The COM also reconstructs with high fidelity a number of other classes of nonstationary signals for which exact representation cannot be guaranteed; one such class, frequency-modulated signals, is presented here. The theoretical results obtained under idealized conditions are related to practical discrete implementations, where the COM is shown to be robust to deviations from the ideal conditions. In non-ideal conditions, increasing the order of the COM to two terms, four parameters in total, delivers near-exact models in many cases. The compact representation property of the COM is illustrated on several canonical waveforms, which provide representative examples for each class of signal studied here.
{"title":"Evaluation of Efficient Compression Properties of the Complete Oscillator Method, Part 1: Canonical Signals","authors":"I. Gorodnitsky, Anton Y. Yen","doi":"10.1109/DCC.2013.73","DOIUrl":"https://doi.org/10.1109/DCC.2013.73","url":null,"abstract":"The paper describes a highly efficient coding scheme based on the Complete Oscillator Method (COM). The COM has a number of powerful theoretical properties which enable it to provide very compact models for a wide range of deterministic signals. Several of these properties are studied here. The theoretical COM is shown to model and synthesize exactly all types of stationary signals, irrespective of the dimension of the system that generated it, using only one model parameter. The exact representation property independent of data dimension extends to certain amplitude-variable signals as well. The COM also reconstructs with high fidelity a number of other classes of nonstationary signals for which the exact representation cannot be guaranteed. One such class encompasses frequency-modulated signals presented here. The theoretical results obtained under idealized conditions are related to practical discrete implementations, where the COM is shown to be robust to deviations from the ideal conditions. In non-ideal conditions, increasing the order of the COM to two terms, four parameters total, delivers near exact models in many cases. The compact representation property of the COM is illustrated on several canonical waveforms, which provide representative examples for each class of signal studied here.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116023690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider compression of unordered sets of distinct elements, focusing particularly on compressing sets of fixed-length bit strings in the presence of statistical information. We review previous work and outline a novel compression algorithm that allows transparent incorporation of various probability distribution estimates. Experiments show that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.
{"title":"Considerations and Algorithms for Compression of Sets","authors":"N. Larsson","doi":"10.1109/DCC.2013.83","DOIUrl":"https://doi.org/10.1109/DCC.2013.83","url":null,"abstract":"We consider compression of unordered sets of distinct elements, focusing particularly on compressing sets of fixed-length bit strings in the presence of statistical information. We address previous work, and outline a novel compression algorithm that allows transparent incorporation of various estimates for probability distribution. Experiments allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116082610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shahin Kamali, Susana Ladra, A. López-Ortiz, Diego Seco
The list-update problem is a well-studied online problem with direct applications in data compression. Although the model proposed by Sleator & Tarjan has become the standard model for the problem, its applicability in some domains, and in particular for compression purposes, has been questioned. In this paper, we focus on two alternative models for the problem that arguably have more practical significance than the standard model. We provide new algorithms for these models and show that they outperform all classical algorithms under the discussed models. This is done via an empirical study of the performance of these algorithms on the reference data set for the list-update problem. The presented algorithms make use of context-based strategies for compression, which have not been considered before in the context of the list-update problem and lead to improved compression algorithms. In addition, we study the adaptability of these algorithms to different measures of locality of reference and compressibility.
{"title":"Context-Based Algorithms for the List-Update Problem under Alternative Cost Models","authors":"Shahin Kamali, Susana Ladra, A. López-Ortiz, Diego Seco","doi":"10.1109/DCC.2013.44","DOIUrl":"https://doi.org/10.1109/DCC.2013.44","url":null,"abstract":"The List-Update Problem is a well studied online problem with direct applications in data compression. Although the model proposed by Sleator & Tarjan has become the standard in the field for the problem, its applicability in some domains, and in particular for compression purposes, has been questioned. In this paper, we focus on two alternative models for the problem that arguably have more practical significance than the standard model. We provide new algorithms for these models, and show that these algorithms outperform all classical algorithms under the discussed models. This is done via an empirical study of the performance of these algorithms on the reference data set for the list-update problem. The presented algorithms make use of the context-based strategies for compression, which have not been considered before in the context of the list-update problem and lead to improved compression algorithms. In addition, we study the adaptability of these algorithms to different measures of locality of reference and compressibility.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"89 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120901267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compressive Sensing (CS) theory shows that a signal can be decoded from far fewer measurements than suggested by the Nyquist sampling theory when the signal is sparse in some domain. Most conventional CS recovery approaches, however, exploit a set of fixed bases (e.g., DCT, wavelet, contourlet, and gradient domains) for the entire signal, which ignore the nonstationarity of natural signals and cannot achieve a sufficiently high degree of sparsity, thus resulting in poor rate-distortion performance. In this paper, we propose a new framework for image compressive sensing recovery via structural group sparse representation (SGSR) modeling, which enforces image sparsity and self-similarity simultaneously under a unified framework in an adaptive group domain, thus greatly confining the CS solution space. In addition, an efficient technique based on an iterative shrinkage/thresholding algorithm is developed to solve the resulting optimization problem. Experimental results demonstrate that the novel CS recovery strategy achieves significant performance improvements over current state-of-the-art schemes and exhibits good convergence.
{"title":"Structural Group Sparse Representation for Image Compressive Sensing Recovery","authors":"Jian Zhang, Debin Zhao, F. Jiang, Wen Gao","doi":"10.1109/DCC.2013.41","DOIUrl":"https://doi.org/10.1109/DCC.2013.41","url":null,"abstract":"Compressive Sensing (CS) theory shows that a signal can be decoded from many fewer measurements than suggested by the Nyquist sampling theory, when the signal is sparse in some domain. Most of conventional CS recovery approaches, however, exploited a set of fixed bases (e.g. DCT, wavelet, contour let and gradient domain) for the entirety of a signal, which are irrespective of the nonstationarity of natural signals and cannot achieve high enough degree of sparsity, thus resulting in poor rate-distortion performance. In this paper, we propose a new framework for image compressive sensing recovery via structural group sparse representation (SGSR) modeling, which enforces image sparsity and self-similarity simultaneously under a unified framework in an adaptive group domain, thus greatly confining the CS solution space. In addition, an efficient iterative shrinkage/thresholding algorithm based technique is developed to solve the above optimization problem. Experimental results demonstrate that the novel CS recovery strategy achieves significant performance improvements over the current state-of-the-art schemes and exhibits nice convergence.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"15 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120928649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. This paper examines the performance of the recently proposed Complete Oscillator Method (COM) in the context of speech coding. The COM is shown to provide several advantages over traditional predictive coding techniques. Unlike the cascaded method employed by codecs such as Adaptive Multi-Rate (AMR), the COM encodes short- and long-term data features jointly using a single, flexible representation. Joint approaches have previously been shown to yield efficiency gains [1]. Furthermore, the COM does not always require an explicit encoding of the residual error to reconstruct the signal. As AMR can allocate as much as 85% of its coding budget to encoding the residual, there is substantial motivation for finding alternatives to source-filter coding methods. The first part of the paper compares the synthesis of speech frames using the COM versus a combination of linear predictor and adaptive codebook (LPAC) in order to assess the deterministic modeling capabilities of the COM relative to linear predictive codes. With both approaches optimized by minimizing the perceptually weighted error (PWE) between the original and reconstructed speech, the COM is shown to achieve lower PWE on average than LPAC as implemented in the AMR standard for several types of speech. The COM improved PWE in 78.20% of voiced frames, yielding a 2.02 dB PWE gain on average. For voiced-to-unvoiced transitions, the COM improved PWE in 76.75% of the frames with a 1.26 dB average gain. For unvoiced speech, the COM consistently improved PWE, but the average gain was not significant. Only for unvoiced-to-voiced transitions did the COM not produce gains in average PWE. The second part of the paper compares the synthesis of speech frames using the COM at several bit rates to the standard AMR and Speex codecs to show that the COM can produce speech of comparable quality in a significant percentage of frames. Using the weighted spectral slope distance (WSS) as a metric, a 5.5 kbps COM was seen to outperform 12.2 kbps AMR in 24.12% of speech frames. These results are not intended to demonstrate the workings of a COM-only speech coder, but rather to suggest how existing codecs can achieve lower bit rates by using the COM to encode some subset of frames. For example, by using the COM in the lowest bit-rate mode sufficient to achieve a WSS similar to that of 12.2 kbps AMR, the average bit rate can potentially be reduced to 9.16 kbps.
{"title":"Evaluation of Efficient Compression Properties of the Complete Oscillator Method, Part 2: Speech Coding","authors":"Anton Y. Yen, I. Gorodnitsky","doi":"10.1109/DCC.2013.110","DOIUrl":"https://doi.org/10.1109/DCC.2013.110","url":null,"abstract":"Summary form only given. This paper examines the performance of the recently proposed Complete Oscillator Method (COM) in the context of coding speech. The COM is shown to provide several advantages over traditional predictive coding techniques. Unlike the cascaded method employed by codecs such as Adaptive Multi-Rate (AMR), the COM encodes short and long-term data features jointly using a single, flexible representation. Joint approaches have previously been shown to yield efficiency gains [1]. Furthermore, the COM does not always require an explicit encoding of the residual error to reconstruct the signal. As AMR can allocate as much as 85% of its coding budget towards encoding the residual, there is substantial motivation for finding alternatives to source-filter coding methods. The first part of the paper compares the synthesis of speech frames using the COM versus a combination of linear predictor and adaptive codebook (LPAC) in order to assess the deterministic modeling capabilities of the COM relative to linear predictive codes. With both approaches optimized by minimizing the perceptually-weighted error (PWE) between the original and reconstructed speech, the COM is shown to achieve lower PWE on average than LPAC as implemented in the AMR standard for several types of speech. The COM improved PWE in 78.20% of voiced frames yielding a 2.02 dB PWE gain on average. For voiced to unvoiced transitions, the COM improved PWE in 76.75% of the frames with a 1.26 dB average gain. For unvoiced speech, the COM consistently improved PWE but the average gain was not significant. Only for unvoiced to voiced transitions did the COM not produce gains in average PWE. The second part of the paper compares the synthesis of speech frames using the COM at several bit rates to standard AMR and Speex codecs to show that the COM can produce comparable quality speech in a significant percentage of frames. Using weighted spectral slope distance (WSS) as a metric, a 5.5 kbps COM was seen to outperform 12.2 kbps AMR in 24.12% of speech frames. These results are not intended to demonstrate the workings of a COM-only speech coder, but rather to suggest how existing codecs can achieve lower bit rates by using the COM to encode some subset of frames. For example, by using the COM in the lowest bit rate mode sufficient to achieve a similar WSS as 12.2 kbps AMR, the average bit rate can potentially be reduced to 9.16 kbps.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127005446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As the next-generation video coding standard, High Efficiency Video Coding (HEVC) achieves significantly better coding efficiency than all existing video coding standards. A Coding Unit (CU) quadtree concept is introduced in HEVC to improve the coding efficiency. Each CU node in the quadtree is traversed in a depth-first search to find the best Coding Tree Unit (CTU) partition. Although this quadtree search process can obtain the best CTU partition, it is very time-consuming, especially in interframe coding. To alleviate the encoder computation load in interframe coding, a fast CU depth decision method is proposed that reduces the depth search range. Based on the correlation between the depth information of spatio-temporally adjacent CTUs and the current CTU, some depths can be adaptively excluded from the depth search process in advance. Experimental results show that the proposed scheme provides almost 30% encoder time savings on average compared to the default encoding scheme in HM8.0, with only a 0.38% bit-rate increase in coding performance.
{"title":"Fast Coding Unit Depth Decision Algorithm for Interframe Coding in HEVC","authors":"Yongfei Zhang, Haibo Wang, Zhe Li","doi":"10.1109/DCC.2013.13","DOIUrl":"https://doi.org/10.1109/DCC.2013.13","url":null,"abstract":"As the next generation standard of video coding, the High Efficiency Video Coding (HEVC) achieves significantly better coding efficiency than all existing video coding standards. A Coding Unit (CU) quad tree concept is introduced to HEVC to improve the coding efficiency. Each CU node in quad tree will be traversed by depth first search process to find the best Coding Tree Unit (CTU) partition. Although this quad tree search process can obtain the best CTU partition, it is very time consuming, especially in interframe coding. To alleviate the encoder computation load in interframe coding, a fast CU depth decision method is proposed by reducing the depth search range. Based on the depth information correlation between spatio-temporal adjacent CTUs and the current CTU, some depths can be adaptively excluded from the depth search process in advance. Experimental results show that the proposed scheme provides almost 30% encoder time savings on average compared to the default encoding scheme in HM8.0 with only 0.38% bit rate increment in coding performance.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124185378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel technique for real-time intra-cerebral electroencephalogram (iEEG) compression is proposed in this article. The technique uses eigendecomposition and a dynamic dictionary update to reduce the EEG channels to a single decorrelated channel, or eigenchannel. Experimental results show that this technique is able to provide low distortion values at very low bit rates (BRs). In addition, the method performs better and more stably than JPEG2000. Results vary little both over time and across patients, which demonstrates the stability of the method.
{"title":"Real-Time Compression of Intra-Cerebral EEG Using Eigendecomposition with Dynamic Dictionary","authors":"H. Daou, F. Labeau","doi":"10.1109/DCC.2013.68","DOIUrl":"https://doi.org/10.1109/DCC.2013.68","url":null,"abstract":"A novel technique for Intra-cerebral Electroencephalogram (iEEG) compression in real-time is proposed in this article. This technique uses eigendecomposition and dynamic dictionary update to reduce the EEG channels to only one decor related channel or eigenchannel. Experimental results show that this technique is able to provide low distortion values at very low bit rates (BRs). In addition, performance results of this method show to be better and more stable than JPEG2000. Results do not vary a lot both in time and between different patients which proves the stability of the method.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128051896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}