
Latest publications from the 2009 Data Compression Conference

A Comparative Study of Lossless Compression Algorithms on Multi-spectral Imager Data
Pub Date : 2009-03-16 DOI: 10.1117/12.821007
M. Grossberg, I. Gladkova, S. Gottipati, M. Rabinowitz, P. Alabi, T. George, António Pacheco
High-resolution multi-spectral imagers are becoming increasingly important tools for studying and monitoring the Earth. As much of the data from these imagers is used for quantitative analysis, lossless compression plays a critical role in the transmission, distribution, archiving, and management of the data. To evaluate the performance of various compression algorithms on multi-spectral images, we conducted a statistical evaluation on datasets consisting of hundreds of granules from both geostationary and polar imagers. We broke these datasets up by different criteria, such as hemisphere, season, and time of day, in order to ensure the results are robust, reliable, and applicable to future imagers.
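A comparison like the one described can be sketched with the general-purpose lossless codecs in the Python standard library. The synthetic "granule" below (a smooth per-band field with mild noise, stored as 16-bit integers) is purely an assumption for illustration; the paper's datasets, imagers, and candidate algorithms are not specified here.

```python
import bz2
import lzma
import zlib

import numpy as np

# Hypothetical stand-in for one multi-spectral granule: a smooth 2-D field
# per band plus small integer noise, stored as uint16 (common for imagers).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 128)
base = np.outer(x, x) * 4000
bands = [
    (base + 200 * b + rng.integers(0, 8, size=base.shape)).astype(np.uint16)
    for b in range(4)
]
raw = np.stack(bands).tobytes()

results = {}
for name, codec in [("zlib", zlib), ("bz2", bz2), ("lzma", lzma)]:
    comp = codec.compress(raw)
    assert codec.decompress(comp) == raw      # lossless round trip
    results[name] = len(raw) / len(comp)      # compression ratio

for name, ratio in results.items():
    print(f"{name}: {ratio:.2f}x")
```

A real study would, as the abstract notes, repeat such measurements over hundreds of granules stratified by hemisphere, season, and time of day.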
Citations: 6
Optimized Source-Channel Coding of Video Signals in Packet Loss Environments
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.67
U. Celikcan, E. Tuncel
A novel predictive joint source-channel video coding scheme is proposed, and its superiority over standard video coding is demonstrated in environments with heavy packet loss. The strength of the scheme stems from the fact that it explicitly takes into account the two modes of operation at the decoder (packet loss or no packet loss) and optimizes the corresponding reconstruction filters together with the prediction filter at the encoder simultaneously. As a result, the prediction coefficient tends to be much smaller than both the correlation coefficient between two corresponding frames and the value standard video coding techniques use (i.e., 1), thereby leaving most of the inter-frame correlation intact and increasing error resilience.
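The shrinkage of the prediction coefficient below the frame correlation can be illustrated with a toy model (my construction, not the paper's jointly optimized filters): open-loop DPCM over an AR(1) source with erasure concealment at the decoder, with cross-correlation terms ignored in the steady-state error recursion. Under those simplifications the steady-state MSE is D(a) = p(1 + a² - 2aρ)/(1 - a²) for unit source variance, and its minimizer lies strictly below ρ.

```python
import numpy as np

rho, p = 0.95, 0.2   # assumed toy values: frame correlation, packet-loss probability

# Toy open-loop DPCM over an erasure channel: residual e_t = x_t - a*x_{t-1};
# on a lost packet the decoder conceals with a*xhat_{t-1}. Ignoring
# cross-correlations, steady-state MSE (unit source variance) is:
#   D(a) = p * (1 + a^2 - 2*a*rho) / (1 - a^2)
a_grid = np.linspace(0.0, 0.99, 991)
D = p * (1 + a_grid**2 - 2 * a_grid * rho) / (1 - a_grid**2)
a_opt = a_grid[np.argmin(D)]

# Closed form from dD/da = 0, i.e. rho*a^2 - 2a + rho = 0:
a_closed = (1 - np.sqrt(1 - rho**2)) / rho

print(f"optimal predictor a = {a_opt:.3f} (closed form {a_closed:.3f}) < rho = {rho}")
```

With ρ = 0.95 the minimizer is about 0.72, well below the correlation coefficient, matching the qualitative behavior the abstract reports.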
Citations: 1
pFPC: A Parallel Compressor for Floating-Point Data
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.43
Martin Burtscher, P. Ratanaworabhan
This paper describes and evaluates pFPC, a parallel implementation of the lossless FPC compression algorithm for 64-bit floating-point data. pFPC can trade off compression ratio for throughput. For example, on a 4-core 3 GHz Xeon system, it compresses our nine datasets by 18% at a throughput of 1.36 gigabytes per second and by 41% at a throughput of 570 megabytes per second. Decompression is even faster. Our experiments show that the thread count should match or be a small multiple of the data's dimensionality to maximize the compression ratio and the chunk size should be at least equal to the system's page size to maximize the throughput.
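The chunk-and-thread structure can be sketched with standard-library pieces; here zlib stands in for the FPC predictor-based coder (an assumption — pFPC's actual per-thread predictors are not reproduced), and the chunk size is chosen at 64 KiB, comfortably above a typical 4 KiB page.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def parallel_compress(data: bytes, chunk_size: int = 1 << 16, workers: int = 4):
    """Chunked parallel compression in the spirit of pFPC.

    zlib is a stand-in codec; chunk_size should be at least the system
    page size, per the paper's throughput observation.
    """
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, chunks))

def parallel_decompress(blocks):
    with ThreadPoolExecutor() as pool:
        return b"".join(pool.map(zlib.decompress, blocks))

# Smooth float64 trajectory, loosely resembling scientific data (assumed input).
rng = np.random.default_rng(1)
data = np.cumsum(rng.normal(size=100_000)).tobytes()

blocks = parallel_compress(data)
assert parallel_decompress(blocks) == data
print(f"ratio: {len(data) / sum(len(b) for b in blocks):.2f}x")
```

In pFPC proper, threads are additionally assigned round-robin so that the thread count matches the data's dimensionality, which is what preserves the per-dimension prediction context.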
Citations: 18
Bits in Asymptotically Optimal Lossy Source Codes Are Asymptotically Bernoulli
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.21
R. Gray, T. Linder
A formal result is stated and proved showing that the bit stream produced by the encoder of a nearly optimal sliding-block source coding of a stationary and ergodic source is close to an equiprobable i.i.d. binary process.
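The intuition behind the theorem — that a near-optimal coder's output bits carry no residual structure and so look like fair coin flips — can be loosely checked empirically (this is an illustration only, not the paper's formal sliding-block setting): compress a redundant source and measure the fraction of 1-bits in the output.

```python
import lzma

# Redundant but non-trivial source: ASCII decimal integers (assumed input).
text = " ".join(str(i) for i in range(20_000)).encode()
comp = lzma.compress(text)

ones = sum(bin(byte).count("1") for byte in comp)
freq = ones / (8 * len(comp))
print(f"compressed {len(text)} -> {len(comp)} bytes; 1-bit frequency = {freq:.3f}")
```

For a good compressor the empirical 1-bit frequency lands near 1/2, consistent with the bit stream being close to an equiprobable i.i.d. binary process.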
Citations: 4
Optimization of Correlated Source Coding for Event-Based Monitoring in Sensor Networks
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.56
J. Singh, A. Saxena, K. Rose, Upamanyu Madhow
Motivated by the paradigm of event-based monitoring, which can potentially alleviate the inherent bandwidth and energy constraints associated with wireless sensor networks, we consider the problem of joint coding of correlated sources under a cost criterion that is appropriately conditioned on event occurrences. The underlying premise is that individual sensors only have access to partial information and, in general, cannot reliably detect events. Hence, sensors optimally compress and transmit the data to a fusion center so as to minimize the expected distortion in segments containing events. In this work, we derive and demonstrate the approach in the setting of entropy-constrained distributed vector quantizer design, using a modified distortion criterion that appropriately accounts for the joint statistics of the events and the observation data. Simulation results show significant gains over conventional design as well as existing heuristic-based methods, and provide experimental evidence to support the promise of our approach.
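The core idea of biasing quantizer design toward event segments can be sketched with a weighted scalar Lloyd iteration (a simplification of the paper's entropy-constrained distributed vector design — the event definition, weights, and scalar setting here are all assumptions).

```python
import numpy as np

def weighted_lloyd(x, w, k=4, iters=50):
    """Scalar Lloyd design minimizing sum(w * (x - q(x))^2).

    Up-weighting event samples is a simplified stand-in for an
    event-conditioned distortion criterion (no entropy constraint).
    """
    codebook = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        idx = np.argmin(np.abs(x[:, None] - codebook[None, :]), axis=1)
        for j in range(k):
            m = idx == j
            if m.any():
                # weighted centroid pulls codewords toward heavy samples
                codebook[j] = np.average(x[m], weights=w[m])
    return codebook, idx

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 5000)
events = x > 1.5                     # toy "event": unusually large readings
w = np.where(events, 10.0, 1.0)      # weight event samples 10x

cb, idx = weighted_lloyd(x, w)
err = x - cb[idx]
print("event MSE:", np.mean(err[events] ** 2),
      "background MSE:", np.mean(err[~events] ** 2))
```

Compared with an unweighted design, the codewords shift toward the event region, trading background fidelity for lower distortion where events occur.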
Citations: 1
Adaptive Rate Allocation Algorithm for Transmission of Multiple Embedded Bit Streams over Time-Varying Noisy Channels
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.11
Ahmad Hatam, A. Banihashemi
An efficient rate allocation algorithm for the progressive transmission of multiple images over time-varying noisy channels is proposed. The algorithm is initialized with the distortion-optimal solution [1] for the first image and searches for the optimal rate allocation for each subsequent image in the neighborhood of the solution for the previous image. Given the initial solution, the algorithm runs in time linear in the number of transmitted packets per image, and its rate allocation for each image achieves performance equal to or very close to the distortion-optimal solution for that image. Our simulations of the transmission of images, encoded by embedded source coders, over the binary symmetric channel (BSC) show that, with very low complexity, the proposed algorithm successfully adapts the channel code rates to changes in the channel parameter.
Citations: 0
Dual-Direction Prediction Vector Quantization for Lossless Compression of LASIS Data
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.13
Jing Ma, Chengke Wu, Yunsong Li, Keyan Wang
Large Aperture Static Imaging Spectrometer (LASIS) is a new kind of interferometer spectrometer with the advantages of high throughput and a large field of view. LASIS data contain both spatial and spectral information in each frame, indicating the location shift and the modulated optical signal along the Optical Path Difference (OPD). Based on these characteristics, we propose a lossless data compression method named Dual-direction Prediction Vector Quantization (DPVQ). With a dual-direction prediction in both the spatial and spectral directions, redundancy in LASIS data is largely removed by minimizing the prediction residue in DPVQ. A fast vector quantization (VQ) that avoids the codebook-splitting process is then applied after prediction. Considering time efficiency, the prediction and VQ in DPVQ are optimized to reduce computation, so that the optimized prediction saves about 60% of running time and the fast VQ saves about 25% of running time with quantization quality similar to the classical generalized Lloyd algorithm (GLA). Experimental results show that DPVQ achieves a maximal Compression Ratio (CR) of about 3.4, outperforming many existing lossless compression algorithms.
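The dual-direction prediction step can be sketched on a synthetic cube (my toy data, not real LASIS interferograms): for each band, take either a spatial prediction (left neighbor) or a spectral prediction (previous band), keeping whichever leaves less residual energy.

```python
import numpy as np

# Toy frame cube correlated along both the spectral axis (0) and one
# spatial axis (2); shape = (bands, rows, cols). Assumed stand-in data.
rng = np.random.default_rng(5)
base = np.cumsum(rng.normal(size=(8, 64, 64)), axis=0)
cube = np.cumsum(base, axis=2)

# Dual-direction prediction: per band, pick the direction (spatial vs
# spectral) whose residual has less total energy, as a crude analogue of
# DPVQ's residue minimization.
residuals = []
for b in range(1, cube.shape[0]):
    spatial = cube[b] - np.roll(cube[b], 1, axis=1)   # left-neighbor predictor
    spectral = cube[b] - cube[b - 1]                  # previous-band predictor
    pick = spatial if np.abs(spatial).sum() < np.abs(spectral).sum() else spectral
    residuals.append(pick)

print("raw std:", cube[1:].std(), "residual std:", np.stack(residuals).std())
```

The residuals have far lower variance than the raw samples, which is what makes the subsequent (fast) vector quantization stage effective.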
Citations: 5
Nonuniform Dithered Quantization
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.78
E. Akyol, K. Rose
Dithered quantization has useful properties, such as producing quantization noise independent of the source and allowing continuous reconstruction at the decoder side. Dithered quantizers have traditionally been considered within their natural setting, the uniform quantization framework. A uniformly distributed dither signal (with support matched to the quantization interval) is added before quantization, and the same dither signal is subtracted from the quantized value at the decoder side (only subtractive dithering is considered in this paper). The quantized values are entropy coded conditioned on the dither signal.
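The uniform subtractive-dithering baseline described above can be sketched directly (step size and source are assumed values; the paper's nonuniform extension is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)
step = 0.5                                   # assumed quantizer step size
x = rng.normal(size=100_000)                 # assumed Gaussian source

# Subtractive dithering: add dither u ~ U(-step/2, step/2) before uniform
# quantization; the decoder subtracts the same (shared) dither signal.
u = rng.uniform(-step / 2, step / 2, size=x.shape)
q = step * np.round((x + u) / step)          # transmitted quantized values
x_hat = q - u                                # decoder output

err = x_hat - x
# With matched uniform dither, the error is U(-step/2, step/2) and
# statistically independent of the source.
print("error std:", err.std(), "theory:", step / np.sqrt(12))
print("corr(err, x):", np.corrcoef(err, x)[0, 1])
```

The measured error standard deviation matches the step/√12 prediction, and the error is essentially uncorrelated with the source — the source-independence property the abstract highlights.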
Citations: 6
A Zero Padding SVD Encoder to Compress Electrocardiogram
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.48
C. Agulhari, I. S. Bonatti, P. Peres
A new method to compress electrocardiogram (ECG) signals is proposed in this paper; its novelty lies in the choice, via the Singular Value Decomposition (SVD), of an appropriate representation basis for each ECG to be compressed. The proposed method, named the Zero Padding SVD Encoder, consists of two steps: a preprocessing step, where the ECG is separated into a set of signals corresponding to the beat pulses of the ECG; and a compression step, where the SVD is applied to the set of beat pulses in order to find the basis that best represents the entire ECG. The elements of the basis are encoded using a wavelet procedure, and the coefficients of the projection of the signal onto the basis are quantized using an adaptive quantization procedure. Numerical experiments are performed on electrocardiograms from the MIT-BIH database, demonstrating the efficiency of the proposed method.
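The zero-pad-then-SVD core can be sketched on synthetic beats (Gaussian bumps of varying length standing in for real ECG pulses — an assumption; the wavelet encoding and adaptive quantization stages are omitted):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "beats": variable-length noisy pulses, zero-padded to a common
# length (the zero-padding step that names the encoder). Toy waveforms only.
max_len = 120
beats = []
for _ in range(50):
    n = int(rng.integers(90, max_len + 1))
    t = np.linspace(-3, 3, n)
    beat = np.exp(-t**2) + 0.02 * rng.normal(size=n)
    beats.append(np.pad(beat, (0, max_len - n)))
X = np.stack(beats)                      # one row per beat

# Truncated SVD: keep a small basis adapted to this particular record.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 5
X_hat = (U[:, :r] * s[:r]) @ Vt[:r]      # rank-r reconstruction

prd = np.linalg.norm(X - X_hat) / np.linalg.norm(X) * 100
stored = U[:, :r].size + r + Vt[:r].size
print(f"PRD: {prd:.2f}%  stored values: {stored} vs {X.size}")
```

Because the basis is recomputed per record, a handful of singular vectors captures most of the beat-to-beat structure, leaving a small projection-coefficient matrix to encode.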
Citations: 0
Set Partitioning in Hierarchical Frequency Bands (SPHFB)
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.63
H. Ochoa, O. Vergara-Villegas, V. Sánchez, G. Rosiles, J. Vega-Pineda
A novel very-low-bit-rate algorithm based on hierarchical partitioning of subbands in the wavelet domain is proposed. The algorithm uses the set partitioning technique to sort the transformed coefficients. The threshold of each subband is calculated, and the subband scanning sequence is determined by the magnitudes of these thresholds, which establish a hierarchical scan not only over the sets of large-magnitude coefficients but also over the subbands. Results show that SPHFB provides good image quality at very low bit rates.
Citations: 2