
2013 Data Compression Conference: Latest Publications

Space-Efficient Construction Algorithm for the Circular Suffix Tree
Pub Date : 2013-03-20 DOI: 10.1007/978-3-642-38905-4_15
W. Hon, Tsung-Han Ku, R. Shah, Sharma V. Thankachan
Citations: 3
Tunneling High-Resolution Color Content through 4:2:0 HEVC and AVC Video Coding Systems
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.8
Yongjun Wu, S. Kanumuri, Yifu Zhang, Shyam Sadhwani, G. Sullivan, Henrique S. Malvar
We present a method to convey high-resolution color (4:4:4) video content through a video coding system designed for chroma-subsampled (4:2:0) operation. The method operates by packing the samples of a 4:4:4 frame into two frames that are then encoded as if they were ordinary 4:2:0 content. After being received and decoded, the packing process is reversed to recover a 4:4:4 video frame. As 4:2:0 is the most widely supported digital color format, the described scheme provides an effective way of transporting 4:4:4 content through existing mass-market encoders and decoders, for applications such as coding of screen content. The described packing arrangement is designed such that the spatial correspondence and motion vector displacement relationships between the nominally-luma and nominally-chroma components are preserved. The use of this scheme can be indicated by a metadata tag such as the frame packing arrangement supplemental enhancement information (SEI) message defined in the HEVC and AVC (Rec. ITU-T H.264 | ISO/IEC 14496-10) video coding standards. In this context, the scheme operates in a manner similar to that commonly used for packing the two views of stereoscopic 3D video for compatible encoding. The technique can also be extended to transport 4:2:2 video through 4:2:0 systems or 4:4:4 video through 4:2:2 systems.
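
The core idea lends itself to a short illustration. Below is a minimal sketch of one way to pack a 4:4:4 frame into two 4:2:0 frames and losslessly unpack it again; the quadrant layout chosen here is an illustrative assumption, not the specific arrangement standardized in the paper or in the SEI message.

```python
import numpy as np

def pack_444_to_two_420(Y, U, V):
    """Pack one 4:4:4 frame (full-resolution Y, U, V planes, even dims)
    into two 4:2:0 frames.  The quadrant layout below is an illustrative
    assumption, not the paper's standardized arrangement."""
    H, W = Y.shape
    h, w = H // 2, W // 2
    # Frame A: original luma plus one quarter of each chroma plane.
    frameA = (Y, U[0::2, 0::2], V[0::2, 0::2])
    # Frame B: its "luma" plane is tiled from the remaining chroma
    # quarter-planes; its chroma planes absorb the last two quarters.
    Yb = np.empty((H, W), dtype=Y.dtype)
    Yb[:h, :w] = U[0::2, 1::2]
    Yb[:h, w:] = U[1::2, 0::2]
    Yb[h:, :w] = V[0::2, 1::2]
    Yb[h:, w:] = V[1::2, 0::2]
    frameB = (Yb, U[1::2, 1::2], V[1::2, 1::2])
    return frameA, frameB

def unpack_two_420_to_444(frameA, frameB):
    """Invert the packing: rebuild the full-resolution U and V planes."""
    Y, Ua, Va = frameA
    Yb, Ub, Vb = frameB
    H, W = Y.shape
    h, w = H // 2, W // 2
    U = np.empty((H, W), dtype=Y.dtype)
    V = np.empty((H, W), dtype=Y.dtype)
    U[0::2, 0::2] = Ua;          V[0::2, 0::2] = Va
    U[0::2, 1::2] = Yb[:h, :w];  V[0::2, 1::2] = Yb[h:, :w]
    U[1::2, 0::2] = Yb[:h, w:];  V[1::2, 0::2] = Yb[h:, w:]
    U[1::2, 1::2] = Ub;          V[1::2, 1::2] = Vb
    return Y, U, V

# Round trip on random 8-bit planes: the packing is exactly invertible.
rng = np.random.default_rng(0)
Y, U, V = (rng.integers(0, 256, (8, 8), dtype=np.uint8) for _ in range(3))
A, B = pack_444_to_two_420(Y, U, V)
Y2, U2, V2 = unpack_two_420_to_444(A, B)
assert (Y == Y2).all() and (U == U2).all() and (V == V2).all()
```

The sample counts match exactly: one 4:4:4 frame holds 3HW samples and two 4:2:0 frames hold 2 × 1.5HW, which is why a lossless packing of this kind is possible at all.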
Citations: 5
Angular Disparity Map: A Scalable Perceptual-Based Representation of Binocular Disparity
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.85
Yu-Hsun Lin, Ja-Ling Wu
This work addresses the data representation and compression of the angular disparity map, following the way the human visual system (HVS) perceives depth information. A continued-fraction representation of the angular disparity map enables a state-of-the-art video codec (e.g., HEVC) to compress the data directly while maintaining quality-scalability properties. We observe a non-monotonic behavior in the RD curves when applying HEVC compression to the angular disparity map directly, which implies that the inter-layer correlations (i.e., between the neighboring integers in (2)) do not follow the traditional models of normal 2D video codecs. The detailed relationship between the sensitivities and the quantization errors of the proposed representation still requires further in-depth derivation. The proposed data format raises many interesting research issues (e.g., the sensitivity of θ to quantization errors and a rate-distortion optimization scheme for θ), which will be topics of our future work. We expect this work to serve as a bridge between the 3D perception and 3D compression research fields.
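
As a rough illustration of the representation, the sketch below expands a disparity angle into continued-fraction terms and shows how truncating the term sequence yields progressively finer approximations, which is the property that enables quality scalability. The mapping of θ to a rational value, the term budget, and the example angle are assumptions for illustration only.

```python
from fractions import Fraction

def continued_fraction(x, max_terms):
    """Integer terms [a0; a1, a2, ...] of the continued fraction of x."""
    terms = []
    frac = Fraction(x).limit_denominator(10**6)   # rationalize the input
    for _ in range(max_terms):
        a = frac.numerator // frac.denominator
        terms.append(a)
        rem = frac - a
        if rem == 0:
            break
        frac = 1 / rem
    return terms

def evaluate(terms):
    """Rebuild the rational approximation from a (possibly truncated) prefix."""
    val = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        val = a + 1 / val
    return val

theta = 0.32175   # a made-up angular disparity value, in radians
terms = continued_fraction(theta, 6)
# Keeping more terms gives a progressively tighter approximation,
# which is the hook for layered (quality-scalable) coding.
for k in range(1, len(terms) + 1):
    approx = float(evaluate(terms[:k]))
    print(k, terms[:k], approx, abs(approx - theta))
```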
Citations: 0
Quantization Games on Networks
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.37
Ankur Mani, L. Varshney, A. Pentland
We consider a network quantizer design setting where agents must balance fidelity in representing their local source distributions against their ability to successfully communicate with other connected agents. By casting the problem as a network game, we show the existence of Nash equilibrium quantizer designs. For any agent under Nash equilibrium, the word representing a given partition region is the conditional expectation of the mixture of local and social source probability distributions within the region. Further, the network may converge to equilibrium through a distributed version of the Lloyd-Max algorithm. In contrast to traditional results on the evolution of language, we find that several vocabularies may coexist at the Nash equilibrium, with each individual having exactly one of these vocabularies. The overlap between vocabularies is high for individuals that communicate frequently and have similar local sources. Finally, we argue that error in translation along a chain of communication does not grow if and only if the chain consists of agents with a shared vocabulary.
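
For intuition, here is a minimal sketch of the classic Lloyd-Max iteration applied to a mixture of a "local" and a "social" source, echoing the paper's characterization of equilibrium codewords as conditional expectations over a mixed distribution. The distributions and the mixing weight are made up, and this centralized loop stands in for the paper's distributed version.

```python
import numpy as np

def lloyd_max(samples, k, iters=50):
    """Classic Lloyd-Max on an empirical sample: alternate between
    nearest-codeword partitioning and conditional-mean updates."""
    codebook = np.linspace(samples.min(), samples.max(), k)
    for _ in range(iters):
        # Assign each sample to its nearest codeword (partition step).
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        # Replace each codeword by the conditional expectation of its cell.
        for j in range(k):
            cell = samples[idx == j]
            if cell.size:
                codebook[j] = cell.mean()
    return np.sort(codebook)

rng = np.random.default_rng(1)
local = rng.normal(0.0, 1.0, 20_000)     # agent's local source
social = rng.normal(3.0, 0.5, 20_000)    # what its neighbors talk about
w = 0.7                                  # made-up fidelity/communication weight
mix = np.concatenate([local[: int(w * 20_000)],
                      social[: int((1 - w) * 20_000)]])
print(lloyd_max(mix, k=4))   # codewords sit at conditional means of the mixture
```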
Citations: 3
Hierarchical-and-Adaptive Bit-Allocation with Selective Background Prediction for High Efficiency Video Coding (HEVC)
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.114
Xianguo Zhang, Tiejun Huang, Yonghong Tian, Wen Gao
Summary form only given. Recently, a low-delay and high-efficiency hierarchical prediction structure (HPS) has been proposed for the forthcoming HEVC. Frames and coding units (CUs) at different HPS positions differ in their importance for predicting subsequent frames and CUs. This paper first analyzes which frames and CUs should be quantized less coarsely. Based on this analysis, we propose a Hierarchical-and-Adaptive BIT-allocation method with Selective background prediction (HABITS) to optimize the video performance of HEVC. Extensive experiments on HM8.0 show that HABITS saves 13.3% and 35.5% of the total bit rate for eight HEVC conference videos and eight commonly used surveillance videos, respectively. Even for the normal videos in HEVC's Classes B and C, there is still a 2.2% bit saving.
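
The underlying intuition (quantize the most-referenced frames more finely) can be sketched with a generic hierarchical QP assignment. The level-to-offset table and the dyadic GOP layout below are illustrative stand-ins, not the actual HABITS allocation rule.

```python
# Hypothetical QP assignment by hierarchical-prediction-structure level:
# frames referenced by more future frames get smaller offsets (finer
# quantization).  The level->offset table is illustrative, not HABITS'.
BASE_QP = 32
QP_OFFSET = {0: 0, 1: 2, 2: 4, 3: 6}   # level 0 = most referenced

def qp_for_frame(poc, gop_size=8):
    """Map picture order count to a hierarchy level within its GOP."""
    pos = poc % gop_size
    if pos == 0:
        level = 0                       # GOP boundary / key frame
    else:
        # Deeper levels sit at finer dyadic positions within the GOP.
        level = 1
        while pos % (gop_size >> level) != 0:
            level += 1
    return BASE_QP + QP_OFFSET[min(level, 3)]

for poc in range(9):
    print(poc, qp_for_frame(poc))   # key frames get QP 32, leaves QP 38
```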
Citations: 1
Highly Parallel Framework for HEVC Motion Estimation on Many-Core Platform
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.14
C. Yan, Yongdong Zhang, Feng Dai, L. Li
As the next-generation video coding standard, High Efficiency Video Coding (HEVC) is expected to be more complex than H.264/AVC. Many-core platforms are good candidates for speeding up HEVC, provided that HEVC can expose sufficient parallelism. The local parallel method (LPM) is the most promising parallel proposal for HEVC motion estimation (ME), but it cannot provide sufficient parallelism for many-core platforms. While keeping the data dependencies and coding efficiency the same as the LPM, we propose a highly parallel framework to exploit the implicit parallelism. Compared with the well-known LPM, experiments conducted on a 64-core system show that our proposed method achieves average speedups of more than 10x and 13x for 1920×1080 and 2560×1600 video sequences, respectively.
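
The dependency pattern that such frameworks exploit can be sketched as wavefront scheduling: a block's motion-vector predictors come from its left and top neighbors, so all blocks on one anti-diagonal can run concurrently. This is a generic illustration of the idea, not the paper's specific framework.

```python
from concurrent.futures import ThreadPoolExecutor

def wavefront_schedule(rows, cols):
    """Group blocks into anti-diagonals: every block in diagonal d depends
    only on blocks in diagonals < d (its left and top neighbors), so each
    diagonal can be processed fully in parallel."""
    for d in range(rows + cols - 1):
        yield [(r, d - r) for r in range(rows) if 0 <= d - r < cols]

def motion_estimate(block):
    r, c = block
    return (r, c, "MV")      # stand-in for a real motion-estimation search

rows, cols = 4, 6
with ThreadPoolExecutor() as pool:
    for diagonal in wavefront_schedule(rows, cols):
        # All blocks on one diagonal run concurrently; diagonals run in order.
        results = list(pool.map(motion_estimate, diagonal))
        print(results)
```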
Citations: 74
Universal Numerical Encoder and Profiler Reduces Computing's Memory Wall with Software, FPGA, and SoC Implementations
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.107
Al Wegener
Summary form only given. Numerical computations have accelerated significantly since 2005 thanks to two complementary, silicon-enabled trends: multi-core processing and single instruction, multiple data (SIMD) accelerators. Unfortunately, due to fundamental limitations of physics, these two trends could not be accompanied by a corresponding increase in memory, storage, and I/O bandwidth. High-performance computing (HPC) is the proverbial "canary in the coal mine" of multi-core processing: where HPC hits a memory wall today, mainstream multi-core computing will likely encounter a similar limit in a few years. We describe the computationally efficient and adaptive APplication AXceleration (APAX) numerical encoding method, which reduces the memory wall for integer and floating-point operands. APAX achieves encoding rates between 3:1 and 10:1 without changing the dataset's statistical or spectral characteristics. APAX encoding takes advantage of three characteristics of all numerical sequences: peak-to-average ratio, oversampling, and effective number of bits (ENOB). Uncertainty quantification and spectral methods quantify the degree of uncertainty (accuracy) in numerical datasets. The APAX profiler creates a rate-correlation graph indicating a recommended operating point and the fundamental limit, and provides 18 quantitative metrics comparing the original and decoded datasets, displaying input and residual spectra together with a residual histogram. On 24 integer and floating-point HPC datasets taken from climate, multi-physics, and seismic simulations, APAX averaged a 7.95:1 encoding ratio at a Pearson's correlation coefficient of 0.999948 and a spectral margin (input spectrum minimum minus residual spectrum mean) of 24 dB. HPC scientists confirmed that APAX did not change HPC simulation results while reducing DRAM and disk transfers by 8x, accelerating HPC "time to results" by 20% to 50%.
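
Two of the fidelity metrics the abstract names can be sketched directly: the Pearson correlation between original and decoded samples, and the gap between the input spectrum's minimum and the residual spectrum's mean. The "encoder" below is a stand-in uniform quantizer, not APAX, and the signal is synthetic.

```python
import numpy as np

def profile(original, decoded):
    """Two fidelity metrics in the spirit of the abstract: Pearson
    correlation of original vs. decoded samples, and the gap (dB) between
    the input-spectrum minimum and the residual-spectrum mean."""
    residual = original - decoded
    corr = np.corrcoef(original, decoded)[0, 1]
    in_spec = np.abs(np.fft.rfft(original)) ** 2
    res_spec = np.abs(np.fft.rfft(residual)) ** 2
    margin_db = 10 * np.log10(in_spec.min() / res_spec.mean())
    return corr, margin_db

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=4096))   # smooth, oversampled-looking signal
step = 0.05                            # stand-in encoder: uniform quantizer
decoded = np.round(x / step) * step
corr, margin = profile(x, decoded)
print(f"Pearson r = {corr:.6f}, spectral margin = {margin:.1f} dB")
```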
Citations: 4
Practical Parallel Lempel-Ziv Factorization
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.20
Julian Shun, Fuyao Zhao
In the age of big data, the need for efficient data compression algorithms has grown. A widely used data compression method is the Lempel-Ziv-77 (LZ77) method, which is a subroutine in popular compression packages such as gzip and PKZIP. There has been much recent effort on developing practical sequential algorithms for Lempel-Ziv factorization (equivalent to LZ77 compression), but research on practical parallel implementations has been less satisfactory. In this work, we present a simple work-efficient parallel algorithm for Lempel-Ziv factorization. We show theoretically that our algorithm requires linear work and runs in O(log² n) time (randomized) for constant alphabets and O(n^ε) time (ε < 1) for integer alphabets. We present experimental results showing that our algorithm is efficient and achieves good speedup with respect to the best sequential implementations of Lempel-Ziv factorization.
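
For reference, the factorization being parallelized can be stated in a few lines. The sketch below is the textbook quadratic-time sequential definition (each factor is the longest earlier-occurring substring, or a fresh literal), not the paper's linear-work parallel algorithm.

```python
def lz_factorize(s):
    """Lempel-Ziv factorization: each factor is either the longest
    previously-occurring substring starting at i (self-overlap allowed),
    or a single fresh character.  Quadratic reference version."""
    factors, i, n = [], 0, len(s)
    while i < n:
        best_len, best_src = 0, -1
        for j in range(i):                      # candidate earlier start
            l = 0
            while i + l < n and s[j + l] == s[i + l]:
                l += 1
            if l > best_len:
                best_len, best_src = l, j
        if best_len == 0:
            factors.append((None, s[i]))        # literal character
            i += 1
        else:
            factors.append((best_src, best_len))
            i += best_len
    return factors

print(lz_factorize("abababbbb"))
# [(None, 'a'), (None, 'b'), (0, 4), (5, 3)] -- 'abab' copies from 0,
# and the final 'bbb' self-overlaps the 'b' at position 5.
```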
Citations: 19
Faster Compressed Top-k Document Retrieval
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.42
W. Hon, Sharma V. Thankachan, R. Shah, J. Vitter
Let D = {d1, d2, ..., dD} be a given collection of D string documents of total length n. Our task is to index D such that, whenever a pattern P (of length p) and an integer k come as a query, the k documents in which P appears the most times can be listed efficiently. In this paper, we propose a compressed index taking 2|CSA| + D log(n/D) + O(D) + o(n) bits of space, which answers a query with O(t_SA log k log^ε n) time per reported document. This improves the O(t_SA log k log^(1+ε) n) per-document report time of the previously best known index with (asymptotically) the same space requirements [Belazzougui and Navarro, SPIRE 2011]. Here, |CSA| represents the size (in bits) of the compressed suffix array (CSA) of the text obtained by concatenating all documents in D, and t_SA is the time to decode a suffix array value using the CSA.
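
The query semantics, stripped of the succinct-index machinery, can be pinned down with a brute-force sketch: given a pattern P and an integer k, score each document by occurrence count and report the k highest. The compressed data structures achieving the stated space and time bounds are the paper's contribution; this toy is for clarity only.

```python
import heapq

def count_occurrences(doc, pattern):
    """Occurrences of pattern in doc, counting overlaps."""
    count, pos = 0, doc.find(pattern)
    while pos != -1:
        count += 1
        pos = doc.find(pattern, pos + 1)
    return count

def top_k_documents(docs, pattern, k):
    """Report the k documents in which pattern occurs most often,
    as (count, doc_id) pairs, omitting documents with zero matches."""
    scored = ((count_occurrences(d, pattern), i) for i, d in enumerate(docs))
    return heapq.nlargest(k, ((c, i) for c, i in scored if c > 0))

docs = ["abracadabra", "banana bandana", "cadabra cad"]
print(top_k_documents(docs, "ab", k=2))   # [(2, 0), (1, 2)]
```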
Citations: 21
An Optimal Switched Adaptive Prediction Method for Lossless Video Coding
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.63
Dinesh Kumar Chobey, Mohit Vaishnav, A. Tiwari
In this work, we propose a method of lossless video coding in which not only the decoder but also the encoder is simple, unlike other reported methods, which have computationally complex encoders. The low complexity stems mainly from avoiding motion compensation, which is a computationally expensive process. The coefficients of the predictors are obtained through an averaging process, and the resulting set of switched predictors is then used for prediction. The parameters are obtained after a statistical averaging process, so that a proper relationship can be established between the predicted pixel and its context.
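
A minimal sketch of switched prediction follows: a small bank of fixed linear predictors over the causal neighbors (left, top, top-left), with a context rule that switches among them per pixel. The predictor bank and the switching thresholds are illustrative assumptions, not the averaged coefficients derived in the paper.

```python
import numpy as np

# A bank of linear predictors over the causal neighbors W (left),
# N (top), NW (top-left).  The coefficient sets are illustrative
# stand-ins for the averaged coefficients the paper derives.
PREDICTORS = {
    "horizontal": lambda W, N, NW: W,
    "vertical":   lambda W, N, NW: N,
    "planar":     lambda W, N, NW: W + N - NW,   # classic plane fit
}

def predict_frame(img):
    """Switched prediction: per pixel, pick the predictor suggested by
    the local gradient context, then emit the prediction residual."""
    H, Wd = img.shape
    residual = np.zeros_like(img, dtype=np.int32)
    for y in range(1, H):
        for x in range(1, Wd):
            W, N, NW = int(img[y, x-1]), int(img[y-1, x]), int(img[y-1, x-1])
            # Simple context switch: strong horizontal gradient -> copy
            # left, strong vertical gradient -> copy top, else plane fit.
            if abs(N - NW) > abs(W - NW) + 8:
                pred = PREDICTORS["horizontal"](W, N, NW)
            elif abs(W - NW) > abs(N - NW) + 8:
                pred = PREDICTORS["vertical"](W, N, NW)
            else:
                pred = PREDICTORS["planar"](W, N, NW)
            residual[y, x] = int(img[y, x]) - int(pred)
    return residual

img = np.tile(np.arange(16, dtype=np.uint8) * 8, (16, 1))  # horizontal ramp
res = predict_frame(img)
print(int(np.abs(res).sum()))   # 0: the plane-fit branch tracks the ramp
```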
Citations: 1