Pub Date: 2013-03-20 | DOI: 10.1007/978-3-642-38905-4_15
W. Hon, Tsung-Han Ku, R. Shah, Sharma V. Thankachan
{"title":"Space-Efficient Construction Algorithm for the Circular Suffix Tree","authors":"W. Hon, Tsung-Han Ku, R. Shah, Sharma V. Thankachan","doi":"10.1007/978-3-642-38905-4_15","DOIUrl":"https://doi.org/10.1007/978-3-642-38905-4_15","url":null,"abstract":"","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131145861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yongjun Wu, S. Kanumuri, Yifu Zhang, Shyam Sadhwani, G. Sullivan, Henrique S. Malvar
We present a method to convey high-resolution color (4:4:4) video content through a video coding system designed for chroma-subsampled (4:2:0) operation. The method operates by packing the samples of a 4:4:4 frame into two frames that are then encoded as if they were ordinary 4:2:0 content. After being received and decoded, the packing process is reversed to recover a 4:4:4 video frame. As 4:2:0 is the most widely supported digital color format, the described scheme provides an effective way of transporting 4:4:4 content through existing mass-market encoders and decoders, for applications such as coding of screen content. The described packing arrangement is designed such that the spatial correspondence and motion vector displacement relationships between the nominally-luma and nominally-chroma components are preserved. The use of this scheme can be indicated by a metadata tag such as the frame packing arrangement supplemental enhancement information (SEI) message defined in the HEVC and AVC (Rec. ITU-T H.264 | ISO/IEC 14496-10) video coding standards. In this context, the scheme operates in a similar manner to that commonly used for packing the two views of stereoscopic 3D video for compatible encoding. The technique can also be extended to transport 4:2:2 video through 4:2:0 systems or 4:4:4 video through 4:2:2 systems.
{"title":"Tunneling High-Resolution Color Content through 4:2:0 HEVC and AVC Video Coding Systems","authors":"Yongjun Wu, S. Kanumuri, Yifu Zhang, Shyam Sadhwani, G. Sullivan, Henrique S. Malvar","doi":"10.1109/DCC.2013.8","DOIUrl":"https://doi.org/10.1109/DCC.2013.8","url":null,"abstract":"We present a method to convey high-resolution color (4:4:4) video content through a video coding system designed for chroma-sub sampled (4:2:0) operation. The method operates by packing the samples of a 4:4:4 frame into two frames that are then encoded as if they were ordinary 4:2:0 content. After being received and decoded, the packing process is reversed to recover a 4:4:4 video frame. As 4:2:0 is the most widely supported digital color format, the described scheme provides an effective way of transporting 4:4:4 content through existing mass-market encoders and decoders, for applications such as coding of screen content. The described packing arrangement is designed such that the spatial correspondence and motion vector displacement relationships between the nominally-luma and nominally-chroma components are preserved. The use of this scheme can be indicated by a metadata tag such as the frame packing arrangement supplemental enhancement information (SEI) message defined in the HEVC and AVC (Rec. ITU-T H.264 | ISO/IEC 14496-10) video coding standards. In this context the scheme would operate in a similar manner as is commonly used for packing the two views of stereoscopic 3D video for compatible encoding. The technique can also be extended to transport 4:2:2 video through 4:2:0 systems or 4:4:4 video through 4:2:2 systems.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132462743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This work addresses the representation and compression of the angular disparity map, a format that follows the way the human visual system (HVS) perceives depth information. A continued-fraction representation of the angular disparity map enables a state-of-the-art video codec (e.g., HEVC) to compress the data directly while maintaining quality-scalability properties. We observe a non-monotonic behavior of the RD curves when HEVC compression is applied to the angular disparity map directly. This implies that the inter-layer correlations (i.e., between the neighboring integers in (2)) do not follow the models assumed by conventional 2D video codecs. The detailed relationship between perceptual sensitivities and the quantization errors of the proposed representation still requires further derivation. The proposed data format raises several interesting research issues (e.g., the sensitivity to quantization errors of θ and a rate-distortion optimization scheme for θ), which will be topics of our future work. We expect this work to serve as a bridge between the 3D perception and 3D compression research fields.
{"title":"Angular Disparity Map: A Scalable Perceptual-Based Representation of Binocular Disparity","authors":"Yu-Hsun Lin, Ja-Ling Wu","doi":"10.1109/DCC.2013.85","DOIUrl":"https://doi.org/10.1109/DCC.2013.85","url":null,"abstract":"This work addresses the data representation and the compression issues of angular disparity map following the way of HVS to perceive depth information. The continued fraction is utilized to represent the angular disparity map which enables the use of the state-of-the-art video codec (e.g. HEVC) to compress the data directly and maintains quality scalability properties. We observe that there is a non-monotonic phenomenon of the RD curves by applying HEVC compression to angular disparity map directly. This implies that the correlations among inter-layer (i.e., the neighboring integers in (2)) do not follow the traditional models of normal 2D video codecs. Of course, the detailed relationship between the sensitivities and the quantization errors of the newly proposed representation needs in depth further derivations. There are many interesting research issues may be introduced by the proposed data format (e.g., the sensitivities to quantization errors of θ and the rate-distortion optimization scheme for θ) which will, of course, be the research topics of our future work. We expect this work can be a bridge to connect the 3D perception and the 3D compression research fields.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121279970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider a network quantizer design setting where agents must balance fidelity in representing their local source distributions against their ability to successfully communicate with other connected agents. By casting the problem as a network game, we show existence of Nash equilibrium quantizer designs. For any agent, under Nash equilibrium, the word representing a given partition region is the conditional expectation of the mixture of local and social source probability distributions within the region. Further, the network may converge to equilibrium through a distributed version of the Lloyd-Max algorithm. In contrast to traditional results in the evolution of language, we find several vocabularies may coexist in the Nash equilibrium, with each individual having exactly one of these vocabularies. The overlap between vocabularies is high for individuals that communicate frequently and have similar local sources. Finally, we argue error in translation along a chain of communication does not grow if and only if the chain consists of agents with shared vocabulary.
{"title":"Quantization Games on Networks","authors":"Ankur Mani, L. Varshney, A. Pentland","doi":"10.1109/DCC.2013.37","DOIUrl":"https://doi.org/10.1109/DCC.2013.37","url":null,"abstract":"We consider a network quantizer design setting where agents must balance fidelity in representing their local source distributions against their ability to successfully communicate with other connected agents. By casting the problem as a network game, we show existence of Nash equilibrium quantizer designs. For any agent, under Nash equilibrium, the word representing a given partition region is the conditional expectation of the mixture of local and social source probability distributions within the region. Further, the network may converge to equilibrium through a distributed version of the Lloyd-Max algorithm. In contrast to traditional results in the evolution of language, we find several vocabularies may coexist in the Nash equilibrium, with each individual having exactly one of these vocabularies. The overlap between vocabularies is high for individuals that communicate frequently and have similar local sources. Finally, we argue error in translation along a chain of communication does not grow if and only if the chain consists of agents with shared vocabulary.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123443946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Recently, a low-delay and high-efficiency hierarchical prediction structure (HPS) has been proposed for the forthcoming HEVC standard. Frames and coding units (CUs) at different HPS positions differ in their importance for predicting subsequent frames and CUs. This paper first analyzes which frames and CUs should be quantized less coarsely. Based on this analysis, we propose a Hierarchical-and-Adaptive BIT-allocation method with Selective background prediction (HABITS) to improve the coding performance of HEVC. Extensive experiments on HM8.0 show that HABITS saves 13.3% and 35.5% of the total bit rate for eight HEVC conference videos and eight commonly used surveillance videos, respectively. Even for the normal videos in HEVC's Class B and C, there is still a 2.2% bit-rate saving.
{"title":"Hierarchical-and-Adaptive Bit-Allocation with Selective Background Prediction for High Efficiency Video Coding (HEVC)","authors":"Xianguo Zhang, Tiejun Huang, Yonghong Tian, Wen Gao","doi":"10.1109/DCC.2013.114","DOIUrl":"https://doi.org/10.1109/DCC.2013.114","url":null,"abstract":"Summary form only given. Recently, a low-delay and high-efficiency hierarchical prediction structure (HPS) has been proposed for the forthcoming HEVC. Actually, frames and coding units (CUs) at different HPS positions have different importance to predict following frames and CUs. This paper firstly analyzes what frames and CUs should be quantified less. Based on the analysis, we propose a Hierarchical-and-Adaptive BIT-allocation method with Selective background prediction (HABITS) to optimize the video performance of HEVC. Extensive experiments on HM8.0 show that, HABITS saves 13.3% and 35.5% of the total bit rate for eight HEVC conference videos and eight common used surveillance videos. Even for the normal videos in HEVC's Class B and C, there is still 2.2% bit-saving.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124786406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As the next-generation video coding standard, High Efficiency Video Coding (HEVC) is expected to be more complex than H.264/AVC. Many-core platforms are good candidates for speeding up HEVC, provided that HEVC can expose sufficient parallelism. The local parallel method (LPM) is the most promising parallel proposal for HEVC motion estimation (ME), but it cannot provide sufficient parallelism for many-core platforms. On the premise of keeping the data dependencies and coding efficiency the same as the LPM, we propose a highly parallel framework to exploit the implicit parallelism. Compared with the well-known LPM, experiments conducted on a 64-core system show that our proposed method achieves average speedups of more than 10x and 13x for 1920×1080 and 2560×1600 video sequences, respectively.
{"title":"Highly Parallel Framework for HEVC Motion Estimation on Many-Core Platform","authors":"C. Yan, Yongdong Zhang, Feng Dai, L. Li","doi":"10.1109/DCC.2013.14","DOIUrl":"https://doi.org/10.1109/DCC.2013.14","url":null,"abstract":"As the next generation standard of video coding, High Efficiency Video Coding (HEVC) is expected to be more complex than H.264/AVC. Many-core platforms are good candidates for speeding up HEVC in the case that HEVC can provide sufficient parallelism. The local parallel method (LPM) is the most promising parallel proposal for HEVC motion estimation (ME), but it can't provide sufficient parallelism for many-core platforms. On the premise of keeping the data dependencies and coding efficiency the same as the LPM, we propose a highly parallel framework to exploit the implicit parallelism. Compared with the well-known LPM, experiments conducted on a 64-core system show that our proposed method achieves averagely more than 10 and 13 times speedup for 1920×1080 and 2560×1600 video sequences, respectively.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125317667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Numerical computations have accelerated significantly since 2005 thanks to two complementary, silicon-enabled trends: multi-core processing and single-instruction, multiple-data (SIMD) accelerators. Unfortunately, due to fundamental limitations of physics, these two trends could not be accompanied by a corresponding increase in memory, storage, and I/O bandwidth. High-performance computing (HPC) is the proverbial "canary in the coal mine" of multi-core processing: as HPC hits this memory wall, mainstream multi-core applications will likely encounter a similar limit within a few years. We describe the computationally efficient and adaptive APplication AXceleration (APAX) numerical encoding method, which reduces the memory wall for integer and floating-point operands. APAX achieves encoding rates between 3:1 and 10:1 without changing the dataset's statistical or spectral characteristics. APAX encoding takes advantage of three characteristics of all numerical sequences: peak-to-average ratio, oversampling, and effective number of bits (ENOB). Uncertainty quantification and spectral methods quantify the degree of uncertainty (accuracy) in numerical datasets. The APAX profiler creates a rate-correlation graph with a recommended operating point and fundamental limit for each signal, provides 18 quantitative metrics comparing the original and decoded datasets, and displays the input and residual spectra along with a residual histogram. On 24 integer and floating-point HPC datasets taken from climate, multi-physics, and seismic simulations, APAX averaged a 7.95:1 encoding ratio at a Pearson correlation coefficient of 0.999948 and a spectral margin (input spectrum minimum minus residual spectrum mean) of 24 dB. HPC scientists confirmed that APAX did not change HPC simulation results, while reducing DRAM and disk transfers by 8x and accelerating HPC "time to results" by 20%.
{"title":"Universal Numerical Encoder and Profiler Reduces Computing's Memory Wall with Software, FPGA, and SoC Implementations","authors":"Al Wegener","doi":"10.1109/DCC.2013.107","DOIUrl":"https://doi.org/10.1109/DCC.2013.107","url":null,"abstract":"Summary form only given. Numerical computations have accelerated significantly since 2005 thanks to two complementary, silicon-enabled trends: multi-core processing and single instruction, multiple data (SIMD) accelerators. Unfortunately, due to fundamental limitations of physics, these two trends could not be accompanied by a corresponding increase in memory, storage, and I/O bandwidth. High-performance computing (HPC) is the proverbial “canary in the coal mine” of multi-core processing. When HPC hits a multi-core will likely encounter a similar limit in few years. We describe the computationally efficient (Fig 1b) and adaptive APplication AXceleration (APAX) numerical encoding method to reduce the memory wall for integers and floating-point operands. APAX achieves encoding rates between 3:1 and 10:1 without changing the dataset's statistical or spectral characteristics. APAX encoding takes advantage of three characteristics of all numerical sequences: peak-to-average ratio, oversampling, and effective number of bits (ENOB). Uncertainty quantification and spectral methods quantify the degree of uncertainty (accuracy) in numerical datasets. APAX profiler creates a rate-correlation graph with recommended operating signals, and fundamental limit, consumer point, provides 18 quantitative metrics comparing the original and decoded displays input and residual spectra with a residual histogram. On 24 integer and floating-point HPC datasets taken from climate, multi-physics, and seismic simulations, APAX averaged 7.95:1 encoding ratio at a Pearson's correlation coefficient of 0. 999948, and a spectral margin (input spectrum min - residual spectrum mean) of 24 dB. HPC scientists confirmed that APAX did not change HPC simulation results DRAM and disk transfers by 8x, accelerating HPC “time to results” by 20% while reducing to 50%.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130621638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the age of big data, the need for efficient data compression algorithms has grown. A widely used data compression method is the Lempel-Ziv-77 (LZ77) method, being a subroutine in popular compression packages such as gzip and PKZIP. There has been a lot of recent effort on developing practical sequential algorithms for Lempel-Ziv factorization (equivalent to LZ77 compression), but research in practical parallel implementations has been less satisfactory. In this work, we present a simple work-efficient parallel algorithm for Lempel-Ziv factorization. We show theoretically that our algorithm requires linear work and runs in O(log^2 n) time (randomized) for constant alphabets and O(n^ε) time (ε < 1) for integer alphabets. We present experimental results showing that our algorithm is efficient and achieves good speedup with respect to the best sequential implementations of Lempel-Ziv factorization.
{"title":"Practical Parallel Lempel-Ziv Factorization","authors":"Julian Shun, Fuyao Zhao","doi":"10.1109/DCC.2013.20","DOIUrl":"https://doi.org/10.1109/DCC.2013.20","url":null,"abstract":"In the age of big data, the need for efficient data compression algorithms has grown. A widely used data compression method is the Lempel-Ziv-77 (LZ77) method, being a subroutine in popular compression packages such as gzip and PKZIP. There has been a lot of recent effort on developing practical sequential algorithms for Lempel-Ziv factorization (equivalent to LZ77 compression), but research in practical parallel implementations has been less satisfactory. In this work, we present a simple work-efficient parallel algorithm for Lempel-Ziv factorization. We show theoretically that our algorithm requires linear work and runs in O(log2 n) time (randomized) for constant alphabets and O(nϵ) time (ϵ <; 1) for integer alphabets. We present experimental results showing that our algorithm is efficient and achieves good speedup with respect to the best sequential implementations of Lempel-Ziv factorization.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130552382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Let D = {d1, d2, ..., dD} be a given collection of D string documents of total length n. Our task is to index D such that, whenever a pattern P (of length p) and an integer k come as a query, the k documents in which P appears the largest number of times can be listed efficiently. In this paper, we propose a compressed index taking 2|CSA| + D log(n/D) + O(D) + o(n) bits of space, which answers a query in O(t_SA log k log^ε n) time per reported document. This improves the O(t_SA log k log^(1+ε) n) per-document report time of the previously best-known index with (asymptotically) the same space requirements [Belazzougui and Navarro, SPIRE 2011]. Here, |CSA| is the size (in bits) of the compressed suffix array (CSA) of the text obtained by concatenating all documents in D, and t_SA is the time for decoding a suffix array value using the CSA.
{"title":"Faster Compressed Top-k Document Retrieval","authors":"W. Hon, Sharma V. Thankachan, R. Shah, J. Vitter","doi":"10.1109/DCC.2013.42","DOIUrl":"https://doi.org/10.1109/DCC.2013.42","url":null,"abstract":"Let D = {d<sub>1</sub>, d<sub>2</sub>,...d<sub>D</sub>} be a given collection of D string documents of total length n, our task is to index D, such that whenever a pattern P (of length p) and an integer k come as a query, those k documents in which P appears the most number of times can be listed efficiently. In this paper, we propose a compressed index taking 2|CSA| + D logn/D + O(D) + o(n) bits of space, which answers a query with O(t<sub>sa</sub> log k log<sup>ϵ</sup> n) per document report time. This improves the O(t<sub>sa</sub> log k log<sup>1+ϵ</sup> n) per document report time of the previously best-known index with (asymptotically) the same space requirements [Belazzougui and Navarro, SPIRE 2011]. Here, |CSA| represents the size (in bits) of the compressed suffix array (CSA) of the text obtained by concatenating all documents in V, and t<sub>sa</sub> is the time for decoding a suffix array value using the CSA.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130569515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work, we propose a lossless video coding method in which not only the decoder but also the encoder is simple, unlike other reported methods that have computationally complex encoders. The low complexity stems mainly from avoiding motion compensation, which is a computationally expensive process. The predictor coefficients are obtained through an averaging process, and the resulting set of switched predictors is then used for prediction. Because the parameters are derived from a statistical averaging process, a proper relationship can be established between the predicted pixel and its context.
{"title":"An Optimal Switched Adaptive Prediction Method for Lossless Video Coding","authors":"Dinesh Kumar Chobey, Mohit Vaishnav, A. Tiwari","doi":"10.1109/DCC.2013.63","DOIUrl":"https://doi.org/10.1109/DCC.2013.63","url":null,"abstract":"In this work, we propose a method of lossless video coding which not has only the decoder simple but encoder is also simple, unlike other reported methods which has computationally complex encoder. The computation is mainly due to not using motion compensation method, which is computationally complex process. The coefficient of the predictors are obtained based on an averaging process and then the thus obtained set of switched predictors is used for prediction. The parameters have been obtained after undergoing a statistical process of averaging so that proper relationship can be established between the predicted pixel and their context.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115805162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}