Summary form only given. In this paper, Robust Adaptive Image Coding (RAIC) is proposed to increase compression performance for frame memory reduction in LCD overdrive. RAIC contains two techniques to improve the quality of decompressed images. The first is Min-Max (of Block) Adaptive Uniform Quantization Coding (MMAUQC), which improves the quality of decompressed image blocks. The second is Multiple Adaptive Quantization Coding (MAQC), which combines a Range-based Bit Distribution Technique (RBBDT) with MMAUQC. This combination provides multiple levels of adaptivity, improving image quality while preserving the fixed-word-length compression feature. The flexible bit distribution of RBBDT also allows the search to be extended to other optimization models. Experimental results show that RAIC outperforms other coding methods used in the same type of application.
{"title":"Robust Adaptive Image Coding for Frame Memory Reduction in LCD Overdrive","authors":"Tai Nguyen Huu, H. Thi, H. Ban","doi":"10.1109/DCC.2013.77","DOIUrl":"https://doi.org/10.1109/DCC.2013.77","url":null,"abstract":"Summary form only given. In this paper, Robust Adaptive Image Coding (RAIC) is proposed to increase compression performance for Frame Memory Reduction in LCD Overdrive. The RAIC contains two techniques to improve the quality of decompressed images. The first is a Min-Max (of Block) Adaptive Uniform Quantization Coding (MMAUQC) to improve the quality of decompressed block images. The second is a Multiple Adaptive Quantization Coding (MAQC), which is combined of Range-based Bit Distribution Technique (RBBDT) and MMAUQC. This combination gives multiple adaptive ability that improving the image quality while preserving fixed word length compression feature. RAIC shows the flexible ability in bit distribution of RBBDT, which can expand the search capability to other optimize models. Experimental results show that, compared with other coding methods used in the same type of application, RAIC has outperforming features.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114250814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
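The min-max idea behind MMAUQC can be sketched in a few lines: each block is quantized uniformly between its own minimum and maximum, so the step size adapts to the block's dynamic range. This is an illustrative sketch under our own naming, not the paper's implementation:

```python
def mmauqc_encode(block, bits):
    """Quantize each pixel uniformly between the block's min and max.

    Sketch of min-max adaptive uniform quantization; function names and
    interface are our assumptions, not the paper's API.
    """
    lo, hi = min(block), max(block)
    levels = (1 << bits) - 1
    if hi == lo:                     # flat block: every index decodes to lo
        return lo, hi, [0] * len(block)
    step = (hi - lo) / levels
    codes = [round((p - lo) / step) for p in block]
    return lo, hi, codes

def mmauqc_decode(lo, hi, codes, bits):
    levels = (1 << bits) - 1
    if hi == lo:
        return [lo] * len(codes)
    step = (hi - lo) / levels
    return [lo + c * step for c in codes]
```

Because the quantizer spans exactly [min, max], the block's extreme values are reconstructed exactly and every other pixel lands within half a step of its original value, which is the source of the per-block quality gain.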
In this paper, we provide an overview of the DCT/DST transform scheme for intra coding in the HEVC standard. A unique feature of this scheme is the use of DST-VII transforms in addition to DCT-II. We further derive factorizations for fast joint computation of DCT-II and DST-VII transforms of several sizes. Simulation results for the DCT/DST scheme in the HM reference software for HEVC are also provided together with a discussion on computational complexity.
{"title":"Fast Transforms for Intra-prediction-based Image and Video Coding","authors":"A. Saxena, Felix C. A. Fernandes, Y. Reznik","doi":"10.1109/DCC.2013.9","DOIUrl":"https://doi.org/10.1109/DCC.2013.9","url":null,"abstract":"In this paper, we provide an overview of the DCT/DST transform scheme for intra coding in the HEVC standard. A unique feature of this scheme is the use of DST-VII transforms in addition to DCT-II. We further derive factorizations for fast joint computation of DCT-II and DST-VII transforms of several sizes. Simulation results for the DCT/DST scheme in the HM reference software for HEVC are also provided together with a discussion on computational complexity.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115586826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
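The DST-VII basis used alongside DCT-II has a simple closed form. A small sketch (the helper name is ours) constructs the orthonormal N-point DST-VII matrix, of which HEVC's integer 4x4 intra transform is a scaled approximation, and shows that its transpose inverts it:

```python
import numpy as np

def dst7_matrix(N):
    # Orthonormal DST-VII basis: rows index frequency k, columns sample n.
    # Entry (k, n) = 2/sqrt(2N+1) * sin(pi*(2n+1)*(k+1)/(2N+1)).
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    return 2.0 / np.sqrt(2 * N + 1) * np.sin(np.pi * (2 * n + 1) * (k + 1) / (2 * N + 1))

S = dst7_matrix(4)
residual = np.array([1.0, 2.0, 3.0, 4.0])  # ramp-like residual, typical after intra prediction
coeffs = S @ residual                      # forward transform
```

The DST-VII basis functions start small at the predicted boundary and grow away from it, matching the statistics of intra-prediction residuals, which is why it outperforms DCT-II for these blocks.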
This work describes a perceptual method (pGBbBShift) for coding Region of Interest (ROI) areas. It introduces perceptual criteria to the GBbBShift method when bit planes of ROI and background areas are shifted. This additional feature is intended to balance the perceptual importance of some coefficients regardless of their numerical importance. Perceptual criteria are applied using CIWaM, a low-level computational model that reproduces color perception in the Human Visual System. Results show that there is no perceptual difference at the ROI between the MaxShift method and pGBbBShift, while the perceptual quality of the entire image is improved when using pGBbBShift. Furthermore, when the pGBbBShift method is applied to the Hi-SET coder and compared against the MaxShift method applied to both the JPEG2000 standard and Hi-SET, the images coded by the pGBbBShift-Hi-SET combination obtain the best results when overall perceptual image quality is estimated. pGBbBShift is a generalized algorithm that can be applied to other wavelet-based image compression algorithms such as JPEG2000, SPIHT, or SPECK.
{"title":"pGBbBShift: Method for Introducing Perceptual Criteria to Region of Interest Coding","authors":"J. Moreno, Beatriz Jaime, C. Fernandez-Maloigne","doi":"10.1109/DCC.2013.91","DOIUrl":"https://doi.org/10.1109/DCC.2013.91","url":null,"abstract":"This work describes a perceptual method (pGBbBShift) for coding of Region of Interest (ROI) areas. It introduces perceptual criteria to the pGBbBShift method when bit planes of ROI and background areas are shifted. This additional feature is intended for balancing perceptual importance of some coefficients regardless their numerical importance. Perceptual criteria are applied using the CIWaM, which is a low-level computational model that reproduces color perception in the Human Visual System. Results show that there is no perceptual difference at ROI between the MaxShift method and pGBbBShift and, at the same time, perceptual quality of the entire image is improved when using pGBbBShift. Furthermore, when pGBbBShift method is applied to Hi-SET coder and it is compared against MaxShift method applied to both the JPEG2000 standard and the Hi-SET, the images coded by the combination pGBbBShift-Hi-SET get the best results when the overall perceptual image quality is estimated. The pGBbBShift method is a generalized algorithm that can be applied to other Wavelet based image compression algorithms such as JPEG2000, SPIHT or SPECK.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114934094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
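The MaxShift baseline that pGBbBShift is compared against rests on a simple scaling trick: ROI coefficients are shifted up by enough bitplanes that every one of them outranks the largest background magnitude, so the decoder can separate the two sets by value alone. A simplified integer sketch of that principle (not the paper's perceptual variant):

```python
def maxshift_encode(coeffs, roi_mask, background_max):
    # Pick s so every shifted ROI magnitude clears the largest background
    # magnitude, as in the JPEG2000 MaxShift method (simplified to integers).
    s = background_max.bit_length()
    shifted = [c << s if in_roi else c for c, in_roi in zip(coeffs, roi_mask)]
    return s, shifted

def maxshift_decode(s, shifted):
    # Magnitudes at or above 2**s can only be (shifted) ROI coefficients,
    # so no ROI mask needs to be transmitted.
    return [c >> s if c >= (1 << s) else c for c in shifted]
```

Bitplane-by-bitplane methods such as GBbBShift interleave the shifted planes instead of strictly prioritizing the ROI; pGBbBShift additionally weights that interleaving by perceptual importance.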
We describe the design of a perceptual preprocessing filter for improving the effectiveness of video coding. This filter uses known parameters of the reproduction setup, such as viewing distance, pixel density, and contrast ratio of the screen, as well as a contrast sensitivity model of human vision, to identify spatial oscillations that are invisible. By removing such oscillations, the filter simplifies the video content, leading to more efficient encoding without causing any visible alteration of the content. Through experiments, we demonstrate that our filter can yield significant bit rate savings compared to conventional encoding methods that are not tailored to specific viewing conditions.
{"title":"Improving the Efficiency of Video Coding by Using Perceptual Preprocessing Filter","authors":"R. Vanam, Y. Reznik","doi":"10.1109/DCC.2013.103","DOIUrl":"https://doi.org/10.1109/DCC.2013.103","url":null,"abstract":"We describe the design of a perceptual preprocessing filter for improving the effectiveness of video coding. This filter uses known parameters of the reproduction setup, such as viewing distance, pixel density, and contrast ratio of the screen, as well as a contrast sensitivity model of human vision to identify spatial oscillations that are invisible. By removing such oscillations the filter simplifies the video content, therefore leading to more efficient encoding without causing any visible alterations of the content. Through experiments, we demonstrate the use of our filter can yield significant bit rate savings compared to conventional encoding methods that are not tailored to specific viewing conditions.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130226131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
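The core geometry here is converting display parameters into cycles per degree of visual angle: once the viewing distance and pixel density fix how many pixels one degree subtends, any spatial frequency above the eye's resolving limit can be discarded. The sketch below stands in for the paper's filter with a hard acuity cutoff (assumed at 30 cycles/degree) rather than a full contrast-sensitivity model, and all names are ours:

```python
import math
import numpy as np

def pixels_per_degree(viewing_distance_in, ppi):
    # Pixels subtended by one degree of visual angle at the given distance.
    return 2 * viewing_distance_in * math.tan(math.radians(0.5)) * ppi

def lowpass_invisible(frame, viewing_distance_in, ppi, acuity_cpd=30.0):
    """Zero out spatial frequencies above an assumed visual-acuity limit.

    Simplified stand-in for a CSF-driven preprocessing filter: a hard
    cutoff at acuity_cpd replaces the contrast-sensitivity model.
    """
    nyquist_cpd = pixels_per_degree(viewing_distance_in, ppi) / 2.0
    cutoff = min(1.0, acuity_cpd / nyquist_cpd)   # fraction of Nyquist to keep
    F = np.fft.fftshift(np.fft.fft2(frame))
    h, w = frame.shape
    yy, xx = np.mgrid[-h // 2:(h + 1) // 2, -w // 2:(w + 1) // 2]
    r = np.maximum(np.abs(yy) / (h / 2), np.abs(xx) / (w / 2))  # 1.0 = Nyquist
    F[r > cutoff] = 0                              # frequencies assumed invisible
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```

At a typical desktop distance a high-ppi screen presents frequencies well beyond 30 cpd, so the filter removes content; on a low-ppi screen the Nyquist limit is already below acuity and the frame passes through untouched.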
The aim of this work is to define a no-reference perceptual image quality estimator applying the perceptual concepts of the Chromatic Induction Model. The approach consists of comparing the received image, presumably degraded, against perceptual versions (at different distances) of this image obtained by means of a Model of Chromatic Induction, which uses some properties of the human visual system. We also compare our model with PSNR, a classical estimator in image quality assessment. Results are highly correlated with those obtained by PSNR (99.32% for image Lenna and 96.95% for image Baboon), but this proposal does not need an original or reference image in order to estimate the quality of the degraded image.
{"title":"NRPSNR: No-Reference Peak Signal-to-Noise Ratio for JPEG2000","authors":"J. Moreno, Beatriz Jaime, C. Fernandez-Maloigne","doi":"10.1109/DCC.2013.117","DOIUrl":"https://doi.org/10.1109/DCC.2013.117","url":null,"abstract":"The aim of this work is to define a no-referenced perceptual image quality estimator applying the perceptual concepts of the Chromatic Induction Model. The approach consists in comparing the received image, presumably degraded, against the perceptual versions (different distances) of this image degraded by means of a Model of Chromatic Induction, which uses some of the human visual system properties. Also we compare our model with a original estimator in image quality assessment, PSNR. Results are highly correlated with the ones obtained by PSNR for image (99.32% Lenna and 96.95% for image Baboon), but this proposal does not need an original image or a reference one in order to give an estimation of the quality of the degraded image.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130946355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we consider robust source coding in closed-loop systems. In particular, we consider a (possibly) unstable LTI system, which is to be stabilized via a network. The network has random delays and erasures on the data-rate limited (digital) forward channel between the encoder (controller) and the decoder (plant). The feedback channel from the decoder to the encoder is assumed noiseless. Since the forward channel is digital, we need to employ quantization. We combine two techniques to enhance the reliability of the system. First, in order to guarantee that the system remains stable during packet dropouts and delays, we transmit quantized control vectors containing current control values for the decoder as well as future predicted control values. Second, we utilize multiple description coding based on forward error correction codes to further aid in the robustness towards packet erasures. In particular, we transmit M redundant packets, which are constructed such that when receiving any J packets, the current control signal as well as J-1 future control signals can be reliably reconstructed at the decoder. We prove stability subject to quantization constraints, random dropouts, and delays by showing that the system can be cast as a Markov jump linear system.
{"title":"Multiple Description Coding for Closed Loop Systems over Erasure Channels","authors":"Jan Østergaard, D. Quevedo","doi":"10.1109/DCC.2013.39","DOIUrl":"https://doi.org/10.1109/DCC.2013.39","url":null,"abstract":"In this paper, we consider robust source coding in closed-loop systems. In particular, we consider a (possibly) unstable LTI system, which is to be stabilized via a network. The network has random delays and erasures on the data-rate limited (digital) forward channel between the encoder (controller) and the decoder (plant). The feedback channel from the decoder to the encoder is assumed noiseless. Since the forward channel is digital, we need to employ quantization. We combine two techniques to enhance the reliability of the system. First, in order to guarantee that the system remains stable during packet dropouts and delays, we transmit quantized control vectors containing current control values for the decoder as well as future predicted control values. Second, we utilize multiple description coding based on forward error correction codes to further aid in the robustness towards packet erasures. In particular, we transmit M redundant packets, which are constructed such that when receiving any J packets, the current control signal as well as J-1 future control signals can be reliably reconstructed at the decoder. We prove stability subject to quantization constraints, random dropouts, and delays by showing that the system can be cast as a Markov jump linear system.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"232 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122674598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
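The "any J of M packets suffice" property is the defining feature of an MDS erasure code. A toy sketch of that idea (a Reed-Solomon-style polynomial construction over the rationals, which may differ from the paper's actual code design): the J control values become coefficients of a degree-(J-1) polynomial, each packet carries one evaluation, and any J received evaluations determine the polynomial uniquely.

```python
from fractions import Fraction

def mds_encode(values, M):
    """Spread J control values over M packets so any J packets suffice.

    Packet i carries (i, p(i)) where p has the J values as coefficients.
    """
    return [(i, sum(Fraction(v) * Fraction(i) ** k for k, v in enumerate(values)))
            for i in range(M)]

def mds_decode(packets):
    # Any J points (x, y) give a Vandermonde system V c = y with distinct
    # nodes, solved here exactly by Gauss-Jordan elimination over Fractions.
    J = len(packets)
    A = [[Fraction(x) ** k for k in range(J)] + [Fraction(y)] for x, y in packets]
    for col in range(J):
        piv = next(r for r in range(col, J) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [a / A[col][col] for a in A[col]]
        for r in range(J):
            if r != col and A[r][col] != 0:
                A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[col])]
    return [A[r][J] for r in range(J)]
```

In the closed-loop setting described above, the J recovered values would be the current control signal plus J-1 predicted future ones, so any J surviving packets keep the plant supplied with inputs.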
With consumer-grade 3D displays becoming widely available, stereoscopic 3D imaging has received increased attention over the last few years. Since a left eye and a right eye image are contained in a stereo pair, stereoscopic imaging requires double the amount of data compared to 2D images. Therefore, efficient data compression techniques are especially critical in these applications. In this paper, we propose a method to determine visibility thresholds (VTs) for 3D displays with active shutter glasses. These VTs are then used in a novel visually lossless compression method for monochrome stereoscopic 3D images.
{"title":"Visually Lossless Compression of Stereo Images","authors":"Hsin-Chang Feng, M. Marcellin, A. Bilgin","doi":"10.1109/DCC.2013.71","DOIUrl":"https://doi.org/10.1109/DCC.2013.71","url":null,"abstract":"With consumer-grade 3D displays becoming widely available, stereoscopic 3D imaging has received increased attention over the last few years. Since a left eye and a right eye image are contained in a stereo pair, stereoscopic imaging requires double the amount of data compared to 2D images. Therefore, efficient data compression techniques are especially critical in these applications. In this paper, we propose a method to determine visibility thresholds (VTs) for 3D displays with active shutter glasses. These VTs are then used in a novel visually lossless compression method for monochrome stereoscopic 3D images.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125655833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatial and spectral decorrelation are necessary for hyperspectral data compression. The two-dimensional wavelet transform (for the spatial dimensions) and the Karhunen-Loève transform (KLT, for the spectral dimension) have been employed successfully for hyperspectral data compression. In this paper, a hyperspectral asymmetrical data compression scheme is proposed as an improvement of the low-complexity version of the Karhunen-Loève transform, following the energy distribution in the wavelet transform domain. In the improved low-complexity KLT, the covariance matrix is computed on spectral data extracted from the region of high energy distribution. The new method highlights the physical difference between the spatial and spectral characteristics of hyperspectral data. Experimental results show that the new method significantly improves not only computation time but also compression performance.
{"title":"Low Complexity Improvement for Hyperspectral Asymmetrical Data Compression","authors":"Simplice A. Alissou, Ye Zhang, Hao Chen, Meng Yan","doi":"10.1109/DCC.2013.56","DOIUrl":"https://doi.org/10.1109/DCC.2013.56","url":null,"abstract":"Spatial and spectral decor relations are necessary for hyper spectral data compression. The two dimensional wavelet transform based spatial transform and the Karhunen-Loève transform (KLT) based spectral transform have been employed successfully for hyper spectral data compression. In this paper a hyper spectral asymmetrical data compression is proposed as an improvement of the low complexity version of the Karhunen-Loève transform following the energy distribution in the wavelet transform domain. In the improved low complexity KLT, the computation processing of the covariance matrix is carried out on a spectral data which is extracted from the region of high energy distribution. The new method highlights the physical difference between the spatial and spectral characteristics of hyper spectral data. Experimental results show that the new method has improved significantly, not only the computation time but also has a good performance for the compressed data.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126376940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
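The complexity saving comes from estimating the spectral covariance on a subset of pixels rather than the whole scene. A sketch of that idea, with the selection rule simplified to raw spectral energy (the paper drives it from the wavelet-domain energy distribution, and all names here are ours):

```python
import numpy as np

def subset_klt(cube, keep_frac=0.1):
    """Spectral KLT whose covariance is estimated from high-energy pixels only.

    cube: (height, width, bands) hyperspectral array. Returns transformed
    coefficients, the subset mean, and the orthonormal KLT basis.
    """
    h, w, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    energy = (X ** 2).sum(axis=1)
    k = max(bands, int(len(X) * keep_frac))        # enough samples for a stable estimate
    subset = X[np.argsort(energy)[-k:]]            # highest-energy spectral vectors
    mean = subset.mean(axis=0)
    cov = np.cov(subset - mean, rowvar=False)      # covariance from the subset only
    _, eigvecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
    basis = eigvecs[:, ::-1]                       # principal components first
    return (X - mean) @ basis, mean, basis
```

Because the basis is orthonormal, reconstruction is simply `coeffs @ basis.T + mean`; only the covariance estimation cost shrinks, not the transform's invertibility.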
Summary form only given. Spatial in-loop filters are a well-established tool for improving the compression performance of today's video codecs. Temporal denoising and deblocking filters have recently also received attention because of their ability to stabilize pictures and reduce flickering artifacts. One such filter, the previously introduced Quadtree-based Temporal Trajectory Filter, can produce good results provided that the associated quadtree is sufficiently detailed. In this paper, a novel, generally applicable scheme to compress such quadtree information is presented. In addition, the performance of the filter within the current HEVC test model HM 8.0 is investigated.
{"title":"Efficient Quadtree Compression for Temporal Trajectory Filtering","authors":"Marko Esche, M. Tok, A. Glantz, A. Krutz, T. Sikora","doi":"10.1109/DCC.2013.118","DOIUrl":"https://doi.org/10.1109/DCC.2013.118","url":null,"abstract":"Summary form only given. Spatial in loop filters are a well established tool to improve the compression performance of today's video codecs. Temporal denoising and deblocking filters have recently also received some attention, because of their ability to stabilize pictures and to reduce flickering artifacts. One such filter, the previously introduced Quad tree-based Temporal Trajectory Filter, can produce good results, provided that the associated quad tree is sufficiently detailed. In this paper a novel, generally applicable scheme to compress such quad tree information is presented. In addition, the performance of the filter within the current HEVC test model HM 8.0 is investigated.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132044012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
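A generic quadtree coder in the spirit described above (not the paper's exact scheme) makes the cost structure concrete: a node whose region is uniform emits a 0 plus the leaf value, otherwise it emits a 1 and recurses into its four quadrants, so smooth maps cost far fewer bits than their raw size.

```python
def encode_quadtree(mask, x0, y0, size, bits):
    # Serialize a square binary map as split flags plus leaf values (DFS order).
    region = [mask[y][x] for y in range(y0, y0 + size) for x in range(x0, x0 + size)]
    if all(region) or not any(region) or size == 1:
        bits.append(0)                       # leaf: uniform region (or single pixel)
        bits.append(1 if region[0] else 0)
    else:
        bits.append(1)                       # split into four quadrants
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                encode_quadtree(mask, x0 + dx, y0 + dy, half, bits)
    return bits

def decode_quadtree(bits, x0, y0, size, out, pos=0):
    # Mirror of the encoder; returns the next unread bit position.
    if bits[pos] == 0:
        val = bits[pos + 1]
        for y in range(y0, y0 + size):
            for x in range(x0, x0 + size):
                out[y][x] = val
        return pos + 2
    pos += 1
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            pos = decode_quadtree(bits, x0 + dx, y0 + dy, half, out, pos)
    return pos
```

A 4x4 map with a single active quadrant serializes to 9 bits instead of 16, and the saving grows quickly with block size for the piecewise-uniform maps a trajectory filter produces.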
Summary form only given. Many-core platforms are good candidates for speeding up High Efficiency Video Coding (HEVC), provided that HEVC can expose sufficient parallelism. The most promising proposal for parallelizing the HEVC deblocking filter (DF), the order-changed parallel method (OCPM), changes the order of filtering and incurs considerable loss in coding efficiency, and its parallelism still has room for improvement. In this paper, we propose an efficient parallel framework for the HEVC DF that exploits the implicit parallelism while keeping the filtering order of the DF unchanged. Compared with the well-known OCPM, experiments conducted on a 64-core system show that our proposed method saves 37.18% and 37.93% of DF time on average under different quantization parameters (QPs). Meanwhile, our proposed method improves coding efficiency, achieving average BD-rate reductions of 0.09%, 0.11%, and 0.12% for the Y, U, and V components, respectively.
{"title":"Efficient Parallel Framework for HEVC Deblocking Filter on Many-Core Platform","authors":"C. Yan, Yongdong Zhang, Feng Dai, L. Li","doi":"10.1109/DCC.2013.109","DOIUrl":"https://doi.org/10.1109/DCC.2013.109","url":null,"abstract":"Summary form only given. Many-core platforms are good candidates for speeding up High Efficiency Video Coding (HEVC) in the case that HEVC can provide sufficient parallelism. As the most promising proposal for parallelizing HEVC deblocking filter (DF), the order-changed parallel method (OCPM) changes the order of filtering and incurs considerable loss in coding efficiency. Meanwhile, the parallelism of OCPM still has some room for improvement. In this paper, we propose an efficient parallel framework for HEVC DF, which exploits the implicit parallelism and keeps the filtering order of DF unchanged. Compared with the well-known OCPM, experiments conducted on a 64-core system show that our proposed method saves averagely 37.18% and 37.93% DF time with different quantization parameters (QPs). Meanwhile, our proposed method improves coding efficiency, which achieves an average BD-rate reduction of 0.09%, 0.11% and 0.12% for Y, U and V components, respectively.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133071320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}