
Latest Publications from the 2013 Data Compression Conference

A Method for Fast Rough Mode Decision in HEVC
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.58
Manoj Alwani, S. Johar
In this paper we propose a fast candidate selection method for the Rough Mode Decision (RMD) step of intra prediction. The proposed method consists of two steps. In the first step, coarsely spaced prediction directions are used as the candidates for comparison, where the coarse step size is a function of the Prediction Unit size; this yields the dominant direction with minimum cost. In the second step, the dominant direction is refined by checking the prediction directions around it to find the best match.
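To illustrate the two-step idea, the following sketch performs a coarse pass over the HEVC angular modes and then refines around the winner. The cost function and the mapping from PU size to coarse step size are placeholders, not the values used in the paper.

```python
# Hypothetical sketch of a coarse-then-refine search over HEVC angular modes.
# The cost() function and the PU-size-to-step mapping are illustrative only.

def coarse_step_for_pu(pu_size):
    # Assumption: larger PUs use a coarser first pass (not the paper's exact mapping).
    return {4: 2, 8: 4, 16: 8, 32: 8}.get(pu_size, 4)

def fast_rmd(cost, pu_size, first_mode=2, last_mode=34):
    """cost(mode) -> rough-mode cost (e.g. SATD plus mode bits)."""
    step = coarse_step_for_pu(pu_size)

    # Step 1: evaluate only every `step`-th angular direction.
    coarse = range(first_mode, last_mode + 1, step)
    dominant = min(coarse, key=cost)

    # Step 2: refine around the dominant direction.
    lo = max(first_mode, dominant - step + 1)
    hi = min(last_mode, dominant + step - 1)
    return min(range(lo, hi + 1), key=cost)

# Toy usage: a fake cost with a minimum at mode 26 (vertical).
best = fast_rmd(lambda m: abs(m - 26), pu_size=16)
print(best)  # 26
```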
Citations: 9
A Scalable Video Coding Extension of HEVC
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.28
Philipp Helle, H. Lakshman, Mischa Siekmann, J. Stegemann, Tobias Hinz, H. Schwarz, D. Marpe, T. Wiegand
The paper describes a scalable video coding extension of the upcoming HEVC video coding standard for spatial and quality scalable coding. Besides coding tools known from scalable profiles of prior video coding standards, it includes new coding tools that further improve the enhancement layer coding efficiency. The effectiveness of the proposed scalable HEVC extension is demonstrated by comparing the coding efficiency to simulcast and single-layer coding for several test sequences and coding conditions.
Citations: 48
Image Super-Resolution via Hierarchical and Collaborative Sparse Representation
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.17
Xianming Liu, Deming Zhai, Debin Zhao, Wen Gao
In this paper, we propose an efficient image super-resolution algorithm based on hierarchical and collaborative sparse representation (HCSR). Motivated by the observation that natural images typically exhibit multi-modal statistics, we propose a hierarchical sparse coding model with two layers: the first layer encodes individual patches, and the second layer jointly encodes the set of patches that belong to the same homogeneous subset of image space. We further present a simple alternative to achieve this target by identifying an optimal sparse representation that is adaptive to the specific statistics of images. Specifically, we cluster images from the offline training set into regions of similar geometric structure, and model each region (cluster) by learning adaptive bases that describe the patches within that cluster using principal component analysis (PCA). This cluster-specific dictionary is then exploited to optimally estimate the underlying HR pixel values using the idea of collaborative sparse coding, in which the similarity between patches in the same cluster is further considered. Conceptually and computationally, this remedies a limitation of many existing algorithms based on standard sparse coding, in which patches are encoded independently. Experimental results demonstrate that the proposed method is competitive with state-of-the-art algorithms.
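The clustering-plus-PCA dictionary step can be sketched as below. The cluster count, patch size, and number of principal components are illustrative choices rather than the paper's settings, and the collaborative sparse coding stage is not shown.

```python
# Minimal sketch of learning cluster-specific PCA bases for image patches.
# Cluster count, patch size, and component count are illustrative choices,
# not the settings used in the paper; the collaborative coding stage is omitted.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def learn_cluster_dictionaries(patches, n_clusters=8, n_components=16):
    """patches: (N, d) array of vectorized training patches."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(patches)
    dicts = {}
    for c in range(n_clusters):
        members = patches[km.labels_ == c]
        dicts[c] = PCA(n_components=min(n_components, len(members))).fit(members)
    return km, dicts

def code_patch(patch, km, dicts):
    """Assign a patch to its cluster and project onto that cluster's PCA basis."""
    c = int(km.predict(patch.reshape(1, -1))[0])
    coeffs = dicts[c].transform(patch.reshape(1, -1))
    recon = dicts[c].inverse_transform(coeffs)
    return c, coeffs.ravel(), recon.ravel()

# Toy usage with random 8x8 patches.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 64))
km, dicts = learn_cluster_dictionaries(train)
cluster, coeffs, recon = code_patch(train[0], km, dicts)
```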
Citations: 13
Sample Adaptive Offset Design in HEVC
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.57
A. Alshin, E. Alshina, Jeonghoon Park
This paper is devoted to Sample Adaptive Offset (SAO), a technique recently added to the High Efficiency Video Coding (HEVC) standard. The concept of SAO is to reduce the sample distortion of a region by classifying the region's samples into multiple categories, obtaining an offset for each category, and then adding the offset to each sample, where the classifier index and the offsets are coded in the bit stream.
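As an illustration, the following sketch applies the band-offset flavour of SAO to an 8-bit region: samples are classified into 32 equal intensity bands and an offset is added to four consecutive bands. The band split follows the general HEVC design, but the offsets and starting band here are arbitrary toy values.

```python
# Minimal sketch of SAO band-offset filtering for an 8-bit region.
# Samples are classified into 32 equal bands by their intensity; the encoder
# would signal offsets for four consecutive bands, here chosen arbitrarily.
import numpy as np

def sao_band_offset(region, start_band, offsets, bit_depth=8):
    """region: 2-D array of reconstructed samples; offsets: 4 signed values."""
    band = region >> (bit_depth - 5)          # 5 MSBs -> band index 0..31
    out = region.astype(np.int32)
    for i, off in enumerate(offsets):
        out[band == (start_band + i) % 32] += off
    return np.clip(out, 0, (1 << bit_depth) - 1).astype(region.dtype)

rec = np.array([[100, 101, 130], [131, 160, 161]], dtype=np.uint8)
print(sao_band_offset(rec, start_band=12, offsets=[1, -1, 2, 0]))
```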
Citations: 1
Compression of Distributed Correlated Temperature Data in Sensor Networks
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.61
Feng Chen, M. Rutkowski, Christopher Fenner, R. Huck, Shuang Wang, Samuel Cheng
Summary form only given. Distributed Source Coding (DSC) is rapidly gaining popularity and has many good applications. However, some important correlations are sometimes omitted, such as temporal correlation. In this paper, we consider the correlations of the source data in both the spatial and temporal domains for DSC decoding, which amounts to integrating a Kalman filter into our algorithm. We tested our algorithm on a practical temperature network, and the results show that it achieves better performance than the algorithm without temporal correlation.
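A minimal sketch of the temporal side of such a scheme is given below: a scalar Kalman filter tracks a slowly varying temperature and produces smoothed estimates that could serve as side information. The random-walk state model and the noise variances are assumptions, and the DSC decoding itself is not shown.

```python
# Minimal sketch: a scalar Kalman filter producing temporally smoothed
# temperature estimates that could serve as decoder side information.
# The random-walk model and the noise variances are illustrative assumptions.

def kalman_1d(measurements, q=0.01, r=0.5, x0=None, p0=1.0):
    """q: process-noise variance, r: measurement-noise variance."""
    x = measurements[0] if x0 is None else x0
    p = p0
    estimates = []
    for z in measurements:
        # Predict: temperature modelled as a random walk (state transition = 1).
        p = p + q
        # Update with the new noisy reading.
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

readings = [20.1, 20.4, 19.8, 20.2, 20.6, 20.5]
print([round(v, 2) for v in kalman_1d(readings)])
```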
Citations: 7
Backwards Compatible Coding of High Dynamic Range Images with JPEG
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.24
T. Richter
At its Paris meeting, the JPEG committee decided to work on a backwards compatible extension of the popular JPEG (10918-1) standard enabling lossy and lossless coding of high-dynamic-range (HDR) images. The new standard shall allow legacy applications to decompress new code streams into a tone-mapped version of the HDR image, while codecs aware of the extensions will decompress the stream with its full dynamic range. This paper proposes a set of extensions that have rather low implementation complexity and use, whenever possible, functional design blocks already present in 10918-1. It is seen that, despite its simplicity, the proposed extension performs close to JPEG 2000 (15444-2) and JPEG XR (29199-2) on the JPEG HDR test image set at high bit rates.
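One generic way to realize such backwards-compatible layering is to carry a tone-mapped base image for legacy decoders plus a ratio residual for extension-aware decoders, as in the sketch below. This only illustrates the layering idea; it is not the syntax or transform standardized in the actual extension.

```python
# Sketch of one generic way to layer an HDR image on top of a tone-mapped base:
# legacy decoders see only the base, while an extension-aware decoder combines
# base and ratio residual. This illustrates the backwards-compatible idea only;
# it is not the design actually standardized in the JPEG extension.
import numpy as np

def split_hdr(hdr, tone_map):
    """hdr: float array of linear radiance; tone_map: HDR -> 8-bit base layer."""
    base = tone_map(hdr)                                   # what legacy JPEG carries
    ratio = np.log2(hdr / np.maximum(base / 255.0, 1e-4))  # residual for the extension
    return base, ratio

def merge_hdr(base, ratio):
    return (base / 255.0) * np.exp2(ratio)                 # extension-aware reconstruction

hdr = np.array([[0.02, 0.5], [4.0, 32.0]])
tm = lambda x: np.clip(255.0 * x / (1.0 + x), 0, 255).round()
base, ratio = split_hdr(hdr, tm)
print(np.allclose(merge_hdr(base, ratio), hdr, rtol=1e-2))
```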
Citations: 26
VDH-Grid Search Algorithm for Fast Motion Estimation
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.86
Robson Lins, Diogo B. Henriques, Emerson Lima, Silvio Melo
This work presents a fast block matching method for motion estimation algorithms using low-discrepancy sequences. The proposed technique (VDHS) was developed after analyzing the UMHS algorithm implemented in the H.264/AVC reference software. The optimizations focus both on reducing the number of candidate blocks and on VDH-based pixel subsampling in order to accelerate the SAD computation. Experimental results show that the proposed technique has a lower computational effort, with an insignificant loss in PSNR and a slight increase in bit rate.
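The subsampling idea can be illustrated as follows: only a subset of pixel positions contributes to the SAD of each candidate block. A fixed checkerboard mask stands in here for the VDH-based sampling pattern, whose construction is not reproduced.

```python
# Minimal sketch of SAD block matching with pixel subsampling: only a subset of
# pixel positions contributes to the cost. A checkerboard mask is a stand-in for
# the VDH-based sampling pattern, whose exact construction is not shown here.
import numpy as np

def subsampled_sad(block, candidate, mask):
    return int(np.abs(block.astype(np.int32) - candidate.astype(np.int32))[mask].sum())

def best_match(block, ref, positions, mask):
    """positions: iterable of (y, x) candidate top-left corners in `ref`."""
    h, w = block.shape
    return min(positions,
               key=lambda p: subsampled_sad(block, ref[p[0]:p[0]+h, p[1]:p[1]+w], mask))

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
block = ref[20:36, 24:40].copy()                     # ground-truth location (20, 24)
mask = (np.indices((16, 16)).sum(axis=0) % 2) == 0   # checkerboard: half the pixels
cands = [(y, x) for y in range(0, 48) for x in range(0, 48)]
print(best_match(block, ref, cands, mask))           # expected (20, 24)
```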
Citations: 0
Decoder-Side Super-Resolution and Frame Interpolation for Improved H.264 Video Coding
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.16
H. Ateş
In the literature, decoder-side motion estimation has been shown to improve the video coding efficiency of both the H.264 and HEVC standards. In this paper we introduce enhanced skip and direct modes for H.264 coding using decoder-side super-resolution (SR) and frame interpolation. P- and B-frames are down-sampled and H.264 encoded at lower resolution (LR). The reconstructed LR frames are then super-resolved using decoder-side motion estimation. Alternatively, for B-frames, bidirectional true motion estimation is performed to synthesize a B-frame from its reference frames. For P-frames, bicubic interpolation of the LR frame is used as an alternative to SR reconstruction. A rate-distortion optimal mode selection algorithm determines, for each MB, which of the two reconstructions to use as the skip/direct mode prediction. Simulations indicate an average of 1.04 dB PSNR improvement, or 23.0% bit rate reduction, at low bit rates when compared to the H.264 standard. Average PSNR gains reach as high as 3.95 dB depending on the video content and frame rate.
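The per-macroblock decision between the two reconstructions can be sketched as a standard Lagrangian rate-distortion comparison, as below. The distortion metric (SSD), the lambda value, and the rate estimates are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of per-macroblock rate-distortion mode selection between two
# candidate reconstructions. SSD, lambda, and the rate estimates are assumptions.
import numpy as np

def ssd(a, b):
    return float(np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def select_mode(original_mb, candidates, lam=80.0):
    """candidates: dict mode_name -> (reconstructed_mb, estimated_rate_bits)."""
    costs = {name: ssd(original_mb, rec) + lam * rate
             for name, (rec, rate) in candidates.items()}
    return min(costs, key=costs.get), costs

rng = np.random.default_rng(0)
orig = rng.integers(0, 256, size=(16, 16))
sr_rec = np.clip(orig + rng.normal(0, 2, orig.shape), 0, 255)      # closer match
interp_rec = np.clip(orig + rng.normal(0, 6, orig.shape), 0, 255)  # cheaper but worse
mode, costs = select_mode(orig, {"super_resolution": (sr_rec, 4),
                                 "frame_interpolation": (interp_rec, 2)})
print(mode)
```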
Citations: 5
Differential Base Pattern Coding for Cache Line Data Compression
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.79
H. Kaneko, S. Fujii, Hiroaki Sasaki
The computational performance of recent processors is often restricted by the delay of off-chip memory accesses, so low-delay data compression should be effective in improving processor performance. This paper proposes differential base pattern coding suitable for high-speed parallel decoding. Evaluation shows that its compression ratio is comparable or superior to that of conventional codings.
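As background, the sketch below shows a generic base-plus-delta compressor for a 64-byte cache line viewed as sixteen 32-bit words; it illustrates the family of techniques the paper builds on, not its exact code format.

```python
# Sketch of a generic base-plus-delta compressor for a 64-byte cache line viewed
# as sixteen 32-bit words: store one base word and narrow per-word deltas. This
# shows the family of techniques the paper builds on, not its exact code format.

def base_delta_compress(words, delta_bytes=1):
    """words: list of sixteen 32-bit unsigned ints. Returns None if incompressible."""
    base = words[0]
    limit = 1 << (8 * delta_bytes - 1)            # signed delta range
    deltas = [w - base for w in words]
    if all(-limit <= d < limit for d in deltas):
        return {"base": base, "delta_bytes": delta_bytes, "deltas": deltas}
    return None                                   # fall back to storing the raw line

def base_delta_decompress(packed):
    return [packed["base"] + d for d in packed["deltas"]]

line = [0x1000_0000 + i * 4 for i in range(16)]   # e.g. a run of nearby pointers
packed = base_delta_compress(line)
size = 4 + len(packed["deltas"]) * packed["delta_bytes"]
print(size, base_delta_decompress(packed) == line)  # 20 bytes instead of 64
```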
Citations: 0
LBP-Guided Depth Image Filter
Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.115
Rui Zhong, R. Hu, Zhongyuan Wang, Lu Liu, Zhen Han
The multi-view video plus depth (MVD) format has been put forward in the calls for proposals for free-view video (FVV) and 3DTV. Since depth maps represent the 3D scene geometry, they are used for synthesizing virtual views. However, compression artifacts in the depth images often lead to geometric distortions in the synthesized views. By exploiting the LBP features of the corresponding color samples, we propose a novel local binary pattern (LBP) guided depth filter that restricts the filtering input to the local neighborhood samples belonging to the same object as the current pixel. In recognition of its ability to describe object edges, the LBP operator is used to calculate the weights of the local depth pixels for the depth-map filter. Furthermore, the filter is incorporated into the H.264/MVC framework as an in-loop filter. The experimental results demonstrate that the proposed approach offers 0.45 dB and 0.66 dB average PSNR gains in terms of video rendering quality and depth coding efficiency, respectively, as well as a significant subjective improvement in rendered views.
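The LBP-guided weighting can be sketched as follows: an 8-neighbour LBP code is computed on the color (luma) image, and a 3x3 depth filter only averages neighbours whose code matches the centre pixel's. The binary same-code weighting is an illustrative simplification of the paper's weights.

```python
# Minimal sketch: compute an 8-neighbour LBP code for each pixel of the luma
# image and weight a 3x3 depth filter so that only neighbours with the same
# LBP code as the centre contribute. A simplification of the paper's weights.
import numpy as np

def lbp8(img):
    """8-bit LBP codes for the interior pixels of a 2-D intensity image."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_guided_depth_filter(depth, luma):
    """Average each depth pixel over the 3x3 neighbours sharing its LBP code."""
    codes = lbp8(luma)
    out = depth.astype(np.float64).copy()
    h, w = depth.shape
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            win_codes = codes[y - 2:y + 1, x - 2:x + 1]   # codes of the 3x3 window
            win_depth = depth[y - 1:y + 2, x - 1:x + 2]
            weights = (win_codes == codes[y - 1, x - 1]).astype(np.float64)
            out[y, x] = np.sum(weights * win_depth) / np.sum(weights)
    return out

# Toy usage on random data.
rng = np.random.default_rng(0)
luma = rng.integers(0, 256, size=(12, 12))
depth = rng.integers(0, 256, size=(12, 12))
smoothed = lbp_guided_depth_filter(depth, luma)
```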
Citations: 3