
Latest publications from the 2014 IEEE Visual Communications and Image Processing Conference

Disocclusion hole-filling in DIBR-synthesized images using multi-scale template matching
Pub Date : 2014-12-07 DOI: 10.1109/VCIP.2014.7051614
S. Reel, Kam Cheung Patrick Wong, Gene Cheung, L. Dooley
Transmitting texture and depth images of the captured camera view(s) of a 3D scene enables a receiver to synthesize novel virtual viewpoint images via Depth-Image-Based Rendering (DIBR). However, a DIBR-synthesized image often contains disocclusion holes: spatial regions in the virtual view image that were occluded by foreground objects in the captured camera view(s). In this paper, we propose to complete these disocclusion holes by exploiting the self-similarity characteristic of natural images via nonlocal template matching (TM). Specifically, we first define self-similarity as nonlocal recurrences of pixel patches within the same image across different scales; one characterization of self-similarity in a given image is the scale range in which these patch recurrences take place. Then, at the encoder, we segment an image into multiple depth layers using available per-pixel depth values and characterize the self-similarity of each layer with a scale range; the scale ranges for all layers are transmitted as side information to the decoder. At the decoder, disocclusion holes are completed via TM on a per-layer basis by searching for similar patches within the designated scale range. Experimental results show that our method improves the quality of rendered images over previous disocclusion hole-filling algorithms by up to 3.9 dB in PSNR.
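The nonlocal patch search behind this kind of template matching can be illustrated with a minimal sketch: an SSD-based scan of the image at each scale in a designated range. The naive nearest-neighbour rescaling and the `best_patch_match` helper are illustrative assumptions, not the authors' implementation; the paper's per-layer hole geometry is omitted.

```python
import numpy as np

def best_patch_match(image, template, scales=(1.0, 0.5)):
    """Hypothetical sketch: find the patch in `image` most similar to
    `template` (SSD distance), searching across a designated set of scales.
    Returns (ssd, scale, top-left corner in the rescaled image)."""
    th, tw = template.shape
    best = (np.inf, None, None)
    for s in scales:
        # Naive nearest-neighbour rescale of the search image.
        h = max(int(image.shape[0] * s), th)
        w = max(int(image.shape[1] * s), tw)
        ys = (np.arange(h) / s).astype(int).clip(0, image.shape[0] - 1)
        xs = (np.arange(w) / s).astype(int).clip(0, image.shape[1] - 1)
        scaled = image[np.ix_(ys, xs)]
        # Exhaustive scan: keep the lowest sum-of-squared-differences.
        for y in range(scaled.shape[0] - th + 1):
            for x in range(scaled.shape[1] - tw + 1):
                cand = scaled[y:y + th, x:x + tw]
                ssd = float(((cand - template) ** 2).sum())
                if ssd < best[0]:
                    best = (ssd, s, (y, x))
    return best
```

Restricting `scales` per depth layer, as the side information prescribes, shrinks the search space relative to a full multi-scale scan.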
Citations: 6
Robust image registration using adaptive expectation maximisation based PCA
Pub Date : 2014-12-07 DOI: 10.1109/VCIP.2014.7051515
P. Reel, L. Dooley, Kam Cheung Patrick Wong, A. Börner
Images of either the same or different modalities can be aligned using the systematic process of image registration. However, inherent image characteristics, including intensity non-uniformities in magnetic resonance images and large homogeneous non-vascular regions in retinal and other generic image types, pose a significant challenge to their registration. This paper presents an adaptive expectation maximisation for principal component analysis with mutual information (aEMPCA-MI) similarity measure for image registration. It introduces a novel iterative process to adaptively select the most significant principal components using the Kaiser rule, and applies 4-pixel connectivity for feature extraction together with Wichard's bin size selection in calculating the MI. Both quantitative and qualitative results on a diverse range of image datasets conclusively demonstrate the superior image registration performance of aEMPCA-MI compared with existing MI-based similarity measures.
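The Kaiser rule mentioned in the abstract has a standard form: retain only the principal components whose eigenvalue exceeds the average eigenvalue (equivalently, above 1 on a correlation matrix). A minimal sketch under that reading; the adaptive EM iteration and the MI computation of aEMPCA-MI are omitted, and `kaiser_components` is a hypothetical helper name.

```python
import numpy as np

def kaiser_components(data):
    """Hypothetical sketch of Kaiser-rule component selection:
    keep the eigenvectors of the sample covariance whose eigenvalue
    is above the mean eigenvalue (Kaiser-Guttman criterion)."""
    cov = np.cov(data, rowvar=False)       # sample covariance of the features
    evals, evecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    keep = evals > evals.mean()            # retain above-average components only
    return evecs[:, keep], evals[keep]
```

On data dominated by one direction of variation, this adaptively keeps a single component rather than a fixed count.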
Citations: 1
Rate-distortion optimised transform competition for intra coding in HEVC
Pub Date : 2014-12-07 DOI: 10.1109/VCIP.2014.7051507
A. Arrufat, P. Philippe, O. Déforges
State-of-the-art video coders are based on prediction and transform coding. The transform decorrelates the signal to achieve high compression levels. In this paper, we propose improving the performance of the latest video coding standard, HEVC, by adding a set of rate-distortion optimised transforms (RDOTs). The transform design is based upon a cost function that incorporates a bit rate constraint. These new RDOTs compete against the classical HEVC transforms in the rate-distortion optimisation (RDO) loop in the same way as prediction modes and block sizes, providing additional coding possibilities. Reductions in BD-rate of around 2% are demonstrated when these transforms are made available in HEVC.
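Transform competition inside an RDO loop reduces to minimising the Lagrangian cost J = D + λR over all candidates, the same rule used for mode and block-size decisions. A minimal sketch; the candidate tuples and `rd_select` are illustrative, not the paper's encoder.

```python
def rd_select(candidates, lam):
    """Minimal RDO sketch: each candidate is (name, distortion, rate_bits);
    the winner minimises the Lagrangian cost J = D + lambda * R.
    RDOTs and the classical HEVC transforms would all be candidates here."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])
```

Note how the winner flips with λ: at low λ the lower-distortion candidate wins, at high λ the cheaper-rate one does, which is exactly why a rate constraint must enter the transform design itself.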
Citations: 10
A joint 3D image semantic segmentation and scalable coding scheme with ROI approach
Pub Date : 2014-12-07 DOI: 10.1109/VCIP.2014.7051556
Khouloud Samrouth, O. Déforges, Yi Liu, W. Falou, Mohamad Khalil
Along with the digital evolution, image post-production and indexing have become some of the most advanced and desired services in the lossless 3D image domain. The 3D context provides a significant gain in terms of semantics for scene representation. However, it also induces many drawbacks, including visual degradation of the compressed 3D image (especially at edges) and increased complexity of scene representation. In this paper, we propose a semantic region representation and a scalable coding scheme. First, the semantic region representation scheme is based on a low-resolution version of the 3D image. It provides the possibility to segment the image according to a desirable balance between 2D and depth. Second, the scalable coding scheme consists of selecting a number of regions as Regions of Interest (RoI), based on the region representation, in order to refine them at a higher bitrate. Experiments show that the proposed scheme provides a high coherence between texture, depth and regions, and ensures an efficient solution to the problems of compression and scene representation in the 3D image domain.
Citations: 0
Non-separable mode dependent transforms for intra coding in HEVC
Pub Date : 2014-12-07 DOI: 10.1109/VCIP.2014.7051504
A. Arrufat, P. Philippe, O. Déforges
Transform coding plays a crucial role in video coders. Recently, additional transforms based on the DST and the DCT have been included in the latest video coding standard, HEVC. Those transforms were introduced after a thorough analysis of the video signal properties. In this paper, we design additional transforms using an alternative learning approach. The appropriateness of this design over classical KLT learning is also shown. Subsequently, the additionally designed transforms are applied to the latest HEVC scheme. Results show that coding performance is improved compared to the standard. Further results show that the coding performance can be significantly improved by using non-separable transforms. Bitrate reductions in the range of 2% over HEVC are achieved with the proposed transforms.
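The classical KLT learning that the paper compares against can be sketched as follows: flatten each N×N residual block to an N²-vector and take the eigenvectors of the sample covariance as a non-separable basis. This is a simplified stand-in; the paper's alternative learning approach is not reproduced here.

```python
import numpy as np

def learn_nonseparable_klt(blocks):
    """Sketch of classical KLT learning for NxN residual blocks:
    each block becomes an N^2 vector, and the transform rows are the
    eigenvectors of the sample covariance, strongest component first."""
    n2 = blocks.shape[1] * blocks.shape[2]
    X = blocks.reshape(len(blocks), n2).astype(float)
    X -= X.mean(axis=0)                   # centre the training residuals
    cov = X.T @ X / len(X)                # sample covariance, shape (N^2, N^2)
    _, evecs = np.linalg.eigh(cov)        # ascending eigenvalue order
    return evecs[:, ::-1].T               # rows = basis functions, strongest first

# Applying a learned transform T to a flattened block x is simply T @ x.
```

Because the basis acts on the full N²-vector, it can capture directional correlations a separable row/column transform cannot, at the cost of N⁴ rather than 2N³ multiply operations per block.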
Citations: 18
Optimized spatial and temporal resolution based on subjective quality estimation without encoding
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051497
M. Takagi, H. Fujii, A. Shimizu
In this paper, we propose a method of estimating subjective video quality at various spatial and temporal resolutions without encoding. Under a given bitrate constraint, the combination of resolution and frame rate that provides the best subjective video quality depends on the video content. To maximize subjective video quality, several studies have proposed models that can estimate subjective quality at various resolutions and frame rates. However, to determine the optimal resolution and frame rate that maximize subjective video quality, it is necessary to estimate subjective video quality at each combination of resolution, frame rate and bitrate. This takes considerable time with previously reported methods because they require an encoding process for decoding videos or obtaining pre-analysis. To address this issue, we developed a method that does not require an encoding process to estimate subjective video quality.
Citations: 5
Complexity control of HEVC based on region-of-interest attention model
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051545
Xin Deng, Mai Xu, Shengxi Li, Zulin Wang
In this paper, we present a novel complexity control method for HEVC that adjusts its encoding complexity. First, a region-of-interest (ROI) attention model is established, which assigns different weights to regions according to their importance. Then, a complexity control algorithm is proposed with a distortion-complexity optimization model to determine the maximum depth of the largest coding units (LCUs) according to their weights. We can reduce the encoding complexity to a given target level at the cost of little distortion loss. Finally, the experimental results show that the encoding complexity can drop to a pre-defined target complexity as low as 20%, with a bias of less than 7%. Meanwhile, our method is verified to preserve the quality of the ROI better than another state-of-the-art approach.
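A toy illustration of weight-driven complexity allocation: LCUs with larger ROI weights receive a larger maximum quad-tree depth, subject to an overall depth budget. This greedy allocation and the `assign_max_depths` helper are assumptions standing in for the paper's distortion-complexity optimisation, which is not specified in the abstract.

```python
def assign_max_depths(weights, depth_budget, d_min=0, d_max=3):
    """Hypothetical sketch: spend a total CU-depth budget on the LCUs
    with the largest ROI weights first, capping each at d_max."""
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    depths = [d_min] * len(weights)
    budget = depth_budget - d_min * len(weights)
    for i in order:                       # most important LCUs get depth first
        extra = min(d_max - d_min, budget)
        depths[i] = d_min + extra
        budget -= extra
        if budget <= 0:
            break
    return depths
```

Shrinking `depth_budget` lowers total complexity toward the target while the highest-weight (ROI) LCUs keep the full search depth, which is the qualitative behaviour the abstract reports.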
Citations: 6
Accelerated hybrid image reconstruction for non-regular sampling color sensors
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051543
M. Bätz, Andrea Eichenseer, Markus Jonscher, Jürgen Seiler, André Kaup
Increasing the spatial resolution is an ongoing research topic in image processing. A recently presented approach applies a non-regular sampling mask on a low-resolution sensor and subsequently reconstructs the masked area via an extrapolation algorithm to obtain a high-resolution image. This paper introduces an acceleration of this approach for use with full color sensors. Instead of employing the effective, yet computationally expensive, extrapolation algorithm on each of the three RGB channels, a color space conversion is performed and only the luminance channel is reconstructed using this algorithm. As natural images contain much less information in the chrominance channels, a fast linear interpolation technique can be used there to accelerate the whole reconstruction procedure. Simulation results show that an average speed-up factor of 2.9 is thus achieved, while the loss in visual quality remains imperceptible. Comparisons of PSNR results confirm this.
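The color space conversion that enables this luma/chroma split can be sketched with the standard BT.601 full-range RGB-to-YCbCr matrix; the costly extrapolation would then run on Y only, with Cb/Cr filled by fast linear interpolation. The extrapolation and interpolation themselves are omitted here.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr conversion (sketch).
    Input: (..., 3) array of R, G, B in [0, 255]."""
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    ycc = rgb @ m.T
    ycc[..., 1:] += 128.0     # offset chroma channels to mid-range
    return ycc
```

After conversion, one expensive reconstruction pass on `ycc[..., 0]` replaces three passes over R, G, B, which is where the roughly 3× speed-up comes from.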
Citations: 2
A hardware-oriented IME algorithm and its implementation for HEVC
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051540
Xin Ye, Dandan Ding, Lu Yu
The flexible coding structure in High Efficiency Video Coding (HEVC) introduces many challenges for real-time implementation of integer-pel motion estimation (IME). In this paper, a hardware-oriented IME algorithm named parallel clustering tree search (PCTS) is proposed, in which various prediction units (PUs) are processed simultaneously with a parallel scheme. The PCTS consists of four hierarchical search steps. After each search step, PUs with the same MV candidate are clustered into one group, and the next search step is shared by PUs in the same group. Owing to the top-down tree-structured search strategy of the PCTS, search processes are highly shared among different PUs and system throughput is thus significantly increased. As a result, the hardware implementation based on the proposed algorithm can support real-time video applications of QFHD (3840×2160) at 30 fps.
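The clustering step of PCTS can be illustrated minimally: after a search step, PUs whose best motion-vector candidate coincides are grouped, so the next search step runs once per group instead of once per PU. `cluster_pus_by_mv` and the PU labels are illustrative; this is a software sketch of the idea, not the hardware design.

```python
from collections import defaultdict

def cluster_pus_by_mv(pu_best_mvs):
    """Sketch of the PCTS clustering step: map each best MV candidate
    to the sorted list of PU ids that share it, so the subsequent
    search step is executed once per group."""
    groups = defaultdict(list)
    for pu_id, mv in pu_best_mvs.items():
        groups[mv].append(pu_id)
    return {mv: sorted(ids) for mv, ids in groups.items()}
```

When many PU partitions of an LCU converge on the same MV early, the number of remaining search processes collapses, which is the source of the throughput gain the abstract describes.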
Citations: 15
Fast mode decision method for all intra spatial scalability in SHVC
Pub Date : 2014-12-01 DOI: 10.1109/VCIP.2014.7051589
Xuguang Zuo, Lu Yu
Scalable high efficiency video coding (SHVC) is being developed by the Joint Collaborative Team on Video Coding (JCT-VC). In SHVC, the enhancement layer (EL) employs the same tree-structured coding unit (CU) and 35 intra prediction modes as the base layer (BL), which results in a heavy computation load. To speed up the mode decision process in the EL, the correlations of CU depth and intra prediction modes between the BL and the EL are exploited in this paper. Based on these correlations, an EL CU depth early-skip algorithm and a fast intra prediction mode decision algorithm are proposed for all intra spatial scalability. Experimental results show that 45.3% and 42.3% of the EL coding time can be saved in All Intra 1.5× spatial scalability and 2× spatial scalability, respectively. Meanwhile, the R-D performance degrades by less than 0.05% compared with SHVC Test Model (SHM) 5.0.
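A BL-to-EL depth early-skip rule of the kind described can be sketched as follows: only EL CU depths up to the co-located base-layer depth plus a small margin are evaluated. The `margin` parameter and the exact rule are assumptions for illustration; the paper's precise algorithm is not given in the abstract.

```python
def candidate_el_depths(bl_depth, max_depth=3, margin=1):
    """Hypothetical early-skip sketch: restrict the EL CU depths to be
    evaluated, based on the co-located BL depth plus a margin."""
    return list(range(0, min(bl_depth + margin, max_depth) + 1))
```

Because shallow BL depths usually indicate smooth content, pruning deep EL splits there saves most of the mode-decision time while rarely changing the chosen mode, matching the near-zero R-D loss reported.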
Citations: 21