
IVMSP 2013 Latest Publications

Simplified inter-component depth modeling in 3D-HEVC
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611943
Yunseok Song, Yo-Sung Ho
In this paper, we present a method to reduce the complexity of depth modeling modes (DMM), which is currently used in the 3D-HEVC standardization activity. DMM adds four modes to the existing HEVC intra prediction modes; the main purpose is to accurately represent object edges in depth video. Mode 3 of DMM requires distortion calculation for all pre-defined wedgelets. The proposed method employs absolute differences of neighboring pixels in the reference block. The number of wedgelets that need to be considered can be reduced to six. Experimental results show a 3.1% complexity reduction on average while maintaining coding performance, which implies that the correct wedgelet is retained while non-viable wedgelets are disregarded.
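The abstract does not spell out the selection rule, but the general idea of pruning wedgelet candidates from neighboring-pixel differences can be illustrated with a short sketch. The block layout, the border-based scoring, and the choice of keeping the six strongest discontinuities are assumptions made here for illustration; this is not the normative 3D-HEVC DMM mode-3 derivation.

```python
import numpy as np

def candidate_edge_positions(ref_block, keep=6):
    """Rank border positions of a reference block by the absolute difference
    between neighboring pixels; large jumps hint at where a depth edge is
    likely to cross the block border.  Illustrative heuristic only, not the
    normative DMM mode-3 wedgelet search."""
    top = ref_block[0, :].astype(np.int32)     # top border row
    left = ref_block[:, 0].astype(np.int32)    # left border column
    diffs = np.concatenate([np.abs(np.diff(top)), np.abs(np.diff(left))])
    order = np.argsort(diffs)[::-1][:keep]     # keep the strongest discontinuities
    return sorted(order.tolist())

# Toy 8x8 reference block with a vertical depth edge between columns 4 and 5
block = np.full((8, 8), 30, dtype=np.uint8)
block[:, 5:] = 200
print(candidate_edge_positions(block))         # the position at the edge ranks first
```

Only wedgelets whose boundary passes near such positions would then need a full distortion calculation, which is how the candidate set can shrink to a handful of wedgelets.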
Citations: 10
Subjective assessment methodology for preference of experience in 3DTV
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611917
Jing Li, M. Barkowsky, P. Callet
The measurement of the Quality of Experience (QoE) in 3DTV recently became an important research topic as it relates to the development of the 3D industry. Pair comparison is a reliable method, as it is easier for observers to state a preference between two stimuli than to assign an absolute scale value to a single stimulus. The QoE measured by pair comparison is thus called “Preference of Experience (PoE)”. In this paper, we introduce some efficient designs for pair comparison that reduce the number of comparisons. The constraints on the presentation order of the stimuli in a pair comparison test are listed. Finally, some analysis methods for pair comparison data are provided, accompanied by examples from studies on the measurement of PoE.
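The abstract only states that analysis methods for pair comparison data are provided; one standard way to convert such data into a preference scale is Bradley-Terry maximum-likelihood fitting, sketched below with the classic minorization-maximization update. The toy win matrix and the log-score output are illustrative assumptions, not the authors' specific analysis.

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry preference scores from a pair-comparison matrix.
    wins[i, j] = number of times stimulus i was preferred over stimulus j.
    Uses the standard minorization-maximization update."""
    n = wins.shape[0]
    comparisons = wins + wins.T                    # total comparisons per pair
    scores = np.ones(n)
    for _ in range(iters):
        total_wins = wins.sum(axis=1)
        denom = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i != j and comparisons[i, j] > 0:
                    denom[i] += comparisons[i, j] / (scores[i] + scores[j])
        scores = total_wins / denom
        scores /= scores.sum()                     # fix the overall scale
    return np.log(scores)                          # log-scores: a relative PoE scale

# Three stimuli (e.g. three 3D viewing conditions) judged pairwise by observers
wins = np.array([[0., 18., 25.],
                 [12., 0., 20.],
                 [5., 10., 0.]])
print(bradley_terry(wins))
```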
Citations: 29
Sharp disparity reconstruction using sparse disparity measurement and color information
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611899
Lee-Kang Liu, Zucheul Lee, Truong Nguyen
Recently, dense disparity map reconstruction from 5% sparse initial estimates containing edges in disparity has been proposed [1]. In practice, however, edges in disparity are unknown unless a dense disparity map has already been generated. In this paper, we present a realistic reconstruction framework for obtaining sharp and dense disparity maps from a fixed number of sparse initial estimates with the aid of color image information. Experimental results show that sharp and dense disparity maps can be reconstructed at the cost of one pixel of accuracy.
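The abstract does not describe the reconstruction framework itself; as a rough illustration of why color information helps, the sketch below fills missing disparities with a joint-bilateral-style weighted average of nearby sparse samples, so that interpolated depth edges tend to follow color edges. The window size, weighting, and parameters are assumptions for illustration, not the method of the paper or of [1].

```python
import numpy as np

def densify_disparity(color, sparse_disp, mask, radius=5, sigma_c=10.0, sigma_s=3.0):
    """Fill unmeasured pixels with a weighted average of nearby sparse
    disparity samples.  Weights combine spatial distance and color similarity
    (joint-bilateral style).  color is an (h, w, 3) image, sparse_disp an
    (h, w) map, mask a boolean (h, w) array marking measured pixels.
    Illustrative only; parameters are arbitrary."""
    h, w = sparse_disp.shape
    dense = sparse_disp.astype(float).copy()
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                continue                                  # keep measured samples
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            ys, xs = np.nonzero(mask[y0:y1, x0:x1])
            if ys.size == 0:
                continue                                  # no sample nearby; leave the hole
            samples = sparse_disp[y0 + ys, x0 + xs].astype(float)
            dc = color[y0 + ys, x0 + xs].astype(float) - color[y, x].astype(float)
            w_color = np.exp(-np.sum(dc ** 2, axis=-1) / (2 * sigma_c ** 2))
            w_space = np.exp(-((y0 + ys - y) ** 2 + (x0 + xs - x) ** 2) / (2 * sigma_s ** 2))
            weight = w_color * w_space
            dense[y, x] = np.sum(weight * samples) / np.sum(weight)
    return dense
```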
Citations: 4
QP initialization and adaptive MAD prediction for rate control in HEVC-based multi-view video coding
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611925
Woong Lim, I. Bajić, D. Sim
Rate control is an important component of an end-to-end video communication system. Although rate control is not a part of a video coding standard, it is necessary for practical deployment. Currently, there are several proposals for rate control in the upcoming High Efficiency Video Coding (HEVC) standard, but there is no rate control scheme for the HEVC-based multi-view extension. In this paper, we apply the newly recommended R-λ model-based HEVC rate control to the multi-view scenario, and propose two improvements. One improvement deals with Quantization Parameter (QP) initialization, and the other deals with adaptive Mean Absolute Difference (MAD) prediction. Results demonstrate the accuracy of the proposed methods, the resulting reduced fluctuation of instantaneous bitrate, as well as an improvement in rate-distortion performance compared to the R-λ rate control alone.
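For context, the R-λ model recommended for HEVC rate control ties the target bits per pixel to λ and then to QP; a minimal sketch of deriving an initial QP from a target bitrate is shown below. The α/β values are the usual initial constants of the HEVC R-λ scheme, and the multi-view-specific QP initialization proposed in the paper is not reproduced here.

```python
import math

def initial_qp(target_bps, fps, width, height, alpha=3.2003, beta=-1.367):
    """Derive an initial QP from a target bitrate via the R-lambda model:
        lambda = alpha * bpp^beta,  QP = 4.2005 * ln(lambda) + 13.7122
    alpha/beta here are the common initial values of the HEVC R-lambda
    scheme, not the multi-view initialization proposed in the paper."""
    bpp = target_bps / (fps * width * height)       # target bits per pixel
    lam = alpha * (bpp ** beta)
    qp = 4.2005 * math.log(lam) + 13.7122
    return int(round(min(max(qp, 0), 51)))          # clip to the valid HEVC QP range

print(initial_qp(2_000_000, 30, 1920, 1080))        # e.g. 2 Mbps, 30 fps, 1080p
```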
Citations: 3
Predicting 3D quality based on content analysis
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611916
Philippe Hanhart, T. Ebrahimi
Development of objective quality metrics that can reliably predict perceived quality of 3D video sequences is challenging. Various 3D objective metrics have been proposed, but PSNR is still widely used. Several studies have shown that PSNR is strongly content dependent, but the exact relationship between PSNR values and perceived quality has not been established yet. In this paper, we propose a model to predict the relationship between PSNR values and perceived quality of stereoscopic video sequences based on content analysis. The model was trained and evaluated on a dataset of stereoscopic video sequences with associated ground truth MOS. Results showed that the proposed model achieved high correlation with perceived quality and was quite robust across contents when the training set contained various contents.
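The abstract does not give the form of the model; a minimal sketch of a content-aware mapping from PSNR to predicted quality is shown below, using a plain linear fit with hypothetical content features (spatial information SI and temporal information TI). The feature choice and linear form are assumptions for illustration only.

```python
import numpy as np

def fit_quality_model(psnr, si, ti, mos):
    """Least-squares fit of a simple content-aware quality model
        MOS ~ b0 + b1*PSNR + b2*SI + b3*TI
    where SI/TI are spatial/temporal information measures of the content.
    Illustrative assumption; not the model proposed in the paper."""
    X = np.column_stack([np.ones_like(psnr), psnr, si, ti])
    coef, *_ = np.linalg.lstsq(X, mos, rcond=None)
    return coef

def predict_mos(coef, psnr, si, ti):
    """Predict a MOS value for new (PSNR, SI, TI) inputs."""
    return coef[0] + coef[1] * psnr + coef[2] * si + coef[3] * ti
```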
Citations: 2
Free viewpoint video synthesis using multi-view depth and color cameras
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611922
Kazuki Matsumoto, Chiyoung Song, François de Sorbier, H. Saito
In this paper, we propose an approach for generating free viewpoint videos based on multiple depth and color cameras to resolve issues encountered with traditional color-camera techniques. Our system is based on consumer products such as the Kinect, which do not provide satisfying quality in terms of resolution and noise. Our contribution is a full pipeline that enhances the depth maps and ultimately improves the quality of the generated novel viewpoint.
Citations: 2
Gaze correction for 3D tele-immersive communication system
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611942
Wei Yong Eng, Dongbo Min, V. Nguyen, Jiangbo Lu, M. Do
The lack of eye contact between participants in tele-conferencing makes nonverbal communication unnatural and ineffective. A lot of research has focused on correcting the user's gaze for natural communication. Most prior solutions require expensive and bulky hardware, or rely on complicated algorithms that hinder efficiency and deployment. In this paper, we propose an effective and efficient gaze correction solution for a 3D tele-conferencing system with a single color/depth camera set-up. A raw depth map is first refined using the corresponding color image. Then, both color and depth data of the participant are accurately segmented. A novel view is synthesized at the location of the display screen that coincides with the user's gaze. Stereoscopic views, i.e. virtual left and right images, can also be generated for 3D immersive conferencing, and are displayed on a 3D monitor with 3D virtual background scenes. Finally, to handle the large hole regions that often occur in views synthesized with a single color camera, we propose a simple yet robust hole filling technique that works in real time. This novel inpainting method can effectively reconstruct missing parts of the synthesized image under various challenging situations. Our proposed system works in real time on a single-core CPU without requiring dedicated hardware, covering data acquisition, post-processing, rendering, and so on.
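The abstract mentions a simple yet robust real-time hole filling step but does not detail it; one common baseline for disocclusion holes in single-camera view synthesis is to copy, for each hole pixel, the background side of the hole along its row, sketched below. This baseline, and the assumption that larger depth values mean farther from the camera, are illustrative; the paper's actual inpainting method is not reproduced.

```python
import numpy as np

def fill_holes_background(image, depth, hole_mask):
    """Fill disocclusion holes in a synthesized view by copying, for each
    hole pixel, the nearest valid pixel on its row that belongs to the
    background (larger depth value).  A common simple baseline, shown only
    as an illustration."""
    out = image.copy()
    h, w = hole_mask.shape
    for y in range(h):
        for x in np.nonzero(hole_mask[y])[0]:
            left = next((i for i in range(x - 1, -1, -1) if not hole_mask[y, i]), None)
            right = next((i for i in range(x + 1, w) if not hole_mask[y, i]), None)
            candidates = [i for i in (left, right) if i is not None]
            if not candidates:
                continue                                     # fully occluded row; leave as is
            src = max(candidates, key=lambda i: depth[y, i])  # prefer the background side
            out[y, x] = image[y, src]
    return out
```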
Citations: 9
Automatic detection of depth jump cuts and bent window effects in stereoscopic videos
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611907
Sotirios Delis, N. Nikolaidis, I. Pitas
3DTV and 3D cinema have seen a significant increase in popularity. New movie titles are released in 3D, and more than 35 TV channels in various countries broadcast in 3D worldwide. As the availability of 3D video content increases, it becomes increasingly obvious that stereoscopy is associated with certain 3D video quality issues that may negatively affect the 3D viewing experience. In this paper, we propose two novel algorithms that exploit available disparity information to detect two disturbing stereoscopic issues, namely depth jump cuts and bent window effects. Representative examples are provided to assess the algorithms' performance. The proposed algorithms can be helpful in the post-production stage, where, in most cases, the detected issues can be fixed, and also in assessing the overall quality of stereoscopic video content.
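The abstract does not state the detection criterion; a minimal proxy for a depth jump cut is an abrupt change of the frame-average disparity between consecutive frames, sketched below. The use of the mean disparity and the threshold value are assumptions, not the algorithm of the paper.

```python
import numpy as np

def detect_depth_jump_cuts(disparity_maps, threshold=10.0):
    """Flag transitions where the frame-average disparity changes abruptly,
    a simple proxy for a depth jump cut at a shot boundary.  Threshold and
    statistic are illustrative assumptions."""
    means = np.array([float(np.nanmean(d)) for d in disparity_maps])
    jumps = np.abs(np.diff(means))
    # index i means the transition between frame i and frame i+1 is flagged
    return np.nonzero(jumps > threshold)[0].tolist()
```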
Citations: 4
Fast inter mode decision process for HEVC encoder
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611927
Seungha Yang, Hoyoung Lee, H. Shim, B. Jeon
In this paper, we propose a fast inter mode decision method for High Efficiency Video Coding (HEVC) in order to reduce the computational complexity of its encoder. It exploits the correlated tendency of PU modes. Compared to the early termination schemes for fast mode decision already implemented in the HEVC reference software, it reduces the loss of coding efficiency. Experimental results show that the proposed method decreases encoding time by about 22.5% with little increase in bit-rate. Furthermore, the proposed method reduces encoding time by 39.1% with only a 0.8% increase in bit-rate when combined with existing fast methods such as the early CU termination scheme.
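The abstract says the method exploits the correlated tendency of PU modes but does not give the decision rule; a generic early-termination check of that flavor is sketched below. The mode names, the inputs, and the cost threshold are hypothetical, and this is not the paper's actual scheme.

```python
def should_skip_remaining_pu_modes(neighbor_modes, best_mode_so_far, best_cost, cost_threshold):
    """Generic early-termination check for inter PU mode decision: if the
    neighbouring (spatial/temporal) CUs all chose SKIP or 2Nx2N and the best
    RD cost found so far is already low, stop testing the remaining PU
    partitions.  Names and threshold are hypothetical."""
    simple_modes = {"SKIP", "2Nx2N"}
    neighbors_simple = bool(neighbor_modes) and all(m in simple_modes for m in neighbor_modes)
    return neighbors_simple and best_mode_so_far in simple_modes and best_cost < cost_threshold

# Example: all neighbours coded as SKIP/2Nx2N and the current RD cost is already small
print(should_skip_remaining_pu_modes(["SKIP", "2Nx2N", "SKIP"], "SKIP", 1200.0, 1500.0))  # True
```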
Citations: 17
Flicker-free 3D shutter glasses by retardance control of LC cell
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611892
Dae-Sik Kim, Ho-Sup Lee, S. Shestak, SungWoo Cho
Ambient light inside the viewing field of a shutter-glasses 3DTV system can cause perceivable flicker due to the high brightness of the light source. Omitting the front polarizer of the shutter glasses can be a solution for reducing ambient light flicker, but it causes noticeable ghosting whenever 3D viewers tilt their heads. In this paper, we propose new flicker-free shutter glasses that compensate for the viewer's head tilt using a tilt sensor. The crosstalk level introduced by the shutter is below 1.6% within the tilt angle range from 0 to ±50°.
Citations: 0