
2011 International Conference on 3D Imaging (IC3D): Latest Publications

Transforming 3D cinema content for an enhanced 3DTV experience
Pub Date : 2011-12-05 DOI: 10.1109/IC3D.2011.6584376
Lasith Yasakethu, L. Blondé, D. Doyen, Q. Huynh-Thu
3D cinema and 3DTV sit at two different points on the screen-size spectrum. When the same stereoscopic-3D content is viewed on a cinema screen and on a 3DTV screen, it produces a different 3D impression. As a result, it is difficult to fulfill the requirements of 3DTV with content captured for 3D cinema. It is therefore important to properly address 3DTV content creation to avoid possible delays in the deployment of 3DTV. In this paper, we first explore the effects of using the same content for 3D cinema and 3DTV, and then analyze, through subjective testing, the performance of several disparity-based transformations for converting 3D cinema content to 3DTV. The effectiveness of the transformations is analyzed in terms of both the depth quality and the visual comfort of the 3D experience. We show that a simple shift-based disparity transformation can enhance the 3DTV experience from a common input signal originally captured for cinema viewing.
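The shift-based transformation referred to above amounts to adding a constant offset to every screen disparity, which translates the whole perceived depth range relative to the screen plane. A minimal sketch of that idea (the function name and cropping policy are illustrative, not the authors' implementation):

```python
import numpy as np

def shift_stereo_pair(left: np.ndarray, right: np.ndarray, shift_px: int):
    """Shift the right view horizontally by shift_px pixels.

    A uniform horizontal shift adds the same offset to every
    disparity, moving the whole scene nearer to or farther from
    the screen plane, e.g. to fit a cinema-graded disparity range
    into the smaller depth budget of a TV screen.
    """
    h, w = right.shape[:2]
    shifted = np.zeros_like(right)
    if shift_px >= 0:
        shifted[:, shift_px:] = right[:, : w - shift_px]
    else:
        shifted[:, : w + shift_px] = right[:, -shift_px:]
    # Crop both views to the columns that still hold valid pixels
    # so the pair stays geometrically consistent after the shift.
    lo, hi = max(0, shift_px), w + min(0, shift_px)
    return left[:, lo:hi], shifted[:, lo:hi]
```

The appropriate shift depends on screen size and viewing distance, which is why the transformations are evaluated subjectively.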
Citations: 1
Educational benefits of stereoscopic visualization from multiple viewpoints, illustrated with an electrical motor model
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584372
Takuya Yoshii, Shu Matsuura
In this study, we created a three-dimensional (3D) model of a simplified direct current (DC) motor and used it to explain the mechanism of actual DC motors in junior high school science classes. We administered a questionnaire to students before and after the presentation of the model. Before the presentation, many students were not confident about explaining the mechanism of a DC motor based on Fleming's left-hand rule. We then showed the stereoscopic display of our DC motor model from various viewpoints and explained the application of Fleming's left-hand rule. The results of the questionnaire suggest that students gained confidence in explaining the application after viewing the stereoscopic display, and that the change of viewpoint in the stereoscopic display was effective in improving their understanding.
Citations: 5
3D holographic video system
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584377
Wauthier d'Ursel
We describe a device for presenting 3D animated images of objects of any size, obtained by taking pictures through a diffraction grating and projecting them onto a holographic screen. The grating, oriented horizontally, deflects each ray of light by an amount that depends on its wavelength, yielding as many viewing angles as there are wavelengths. The screen diffracts horizontally and acts as a complex lens that reconstitutes an integral 3D effect. Each pixel of the recording support performs a spectral analysis and returns this information at projection time. With three recording supports, natural RGB colours are obtained. The data are converted into digital form, and this processing allows synthetic three-dimensional images to be created.
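The wavelength-dependent deflection described here follows the standard diffraction-grating relation, quoted for reference (textbook optics, not a formula given in the paper):

```latex
% A grating with line spacing d sends light of wavelength \lambda,
% incident at angle \theta_i, into diffraction order m at angle \theta_m:
d \left( \sin\theta_m - \sin\theta_i \right) = m \lambda
```

Because \theta_m varies with \lambda, each wavelength leaves the grating in its own direction, which is what lets the system trade spectral content for viewing angles.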
Citations: 1
A visual hull free algorithm for fast and robust multi-view stereo
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584375
Yilong Liu, Yuanyuan Jiang, Yebin Liu
A fast and robust image-based multi-view reconstruction system is introduced in this article. Instead of using a visual hull as input, multi-view images are used directly to generate a precise and watertight 3D model. At the same time, a point cloud and a visual hull can also be produced as by-products. Our system is made up of three stages: point cloud generation, fusion, and meshing. With a well-designed algorithm and data structure, state-of-the-art speed is achieved in this highly reliable system, as revealed by our comparison with related work.
Citations: 0
A compact 3D representation for multi-view video
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584371
Jordi Salvador, J. Casas
This paper presents a methodology for obtaining a 3D reconstruction of a dynamic scene in multi-camera settings. Our target is to derive a compact representation of the 3D scene that is effective and accurate, whatever the number of cameras, even for very wide baseline settings. Easy real-time 3D scene capture has outstanding applications in 2D and 3D content production, free-viewpoint video of natural scenes, and interactive video applications.
Citations: 1
On the distinction between perceived & predicted depth in S3D films
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584389
Karim Benzeroual, L. Wilcox, Ali Kazimi, R. Allison
A primary concern when making stereoscopic 3D (S3D) movies is to promote an effective and comfortable S3D experience for the audience when the content is displayed on the screen. The amount of depth produced on-screen can be controlled using a variety of parameters. Many of these are lighting-related, such as lighting architecture and technology. Others are optical or positional and thus have a geometrical effect, including camera interaxial distance, camera convergence, lens properties, viewing distance and angle, screen/projector properties, and viewer anatomy (interocular distance). The amount of depth estimated from disparity alone can be precisely predicted by simple trigonometry; however, perceived depth from disparity in complex scenes is difficult to evaluate and is most likely different from the depth predicted from geometry. This discrepancy is mediated by perceptual and cognitive factors, including the resolution of the combination or conflict of pictorial, motion, and binocular depth cues. This paper reviews geometric predictions of depth from disparity and presents the results of experiments that assess perceived S3D depth and the effect of the complexity of scene content.
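The "simple trigonometry" is the similar-triangles relation between the eyes, the screen disparity, and the fused point. A minimal sketch (default values and variable names are illustrative; the formula assumes a viewer centred in front of the screen):

```python
def predicted_depth(screen_disparity_m: float,
                    viewing_distance_m: float = 2.0,
                    interocular_m: float = 0.065) -> float:
    """Predicted distance from the viewer (in metres) of a fused
    stereo point, derived from similar triangles.

    screen_disparity_m is the on-screen separation of corresponding
    points: positive (uncrossed) disparity places the point behind
    the screen, negative (crossed) disparity in front of it.
    """
    e, v, d = interocular_m, viewing_distance_m, screen_disparity_m
    if d >= e:
        # Disparity at or beyond the interocular distance: the
        # geometry predicts a point at (or beyond) infinity.
        raise ValueError("disparity >= interocular distance")
    return e * v / (e - d)

# Example: 1 cm of crossed disparity seen from 2 m.
print(f"{predicted_depth(-0.01):.2f} m")  # ~1.73 m, in front of the screen
```

The paper's point is that this geometric value and the depth viewers actually report can diverge substantially in complex scenes.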
Citations: 3
A new jump edge detection method for 3D cameras
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584393
A. Lejeune, S. Piérard, Marc Van Droogenbroeck, J. Verly
Edges are a fundamental clue for analyzing, interpreting, and understanding 3D scenes: they delineate object boundaries. Available edge detection methods are not suited to 3D cameras such as the Microsoft Kinect or time-of-flight cameras: they are slow and do not take the characteristics of the cameras into consideration. In this paper, we present a fast jump edge detection technique for 3D cameras based on the principles of Canny's edge detector. We first analyze the characteristics of the range signal for two different kinds of cameras: a time-of-flight camera (the PMD[vision] CamCube) and the Microsoft Kinect. From this analysis, we define appropriate operators and thresholds to perform the edge detection. We then present results of the developed algorithms for both cameras.
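As a rough illustration of what a jump edge is, the sketch below flags range discontinuities between neighbouring pixels. This is only the generic discontinuity test; the camera-specific operators and thresholds that constitute the paper's contribution are not reproduced here:

```python
import numpy as np

def jump_edge_mask(depth: np.ndarray, jump_thresh: float = 0.05) -> np.ndarray:
    """Mark pixels whose range differs from a 4-neighbour by more
    than jump_thresh (in the units of the depth map).

    Jump edges are depth discontinuities at object boundaries, as
    opposed to intensity edges in a regular image.
    """
    dx = np.abs(np.diff(depth, axis=1))  # column-to-column jumps, (h, w-1)
    dy = np.abs(np.diff(depth, axis=0))  # row-to-row jumps, (h-1, w)
    mask = np.zeros(depth.shape, dtype=bool)
    mask[:, 1:] |= dx > jump_thresh   # mark both sides of each
    mask[:, :-1] |= dx > jump_thresh  # horizontal discontinuity
    mask[1:, :] |= dy > jump_thresh   # and both sides of each
    mask[:-1, :] |= dy > jump_thresh  # vertical discontinuity
    return mask
```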
Citations: 19
Parallel implementation of depth-image-based rendering
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584366
Kun Xu, Xiangyang Ji, Ruiping Wang, Qionghai Dai
Depth-image-based rendering (DIBR) is a key step in 3D video generation, and a parallel implementation of DIBR can improve rendering efficiency. General DIBR algorithms include two steps: pixel shifting (warping) and hole filling. These steps contain memory dependencies. To minimize memory conflicts, we employ an auxiliary matrix to record the maximum shifting distance. Implementation details on OpenMP and CUDA are presented, and experimental results on GPU and multi-core CPU are compared.
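A serial sketch of the pixel-shifting step shows the role such an auxiliary matrix can play: when several source pixels land on the same target pixel, the matrix records the winning (largest) shift, which is the conflict rule a parallel OpenMP or CUDA version has to preserve. Names and the disparity convention are illustrative, not the paper's code:

```python
import numpy as np

def dibr_pixel_shift(color: np.ndarray, disparity: np.ndarray):
    """Warp a view by per-pixel horizontal disparity (serial sketch).

    Assumes larger disparity means nearer to the camera, so on a
    write conflict the pixel with the larger shift must win; aux
    stores the maximum shift written to each target pixel so far.
    """
    h, w = disparity.shape
    out = np.zeros_like(color)
    aux = np.full((h, w), -np.inf)      # max shift per target pixel
    hole = np.ones((h, w), dtype=bool)  # pixels never written
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            tx = x + int(round(d))
            if 0 <= tx < w and d > aux[y, tx]:
                aux[y, tx] = d
                out[y, tx] = color[y, x]
                hole[y, tx] = False
    return out, hole  # the hole mask feeds the hole-filling step
```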
Citations: 2
Enhanced rate-distortion optimization for stereo interleaving video coding
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584364
Qian Ma, Yongbing Zhang, Qiong Liu, Xiangyang Ji, Qionghai Dai
The stereo interleaving format attracts more and more attention due to its backward compatibility with all existing 2D video coding standards as well as its high stereoscopic video compression efficiency. As one of the most significant coding components, rate-distortion optimization (RDO) in stereo interleaving video coding does not take into account application scenarios in which the reconstructed video needs up-sampling for display. To improve the efficiency of stereo interleaving video compression, we propose an enhanced RDO in which the up-sampling is taken into account in the distortion measurement. The proposed algorithm changes nothing in the syntax and is compatible with current decoders. Experimental results demonstrate that the proposed enhanced RDO reduces bitrates by 10%-44% on average and achieves a 0.1-0.65 dB PSNR gain compared with conventional RDO.
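Schematically, the proposal changes where distortion enters the Lagrangian cost J = D + λR: D is measured between the up-sampled reconstruction and the full-resolution reference rather than on the interleaved frame. A sketch under that reading, with the codec routines left as placeholders rather than any real encoder API:

```python
def rd_cost(distortion: float, rate_bits: float, lam: float) -> float:
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate_bits

def choose_mode(block, modes, lam, encode, upsample, reference, sse):
    """Pick the coding mode with the lowest enhanced RD cost.

    encode, upsample, reference and sse are placeholders: encode
    returns (reconstruction, rate); distortion is computed on the
    up-sampled reconstruction, which is the key change over
    conventional RDO for interleaved stereo.
    """
    best_mode, best_cost = None, float("inf")
    for mode in modes:
        recon, rate = encode(block, mode)
        cost = rd_cost(sse(upsample(recon), reference), rate, lam)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```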
Citations: 1
A toolchain for capturing and rendering stereo and multi-view datasets
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584392
F. Klose, C. Lipski, K. Ruhl, B. Meyer, M. Magnor
We present our toolchain for free-viewpoint video and dynamic scene reconstruction for video and stereoscopic content creation. Our tools take video data from a set of sparse, unsynchronized cameras and give great freedom during post-production. From the input data we can either generate new viewpoints with the Virtual Video Camera, a purely image-based system, or generate 3D scene models. The approaches are explained, and guidelines are given for weighing their specific advantages and disadvantages with respect to a concrete application.
Citations: 2