
2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video - Latest Publications

Real-time free-viewpoint viewer from multiview video plus depth representation coded by H.264/AVC MVC extension
S. Shimizu, H. Kimata, Y. Ohtani
This paper presents a real-time video-based rendering system that uses multiview video data with depth representation for free-viewpoint navigation. The proposed rendering algorithm not only achieves high quality rendering but also increases viewpoint flexibility to cover viewpoints that do not lie on the camera baselines. The proposed system achieves real-time decoding of multiple videos and depth maps that are encoded by the H.264/AVC Multiview Video Coding Extension on a regular CPU. The rendering process is fully implemented on a commercial GPU. A performance evaluation shows that our system can generate XGA free-viewpoint images at 30 fps.
Citations: 9
An improved multiview stereo video FGS scalable scheme
Lei Yang, Xiaowei Song, Chunping Hou, Jichang Guo, Sumei Li, Yuan Zhou
A multiview stereo video FGS (Fine Granular Scalability) scalable scheme is presented in this paper. The similarity among adjacent views is fully utilized, and a tradeoff scheme is presented to adapt to the decoder's differing demands for Quality First (QF) and View First (VF). The scheme covers three cases: I, P, and B frames. The middle view is encoded as the base layer, while the other views are predicted from the partly retrieved FGS enhancement layers of adjacent views. The FGS enhancement layer of the current view is generated on that basis. Experimental results show that the presented scheme offers more flexible and extensive scalability, and can better adapt to different users' demands on view image quality and stereo immersion.
Citations: 1
Compression of depth information for 3D rendering
P. Zanuttigh, G. Cortelazzo
This paper presents a novel strategy for the compression of depth maps. The proposed scheme starts with a segmentation step which identifies and extracts edges and main objects, then it introduces an efficient compression strategy for the segmented regions' shape. In the subsequent step a novel algorithm is used to predict the surface shape from the segmented regions and a set of regularly spaced samples. Finally the few prediction residuals are efficiently compressed using standard image compression techniques. Experimental results show that the proposed scheme not only offers a significant gain over JPEG2000 on various types of depth maps but also produces depth maps without edge artifacts particularly suited to 3D warping and free viewpoint video applications.
Citations: 47
Accurate multi-view depth reconstruction with occlusions handling
Cédric Niquin, S. Prévost, Y. Rémion
We present an offline method for stereo matching using a large number of views. Our method is based on occlusion detection and is composed of two steps, one global and one local. In the first step we formulate an energy function that handles data, occlusion, and smoothness terms through a global graph-cuts optimization. In the second step we introduce a local cost that handles the occlusions found in the first step in order to refine the result. This cost takes advantage of both the multi-view aspect and the occlusions. The experimental results show how our algorithm combines the advantages of both global and local methods, and how accurate it is on boundary detection and on fine details.
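The global step minimizes an energy of data and smoothness terms (plus an occlusion term) via graph cuts. As a toy illustration of what such an energy looks like, the sketch below brute-forces the minimum of a data + smoothness energy over a short 1D scanline; the occlusion term and the graph-cuts solver itself are omitted, and all names (`energy`, `brute_force_min`, `smooth_w`) are hypothetical.

```python
import itertools

def energy(labels, data_cost, smooth_w):
    """E(f) = sum_p D_p(f_p) + smooth_w * sum_{p,q neighbors} |f_p - f_q|.
    A tiny stand-in for the paper's graph-cuts energy (data + smoothness;
    the occlusion term is left out for brevity)."""
    d = sum(data_cost[p][labels[p]] for p in range(len(labels)))
    s = smooth_w * sum(abs(labels[p] - labels[p + 1])
                       for p in range(len(labels) - 1))
    return d + s

def brute_force_min(data_cost, smooth_w, n_labels):
    """Exhaustive minimization; only feasible for toy problems, which is
    exactly why real systems use graph cuts instead."""
    n = len(data_cost)
    best = min(itertools.product(range(n_labels), repeat=n),
               key=lambda f: energy(f, data_cost, smooth_w))
    return list(best)
```

The smoothness weight trades data fidelity against label coherence, which is the same tradeoff the graph-cuts optimization resolves at scale.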
Citations: 4
Interactive free viewpoint video from multiple stereo
C. Weigel, S. Schwarz, T. Korn, Martin Wallebohr
We present a system for rendering free viewpoint video from data acquired in advance by one or more stereo camera pairs. The free viewpoint video can be observed standalone or shown embedded in a synthetic computer graphics scene. Compared to state-of-the-art free viewpoint video applications, fewer cameras are required. The system is scalable in that more stereo pairs can be added to increase the viewing latitude around the object, and it is therefore adaptable to different kinds of applications such as quality assessment tasks or virtual fairs. The main contributions of this paper are i) the scalable extension of the system by additional stereo pairs and ii) the embedding of the object into a synthetic scene in a pseudo-3D manner. We implement the application using a highly customizable software framework for image processing tasks.
Citations: 1
Accurate 3D reconstruction via surface-consistency
Chenglei Wu, Xun Cao, Qionghai Dai
We present an algorithm that fuses multi-view stereo (MVS) and photometric stereo to reconstruct 3D models of objects filmed by multiple cameras under varying illumination. First, we obtain the surface normal scaled by albedo for each view through photometric stereo techniques. Then, based on the scaled normals, a new correspondence matching method, namely the surface-consistency metric, is proposed to acquire accurate 3D positions of pixels through triangulation. After filtering the point cloud, a Poisson surface reconstruction is applied to obtain a watertight mesh. The algorithm has been implemented on our multi-camera, multi-light acquisition system. We validate the method by complete reconstruction of challenging real objects and show experimentally that this technique can greatly improve on previous MVS results.
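The first step, recovering albedo-scaled normals per pixel, is the classical Lambertian photometric-stereo least-squares problem. A minimal sketch follows (not the authors' code; the function name and array layout are assumptions):

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Recover albedo-scaled normals from per-pixel intensities.

    intensities: (k, n) array, k light setups, n pixels.
    light_dirs:  (k, 3) array of unit light directions.
    Under Lambertian shading I = L @ (albedo * n), the least-squares
    solution of L g = I gives g = albedo * n for every pixel.
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, n)
    albedo = np.linalg.norm(g, axis=0)              # per-pixel albedo
    normals = g / np.clip(albedo, 1e-8, None)       # unit surface normals
    return normals, albedo
```

At least three non-coplanar light directions are needed for the system to be well posed, which is why such pipelines rely on a multi-light acquisition rig.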
Citations: 4
Multi-view stereo using multi-luminance images
Xiaoduan Feng, Yebin Liu, Qionghai Dai
More and more multi-luminance image acquisition systems are designed for relighting. Beyond their basic purpose, multi-luminance images can also be adopted to enhance the performance of multi-view stereo. By fusing the point clouds obtained from images under different luminance setups, a good model of the object can be achieved, with high robustness to image noise, shadows and highlights. This is the basic idea of our novel multi-view stereo method. Supported by our own multi-view, multi-luminance image acquisition system, our method can produce good models of real-world objects.
Citations: 1
Horizontal parallax distortion correction method in toed-in camera with wide-angle lens
Wooseong Kang, Seunghyun Lee
An effect of the toed-in camera configuration is keystone distortion, which causes vertical and horizontal parallax in the stereoscopic image. However, if a stereoscopic image captured by a toed-in camera system with fish-eye lenses is displayed on a mobile device, it is uncomfortable to view because the horizontal parallax contains distortion caused by the wide field of view of the lenses. In this paper, we therefore propose a novel correction method for the horizontal parallax distortion, which is one of the keystone distortions. We have conducted experiments to validate the proposed method. The captured stereoscopic image was corrected for both the barrel distortion and the horizontal parallax distortion. The proposed method thus corrects the horizontal parallax distortion of a toed-in camera system so that users can enjoy three-dimensional effects without visual fatigue.
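Barrel distortion is commonly modeled as a radial polynomial and inverted numerically. The sketch below undoes the simple model x_d = x_u (1 + k1 r^2 + k2 r^4) by fixed-point iteration; it is illustrative only, since the abstract does not specify the paper's fish-eye correction, and the coefficients, center, and function name are assumptions.

```python
import numpy as np

def undistort_points(pts, k1, k2=0.0, center=(0.0, 0.0)):
    """Invert the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4)
    by fixed-point iteration, in normalized image coordinates.
    Negative k1 corresponds to barrel distortion."""
    c = np.asarray(center, float)
    d = np.asarray(pts, float) - c   # distorted offsets from the center
    u = d.copy()                     # initial guess: undistorted = distorted
    for _ in range(20):              # iterate u = d / (1 + k1*r^2 + k2*r^4)
        r2 = (u ** 2).sum(axis=-1, keepdims=True)
        u = d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return u + c
```

For mild distortion the iteration converges in a handful of steps; strongly distorting fish-eye lenses generally need a richer model than this two-coefficient polynomial.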
Citations: 5
Free-View TV watermark selection based on the distribution characteristics
Evlambios E. Apostolidis, G. Triantafyllidis
In Free-View Television (FTV), the user can interactively control the viewpoint and generate new arbitrary views of a dynamic scene from any 3D position. The new views might be recorded and misused; therefore, the problem of copyright and copy protection in FTV should be solved. Among many alternative rights management methods, the copyright problem for visual data can be approached by embedding hidden, imperceptible information, called a watermark, into the image and video content. But this approach differs from simple watermarking, since a watermark in FTV should not only be resistant to common video processing and multi-view video processing operations, but also be easily extractable from a video generated at an arbitrary view. In this paper, we evaluate the performance of several watermarks according to their distribution characteristics, in order for them to survive in the newly generated arbitrary views of FTV.
Citations: 0
Temporally consistent layer depth ordering via pixel voting for pseudo 3D representation
Engin Turetken, A. Alatan
A new region-based depth ordering algorithm is proposed based on the segmented motion layers with affine motion models. Starting from an initial set of layers that are independently extracted for each frame of an input sequence, relative depth order of every layer is determined following a bottom-to-top approach from local pair-wise relations to a global ordering. Layer sets of consecutive time instants are warped in two opposite directions in time to capture pair-wise occlusion relations of neighboring layers in the form of pixel voting statistics. Global depth order of layers is estimated by mapping the pair-wise relations to a directed acyclic graph and solving the longest path problem via a breadth-first search strategy. Temporal continuity is enforced both at the region segmentation and depth ordering stages to achieve temporally coherent layer support maps and depth order relations. Experimental results show that the proposed algorithm yields quite promising results even on dynamic scenes with multiple motions.
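The global ordering step, mapping pairwise occlusion relations to a directed acyclic graph and taking longest-path depths with a breadth-first strategy, can be sketched with Kahn's topological algorithm; the edge-list input format and the names used here are assumptions, not the paper's interface.

```python
from collections import deque

def depth_order(num_layers, occludes):
    """Assign a global depth rank to layers from pairwise relations.

    occludes: list of (a, b) pairs meaning layer a occludes (is in
    front of) layer b. Longest-path depth in the resulting DAG yields
    a rank consistent with every pairwise relation (Kahn's algorithm,
    breadth-first over the DAG).
    """
    succ = [[] for _ in range(num_layers)]
    indeg = [0] * num_layers
    for a, b in occludes:
        succ[a].append(b)
        indeg[b] += 1
    rank = [0] * num_layers
    q = deque(i for i in range(num_layers) if indeg[i] == 0)
    while q:
        u = q.popleft()
        for v in succ[u]:
            rank[v] = max(rank[v], rank[u] + 1)  # longest path into v
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return rank  # smaller rank = nearer the camera
```

If the pixel-voting statistics ever produce a cycle, the relations are inconsistent and must be pruned before this step, since longest paths are only defined on an acyclic graph.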
Citations: 5