
2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video — Latest Publications

Real-time free-viewpoint viewer from multiview video plus depth representation coded by H.264/AVC MVC extension
S. Shimizu, H. Kimata, Y. Ohtani
This paper presents a real-time video-based rendering system that uses multiview video data with depth representation for free-viewpoint navigation. The proposed rendering algorithm not only achieves high quality rendering but also increases viewpoint flexibility to cover viewpoints that do not lie on the camera baselines. The proposed system achieves real-time decoding of multiple videos and depth maps that are encoded by the H.264/AVC Multiview Video Coding Extension on a regular CPU. The rendering process is fully implemented on a commercial GPU. A performance evaluation shows that our system can generate XGA free-viewpoint images at 30 fps.
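Free-viewpoint rendering from video-plus-depth typically warps each pixel through its depth value into a virtual camera. A minimal sketch of such depth-based reprojection (the camera intrinsics and pose below are hypothetical, not taken from the paper) might look like:

```python
import numpy as np

def warp_pixel(u, v, depth, K_src, K_dst, R, t):
    """Reproject pixel (u, v) with known depth from a source camera
    into a virtual destination camera (illustrative, hypothetical setup)."""
    # Back-project to a 3D point in the source camera frame.
    p_src = depth * np.linalg.inv(K_src) @ np.array([u, v, 1.0])
    # Transform into the destination camera frame.
    p_dst = R @ p_src + t
    # Project into the destination image plane.
    uvw = K_dst @ p_dst
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 384.0],
              [0.0, 0.0, 1.0]])
# Sanity check: with an identity pose the pixel maps back onto itself.
u2, v2 = warp_pixel(100.0, 200.0, 2.5, K, K, np.eye(3), np.zeros(3))
```

In a real system this warp runs per pixel on the GPU, with hole filling for disoccluded regions.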
Citations: 9
An improved multiview stereo video FGS scalable scheme
Lei Yang, Xiaowei Song, Chunping Hou, Jichang Guo, Sumei Li, Yuan Zhou
A multiview stereo video FGS (Fine Granular Scalability) scalable scheme is presented in this paper. The similarity among adjacent views is fully utilized, and a tradeoff scheme is presented to adapt to the decoder's differing demands of Quality First (QF) and View First (VF). The scheme comprises three cases: I, P, and B frames. The middle view is encoded as the base layer, while the other views are predicted from the partly retrieved FGS enhancement layers of adjacent views; the FGS enhancement layer of the current view is generated on that basis. Experimental results show that the presented scheme offers more flexible and extensive scalability, better adapting to different users' demands on view image quality and stereo immersion.
Citations: 1
Compression of depth information for 3D rendering 为3D渲染压缩深度信息
P. Zanuttigh, G. Cortelazzo
This paper presents a novel strategy for the compression of depth maps. The proposed scheme starts with a segmentation step which identifies and extracts edges and main objects, then introduces an efficient compression strategy for the segmented regions' shape. In the subsequent step a novel algorithm is used to predict the surface shape from the segmented regions and a set of regularly spaced samples. Finally, the prediction residuals are efficiently compressed using standard image compression techniques. Experimental results show that the proposed scheme not only offers a significant gain over JPEG2000 on various types of depth maps but also produces depth maps without edge artifacts, particularly suited to 3D warping and free-viewpoint video applications.
Citations: 47
Accurate multi-view depth reconstruction with occlusions handling
Cédric Niquin, S. Prévost, Y. Rémion
We present an offline method for stereo matching using a large number of views. Our method is based on occlusion detection and is composed of two steps, one global and one local. In the first step we formulate an energy function that handles data, occlusion, and smoothness terms through a global graph-cuts optimization. In the second step we introduce a local cost that handles occlusions from the first step in order to refine the result. This cost takes advantage of both the multi-view aspect and the occlusions. The experimental results show how our algorithm combines the advantages of both global and local methods, and how accurate it is on boundary detection and fine details.
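The global step's energy combines data, occlusion, and smoothness terms; a toy evaluation of such an energy over a 1-D disparity labeling can illustrate the structure (the cost values, occlusion penalty, and truncation threshold below are illustrative, not taken from the paper):

```python
def labeling_energy(disparities, data_cost, occluded, lam=1.0, occ_penalty=2.0):
    """Toy energy of a 1-D disparity labeling: a data term for visible
    pixels, a fixed penalty for pixels marked occluded, and a truncated
    smoothness term between neighbours (all constants illustrative)."""
    e = 0.0
    for i, d in enumerate(disparities):
        e += occ_penalty if occluded[i] else data_cost[i][d]
    for i in range(len(disparities) - 1):
        # Truncated linear smoothness keeps depth discontinuities cheap.
        e += lam * min(abs(disparities[i] - disparities[i + 1]), 2)
    return e

# Three pixels, three candidate disparities; the last pixel is occluded.
e = labeling_energy([1, 1, 2],
                    [[5, 0, 3], [4, 0, 2], [9, 9, 0]],
                    [False, False, True])
```

A graph-cuts solver would minimize this energy over all labelings; here we only evaluate one candidate.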
Citations: 4
Interactive free viewpoint video from multiple stereo
C. Weigel, S. Schwarz, T. Korn, Martin Wallebohr
We present a system for rendering free viewpoint video from data acquired in advance by one or more stereo camera pairs. The free viewpoint video can be observed standalone or shown embedded in a synthetic computer graphics scene. Compared to state-of-the-art free viewpoint video applications, fewer cameras are required. The system is scalable in terms of adding more stereo pairs in order to increase the viewing latitude around the object, and is therefore adaptable to different kinds of application such as quality assessment tasks or virtual fairs. The main contributions of this paper are i) the scalable extension of the system by additional stereo pairs and ii) the embedding of the object into a synthetic scene in a pseudo-3D manner. We implement the application using a highly customizable software framework for image processing tasks.
Citations: 1
Accurate 3D reconstruction via surface-consistency
Chenglei Wu, Xun Cao, Qionghai Dai
We present an algorithm that fuses multi-view stereo (MVS) and photometric stereo to reconstruct 3D models of objects filmed by multiple cameras under varying illuminations. Firstly, we obtain the surface normal scaled by albedo for each view through photometric stereo techniques. Then, based on the scaled normal, a new correspondence matching method, namely the surface-consistency metric, is proposed to acquire accurate 3D positions of pixels through triangulation. After filtering the point cloud, a Poisson surface reconstruction is applied to obtain a watertight mesh. The algorithm has been implemented based on our multi-camera and multi-light acquisition system. We validate the method by complete reconstruction of challenging real objects and show experimentally that this technique can greatly improve on previous MVS results.
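The albedo-scaled normal in the first step is classically recovered by least squares under a Lambertian model, I = L g with g = albedo * n. A minimal sketch with synthetic light directions and intensities (not the authors' data or exact pipeline) is:

```python
import numpy as np

def scaled_normal(lights, intensities):
    """Recover the albedo-scaled normal g = albedo * n by least squares
    from Lambertian observations I = L @ g (classic photometric stereo)."""
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)
    return g

# Three synthetic, non-coplanar unit light directions.
L = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [-1.0, 0.0, 1.0]]) / np.sqrt(2.0)
true_g = np.array([0.0, 0.0, 0.8])   # albedo 0.8, normal pointing up
I = L @ true_g                        # noiseless synthetic intensities
g = scaled_normal(L, I)
albedo, normal = np.linalg.norm(g), g / np.linalg.norm(g)
```

With at least three non-coplanar lights the system is determined; real captures use many more lights and solve in a least-squares sense per pixel.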
Citations: 4
Quality assessment of 3D asymmetric view coding using spatial frequency dominance model
Feng Lu, Haoqian Wang, Xiangyang Ji, Guihua Er
To save bit-rate in stereo video applications, asymmetric view coding is introduced, which encodes the stereo views with different qualities. However, quality assessment of asymmetric view coding is difficult, because the impact of the degraded view upon the 3D percept depends on the Human Visual System (HVS) and cannot be indicated by conventional metrics. This paper introduces a quality assessment model based on the observed phenomenon that spatial frequency determines view dominance under the action of the HVS. A metric is proposed based on this model for assessing the quality of asymmetric view coding. Experimental results show that the proposed metric agrees with subjective evaluation.
Citations: 34
Objective quality assessment of depth image based rendering in 3DTV system
Hang Shao, Xun Cao, Guihua Er
In this paper, a novel objective evaluation of depth image based rendering (DIBR) is proposed for 3D video in the format of a monocular video augmented by a gray-scale depth image. The metric is composed of a Color and Sharpness of Edge Distortion (CSED) measure. Color distortion measures the luminance loss of the rendered image compared with the reference, and sharpness of edge distortion calculates a depth-weighted proportion of remaining edges to the original edges. Compared to conventional quality metrics such as MSE and PSNR, our metric captures not only color artifacts but also synthesis errors through the above two aspects. Subjective assessment of the different rendering methods is done as well, and the obtained results show significant agreement with our objective metric.
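For reference, the conventional PSNR baseline that CSED is compared against is computed from the mean squared error; a generic sketch (not the authors' implementation) is:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and a rendered/distorted image (generic textbook formula)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
test = np.full((4, 4), 5.0)   # uniform error of 5 grey levels -> MSE = 25
value = psnr(ref, test)
```

PSNR treats every pixel error equally, which is exactly why depth-weighted edge measures like CSED are proposed for DIBR artifacts concentrated along depth discontinuities.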
Citations: 55
Real-time transmission of high-resolution multi-view stereo video over IP networks
Yuan Zhou, Chunping Hou, Zhigang Jin, Lei Yang, Jiachen Yang, Jichang Guo
In this paper, a real-time high-resolution multi-view video transport system which can deliver multi-view video over IP networks is proposed. Video streams are encoded with H.264/AVS. Owing to the massive amount of data involved, multi-view video is delivered in two separate IP channels. Since packet losses always occur in IP networks, a novel packet processing method is employed in the proposed system to exploit the correlation between views for lost-data recovery. Additionally, an error concealment scheme for multi-view stereo video is employed in this transport system in order to address the packet loss problem in IP networks. The experimental results demonstrate that the proposed transport system is feasible for multi-view video in IP networks.
Citations: 10
Distortions of synthesized views caused by compression of views and depth maps
K. Klimaszewski, K. Wegner, M. Domański
The paper deals with prospective 3D video transmission systems that would use compression of both multiview video and depth maps. It addresses the quality of views synthesized from other views transmitted together with depth information. For state-of-the-art depth map estimation and view synthesis techniques, the paper shows that the AVC/SVC-based Multiview Video Coding technique can be used for compression of both view pictures and depth maps. The paper reports extensive experiments in which synthesized video quality has been estimated by use of both the PSNR index and subjective assessment. The critical value of the depth quantization parameter is defined as a function of the reference view quantization parameter. For smaller depth map quantization parameters, depth map compression has negligible influence on the fidelity of synthesized views.
Citations: 22