
2019 International Conference on 3D Immersion (IC3D): Latest Publications

A Novel Approach for Multi-View 3D HDR Content Generation via Depth Adaptive Cross Trilateral Tone Mapping
Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975988
Mansi Sharma, M. S. Venkatesh, Gowtham Ragavan, Rohan Lal
In this work, we propose a novel depth-adaptive tone mapping scheme for stereo HDR imaging and 3D display. We are interested in the case where different exposures are captured from different viewpoints. The scheme employs a new depth-adaptive cross-trilateral filter (DA-CTF) for recovering High Dynamic Range (HDR) images from multiple Low Dynamic Range (LDR) images captured at different exposure levels. Explicitly leveraging the additional depth information in the tone mapping operation correctly identifies global contrast changes and detail visibility changes, preserving edges and reducing halo artifacts in the 3D views synthesized by the depth-image-based rendering (DIBR) procedure. Experiments show that the proposed DA-CTF and DIBR scheme outperforms state-of-the-art operators in the enhanced depiction of tone-mapped HDR stereo images on LDR displays.
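The key ingredient here is cross-trilateral filtering, in which the depth map adds an extra edge-stopping term to the usual spatial and range kernels. The sketch below illustrates that general idea only; it is not the authors' DA-CTF, and the kernel shapes, the guidance choice and the parameter names (sigma_s, sigma_r, sigma_d) are assumptions made for illustration.

```python
# Minimal, naive sketch of a depth-guided cross-trilateral filter (assumed form,
# not the paper's DA-CTF): weights combine spatial distance, guidance-intensity
# difference and depth difference, so edges in the guide or depth map are preserved.
import numpy as np

def cross_trilateral_filter(src, guide, depth, radius=3,
                            sigma_s=2.0, sigma_r=0.1, sigma_d=0.05):
    """src, guide, depth: 2D float arrays of the same shape; returns filtered src."""
    h, w = src.shape
    pad = radius
    src_p = np.pad(src, pad, mode="edge")
    gui_p = np.pad(guide, pad, mode="edge")
    dep_p = np.pad(depth, pad, mode="edge")

    # Spatial Gaussian over the window offsets, computed once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))

    out = np.zeros_like(src)
    for y in range(h):
        for x in range(w):
            win_src = src_p[y:y + 2*pad + 1, x:x + 2*pad + 1]
            win_gui = gui_p[y:y + 2*pad + 1, x:x + 2*pad + 1]
            win_dep = dep_p[y:y + 2*pad + 1, x:x + 2*pad + 1]
            w_range = np.exp(-(win_gui - guide[y, x])**2 / (2.0 * sigma_r**2))
            w_depth = np.exp(-(win_dep - depth[y, x])**2 / (2.0 * sigma_d**2))
            weights = w_spatial * w_range * w_depth
            out[y, x] = np.sum(weights * win_src) / np.sum(weights)
    return out
```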
Citations: 3
The Semantic Web3d: Towards Comprehensive Representation of 3d Content on the Semantic Web
Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975906
J. Flotyński, D. Brutzman, Felix G. Hamza-Lup, A. Malamos, Nicholas F. Polys, L. Sikos, K. Walczak
One of the main obstacles to wide dissemination of immersive virtual and augmented reality environments on the Web is the lack of integration between 3D technologies and web technologies, which are increasingly focused on collaboration, annotation and semantics. This gap can be filled by combining VR and AR with the Semantic Web, which is a significant trend in the development of the Web. The use of the Semantic Web may improve the creation, representation, indexing, searching and processing of 3D web content by linking the content with formal and expressive descriptions of its meaning. Although several semantic approaches have been developed for 3D content, they are not explicitly linked to the available well-established 3D technologies, cover a limited set of 3D components and properties, and do not combine domain-specific and 3D-specific semantics. In this paper, we present the main motivations, concepts and development of the Semantic Web3D approach. It enables semantic, ontology-based representation of 3D content built upon the Extensible 3D (X3D) standard. The approach can integrate the Semantic Web with interactive 3D technologies in different domains, thereby serving as a step towards building the next generation of the Web that incorporates semantic 3D content.
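As a rough illustration of what an ontology-based description of 3D content looks like, the sketch below attaches a domain-level statement to a node of an X3D scene as RDF triples with rdflib. The vocabularies and identifiers (ex:Shape, dom:represents, dom:Exhibit, Statue_01) are hypothetical placeholders, not terms of the Semantic Web3D ontology.

```python
# Hedged sketch: link a 3D-scene entity to a domain concept using RDF.
# All namespaces and names below are invented for illustration.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/x3d#")      # assumed 3D-level vocabulary
DOM = Namespace("http://example.org/museum#")  # assumed domain vocabulary

g = Graph()
g.bind("ex", EX)
g.bind("dom", DOM)

statue = URIRef("http://example.org/scene/Statue_01")  # a node of the X3D scene
g.add((statue, RDF.type, EX.Shape))
g.add((statue, RDFS.label, Literal("Statue_01")))
g.add((statue, DOM.represents, DOM.Exhibit))  # domain-specific meaning of the node

print(g.serialize(format="turtle"))
```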
Citations: 7
A Novel Image Fusion Scheme for FTV View Synthesis Based on Layered Depth Scene Representation & Scale Periodic Transform
Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975902
Mansi Sharma, Gowtham Ragavan
This paper presents a novel image fusion scheme for view synthesis based on a layered depth profile of the scene and the scale periodic transform. To create a layered depth profile of the scene, we exploit the unique properties of the scale transform, treating depth map computation from reference images as a certain shift-variant problem. The depth computation problem is solved without deterministic stereo correspondences and without representing image signals in terms of shifts. Instead, we pose the problem so that image signals are representable as scale-periodic functions, and compute appropriate depth estimates by determining the scalings of a basis function. The rendering process is formulated as a novel image fusion in which the textures of all probable matching points are adaptively determined, implicitly leveraging the geometric information. The results demonstrate the superiority of the proposed approach in suppressing geometric, blurring and flicker artifacts in rendered wide-baseline virtual videos.
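The final rendering step described above amounts to a confidence-weighted blend of the textures contributed by all probable matching points. The short sketch below shows only that generic blending; the layered depth profile and the scale periodic transform that would produce the candidates and their weights are not reproduced, and the array layout is an assumption.

```python
# Generic per-pixel fusion of K candidate textures with per-candidate confidences.
# This is an illustrative stand-in, not the paper's scale-periodic formulation.
import numpy as np

def fuse_candidates(textures, confidences, eps=1e-8):
    """textures, confidences: arrays of shape (K, H, W); returns the (H, W) fusion."""
    weights = confidences / (confidences.sum(axis=0, keepdims=True) + eps)
    return (weights * textures).sum(axis=0)
```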
Citations: 3
Analysis of Intended Viewing Area vs Estimated Saliency on Narrative Plot Structures in VR Film
Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975990
Colm O. Fearghail, S. Knorr, A. Smolic
In cinematic virtual reality film, one of the primary challenges from a storytelling perspective is leading the attention of the viewers so that the narrative is understood as intended. Methods from traditional cinema have been applied with varying levels of success. This paper explores the use of a saliency convolutional neural network model and measures its results against the intended viewing area denoted by the creators and the ground truth of where the viewers actually looked. This information could then be used to further increase the effectiveness of a director's ability to focus attention in cinematic VR.
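One plausible way to relate the three signals involved (predicted saliency, the creators' intended viewing area, and recorded gaze) is to measure how much of each falls inside the intended area, as sketched below. The inputs and metric choices are assumptions of this sketch, not the paper's exact evaluation protocol.

```python
# Hedged sketch: score saliency and gaze against an intended-viewing-area mask.
import numpy as np

def saliency_mass_in_area(saliency, intended_mask):
    """Fraction of total saliency mass inside the intended area.
    saliency: (H, W) non-negative map; intended_mask: (H, W) boolean."""
    total = saliency.sum()
    return float(saliency[intended_mask].sum() / total) if total > 0 else 0.0

def gaze_hits_in_area(gaze_points, intended_mask):
    """Fraction of recorded gaze samples (row, col) landing in the intended area."""
    hits = sum(1 for r, c in gaze_points if intended_mask[r, c])
    return hits / len(gaze_points) if gaze_points else 0.0
```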
Citations: 5
IC3D 2019 Conference Program
Pub Date : 2019-12-01 DOI: 10.1109/ic3d48390.2019.8975904
{"title":"IC3D 2019 Conference Program","authors":"","doi":"10.1109/ic3d48390.2019.8975904","DOIUrl":"https://doi.org/10.1109/ic3d48390.2019.8975904","url":null,"abstract":"","PeriodicalId":344706,"journal":{"name":"2019 International Conference on 3D Immersion (IC3D)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131018601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assessing Spatial and Temporal Reliability of the Vive System as a Tool for Naturalistic Behavioural Research
Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975994
G. Verdelet, R. Salemme, C. Désoche, F. Volland, A. Farnè, A. Coudert, R. Hermann, Eric Truy, V. Gaveau, F. Pavani
Nowadays, behavioral and cognitive neuroscience studies have turned 'naturalistic', aiming at understanding brain functions while maintaining complexity close to everyday life. Many scholars have started using commercially available VR devices, which were not conceived as research tools. It is therefore important to assess their spatio-temporal reliability and inform scholars about the basic resolutions they can achieve. Here we provide such an assessment for the VIVE (HTC Vive) by comparing it with a VICON (BONITA 10) system. Results show sub-millimeter Vive precision (0.237 mm) and near-centimeter accuracy (8.7 mm static, 8.5 mm dynamic). We also report the Vive's reaction to a tracking loss: the system takes 319.5 ± 16.8 ms to detect the loss and can still be perturbed for about 3 seconds after tracking recovery. The Vive device allows fairly accurate and reliable spatio-temporal measurements and may be well suited for studies of typical human behavior, provided tracking loss is prevented.
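For context, figures of this kind can be computed from synchronized position samples roughly as sketched below: precision as the dispersion of repeated samples of a motionless tracker, accuracy as the mean distance to the reference (Vicon) positions. The exact pairing, units and protocol used in the study are not reproduced here; this is only an assumed formulation.

```python
# Hedged sketch of precision/accuracy computation from (N, 3) position arrays in mm.
import numpy as np

def precision_mm(static_samples):
    """Mean Euclidean distance of repeated static samples to their centroid."""
    centroid = static_samples.mean(axis=0)
    return float(np.linalg.norm(static_samples - centroid, axis=1).mean())

def accuracy_mm(vive_positions, vicon_positions):
    """Mean Euclidean distance between time-aligned Vive and Vicon samples."""
    return float(np.linalg.norm(vive_positions - vicon_positions, axis=1).mean())
```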
Citations: 18
[IC3D 2019 Title Page]
Pub Date : 2019-12-01 DOI: 10.1109/ic3d48390.2019.8975998
{"title":"[IC3D 2019 Title Page]","authors":"","doi":"10.1109/ic3d48390.2019.8975998","DOIUrl":"https://doi.org/10.1109/ic3d48390.2019.8975998","url":null,"abstract":"","PeriodicalId":344706,"journal":{"name":"2019 International Conference on 3D Immersion (IC3D)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124181288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Annotation-Based Development of Explorable Immersive VR/AR Environments
Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975907
J. Flotyński, Adrian Nowak
Virtual and augmented reality environments consist of objects that typically interact with other objects and users, leading to the evolution of 3D objects and scenes over time. In multiple VR/AR applications in different domains, interactions and temporal properties of 3D content may be represented using general or domain knowledge, which makes them comprehensible to average users or domain experts without expertise in IT. Logging interactions and their results can be especially useful in VR/AR environments that are intended to monitor and gain knowledge about the system behavior as well as users' behavior and preferences. However, the available approaches to the development of VR/AR environments do not enable logging interactions in an explorable way. The main contribution of this paper is a method of developing explorable VR/AR environments on the basis of existing environments developed using well-established tools, such as game engines and imperative programming languages. In the approach, interactions can be represented with general or domain knowledge. The method is discussed in the context of an immersive car showroom, which enables the acquisition of knowledge about customers' interests and preferences for marketing and merchandising purposes.
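The annotation mechanism can be conveyed with a small sketch: an interaction method is marked with domain-level metadata, and every invocation is appended to an explorable log as a (subject, predicate, object, time) statement that can later be queried. The decorator, class and predicate names below are hypothetical, and the sketch is written in Python rather than a game-engine language; it illustrates the mechanism only, not the paper's toolkit.

```python
# Hedged sketch: log annotated interactions as domain-level statements.
import time
from functools import wraps

interaction_log = []  # explorable log of (subject, predicate, object, timestamp)

def interaction(predicate):
    """Mark a method as a domain-level interaction named `predicate`."""
    def decorator(func):
        @wraps(func)
        def wrapper(self, target, *args, **kwargs):
            result = func(self, target, *args, **kwargs)
            interaction_log.append((self.name, predicate, target, time.time()))
            return result
        return wrapper
    return decorator

class Customer:
    def __init__(self, name):
        self.name = name

    @interaction("opensDoorOf")
    def open_door(self, car_model):
        pass  # the imperative scene-manipulation code would go here

Customer("visitor_01").open_door("CityCar_X")
print(interaction_log)  # e.g. [('visitor_01', 'opensDoorOf', 'CityCar_X', <timestamp>)]
```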
Citations: 2
Local-Convexity Reinforcement for Scene Reconstruction from Sparse Point Clouds
Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975900
M. Lhuillier
Several methods reconstruct surfaces from sparse point clouds estimated from images. Most of them build a 3D Delaunay triangulation of the points and compute an occupancy labeling of the tetrahedra thanks to visibility information and surface constraints. However, their most notable errors are falsely labeled freespace tetrahedra. We present labeling corrections of these errors based on a new shape constraint: local convexity. In the simplest case, this means that a freespace tetrahedron of the Delaunay triangulation is relabeled as matter if its size is small enough and all its vertices lie in matter tetrahedra. The allowed corrections are larger in the vertical direction than in the horizontal ones, to take into account the anisotropy of usual scenes. In the experiments, our corrections improve the results of previous surface reconstruction methods applied to videos taken by a consumer 360 camera.
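The relabeling rule in its simplest form can be sketched as follows, under assumed data structures: each tetrahedron carries a label, a volume and its vertex indices, and vertex_tetra maps a vertex to the indices of its incident tetrahedra. The single volume threshold and the fixed-point loop are assumptions; the paper's actual corrections are anisotropic (more permissive vertically than horizontally), which this sketch omits.

```python
# Hedged sketch of local-convexity relabeling on an assumed tetrahedron structure.
def reinforce_local_convexity(tetrahedra, vertex_tetra, max_volume):
    """Relabel small freespace tetrahedra whose vertices all touch matter."""
    changed = True
    while changed:                      # repeat until no correction applies
        changed = False
        for t in tetrahedra:
            if t.label != "freespace" or t.volume > max_volume:
                continue
            all_vertices_in_matter = all(
                any(tetrahedra[i].label == "matter" for i in vertex_tetra[v])
                for v in t.vertices
            )
            if all_vertices_in_matter:
                t.label = "matter"      # correct a falsely labeled freespace cell
                changed = True
```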
Citations: 3
MPEG-I Depth Estimation Reference Software
Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975995
Ségolène Rogge, Daniele Bonatto, Jaime Sancho, R. Salvador, E. Juárez, A. Munteanu, G. Lafruit
For enabling virtual reality on natural content, Depth Image-Based Rendering (DIBR) techniques have been steadily developed over the past decade, but their quality highly depends on that of the depth estimation. This paper is an attempt to deliver good-quality Depth Estimation Reference Software (DERS) that is well structured for further use in the worldwide MPEG standardization committee. The existing DERS has been refactored, debugged and extended to any number of input views for generating accurate depth maps. Their quality has been validated by synthesizing DIBR virtual views with the Reference View Synthesizer (RVS) and the Versatile View Synthesizer (VVS), using the available MPEG test sequences. The resulting images and runtimes are reported.
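Why view-synthesis quality depends so strongly on depth quality is easiest to see in the core DIBR step: a reference pixel is unprojected with its estimated depth and reprojected into the virtual camera, so any depth error directly displaces the pixel in the synthesized view. The sketch below shows that step under an assumed pinhole convention (depth measured along the optical axis, world-to-camera extrinsics); it is not code from DERS, RVS or VVS.

```python
# Hedged sketch of the per-pixel DIBR warp under assumed camera conventions.
import numpy as np

def warp_pixel(u, v, depth, K_ref, R_ref, t_ref, K_virt, R_virt, t_virt):
    """Map pixel (u, v) of the reference view, at estimated depth `depth`,
    into the virtual view. K: 3x3 intrinsics; R, t: world-to-camera pose."""
    ray = np.linalg.inv(K_ref) @ np.array([u, v, 1.0])
    X_cam = ray * depth                    # 3D point in the reference camera frame
    X_world = R_ref.T @ (X_cam - t_ref)    # back to world coordinates
    x = K_virt @ (R_virt @ X_world + t_virt)
    return x[0] / x[2], x[1] / x[2]        # pixel coordinates in the virtual view
```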
Citations: 20