SIGGRAPH Asia 2015 Technical Briefs: Latest Publications

Learning motion manifolds with convolutional autoencoders
Pub Date : 2015-11-02 DOI: 10.1145/2820903.2820918
Daniel Holden, Jun Saito, T. Komura, T. Joyce
We present a technique for learning a manifold of human motion data using Convolutional Autoencoders. Our approach is capable of learning a manifold on the complete CMU database of human motion. This manifold can be treated as a prior probability distribution over human motion data, which has many applications in animation research, including projecting invalid or corrupt motion onto the manifold for removing error, computing similarity between motions using geodesic distance along the manifold, and interpolation of motion along the manifold for avoiding blending artefacts.
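The denoising application described in this abstract can be sketched with a toy stand-in: here the "manifold" is a known 1-D line in a 10-D pose space (an illustrative assumption; the paper learns the manifold with a convolutional autoencoder on CMU data), and projection onto it plays the role of encode-then-decode denoising.

```python
import math
import random

random.seed(0)

# Unit vector spanning the toy 1-D "manifold" in a 10-D pose space.
direction = [1.0 / math.sqrt(10)] * 10

def project_to_manifold(x):
    """'Encode' to the 1-D latent coordinate, then 'decode' back to 10-D."""
    latent = sum(a * b for a, b in zip(x, direction))   # encoder
    return [latent * d for d in direction]              # decoder

clean = [3.0 * d for d in direction]                    # a point on the manifold
noisy = [c + random.gauss(0, 0.5) for c in clean]       # corrupt motion frame
denoised = project_to_manifold(noisy)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

err_before = dist(noisy, clean)   # error of the corrupt frame
err_after = dist(denoised, clean) # error after projection onto the manifold
```

Projecting removes the noise components orthogonal to the manifold, which is why the abstract can treat the manifold as a prior for error removal.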
Citations: 253
Perception-based interactive sound synthesis of morphing solids' interactions
Pub Date : 2015-11-02 DOI: 10.1145/2820903.2820914
L. Pruvost, B. Scherrer, M. Aramaki, S. Ystad, R. Kronland-Martinet
This brief introduces a novel framework for the interactive and real-time synthesis of solids' interaction sounds driven by a game engine. The sound synthesizer used in this work relies on an action-object paradigm, itself based on the notion of perceptual invariants. An intuitive control strategy, based on those invariants and inspired by physics, was developed. The action and the object can be controlled independently, simultaneously, and continuously. This allows the synthesis of sounds for solids' interactions whose nature evolves continuously over time (e.g. from rolling to slipping) and/or where the objects' properties (shape, size and material) vary continuously in time.
Citations: 10
Gradient domain binary image hiding using color difference metric
Pub Date : 2015-11-02 DOI: 10.1145/2820903.2820919
Lu Hao, J. Feng, Bingfeng Zhou
In this paper, we propose a method for hiding binary images into the gradient domain of a host color image, by modifying the gradient vectors to a chosen hiding vector orientation in CIELAB color space. The color changes are constrained by the just-noticeable difference (JND) to guarantee the imperceptibility of embedded data. Generally, one pair of pixels is used to embed one binary bit. Using multiple hiding vectors, indexed color images or multiple binary images can be embedded in one image simultaneously. Hence high capacity can be achieved in comparison with the existing methods.
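The pixel-pair embedding idea in this abstract can be illustrated with a grayscale toy (not the paper's CIELAB gradient-domain scheme; the JND value and the sign-of-difference encoding below are illustrative assumptions): one bit is stored in the sign of a pixel pair's difference, and any change is capped by a just-noticeable-difference budget.

```python
JND = 3  # assumed perceptual threshold, in gray levels (illustrative)

def embed_bit(p, q, bit):
    """Return a (p, q) pair whose difference sign encodes `bit`."""
    want_positive = bool(bit)
    if p != q and (p - q > 0) == want_positive:
        return p, q  # the pair already encodes the bit
    # Nudge the pair just past equality, staying within the JND budget.
    delta = min(JND, abs(p - q) // 2 + 1)
    return (p + delta, q - delta) if want_positive else (p - delta, q + delta)

def extract_bit(p, q):
    return 1 if p - q > 0 else 0

pairs = [(120, 118), (60, 60), (200, 205)]
bits = [0, 1, 1]
stego = [embed_bit(p, q, b) for (p, q), b in zip(pairs, bits)]
recovered = [extract_bit(p, q) for p, q in stego]
```

Because each pair carries one bit, embedding multiple "hiding directions" (as the paper does with vector orientations in CIELAB) multiplies the capacity.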
Citations: 0
Grease pencil: integrating animated freehand drawings into 3D production environments
Pub Date : 2015-11-02 DOI: 10.1145/2820903.2820924
Joshua Leung, Daniel M. Lara
Freehand drawing is one of the most flexible and efficient ways of expressing creative ideas. However, it can often also be tedious and technically challenging to animate complex dimensional environments or dynamic choreographies. We present a case study of how freehand drawing tools can be integrated into an end-to-end 3D content creation platform to reap the benefits from both worlds. Creative opportunities and challenges in achieving this type of integration are discussed. We also present examples from short films demonstrating the potential of how these techniques can be deployed in production environments.
Citations: 5
Real-time expression-sensitive HMD face reconstruction
Pub Date : 2015-11-02 DOI: 10.1145/2820903.2820910
X. Burgos-Artizzu, J. Fleureau, Olivier Dumas, Thierry Tapie, F. Clerc, N. Mollet
One of the main issues of current Head-Mounted Displays (HMDs) is that they completely hide the wearer's face. This can be a problem in social experiences where two or more users want to share the 3D immersive experience. We propose a novel method to recover the face of the user in real-time. First, we learn the user's appearance offline by building a 3D textured model of the head from a series of pictures. Then, by calibrating the camera and tracking the HMD's position in real-time, we reproject the model on top of the video frames, exactly mimicking the user's head pose. Finally, we remove the HMD and replace the occluded part of the face in a seamless manner by performing image inpainting with the background. We further propose an extension that detects facial expressions on the visible part of the face and uses them to change the upper face model accordingly. We show the promise of our method via qualitative results on a variety of users.
Citations: 28
Depth-aware patch-based image disocclusion for virtual view synthesis
Pub Date : 2015-11-02 DOI: 10.1145/2820903.2820920
P. Buyssens, M. Daisy, D. Tschumperlé, O. Lézoray
In this paper we propose a depth-aided patch-based inpainting method to perform the disocclusion of holes that appear when synthesizing virtual views from RGB-D scenes. Depth information is added to each key step of the classical patch-based algorithm from [Criminisi et al. 2004] to guide the synthesis of missing structures and textures. These contributions result in a new inpainting method that is efficient compared to state-of-the-art approaches (both in visual quality and computational burden), while requiring only a single easy-to-adjust additional parameter.
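One way depth can enter the classical patch-priority step can be sketched as follows. Criminisi et al.'s priority is confidence times data term; the extra depth factor below (favoring far-background patches) is an illustrative assumption, not the authors' exact formula.

```python
def priority(confidence, data_term, depth, max_depth):
    """Patch fill priority: Criminisi's C(p) * D(p), times an assumed
    depth term that prefers filling background regions first (disoccluded
    holes belong to the background, not the foreground object)."""
    depth_term = depth / max_depth
    return confidence * data_term * depth_term

candidates = [
    # (confidence, data_term, depth)
    (0.9, 0.5, 8.0),   # background patch
    (0.9, 0.5, 2.0),   # foreground patch
]
best = max(candidates, key=lambda c: priority(*c, max_depth=10.0))
```

With equal confidence and data terms, the background patch wins, matching the intuition that disocclusion holes should be filled from the background.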
Citations: 16
A linear blending scheme for rigid and non-rigid deformations
Pub Date : 2015-11-02 DOI: 10.1145/2820903.2820912
Gengdai Liu, K. Anjyo
Linear blending techniques based on generalized barycentric coordinates have been well recognized in digital production workplaces for shape deformation due to their simplicity. However, the dense weights, non-rigid deformations, and the lack of an intuitive control interface limit their practical use. In this paper we present a novel linear blending scheme utilizing existing barycentric coordinates to overcome these difficulties. The scheme enables cage vertices associated with sparse weights to be inferred automatically from a user's manipulation of constrained vertices or handles in real-time. For this scheme, we have developed two new techniques. The first is a weight reduction technique that reduces the number of control points on the cage while still preserving surface quality. The second computes the positions of the cage vertices in order to preserve the rigidity of the shape by minimizing nonlinear rigidity energies. Our prototype system demonstrates that our linear blending scheme can handle rigid and non-rigid deformation more consistently and efficiently than previous approaches.
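The basic blending operation the abstract builds on can be sketched in a few lines: each surface point is a fixed weighted combination of cage vertices, so moving the cage moves the surface. The square cage and uniform weights below are illustrative, not the sparse weights the brief infers automatically.

```python
def blend(weights, cage):
    """Deformed 2-D position = sum_i w_i * c_i, per coordinate."""
    return tuple(sum(w * c[k] for w, c in zip(weights, cage)) for k in range(2))

cage = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]  # square cage
weights = [0.25, 0.25, 0.25, 0.25]                       # point at the centre
p0 = blend(weights, cage)

# Translate the cage rigidly: the blended point follows rigidly too,
# because barycentric weights sum to 1 (affine invariance).
moved = [(x + 3.0, y + 1.0) for x, y in cage]
p1 = blend(weights, moved)
```

Affine invariance is what makes rigid cage motions produce rigid surface motions for free; the brief's nonlinear rigidity energies address the harder case where the cage itself deforms non-rigidly.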
Citations: 0
Illustration2Vec: a semantic vector representation of illustrations
Pub Date : 2015-11-02 DOI: 10.1145/2820903.2820907
Masaki Saito, Yusuke Matsui
Referring to existing illustrations helps novice illustrators realize their ideas. To find such helpful references in a large image collection, we first build a semantic vector representation of illustrations by training convolutional neural networks. As the proposed vector space correctly reflects the semantic meanings of illustrations, users can efficiently search for references with similar attributes. Beyond search with a single query, we propose a semantic morphing algorithm that retrieves the intermediate illustrations gradually connecting two queries. Several experiments demonstrate the effectiveness of our methods.
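The two operations the abstract describes can be sketched on made-up 3-D "semantic vectors" (the library entries and dimensionality are invented for illustration; the paper's vectors come from a trained CNN): nearest-neighbour search by cosine similarity, and morphing by searching along the straight line between two query vectors.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

library = {
    "girl_red_dress":  (0.9, 0.1, 0.0),
    "girl_blue_dress": (0.1, 0.9, 0.0),
    "robot":           (0.0, 0.0, 1.0),
}

def nearest(query):
    """Retrieve the library illustration most similar to the query vector."""
    return max(library, key=lambda k: cosine(library[k], query))

def morph(a, b, steps):
    """Intermediate retrievals along the line from vector a to vector b."""
    results = []
    for i in range(steps + 1):
        t = i / steps
        q = tuple((1 - t) * x + t * y for x, y in zip(a, b))
        results.append(nearest(q))
    return results

path = morph(library["girl_red_dress"], library["girl_blue_dress"], 4)
```

Walking the interpolated queries through the index is what produces the "gradually connecting" sequence of intermediate illustrations.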
Citations: 48
MergeTree: a HLBVH constructor for mobile systems
Pub Date : 2015-11-02 DOI: 10.1145/2820903.2820916
T. Viitanen, M. Koskela, P. Jääskeläinen, Heikki O. Kultala, J. Takala
Powerful hardware accelerators have recently been developed that put interactive ray-tracing within the reach of even mobile devices. However, supplying the rendering unit with up-to-date acceleration trees remains difficult, so the rendered scenes are mostly static. The restricted memory bandwidth of a mobile device is a challenge for GPU-based tree construction algorithms. This paper describes MergeTree, a BVH tree constructor architecture based on the HLBVH algorithm, whose main features of interest are a streaming hierarchy emitter, an external sorting algorithm with provably minimal memory usage, and a hardware priority queue used to accelerate the external sort. In simulations, the resulting unit is three times faster than the state-of-the-art hardware builder based on the binned SAH sweep algorithm.
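The external-sort step the brief accelerates in hardware can be sketched in software: sorted runs that fit in memory are merged with a priority queue that only ever holds one element per run. The integer keys below stand in for the Morton codes an HLBVH builder sorts primitives by.

```python
import heapq

def external_merge(runs):
    """Merge pre-sorted runs with a min-heap; this heap is the role the
    brief's hardware priority queue plays."""
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        key, run_idx, pos = heapq.heappop(heap)
        out.append(key)
        nxt = pos + 1
        if nxt < len(runs[run_idx]):
            heapq.heappush(heap, (runs[run_idx][nxt], run_idx, nxt))
    return out

# Three sorted runs, as an external sort would produce from spilled batches.
runs = [[1, 4, 9], [2, 3, 8], [5, 6, 7]]
merged = external_merge(runs)
```

Only one key per run is resident at a time, which is why the scheme suits the tight memory budget of a mobile builder.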
Citations: 6
Panorama to cube: a content-aware representation method
Pub Date : 2015-11-02 DOI: 10.1145/2820903.2820911
Zeyu Wang, Xiaohan Jin, Fei Xue, Xin He, R. Li, H. Zha
As panoramas provide a brand-new viewpoint for the public, relevant cameras and software such as the RICOH Theta and Microsoft Photosynth are attracting more and more users. However, display methods for panoramas remain monotonous. In this paper, we propose a novel representation method called Content-Aware Cube Unwrapping, using effective and interactive techniques of orientational rectification, image modification, and energy estimation. A number of fascinating applications thus become possible. For instance, the six surfaces of a Rubik's cube can be automatically rendered from a vertically oriented panorama without cutting any person or significant object apart. Moreover, seam carving and insertion are applied to each surface to enhance the key content and make the scenery more consistent.
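The geometric core of cube unwrapping is mapping each cube-face pixel to equirectangular panorama coordinates; the sketch below shows this for the front face only (the brief's content-aware steps such as seam carving are omitted, and the panorama resolution is an arbitrary example).

```python
import math

def front_face_to_equirect(u, v, width, height):
    """Map (u, v) in [-1, 1] on the cube's front face to an (x, y)
    pixel position in an equirectangular panorama."""
    x, y, z = u, v, 1.0                        # ray through the face pixel
    lon = math.atan2(x, z)                     # longitude in [-pi, pi]
    lat = math.atan2(y, math.hypot(x, z))      # latitude in [-pi/2, pi/2]
    px = (lon / math.pi + 1.0) / 2.0 * (width - 1)
    py = (lat / (math.pi / 2) + 1.0) / 2.0 * (height - 1)
    return px, py

# The centre of the front face looks straight ahead: the panorama centre.
cx, cy = front_face_to_equirect(0.0, 0.0, 2048, 1024)
```

Rendering a face amounts to evaluating this mapping per pixel and sampling the panorama, with the other five faces differing only in how (u, v) maps to the ray direction.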
Citations: 6