
ACM Trans. Graph. Latest Publications

Retrieval on parametric shape collections
Pub Date: 2017-02-13 | DOI: 10.1145/3072959.3126792
Adriana Schulz, Ariel Shamir, Ilya Baran, D. Levin, Pitchaya Sitthi-amorn, W. Matusik
While collections of parametric shapes are growing in size and use, little progress has been made on the fundamental problem of shape-based matching and retrieval for parametric shapes in a collection. The search space for such collections is both discrete (number of shapes) and continuous (parameter values). In this work, we propose representing this space using descriptors that have been shown to be effective for single-shape retrieval. While single shapes can be represented as points in a descriptor space, parametric shapes are mapped into larger continuous regions. For smooth descriptors, we can assume that these regions are bounded low-dimensional manifolds whose dimensionality is given by the number of shape parameters. We propose representing these manifolds with a set of primitives, namely points and bounded tangent spaces. Our algorithm describes how to define these primitives and how to use them to construct a manifold approximation that allows accurate and fast retrieval. We perform an analysis based on curvature, boundary evaluation, and the allowed approximation error to select between primitive types. We show how to compute decision variables with no need for empirical parameter adjustments and discuss theoretical guarantees on retrieval accuracy. We validate our approach with experiments that use different types of descriptors on a collection of shapes from multiple categories.
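As an editorial illustration of the pipeline this abstract outlines, the sketch below samples each shape's parameter domain, embeds the samples in descriptor space, and answers queries by nearest-neighbor search over the resulting point primitives. The toy descriptor and all names are invented stand-ins; the paper's tangent-space primitives, curvature analysis, and accuracy guarantees are not reproduced here.

```python
import numpy as np

def descriptor(shape_id, params):
    """Hypothetical stand-in for a smooth shape descriptor (e.g. one of
    the single-shape descriptors the paper builds on), evaluated on a
    shape instance with the given parameter values."""
    rng = np.random.default_rng(shape_id)
    basis = rng.standard_normal((len(params), 16))
    return np.tanh(np.asarray(params) @ basis)  # smooth map: parameters -> R^16

def build_index(collection, samples_per_shape=64):
    """Approximate each shape's descriptor-space manifold with a cloud of
    sample points (the simpler of the paper's two primitive types;
    bounded tangent spaces are not shown)."""
    points, labels = [], []
    for shape_id, (lo, hi) in collection.items():
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        rng = np.random.default_rng(shape_id + 1)
        for _ in range(samples_per_shape):
            p = lo + rng.random(lo.shape) * (hi - lo)
            points.append(descriptor(shape_id, p))
            labels.append((shape_id, p))
    return np.vstack(points), labels

def retrieve(index, query_desc, k=3):
    """Nearest primitives in descriptor space -> (shape id, parameters)."""
    points, labels = index
    order = np.argsort(np.linalg.norm(points - query_desc, axis=1))
    return [labels[i] for i in order[:k]]

# Toy collection: shape id -> (parameter lower bounds, upper bounds).
collection = {0: ([0.0, 0.0], [1.0, 2.0]), 1: ([0.5, 0.5], [1.5, 1.5])}
index = build_index(collection)
print(retrieve(index, descriptor(0, [0.3, 1.0])))
```

Per the abstract, the paper's analysis decides per region whether point samples or bounded tangent spaces give the cheaper approximation at a given error tolerance; the sketch uses points only.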
Citations: 8
A compressed representation for ray tracing parametric surfaces
Pub Date: 2017-02-13 | DOI: 10.1145/3072959.3126820
Kai Selgrad, Alexander Lier, Magdalena Martinek, Christoph Buchenau, M. Guthe, Franziska Kranz, Henry Schäfer, M. Stamminger
Parametric surfaces are an essential modeling tool in computer-aided design and movie production. Even though their use is well established in industry, generating ray-traced images adds significant cost in time and memory consumption. Ray tracing such surfaces is usually accomplished by subdividing the surfaces on the fly, or by conversion to a polygonal representation. However, on-the-fly subdivision is computationally very expensive, whereas polygonal meshes require large amounts of memory. This is a particular problem for parametric surfaces with displacement, where very fine tessellation is required to faithfully represent the shape. Hence, memory restrictions are the major challenge in production rendering. In this article, we present a novel solution to this problem. We propose a compression scheme for a priori Bounding Volume Hierarchies (BVHs) on parametric patches that reduces the data required for the hierarchy by a factor of up to 48. We further propose an approximate evaluation method that does not require leaf geometry, yielding an overall reduction of memory consumption by a factor of 60 over regular BVHs on indexed face sets and by a factor of 16 over established state-of-the-art compression schemes. Alternatively, our compression can simply be applied to a standard BVH while keeping the leaf geometry, resulting in a compression rate of up to 2:1 over current methods. Although decompression generates additional costs during traversal, we can manage very complex scenes even on the memory-restrictive GPU at competitive render times.
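A common building block behind BVH compression factors like these is storing child bounding boxes as small integer offsets inside the parent box rather than as full floating-point values. The sketch below illustrates only that general idea; it is not the paper's specific patch-hierarchy encoding.

```python
import numpy as np

def quantize_child(parent_lo, parent_hi, child_lo, child_hi, bits=8):
    """Store a child AABB as integer offsets within its parent's AABB:
    6 bytes instead of 24 at bits=8. Quantization is rounded outward so
    the decoded box always encloses the original child (conservative for
    ray traversal)."""
    scale = (2**bits - 1) / (parent_hi - parent_lo)
    qlo = np.floor((child_lo - parent_lo) * scale).astype(np.uint8)
    qhi = np.ceil((child_hi - parent_lo) * scale).astype(np.uint8)
    return qlo, qhi

def dequantize_child(parent_lo, parent_hi, qlo, qhi, bits=8):
    """Recover a (slightly enlarged) child AABB during traversal."""
    inv = (parent_hi - parent_lo) / (2**bits - 1)
    return parent_lo + qlo * inv, parent_lo + qhi * inv

parent_lo, parent_hi = np.zeros(3), np.ones(3)
child_lo, child_hi = np.array([0.1, 0.2, 0.3]), np.array([0.4, 0.5, 0.6])
q = quantize_child(parent_lo, parent_hi, child_lo, child_hi)
lo, hi = dequantize_child(parent_lo, parent_hi, *q)
assert (lo <= child_lo).all() and (hi >= child_hi).all()  # still conservative
```

Trading a few dequantization operations per node for a much smaller footprint is also how such schemes stay competitive at render time despite the decompression cost the abstract mentions.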
Citations: 1
Interactive sound propagation and rendering for large multi-source scenes
Pub Date: 2017-02-13 | DOI: 10.1145/3072959.3126830
Carl Schissler, Dinesh Manocha
We present an approach to generate plausible acoustic effects at interactive rates in large dynamic environments containing many sound sources. Our formulation combines listener-based backward ray tracing with sound source clustering and hybrid audio rendering to handle complex scenes. We present a new algorithm for dynamic late reverberation that performs high-order ray tracing from the listener against spherical sound sources. We achieve sublinear scaling with the number of sources by clustering distant sound sources and taking relative visibility into account. We also describe a hybrid convolution-based audio rendering technique that can process hundreds of thousands of sound paths at interactive rates. We demonstrate the performance on many indoor and outdoor scenes with up to 200 sound sources. In practice, our algorithm can compute more than 50 reflection orders at interactive rates on a multicore PC, and we observe a 5x speedup over prior geometric sound propagation algorithms.
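The sublinear scaling the abstract claims rests on merging distant sources into clusters. Below is a minimal, hypothetical sketch of distance-based source clustering; the paper's version additionally weights clusters by relative visibility, which is omitted here, and all names are illustrative.

```python
import math

def cluster_sources(sources, listener, near_radius=10.0, cell=20.0):
    """Keep nearby sources individual; merge distant ones into per-cell
    clusters whose position is an amplitude-weighted centroid.
    sources: list of ((x, y, z), amplitude) pairs."""
    individual, cells = [], {}
    for pos, amp in sources:
        if math.dist(pos, listener) < near_radius:
            individual.append((pos, amp))
            continue
        key = tuple(int(c // cell) for c in pos)       # spatial hash cell
        acc = cells.setdefault(key, [0.0, 0.0, 0.0, 0.0])  # x,y,z sums + amp
        for i in range(3):
            acc[i] += pos[i] * amp
        acc[3] += amp
    clusters = [((x / a, y / a, z / a), a) for x, y, z, a in cells.values()]
    return individual + clusters   # propagation then runs per entry, not per source

listener = (0.0, 0.0, 0.0)
sources = [((1, 2, 0), 1.0), ((40, 41, 0), 0.5), ((42, 39, 0), 0.25)]
print(cluster_sources(sources, listener))
```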
Citations: 11
Momentum-mapped inverted pendulum models for controlling dynamic human motions
Pub Date: 2017-02-13 | DOI: 10.1145/3072959.3126851
Tae-Joung Kwon, J. Hodgins
Designing a unified framework for simulating a broad variety of human behaviors has proven to be challenging. In this article, we present an approach for control system design that can generate animations of a diverse set of behaviors including walking, running, and a variety of gymnastic behaviors. We achieve this generalization with a balancing strategy that relies on a new form of inverted pendulum model (IPM), which we call the momentum-mapped IPM (MMIPM). We analyze reference motion capture data in a pre-processing step to extract the motion of the MMIPM. To compute a new motion, the controller plans a desired motion, frame by frame, based on the current pendulum state and a predicted pendulum trajectory. By tracking this time-varying trajectory, the controller creates a character that dynamically balances, changes speed, makes turns, jumps, and performs gymnastic maneuvers.
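For intuition about the pendulum abstraction, the toy below integrates a planar inverted pendulum and derives a foot-placement offset from the classic linear-inverted-pendulum "capture point". It is a generic IPM illustration, not the paper's momentum-mapped 3D model, and the constants are arbitrary.

```python
import math

G, L = 9.81, 1.0  # gravity (m/s^2), pendulum length (m)

def step(theta, omega, dt=0.005):
    """One explicit-Euler step of an inverted pendulum about its pivot:
    theta is the lean angle from vertical, omega its rate. Upright is
    unstable, so a small lean grows until the controller intervenes."""
    omega += (G / L) * math.sin(theta) * dt
    theta += omega * dt
    return theta, omega

def capture_point_offset(com_velocity):
    """Classic linear-inverted-pendulum capture point: how far ahead of
    the center of mass to place the foot to come to rest -- a stand-in
    for the controller's per-frame foot-placement plan."""
    return com_velocity * math.sqrt(L / G)

theta, omega = 0.05, 0.0
for _ in range(100):            # half a second of falling
    theta, omega = step(theta, omega)
com_velocity = omega * L
print(f"lean {theta:.3f} rad -> step {capture_point_offset(com_velocity):.3f} m ahead")
```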
Citations: 6
Learning to schedule control fragments for physics-based characters using deep Q-learning
Pub Date: 2017-01-01 | DOI: 10.1145/3072959.3126784
Libin Liu, J. Hodgins
{"title":"Learning to schedule control fragments for physics-based characters using deep Q-learning","authors":"Libin Liu, J. Hodgins","doi":"10.1145/3072959.3126784","DOIUrl":"https://doi.org/10.1145/3072959.3126784","url":null,"abstract":"","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74830774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Real-time geometry, albedo and motion reconstruction using a single RGBD camera
Pub Date: 2017-01-01 | DOI: 10.1145/3072959.3126786
Kaiwen Guo, F. Xu, Tao Yu, Xiaoyang Liu, Qionghai Dai, Yebin Liu
This paper proposes a real-time method that uses a single-view RGBD input to simultaneously reconstruct a casual scene with a detailed geometry model, surface albedo, per-frame non-rigid motion and per-frame low-frequency lighting, without requiring any template or motion priors. The key observation is that accurate scene motion can be used to integrate temporal information to recover the precise appearance, whereas the intrinsic appearance can help to establish true correspondence in the temporal domain to recover motion. Based on this observation, we first propose a shading-based scheme to leverage appearance information for motion estimation. Then, using the reconstructed motion, a volumetric albedo fusing scheme is proposed to complete and refine the intrinsic appearance of the scene by incorporating information from multiple frames. Since the two schemes are iteratively applied during recording, the reconstructed appearance and motion become increasingly more accurate. In addition to the reconstruction results, our experiments also show that additional applications can be achieved, such as relighting, albedo editing and free-viewpoint rendering of a dynamic scene, since geometry, appearance and motion are all reconstructed by our technique.
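The volumetric albedo fusing step can be pictured as a per-voxel, confidence-weighted running average in the spirit of TSDF fusion; the sketch below makes that assumption and omits the lighting estimation and non-rigid warp entirely. All class and parameter names are invented for illustration.

```python
import numpy as np

class AlbedoVolume:
    """Per-voxel albedo accumulated over frames as a confidence-weighted
    running average -- a simplified stand-in for the paper's fusion scheme."""
    def __init__(self, shape):
        self.albedo = np.zeros(shape + (3,))   # running RGB estimate
        self.weight = np.zeros(shape)          # accumulated confidence

    def integrate(self, voxel_ids, observed_rgb, confidence):
        """voxel_ids: (N,3) int indices touched this frame;
        observed_rgb: (N,3) albedo estimates after dividing out shading;
        confidence: (N,) positive weights, e.g. from viewing angle."""
        i, j, k = voxel_ids.T
        w_old = self.weight[i, j, k]
        w_new = w_old + confidence
        self.albedo[i, j, k] = (
            self.albedo[i, j, k] * w_old[:, None]
            + observed_rgb * confidence[:, None]
        ) / w_new[:, None]
        self.weight[i, j, k] = w_new

vol = AlbedoVolume((64, 64, 64))
ids = np.array([[10, 10, 10], [10, 10, 11]])
vol.integrate(ids, np.array([[0.8, 0.4, 0.2], [0.8, 0.4, 0.2]]), np.array([1.0, 0.5]))
print(vol.albedo[10, 10, 10])
```

Because each frame only nudges the running average in proportion to its confidence, the estimate stabilizes over time, matching the abstract's claim that appearance and motion become increasingly accurate as the two schemes iterate.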
Citations: 55
Jump: virtual reality video
Pub Date: 2016-11-11 | DOI: 10.1145/2980179.2980257
Robert Anderson, D. Gallup, J. Barron, Janne Kontkanen, Noah Snavely, Carlos Hernández, Sameer Agarwal, S. Seitz
We present Jump, a practical system for capturing high resolution, omnidirectional stereo (ODS) video suitable for wide scale consumption in currently available virtual reality (VR) headsets. Our system consists of a video camera built using off-the-shelf components and a fully automatic stitching pipeline capable of capturing video content in the ODS format. We have discovered and analyzed the distortions inherent to ODS when used for VR display as well as those introduced by our capture method and show that they are small enough to make this approach suitable for capturing a wide variety of scenes. Our stitching algorithm produces robust results by reducing the problem to one of pairwise image interpolation followed by compositing. We introduce novel optical flow and compositing methods designed specifically for this task. Our algorithm is temporally coherent and efficient, is currently running at scale on a distributed computing platform, and is capable of processing hours of footage each day.
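Omnidirectional stereo itself has a compact geometric core: every output column is rendered along a ray tangent to a small viewing circle, one tangent side per eye, which is what bakes stereo into a single panorama pair. The sketch below shows one common form of that mapping; the axis conventions are an assumption, not taken from the paper.

```python
import math

def ods_ray(col, row, width, height, ipd=0.064, eye=+1):
    """Ray for pixel (col, row) of an equirectangular ODS image.
    eye=+1: right eye, eye=-1: left eye. Origins lie on a circle of
    diameter ipd (interpupillary distance) and directions are tangent
    to it -- the standard ODS construction."""
    theta = 2.0 * math.pi * col / width - math.pi   # azimuth in [-pi, pi)
    phi = math.pi * (0.5 - row / height)            # elevation
    direction = (math.sin(theta) * math.cos(phi),
                 math.sin(phi),
                 math.cos(theta) * math.cos(phi))
    r = ipd / 2.0
    # Origin is perpendicular to the ray's horizontal component, so the
    # ray grazes the viewing circle tangentially.
    origin = (eye * r * math.cos(theta), 0.0, -eye * r * math.sin(theta))
    return origin, direction

# Sweeping col across the image moves the origin around the circle:
# each column sees the world from a slightly different position.
print(ods_ray(0, 540, 1920, 1080))
```

The distortions the abstract analyzes arise precisely because this per-column construction only approximates what two real eyes would see at any single head orientation.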
Citations: 228
PERFORM: perceptual approach for adding OCEAN personality to human motion using Laban movement analysis
Pub Date: 2016-10-01 | DOI: 10.1145/3072959.3126789
Funda Durupinar, M. Kapadia, Susan Deutsch, Michael Neff, N. Badler
A major goal of research on virtual humans is the animation of expressive characters that display distinct psychological attributes. Body motion is an effective way of portraying different personalities and differentiating characters. The purpose and contribution of this work is to describe a formal, broadly applicable, procedural, and empirically grounded association between personality and body motion and apply this association to modify a given virtual human body animation that can be represented by these formal concepts. Because the body movement of virtual characters may involve different choices of parameter sets depending on the context, situation, or application, formulating a link from personality to body motion requires an intermediate step to assist generalization. For this intermediate step, we refer to Laban Movement Analysis, which is a movement analysis technique for systematically describing and evaluating human motion. We have developed an expressive human motion generation system with the help of movement experts and conducted a user study to explore how the psychologically validated OCEAN personality factors were perceived in motions with various Laban parameters. We have then applied our findings to procedurally animate expressive characters with personality, and validated the generalizability of our approach across different models and animations via another perception study.
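The resulting pipeline, personality scores driving Laban-style motion parameters, can be pictured as a small linear mapping applied on top of a neutral animation. The coefficients and parameter names below are invented purely for illustration; the paper derives the actual association from its perception studies.

```python
import numpy as np

# Rows: Laban-inspired motion parameters; columns: OCEAN factors
# (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism).
# All coefficients are made up for this sketch.
LABAN_PARAMS = ["speed", "spatial_extent", "directness", "tension"]
W = np.array([
    [0.1,  0.0,  0.6,  0.0, -0.2],   # e.g. extraversion raises speed
    [0.3,  0.0,  0.5,  0.1, -0.3],   # spatial extent
    [0.0,  0.4,  0.2,  0.0, -0.1],   # directness
    [0.0,  0.1, -0.2, -0.3,  0.6],   # e.g. neuroticism raises tension
])

def motion_params(ocean):
    """Map an OCEAN vector in [-1, 1]^5 to offsets applied to the
    motion parameters of a neutral animation."""
    return dict(zip(LABAN_PARAMS, W @ np.asarray(ocean, float)))

extravert = [0.0, 0.0, 1.0, 0.0, 0.0]
neurotic = [0.0, 0.0, 0.0, 0.0, 1.0]
print(motion_params(extravert))
print(motion_params(neurotic))
```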
Citations: 50
Nonuniform spatial deformation of light fields by locally linear transformations
Pub Date: 2016-09-22 | DOI: 10.1145/3072959.3126846
C. Birklbauer, D. Schedl, O. Bimber
Light-field cameras offer new imaging possibilities compared to conventional digital cameras. However, the additional angular domain of light fields prohibits direct application of frequently used image processing algorithms, such as warping, retargeting, or stitching. We present a general and efficient framework for nonuniform light-field warping, which forms the basis for extending many of these image processing techniques to light fields. It propagates arbitrary spatial deformations defined in one light-field perspective consistently to all other perspectives by means of 4D patch matching instead of relying on explicit depth reconstruction. This allows processing light-field recordings of complex scenes with non-Lambertian properties such as transparency and refraction. We show application examples of our framework in panorama light-field imaging, light-field retargeting, and artistic manipulation of light fields.
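A heavily reduced version of the propagation idea: for each pixel of another view, find the best-matching patch in the reference view and inherit its displacement. The sketch matches 2D patches only, whereas the paper matches 4D patches across the full light field; it is an illustration of patch-based warp propagation, not the paper's algorithm.

```python
import numpy as np

def propagate_warp(ref, tgt, ref_warp, patch=7, search=4):
    """Copy a per-pixel displacement field from a reference view to a
    target view by brute-force local patch matching.
    ref, tgt: grayscale images (H, W); ref_warp: (H, W, 2) displacements
    defined on the reference view."""
    h, w = tgt.shape
    r = patch // 2
    out = np.zeros_like(ref_warp)   # border pixels keep zero displacement
    for y in range(r, h - r):
        for x in range(r, w - r):
            t = tgt[y - r:y + r + 1, x - r:x + r + 1]
            best, best_pos = np.inf, (y, x)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if r <= yy < h - r and r <= xx < w - r:
                        p = ref[yy - r:yy + r + 1, xx - r:xx + r + 1]
                        cost = np.sum((t - p) ** 2)   # SSD patch distance
                        if cost < best:
                            best, best_pos = cost, (yy, xx)
            out[y, x] = ref_warp[best_pos]   # inherit the matched pixel's warp
    return out

ref = np.random.default_rng(0).random((32, 32))
tgt = np.roll(ref, 1, axis=1)                 # target view: ref shifted 1 px
warp = np.tile(np.array([2.0, 0.0]), (32, 32, 1))
print(propagate_warp(ref, tgt, warp)[16, 16])  # -> [2. 0.]
```

Because correspondence is established by appearance matching rather than reconstructed depth, the same mechanism can carry a warp across views even for scenes with transparency and refraction, as the abstract emphasizes.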
Citations: 5
Interactive high-quality green-screen keying via color unmixing
Pub Date: 2016-09-22 | DOI: 10.1145/3072959.3126799
Yagiz Aksoy, T. Aydin, M. Pollefeys, A. Smolic
Due to the widespread use of compositing in contemporary feature films, green-screen keying has become an essential part of postproduction workflows. To comply with the ever-increasing quality requirements of the industry, specialized compositing artists spend countless hours using multiple commercial software tools, while eventually having to resort to manual painting because of the many shortcomings of these tools. Due to the sheer amount of manual labor involved in the process, new green-screen keying approaches that produce better keying results with less user interaction are welcome additions to the compositing artist’s arsenal. We found that—contrary to the common belief in the research community—production-quality green-screen keying is still an unresolved problem with its unique challenges. In this article, we propose a novel green-screen keying method utilizing a new energy minimization-based color unmixing algorithm. We present comprehensive comparisons with commercial software packages and relevant methods in literature, which show that the quality of our results is superior to any other currently available green-screen keying solution. It is important to note that, using the proposed method, these high-quality results can be generated using only one-tenth of the manual editing time that a professional compositing artist requires to process the same content having all previous state-of-the-art tools at one’s disposal.
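Stripped to its essentials, color unmixing estimates a per-pixel alpha by decomposing the observed color into known foreground and background components via the compositing equation. The two-color least-squares sketch below is far simpler than the paper's global energy minimization over color distributions and is meant only to convey the idea; the sample colors are invented.

```python
import numpy as np

def unmix_alpha(image, fg_color, bg_color):
    """Per-pixel alpha from the compositing equation
        I = alpha * F + (1 - alpha) * B,
    solved in least squares by projecting I - B onto F - B.
    fg_color/bg_color would come from user scribbles or a color model;
    the paper instead minimizes a global unmixing energy over many
    color distributions rather than a single fg/bg pair."""
    f = np.asarray(fg_color, dtype=float)
    b = np.asarray(bg_color, dtype=float)
    d = f - b
    alpha = ((image - b) @ d) / (d @ d)
    return np.clip(alpha, 0.0, 1.0)

green = [0.1, 0.8, 0.2]            # keyed background (green screen)
skin = [0.9, 0.7, 0.6]             # a foreground color sample
img = np.array([[green, skin, [0.5, 0.75, 0.4]]])   # last pixel: 50/50 mix
print(unmix_alpha(img, skin, green))                # ~[0, 1, 0.5]
```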
Citations: 30