
ACM SIGGRAPH 2003 Papers: Latest Publications

Incorporating dynamic real objects into immersive virtual environments
Pub Date : 2003-07-01 DOI: 10.1145/1201775.882332
Benjamin C. Lok, Samir Naik, M. Whitton, F. Brooks
We present algorithms that enable virtual objects to interact with and respond to virtual representations, avatars, of real objects. These techniques allow dynamic real objects, such as the user, tools, and parts, to be visually and physically incorporated into the virtual environment (VE). The system uses image-based object reconstruction and a volume query mechanism to detect collisions and to determine plausible collision responses between virtual objects and the avatars. This allows our system to provide the user natural interactions with the VE. We have begun a collaboration with NASA Langley Research Center to apply the hybrid environment system to a satellite payload assembly verification task. In an informal case study, NASA LaRC payload designers and engineers conducted common assembly tasks on payload models. The results suggest that hybrid environments could provide significant advantages for assembly verification and layout evaluation tasks.
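
The abstract gives only a high-level view of the volume-query mechanism. As a rough, hypothetical sketch of the idea (voxelize the reconstructed real-object samples, then test virtual-object sample points against the occupied cells), the following snippet may help; the grid resolution, the point sampling, and every name in it are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def voxel_occupancy(points, grid_min, voxel_size, dims):
    """Mark grid cells covered by the reconstructed real-object samples."""
    occ = np.zeros(dims, dtype=bool)
    idx = np.floor((points - grid_min) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < dims), axis=1)
    occ[tuple(idx[inside].T)] = True
    return occ

def volume_query(virtual_points, occ, grid_min, voxel_size, dims):
    """Return the virtual-object sample points that fall inside occupied cells."""
    idx = np.floor((virtual_points - grid_min) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < dims), axis=1)
    hits = np.zeros(len(virtual_points), dtype=bool)
    hits[inside] = occ[tuple(idx[inside].T)]
    return virtual_points[hits]

# toy example: a hand-like blob of real samples and a virtual panel's sample points
rng = np.random.default_rng(0)
real_samples = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.05, size=(500, 3))
panel = np.stack(np.meshgrid(np.linspace(-0.2, 0.2, 20),
                             np.linspace(-0.2, 0.2, 20), [0.0]), axis=-1).reshape(-1, 3)
grid_min, voxel, dims = np.array([-0.5, -0.5, -0.5]), 0.02, (50, 50, 50)
occ = voxel_occupancy(real_samples, grid_min, voxel, dims)
colliding = volume_query(panel, occ, grid_min, voxel, dims)
print(len(colliding), "panel samples intersect the avatar volume")
```
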
Citations: 0
Billboard clouds for extreme model simplification
Pub Date : 2003-07-01 DOI: 10.1145/1201775.882326
Xavier Décoret, F. Durand, F. Sillion, Julie Dorsey
We introduce billboard clouds -- a new approach for extreme simplification in the context of real-time rendering. 3D models are simplified onto a set of planes with texture and transparency maps. We present an optimization approach to build a billboard cloud given a geometric error threshold. After computing an appropriate density function in plane space, a greedy approach is used to select suitable representative planes. A good surface approximation is ensured by favoring planes that are "nearly tangent" to the model. This method does not require connectivity information, but instead avoids cracks by projecting primitives onto multiple planes when needed. For extreme simplification, our approach combines the strengths of mesh decimation and image-based impostors. We demonstrate our technique on a large class of models, including smooth manifolds and composite objects.
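
As a hedged illustration of the greedy selection step described above, the sketch below bins candidate planes by orientation and offset and repeatedly picks the plane that absorbs the most remaining geometry within the error threshold. It works on points instead of textured faces, and every name in it is an assumption rather than the paper's algorithm verbatim.

```python
import numpy as np

def greedy_billboard_planes(points, candidate_normals, offsets, epsilon):
    """Greedy sketch: repeatedly pick the plane that 'absorbs' the most remaining
    points within distance epsilon, until every point is assigned to some plane."""
    remaining = np.ones(len(points), dtype=bool)
    planes = []
    while remaining.any():
        best = None
        for n in candidate_normals:      # candidate orientations (discretized)
            d = points[remaining] @ n    # signed distances along n
            for off in offsets:          # candidate offsets along n
                covered = np.abs(d - off) <= epsilon
                if best is None or covered.sum() > best[0]:
                    best = (covered.sum(), n, off, covered)
        count, n, off, covered = best
        if count == 0:
            break                        # candidate grid too coarse for the leftovers
        idx = np.flatnonzero(remaining)[covered]
        remaining[idx] = False
        planes.append((n, off))
    return planes

# toy example: points scattered near two parallel walls
rng = np.random.default_rng(1)
walls = np.concatenate([rng.uniform(-1, 1, (200, 3)) * [1, 1, 0.01],
                        rng.uniform(-1, 1, (200, 3)) * [1, 1, 0.01] + [0, 0, 1]])
normals = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]
offsets = np.linspace(-1.0, 2.0, 31)
print(len(greedy_billboard_planes(walls, normals, offsets, 0.05)), "planes selected")
```
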
Citations: 221
Sequential point trees
Pub Date : 2003-07-01 DOI: 10.1145/1201775.882321
C. Dachsbacher, C. Vogelgsang, M. Stamminger
In this paper we present sequential point trees, a data structure that allows adaptive rendering of point clouds completely on the graphics processor. Sequential point trees are based on a hierarchical point representation, but the hierarchical rendering traversal is replaced by sequential processing on the graphics processor, while the CPU is available for other tasks. Smooth transition to triangle rendering for optimized performance is integrated. We describe optimizations for backface culling and texture adaptive point selection. Finally, we discuss implementation issues and show results.
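
To make the "sequential" idea concrete, here is a minimal CPU sketch in which a flattened list of nodes carries a view-distance interval per node and a single linear pass selects the nodes to draw. The two-level hierarchy, the split distance, and all identifiers are assumptions for illustration only; in the paper the list is additionally sorted so a prefix can be chosen per frame and the fine-grained test runs on the GPU.

```python
import numpy as np

def build_nodes(parents, children):
    """Hypothetical two-level 'tree': coarse parents used far away, children near.
    Each flattened node stores a position and the view-distance interval
    [d_min, d_max) in which it, rather than its parent or children, is drawn."""
    split = 5.0   # assumed view distance at which we switch levels
    nodes = [(p, split, np.inf) for p in parents]   # parents: distance >= split
    nodes += [(c, 0.0, split) for c in children]    # children: distance < split
    pos = np.array([n[0] for n in nodes])
    dmin = np.array([n[1] for n in nodes])
    dmax = np.array([n[2] for n in nodes])
    return pos, dmin, dmax

def select(pos, dmin, dmax, eye):
    """Sequential pass: keep nodes whose interval contains their view distance."""
    dist = np.linalg.norm(pos - eye, axis=1)
    keep = (dist >= dmin) & (dist < dmax)
    return pos[keep]

rng = np.random.default_rng(2)
children = rng.uniform(-1, 1, (1000, 3))
parents = children[::8]                    # crude 8:1 decimation as the coarse level
pos, dmin, dmax = build_nodes(parents, children)
print(len(select(pos, dmin, dmax, eye=np.array([0.0, 0.0, 2.0]))), "points at near view")
print(len(select(pos, dmin, dmax, eye=np.array([0.0, 0.0, 20.0]))), "points at far view")
```
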
Citations: 220
Interactive shadow generation in complex environments
Pub Date : 2003-07-01 DOI: 10.1145/1201775.882299
N. Govindaraju, Brandon Lloyd, Sung-eui Yoon, Avneesh Sud, Dinesh Manocha
We present a new algorithm for interactive generation of hard-edged, umbral shadows in complex environments with a moving light source. Our algorithm uses a hybrid approach that combines the image quality of object-precision methods with the efficiencies of image-precision techniques. We present an algorithm for computing a compact potentially visible set (PVS) using levels-of-detail (LODs) and visibility culling. We use the PVSs computed from both the eye and the light in a novel cross-culling algorithm that identifies a reduced set of potential shadow-casters and shadow-receivers. Finally, we use a combination of shadow-polygons and shadow maps to generate shadows. We also present techniques for LOD-selection to minimize possible artifacts arising from the use of LODs. Our algorithm can generate sharp shadow edges and reduces the aliasing in pure shadow map approaches. We have implemented the algorithm on a three-PC system with NVIDIA GeForce 4 cards. We achieve 7--25 frames per second in three complex environments composed of millions of triangles.
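
The cross-culling step can be pictured as a conservative filter between the light's and the eye's PVSs. The sketch below uses bounding spheres and a cone test as a stand-in for the paper's actual culling; the scene, the thresholds, and the function names are all assumptions.

```python
import numpy as np

def cross_cull(eye_pvs, light_pvs, centers, radii, light_pos):
    """Conservative sketch: an object in the light's PVS is kept as a potential
    caster only if the cone it subtends from the light contains some potential
    receiver (an object in the eye's PVS) lying farther from the light."""
    casters = []
    for c in light_pvs:
        to_c = centers[c] - light_pos
        dist_c = np.linalg.norm(to_c)
        dir_c = to_c / dist_c
        half_angle = np.arcsin(min(1.0, radii[c] / dist_c))
        for r in eye_pvs:
            to_r = centers[r] - light_pos
            dist_r = np.linalg.norm(to_r)
            if dist_r <= dist_c or r == c:      # ignore self-shadowing in this toy test
                continue
            angle = np.arccos(np.clip(to_r @ dir_c / dist_r, -1.0, 1.0))
            slack = np.arcsin(min(1.0, radii[r] / dist_r))
            if angle <= half_angle + slack:
                casters.append(c)
                break
    receivers = sorted(eye_pvs)
    return casters, receivers

# toy scene: light above, a small blocker, one receiver behind it, one off to the side
centers = np.array([[0.0, 0.0, 1.0],    # 0: blocker
                    [0.0, 0.0, 0.0],    # 1: floor patch behind the blocker
                    [5.0, 0.0, 0.0]])   # 2: object far off to the side
radii = np.array([0.2, 1.0, 1.0])
light = np.array([0.0, 0.0, 3.0])
print(cross_cull(eye_pvs={1, 2}, light_pvs={0, 1, 2}, centers=centers,
                 radii=radii, light_pos=light))
```
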
Citations: 59
Sparse matrix solvers on the GPU: conjugate gradients and multigrid
Pub Date : 2003-07-01 DOI: 10.1145/1201775.882364
J. Bolz, I. Farmer, E. Grinspun, P. Schröder
Many computer graphics applications require high-intensity numerical simulation. We show that such computations can be performed efficiently on the GPU, which we regard as a full function streaming processor with high floating-point performance. We implemented two basic, broadly useful, computational kernels: a sparse matrix conjugate gradient solver and a regular-grid multigrid solver. Real time applications ranging from mesh smoothing and parameterization to fluid solvers and solid mechanics can greatly benefit from these, as evidenced by our example applications of geometric flow and fluid simulation running on NVIDIA's GeForce FX.
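
For reference, the conjugate gradient kernel that the paper maps onto fragment programs is the standard algorithm below. This plain NumPy version only shows the numerical structure (matrix-vector products, dot products, vector updates), not the GPU data layout; the example matrix is an assumption.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=200):
    """Conjugate gradient for a symmetric positive-definite system Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# example: 1D Poisson problem (tridiagonal SPD matrix), a typical smoothing/fluid kernel
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print("residual:", np.linalg.norm(A @ x - b))
```
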
Citations: 683
Boom chameleon: simultaneous capture of 3D viewpoint, voice and gesture annotations on a spatially-aware display
Pub Date : 2003-07-01 DOI: 10.1145/1201775.882329
M. Tsang, G. Fitzmaurice, G. Kurtenbach, Azam Khan, W. Buxton
We review the Boom Chameleon, a novel input/output device consisting of a flat-panel display mounted on a tracked mechanical armature. The display acts as a physical window into 3D virtual environments, through which a one-to-one mapping between real and virtual space is preserved. The Boom Chameleon is further augmented with a touch-screen and a microphone/speaker combination. We created a 3D annotation application that exploits this unique configuration in order to simultaneously capture viewpoint, voice and gesture information. Results of an informal user study show that the Boom Chameleon annotation facilities have the potential to be an effective and intuitive system for reviewing 3D designs.
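
A simple way to picture the simultaneous capture is a session of time-stamped records combining the tracked display pose, touch input, and audio activity, replayable on a shared clock. The data-structure sketch below is purely illustrative; the field names and the session API are assumptions, not the system's actual design.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class AnnotationSample:
    """One hypothetical record of the synchronized streams: the armature-tracked
    display pose (viewpoint), an optional touch-screen point, and whether audio
    was active, all stamped with a shared clock."""
    time_s: float
    position: Tuple[float, float, float]             # tracked display position
    orientation: Tuple[float, float, float, float]   # orientation quaternion
    pen_point: Optional[Tuple[float, float]] = None  # touch point, if any
    speaking: bool = False

@dataclass
class AnnotationSession:
    samples: List[AnnotationSample] = field(default_factory=list)

    def add(self, sample: AnnotationSample) -> None:
        self.samples.append(sample)

    def during_speech(self) -> List[AnnotationSample]:
        """Samples captured while the reviewer was talking, e.g. to link gestures to voice."""
        return [s for s in self.samples if s.speaking]

session = AnnotationSession()
session.add(AnnotationSample(0.0, (0.0, 1.5, 2.0), (0.0, 0.0, 0.0, 1.0)))
session.add(AnnotationSample(0.5, (0.1, 1.5, 2.0), (0.0, 0.0, 0.0, 1.0),
                             pen_point=(320.0, 240.0), speaking=True))
print(len(session.during_speech()), "sample(s) recorded while speaking")
```
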
Citations: 10
Real-time rendering of aerodynamic sound using sound textures based on computational fluid dynamics
Pub Date : 2003-07-01 DOI: 10.1145/1201775.882339
Y. Dobashi, Tsuyoshi Yamamoto, T. Nishita
In computer graphics, most research focuses on creating images. However, there has been much recent work on the automatic generation of sound linked to objects in motion and the relative positions of receivers and sound sources. This paper proposes a new method for creating one type of sound called aerodynamic sound. Examples of aerodynamic sound include sound generated by swinging swords or by wind blowing. A major source of aerodynamic sound is vortices generated in fluids such as air. First, we propose a method for creating sound textures for aerodynamic sound by making use of computational fluid dynamics. Next, we propose a method using the sound textures for real-time rendering of aerodynamic sound according to the motion of objects or wind velocity.
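
One plausible reading of the real-time stage is a lookup into a bank of precomputed sound textures indexed by flow speed, with loudness scaled by the object's current velocity. The sketch below follows that reading; the texture bank, the loudness exponent, and all names are assumptions rather than the paper's exact synthesis rule.

```python
import numpy as np

def synthesize(speeds, frame_rate, textures, texture_speeds, sr=44100):
    """Per animation frame, pick the precomputed sound texture whose reference
    speed is closest to the object's current speed and scale its loudness with
    speed. Textures are just arrays of samples here; in the paper they come
    from an offline CFD simulation of the vortices around the object."""
    samples_per_frame = sr // frame_rate
    out = []
    for v in speeds:
        i = int(np.argmin(np.abs(texture_speeds - v)))
        chunk = textures[i][:samples_per_frame]
        gain = (v / texture_speeds.max()) ** 2     # assumed loudness scaling
        out.append(gain * chunk)
    return np.concatenate(out)

# toy bank: band-limited noise standing in for two CFD-derived textures
rng = np.random.default_rng(3)
sr = 44100
textures = [rng.normal(0, 0.1, sr), rng.normal(0, 0.3, sr)]
texture_speeds = np.array([5.0, 20.0])             # speeds at which each was computed
swing = 20.0 * np.abs(np.sin(np.linspace(0, np.pi, 60)))   # a 1 s sword swing at 60 fps
audio = synthesize(swing, 60, textures, texture_speeds, sr)
print(audio.shape, "samples of synthesized aerodynamic sound")
```
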
Citations: 84
Shadow matting and compositing
Pub Date : 2003-07-01 DOI: 10.1145/1201775.882298
Yung-Yu Chuang, Dan B. Goldman, B. Curless, D. Salesin, R. Szeliski
In this paper, we describe a method for extracting shadows from one natural scene and inserting them into another. We develop physically-based shadow matting and compositing equations and use these to pull a shadow matte from a source scene in which the shadow is cast onto an arbitrary planar background. We then acquire the photometric and geometric properties of the target scene by sweeping oriented linear shadows (cast by a straight object) across it. From these shadow scans, we can construct a shadow displacement map without requiring camera or light source calibration. This map can then be used to deform the original shadow matte. We demonstrate our approach for both indoor scenes with controlled lighting and for outdoor scenes using natural lighting.
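
The core compositing idea can be summarized per pixel: model an observed color as a blend of a fully lit and a fully shadowed image and solve for the blend factor. The simplified sketch below follows that form only; it omits the paper's calibration, shadow displacement mapping, and color handling, and its names are assumptions.

```python
import numpy as np

def pull_shadow_matte(image, lit, shadowed, eps=1e-6):
    """Per-pixel sketch: with L the fully lit image and S the fully shadowed image,
    model an observed pixel as I = beta*L + (1 - beta)*S and solve for the matte
    beta = (I - S) / (L - S)."""
    beta = (image - shadowed) / np.maximum(lit - shadowed, eps)
    return np.clip(beta, 0.0, 1.0)

def composite_shadow(target_lit, target_shadowed, beta):
    """Re-composite the extracted matte over a new scene's lit/shadowed pair."""
    return beta * target_lit + (1.0 - beta) * target_shadowed

# toy grayscale example
lit = np.full((4, 4), 0.9)
shadowed = np.full((4, 4), 0.3)
observed = lit.copy()
observed[1:3, 1:3] = 0.45        # a small square shadow in the source scene
beta = pull_shadow_matte(observed, lit, shadowed)
new_lit, new_shadowed = np.full((4, 4), 0.8), np.full((4, 4), 0.2)
print(composite_shadow(new_lit, new_shadowed, beta))
```
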
Citations: 111
Motion synthesis from annotations
Pub Date : 2003-07-01 DOI: 10.1145/1201775.882284
Okan Arikan, D. Forsyth, J. F. O'Brien
This paper describes a framework that allows a user to synthesize human motion while retaining control of its qualitative properties. The user paints a timeline with annotations --- like walk, run or jump --- from a vocabulary which is freely chosen by the user. The system then assembles frames from a motion database so that the final motion performs the specified actions at specified times. The motion can also be forced to pass through particular configurations at particular times, and to go to a particular position and orientation. Annotations can be painted positively (for example, must run), negatively (for example, may not run backwards) or as a don't-care. The system uses a novel search method, based around dynamic programming at several scales, to obtain a solution efficiently so that authoring is interactive. Our results demonstrate that the method can generate smooth, natural-looking motion. The annotation vocabulary can be chosen to fit the application, and allows specification of composite motions (run and jump simultaneously, for example). The process requires a collection of motion data that has been annotated with the chosen vocabulary. This paper also describes an effective tool, based around repeated use of support vector machines, that allows a user to annotate a large collection of motions quickly and easily so that they may be used with the synthesis algorithm.
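
A toy version of the timeline-matching dynamic program might look like the sketch below, which picks one annotated clip per timeline block while trading annotation mismatch against transition cost. The real system operates on frames at several scales, so this is only the skeleton of the idea, with all names and costs assumed.

```python
import numpy as np

def synthesize(timeline, clip_annotations, transition_cost):
    """Dynamic-programming sketch: choose one clip per timeline block so that each
    clip's annotation vector matches the painted annotations for that block while
    keeping clip-to-clip transitions cheap. Match cost is an L1 distance here."""
    n_blocks, n_clips = len(timeline), len(clip_annotations)
    cost = np.full((n_blocks, n_clips), np.inf)
    back = np.zeros((n_blocks, n_clips), dtype=int)
    match = np.array([[np.sum(np.abs(t - c)) for c in clip_annotations] for t in timeline])
    cost[0] = match[0]
    for b in range(1, n_blocks):
        for j in range(n_clips):
            prev = cost[b - 1] + transition_cost[:, j]
            back[b, j] = int(np.argmin(prev))
            cost[b, j] = prev[back[b, j]] + match[b, j]
    path = [int(np.argmin(cost[-1]))]
    for b in range(n_blocks - 1, 0, -1):
        path.append(back[b, path[-1]])
    return path[::-1]

# annotation vocabulary: [walk, run, jump]; three clips and a painted timeline
clips = np.array([[1, 0, 0],   # clip 0: walk
                  [0, 1, 0],   # clip 1: run
                  [0, 1, 1]])  # clip 2: run + jump
timeline = np.array([[1, 0, 0], [0, 1, 0], [0, 1, 1], [0, 1, 0]])
trans = np.ones((3, 3)) - np.eye(3)        # staying on the same clip is free
print(synthesize(timeline, clips, trans))  # -> [0, 1, 1, 1]
```

With these costs the isolated run-plus-jump block is not worth two clip transitions, so the chosen path [0, 1, 1, 1] stays on the run clip; that trade-off between matching the annotations and keeping transitions smooth is exactly what the dynamic program arbitrates.
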
Citations: 463
Nonconvex rigid bodies with stacking
Pub Date : 2003-07-01 DOI: 10.1145/1201775.882358
Eran Guendelman, R. Bridson, Ronald Fedkiw
We consider the simulation of nonconvex rigid bodies focusing on interactions such as collision, contact, friction (kinetic, static, rolling and spinning) and stacking. We advocate representing the geometry with both a triangulated surface and a signed distance function defined on a grid, and this dual representation is shown to have many advantages. We propose a novel approach to time integration merging it with the collision and contact processing algorithms in a fashion that obviates the need for ad hoc threshold velocities. We show that this approach matches the theoretical solution for blocks sliding and stopping on inclined planes with friction. We also present a new shock propagation algorithm that allows for efficient use of the propagation (as opposed to the simultaneous) method for treating contact. These new techniques are demonstrated on a variety of problems ranging from simple test cases to stacking problems with as many as 1000 nonconvex rigid bodies with friction as shown in Figure 1.
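
The dual representation lends itself to a simple collision query: push one body's surface vertices through the other body's signed distance function, and read penetration depth and contact normal from the distance value and its gradient. The sketch below uses an analytic sphere SDF in place of a sampled grid and is an illustrative assumption, not the paper's collision code.

```python
import numpy as np

def sdf_sphere(points, center, radius):
    """Analytic signed distance to a sphere, standing in for body B's SDF grid."""
    return np.linalg.norm(points - center, axis=1) - radius

def collide(surface_points_a, sdf_b, grad_eps=1e-4):
    """Dual-representation sketch: test body A's surface points (triangle vertices)
    against body B's signed distance function. Negative distance means penetration;
    the contact normal comes from a finite-difference gradient of the SDF."""
    d = sdf_b(surface_points_a)
    hits = np.flatnonzero(d < 0.0)
    contacts = []
    for i in hits:
        p = surface_points_a[i]
        grad = np.array([
            (sdf_b((p + e)[None])[0] - sdf_b((p - e)[None])[0]) / (2 * grad_eps)
            for e in grad_eps * np.eye(3)])
        normal = grad / np.linalg.norm(grad)
        contacts.append((p, normal, -d[i]))   # point, outward normal, penetration depth
    return contacts

# toy case: a box corner poking into a unit sphere at the origin
box_vertices = np.array([[0.9, 0.0, 0.0], [1.5, 0.0, 0.0],
                         [0.9, 0.6, 0.0], [1.5, 0.6, 0.0]])
contacts = collide(box_vertices,
                   lambda pts: sdf_sphere(np.atleast_2d(pts), np.zeros(3), 1.0))
for point, normal, depth in contacts:
    print("contact at", point, "normal", np.round(normal, 3), "depth", round(depth, 3))
```
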
Citations: 352