
Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games: latest publications

3D recording as enabling technology for serious games and educational roleplaying
Pub Date: 2013-03-21 DOI: 10.1145/2448196.2448225
E. Prasolova-Førland, Mikhail Fominykh, Peter Leong
In this work, we propose an innovative approach, 3D recording, to support serious games and educational role-playing. 3D virtual worlds (3D VWs) are often used for generating educational content. Even though this technology allows creating the full context of a real-life educational process, it is usually recorded as flat 2D video (such as Machinima in Second Life), which eliminates many advantages of the technology, such as the sense of presence (McKerlich et al. 2011). In addition, there are no systematic approaches for combining synchronous and asynchronous learning modes. We propose that 3D recording can solve these challenges, as it offers an easy way to create advanced 3D content from synchronous activities and to access that content in both synchronous and asynchronous modes.
Citations: 4
Modeling agent determination of spatial relationships
Pub Date: 2013-03-21 DOI: 10.1145/2448196.2448236
J. Mooney, Rohan Gandhi, J. Allbeck
There is an ongoing desire to make virtual humans a more accessible tool for use in entertainment, training, and evaluation. From the graphical level to the animation level to the intelligence level, complexities abound. As research progresses, some of these complexities become hidden from the end user. Ultimately, we would like to treat agents as real humans and instruct them as you might another person. Here we present a framework, inspired by natural language constructs, that aims to hide these complexities and allow users to control virtual humans through structured English input. Our focus is on object and environment interactions, particularly spatial relationships.
Citations: 0
Approximate depth of field effects using few samples per pixel
Pub Date: 2013-03-21 DOI: 10.1145/2448196.2448215
Ke Lei, J. Hughes
We present a method for rendering depth-of-field (DoF) effects in a ray-tracing-based rendering pipeline using very few samples (typically two or three) per pixel, with the ability to refocus at arbitrary depths for a given viewpoint without gathering more samples. To do so, we treat each sample as a proxy for possible nearby samples and calculate its contributions to the final image with a splat-and-gather scheme. The radiance of each pixel in the output image is then obtained by compositing all contributing samples. We optimize the pipeline using mipmap-like techniques so that the running time is independent of the amount of focal blur in the image. Our method approximates the underlying physical image-formation process and thus avoids many of the artifacts of other approximation algorithms. With a very low sample budget it provides satisfactory DoF rendering for most purposes, and a quick preview of DoF effects for applications demanding high rendering quality.
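The splat-and-gather idea can be illustrated with a small sketch. Assumptions not taken from the paper: a simple circle-of-confusion model that grows with distance from the focal plane, a cone splat kernel, and the names `coc_radius` and `splat_and_gather`, which are all illustrative.

```python
import numpy as np

def coc_radius(depth, focal_depth, aperture, max_radius=8.0):
    # Illustrative circle of confusion: zero at the focal plane, growing
    # with distance from it; depths and aperture in arbitrary scene units.
    return float(np.clip(aperture * abs(depth - focal_depth) / max(depth, 1e-6),
                         0.0, max_radius))

def splat_and_gather(samples, width, height, focal_depth, aperture):
    """Each sample (x, y, depth, radiance) acts as a proxy for its
    neighborhood: it is splatted as a disc whose radius is its circle of
    confusion, and each pixel gathers a normalized weighted average."""
    accum = np.zeros((height, width))
    weight = np.zeros((height, width))
    ys, xs = np.mgrid[0:height, 0:width]
    for x, y, depth, radiance in samples:
        r = max(coc_radius(depth, focal_depth, aperture), 0.5)
        # Cone-shaped splat kernel, zero outside the disc of radius r.
        w = np.maximum(0.0, 1.0 - ((xs - x) ** 2 + (ys - y) ** 2) / (r * r))
        accum += w * radiance
        weight += w
    return accum / np.maximum(weight, 1e-6)
```

Because the stored samples carry depth, refocusing amounts to rerunning `splat_and_gather` with a different `focal_depth`, without gathering new samples, matching the refocusing property the abstract describes.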
Citations: 15
Real-time physics-based ink splattering art creation
Pub Date: 2013-03-21 DOI: 10.1145/2448196.2448239
Su Ian Eugene Lei, Ying-Chieh Chen, Hsiang-Ting Chen, Chun-Fa Chang
We present an interactive system for ink splattering, a form of abstract art in which artists drip or pour ink onto the canvas. The user interface and interaction methods are designed to be analogous to real-world ink-splattering techniques, so that digital artists can easily create the vibrant patterns of splattered ink that are otherwise difficult to achieve in image-editing software. The core of our system is a novel three-stage ink-splattering framework that simulates the physics-based interaction of ink with different media, including brush heads, air, and paper. We implemented the physics engine in CUDA, and the whole simulation runs in real time.
Citations: 1
Physically-based cosmetic rendering
Pub Date: 2013-03-21 DOI: 10.1145/2448196.2448238
Cheng-Guo Huang, Wen-Chieh Lin, Tsung-Shian Huang, Jung-Hong Chuang
Realistic rendering of human faces with makeup is critical for many applications in the 3D facial animation and cosmetics industries. Facial makeup is perhaps one of the most important daily routines for many women and even for some men. Makeup is a multi-layered process: people usually apply skin care first and then smear cosmetics (such as foundation, blush, lipstick, and eye shadow) on their face. Smearing cosmetics on the face visibly changes its appearance.
Citations: 19
Ray casting for collision detection in haptic rendering of volume data
Pub Date: 2012-03-09 DOI: 10.1145/2159616.2159661
Roman Vlasov, K. Friese, Franz-Erich Wolter
Haptic exploration adds an additional dimension to working with 3D data: a sense of touch (figure 1). This is especially useful in areas such as medical training, pre-surgical planning, entertainment, and CAD. Each haptic rendering frame consists of three stages: collision detection, collision response, and force-feedback generation. To feel the 3D data smoothly, an update rate of at least 1 kHz is required [Brooks Jr. et al. 1990]. An unaddressed practical problem of almost all haptic rendering methods is that no guarantee of collision detection can be given and/or that a special topological structure of the objects is required. Here we present an approach that does not have these drawbacks. Furthermore, our algorithm has nearly constant time complexity, independent of data resolution, and does not require any additional precomputed structures. We focus on volumetric voxel data, since that is the direct output of scanning devices. Other data types can be converted to this representation if necessary.
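As a sketch of the kind of guaranteed collision test in voxel data that the abstract alludes to, the following is a generic 3D-DDA (Amanatides-Woo style) grid traversal, not the authors' algorithm; the function name and grid convention are illustrative.

```python
import math

def first_solid_voxel(grid, origin, direction):
    """Walk a ray through a dense voxel grid, visiting every voxel the
    ray pierces, so a solid voxel can never be tunneled through and no
    precomputed acceleration structure is needed; each step has constant
    cost regardless of grid resolution.

    Convention (illustrative): grid[x][y][z] truthy == solid, voxels are
    unit cubes, and origin lies inside the grid bounds.
    """
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    idx = [int(math.floor(c)) for c in origin]
    step, t_max, t_delta = [], [], []
    for c, d in zip(origin, direction):
        if d > 0:
            step.append(1)
            t_max.append((math.floor(c) + 1.0 - c) / d)  # t to next plane
            t_delta.append(1.0 / d)                      # t per voxel
        elif d < 0:
            step.append(-1)
            t_max.append((c - math.floor(c)) / -d)
            t_delta.append(-1.0 / d)
        else:  # ray parallel to this axis: never crosses its planes
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    while 0 <= idx[0] < nx and 0 <= idx[1] < ny and 0 <= idx[2] < nz:
        if grid[idx[0]][idx[1]][idx[2]]:
            return tuple(idx)            # first solid voxel along the ray
        axis = t_max.index(min(t_max))   # advance across the nearest boundary
        idx[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None                          # ray exited the volume without a hit
```

Because the traversal enumerates pierced voxels in order, the first truthy cell is guaranteed to be the nearest collision along the ray, which is the kind of guarantee the abstract says most haptic rendering methods lack.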
Citations: 6
Interactive 3D rapid-prototyping models
Pub Date: 2012-03-09 DOI: 10.1145/2159616.2159656
C. Séquin
Today CAD programs do not just produce mind-blowing graphics or convincing virtual models that can be interactively manipulated and changed. They can also yield tangible, physical artifacts, thanks to the development of layered manufacturing techniques and the emergence of services such as Shapeways [2011] and Ponoko [2011]. Creating, manipulating, and studying such 3D artifacts adds a whole new feedback loop to the design process. This extra experience is also highly useful in the teaching of computer-aided design. In a graduate course on Solid Modeling and Rapid Prototyping we have used the design and realization of dissection puzzles as a training tool. Dissection puzzles are particularly valuable design objects, since they not only train spatial understanding but also drive home issues of accuracy and tolerance in a very "hands-on" manner. In this poster and the accompanying models we demonstrate some of the dissection models that we studied and elucidate the modeling issues associated with each type.
Citations: 2
A lightweight, procedural, vector watercolor painting engine
Pub Date: 2012-03-09 DOI: 10.1145/2159616.2159627
S. DiVerdi, A. Krishnaswamy, R. Mech, Daichi Ito
Existing natural-media painting simulations have produced high-quality results, but they have required powerful compute hardware and have been limited to screen resolutions. Digital artists would like to use watercolor-like painting tools at print resolutions and on lower-end hardware such as laptops or even slates. We present a procedural algorithm for generating watercolor-like dynamic paint behaviors in a lightweight manner. Our goal is not to exactly duplicate watercolor painting, but to create a range of dynamic behaviors that allow users to achieve a similar style of process and result, while at the same time having a unique character of its own. Our stroke representation is vector-based, allowing rendering at arbitrary resolutions, and our procedural pigment-advection algorithm is fast enough to support painting on slate devices. We demonstrate our technique in a commercially available slate application used by professional artists.
Citations: 16
Way portals: efficient multi-agent navigation with line-segment goals
Pub Date: 2012-03-09 DOI: 10.1145/2159616.2159619
Sean Curtis, J. Snape, Dinesh Manocha
It is a common artifact of multi-agent motion planning that groups of agents following similar paths converge to a line. This occurs because the agents' immediate goals, also known as waypoints, are frequently a common point in space. Contention for the point goal causes agents to line up and generally interferes with agent motion. By extending the definition of an immediate point goal to a line segment, which we call a "way portal", we enable agents to better utilize the space available to them when responding to dynamic constraints. We present a novel multi-agent navigation algorithm that efficiently computes the trajectories of autonomous agents using these way portals. We have incorporated the concept into a velocity-obstacle-based local navigation model and present a new segment-optimization algorithm that efficiently computes a new agent velocity with respect to the way portal. We show how way-portal data is extracted from current global navigation data structures, such as navigation meshes. The algorithm is relatively simple to implement and has a small run-time cost (approximately 3 μs per agent). We highlight its performance in different game-like scenarios and observe improved agent behavior and better utilization of free space.
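The core benefit of a line-segment goal can be sketched in a few lines. The clamped-projection rule below is a minimal stand-in for the paper's velocity-obstacle segment optimization, and the function name is illustrative: each agent steers toward its own nearest point on the portal rather than a shared waypoint, so agents spread across the portal's width instead of converging to a line.

```python
def portal_goal_point(pos, a, b):
    """Return the point on way portal segment [a, b] (2D) nearest to the
    agent at pos: project pos onto the segment and clamp to its ends."""
    ax, ay = a
    bx, by = b
    px, py = pos
    dx, dy = bx - ax, by - ay
    length_sq = dx * dx + dy * dy
    if length_sq == 0.0:
        return a  # degenerate portal: falls back to a point waypoint
    t = ((px - ax) * dx + (py - ay) * dy) / length_sq
    t = max(0.0, min(1.0, t))  # clamp projection parameter to [0, 1]
    return (ax + t * dx, ay + t * dy)
```

An agent's preferred velocity would then point from `pos` toward this per-agent goal point, with the local navigation model resolving collisions as usual.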
Citations: 27
An example-based motion synthesis technique for locomotion and object manipulation
Pub Date: 2012-03-09 DOI: 10.1145/2159616.2159632
Andrew W. Feng, Yuyu Xu, Ari Shapiro
We synthesize natural-looking locomotion, reaching, and grasping for a virtual character in order to accomplish a wide range of movement and manipulation tasks in real time. Our virtual characters can move while avoiding obstacles and can manipulate arbitrarily shaped objects, regardless of their height, location, or placement in a virtual environment. Our characters can touch, reach, and grasp objects while maintaining a high-quality appearance. We demonstrate a system that combines these skills in an interactive setting suitable for interactive games and simulations.
Citations: 57