E. Prasolova-Førland, Mikhail Fominykh, Peter Leong
In this work, we propose an innovative approach, 3D recording, to support serious games and educational role-playing. 3D virtual worlds (3D VWs) are often used for generating educational content. Even though this technology allows creating the full context of a real-life educational process, the activity is usually recorded as flat 2D video (such as Machinima in Second Life), which eliminates many advantages of the technology, such as the sense of presence (McKerlich et al. 2011). In addition, there are no systematic approaches for combining synchronous and asynchronous learning modes. We propose that 3D recording can address these challenges, as it offers an easy way to create advanced 3D content from synchronous activities and to access this content in both synchronous and asynchronous modes.
{"title":"3D recording as enabling technology for serious games and educational roleplaying","authors":"E. Prasolova-Førland, Mikhail Fominykh, Peter Leong","doi":"10.1145/2448196.2448225","DOIUrl":"https://doi.org/10.1145/2448196.2448225","url":null,"abstract":"In this work, we propose an innovative approach, 3D recording, to support serious games and educational role-playing. 3D virtual worlds (3D VWs) are often used for generating educational content. Even though this technology allows creating full context of the real-life educational process, it is usually recorded as flat 2D video (such as Machinima in Second Life), which eliminates many advantages of the technology, such as sense of presence (Mckerlich et al. 2011). In addition, there are no systematic approaches for combining synchronous and asynchronous learning modes. We propose that 3D recording is capable of solving these challenges, as it offers an easy way for creating advanced 3D content from synchronous activities, as well as accessing this content in both synchronous and asynchronous modes.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"26 1","pages":"177"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82036784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There is an ongoing desire to make virtual humans a more accessible tool for use in entertainment, training, and evaluations. From the graphical level to the animation level to the intelligence level, complexities abound. As research progresses, some of these complexities become hidden from the end user. Ultimately, we would like to treat agents as real humans and instruct them as one might another person. Here we present a framework, inspired by natural language constructs, that aims to hide these complexities and allow users to control virtual humans through structured English input. Our focus is on object and environment interactions, particularly spatial relationships.
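As a toy illustration of grounding structured English about space, the sketch below maps a few relation phrases to offsets around a reference object and turns "left of the chair" into a world-space goal. The vocabulary, frame convention, and `resolve_target` helper are our invented stand-ins, not the paper's framework.

```python
# Hypothetical sketch: grounding a few spatial-relation phrases as offsets
# around a reference object. All names and conventions are illustrative.

# Unit offsets in the reference object's local frame (x: right, y: forward).
# The object is assumed axis-aligned here, so local == world.
RELATION_OFFSETS = {
    "left of": (-1.0, 0.0),
    "right of": (1.0, 0.0),
    "in front of": (0.0, 1.0),
    "behind": (0.0, -1.0),
}

def resolve_target(relation, ref_position, ref_radius, clearance=0.3):
    """Turn e.g. 'left of the chair' into a world-space goal position."""
    dx, dy = RELATION_OFFSETS[relation]
    scale = ref_radius + clearance  # stand just outside the object
    return (ref_position[0] + dx * scale, ref_position[1] + dy * scale)

# "Walk to the left of the chair": chair at (2, 3) with radius 0.5.
print(resolve_target("left of", (2.0, 3.0), 0.5))  # -> (1.2, 3.0)
```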
{"title":"Modeling agent determination of spatial relationships","authors":"J. Mooney, Rohan Gandhi, J. Allbeck","doi":"10.1145/2448196.2448236","DOIUrl":"https://doi.org/10.1145/2448196.2448236","url":null,"abstract":"There is an ongoing desire to make virtual humans a more accessible tool for use in entertainment, training, and evaluations. From the graphical level to the animation level to the intelligence level, complexities abound. As research progresses some of these complexities become hidden from the end user. Ultimately, we would like to treat agents as real humans and instruct them as you might another person. Here we present a framework, inspired by natural language constructs, that aims to obfuscate the complexities and allow users to control virtual humans through structured English input. Our focus is on object and environment interactions, particularly spatial relationships.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"57 1","pages":"188"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76383888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a method for rendering depth-of-field (DoF) effects in a ray-tracing-based rendering pipeline using very few samples (typically two or three) per pixel, with the ability to refocus at arbitrary depths from a given viewpoint without gathering more samples. To do so, we treat each sample as a proxy for possible nearby samples and calculate its contributions to the final image with a splat-and-gather scheme. The radiance of each pixel in the output image is then obtained by compositing all contributing samples. We optimize the pipeline using mipmap-like techniques so that the running time is independent of the amount of focal blur in the image. Our method approximates the underlying physical image-formation process and thus avoids many of the artifacts of other approximation algorithms. With a very low sample budget, it provides satisfactory DoF rendering for most purposes, and a quick preview of DoF effects for applications demanding high rendering quality.
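To make the splatting concrete: the spread of each sample is governed by its circle of confusion (CoC). The sketch below computes a CoC diameter from the textbook thin-lens model; the paper's exact parameterization is not given in the abstract, so the function and its arguments are our illustrative assumptions. It also shows why refocusing needs no new samples: only `focus_dist` changes, and each stored sample's splat radius can simply be recomputed from its depth.

```python
# Hedged sketch: the thin-lens circle of confusion that governs how far a
# sample's contribution spreads. The paper's parameterization may differ;
# this is the standard textbook model.

def coc_diameter(depth, focus_dist, focal_len, aperture):
    """CoC diameter for a point at `depth`; all arguments in scene units
    (e.g. mm). Zero at the focal plane, growing with defocus."""
    return aperture * focal_len * abs(depth - focus_dist) / (
        depth * (focus_dist - focal_len)
    )

# Refocusing without gathering new samples: stored samples keep their
# depths, only focus_dist changes, so each splat radius is recomputed.
for d in (500.0, 1000.0, 4000.0):                  # sample depths in mm
    print(d, coc_diameter(d, 1000.0, 50.0, 25.0))  # 50 mm lens at f/2
```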
{"title":"Approximate depth of field effects using few samples per pixel","authors":"Ke Lei, J. Hughes","doi":"10.1145/2448196.2448215","DOIUrl":"https://doi.org/10.1145/2448196.2448215","url":null,"abstract":"We present a method for rendering depth of field (DoF) effects in a ray-tracing based rendering pipeline using very few samples (typically two or three) per pixel, with the ability to refocus at arbitrary depths at a given view point without gathering more samples. To do so, we treat each sample as a proxy for possible nearby samples and calculate its contributions to the final image with a splat-and-gather scheme. The radiance for each pixel in the output image is then obtained via compositing all contributing samples. We optimize the pipeline using mipmap-like techniques so that the running time is independent of the amount of focal blur in the image. Our method approximates the underlying physical image formation process and thus avoids many of the artifacts of other approximation algorithms. With very low budget it provides satisfactory DoF rendering for most purposes, and a quick preview of DoF effects for applications demanding high rendering quality.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"23 1","pages":"119-128"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76635187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Su Ian Eugene Lei, Ying-Chieh Chen, Hsiang-Ting Chen, Chun-Fa Chang
We present an interactive system for ink splattering, a form of abstract art in which artists drip or pour ink onto the canvas. The user interface and interaction methods are designed to be analogous to the artistic techniques of ink splattering in the real world, so that digital artists can easily create the vibrant patterns of splattering ink, which are otherwise difficult to achieve in image-editing software. The core of our system is a novel three-stage ink-splattering framework that simulates the physics-based interaction of ink with different mediums, including brush heads, air, and paper. We implemented the physics engine in CUDA, and the whole simulation runs in real time.
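To give a flavor of the middle ("air") stage, here is a hedged CPU sketch in which droplets flicked from a brush follow ballistic arcs and are stamped where they cross the canvas plane. The paper's CUDA simulation of brush, air, and paper is far richer; every name and constant below is illustrative.

```python
# Toy stand-in for the air stage: droplets leave the brush with jittered
# velocities and land on the canvas plane z = 0. Illustrative only.
import math
import random

GRAVITY = 9.8  # m/s^2, pulling droplets toward the canvas plane z = 0

def flick_droplets(brush_pos, flick_vel, n=50, spread=0.2):
    """Release n droplets with velocities jittered around the flick."""
    drops = []
    for _ in range(n):
        vel = [v + random.gauss(0, spread) for v in flick_vel]
        drops.append((list(brush_pos), vel))
    return drops

def land(pos, vel):
    """Solve the ballistic arc for the time the droplet hits z = 0."""
    # z(t) = z0 + vz*t - 0.5*g*t^2 = 0  -> positive root of the quadratic
    z0, vz = pos[2], vel[2]
    t = (vz + math.sqrt(vz * vz + 2 * GRAVITY * z0)) / GRAVITY
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

for p, v in flick_droplets((0.0, 0.0, 0.3), (1.5, 0.2, 0.5), n=5):
    x, y = land(p, v)
    print(f"splat at ({x:.2f}, {y:.2f})")  # the paper stage would stamp ink here
```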
{"title":"Real-time physics-based ink splattering art creation","authors":"Su Ian Eugene Lei, Ying-Chieh Chen, Hsiang-Ting Chen, Chun-Fa Chang","doi":"10.1145/2448196.2448239","DOIUrl":"https://doi.org/10.1145/2448196.2448239","url":null,"abstract":"We present an interactive system for ink splattering, a form of abstract arts that artists drip or pour inks onto the canvas. The user interface and interactive methods are designed to be analogous to the artsitic techniques of ink splattering in real world so that digital artists can easily create the vibrant patterns of splattering ink, which are otherwise difficult to achieve in image editing software. The core of our system is a novel three-stage ink splattering framework that simulates the physical-based interaction of ink with different mediums including brush heads, air and paper. We implemented the physical engine using CUDA and the whole simulation process runs in real-time.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"138 1","pages":"191"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77461440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realistic rendering of human faces with makeup is critical for many applications in 3D facial animation and the cosmetics industry. Facial makeup is perhaps one of the most important daily routines for many women and even for some men. Makeup is a multi-layered process: people usually apply skin care first and then smear cosmetics (such as foundation, blush, lipstick, and eye shadow) on their face. Smearing these cosmetics on the face visibly changes its appearance.
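Multi-layered makeup suggests layered reflectance compositing. The sketch below uses the classic Kubelka-Munk layering relation to composite a blush layer over skin; whether the paper's shading model reduces to exactly this relation is our assumption, and the reflectance values are made up for illustration.

```python
# Hedged sketch: compositing a cosmetic layer over skin with the classic
# Kubelka-Munk layering relation, a standard model for pigment layers.
# Using it here is our assumption, not necessarily the paper's model.

def km_composite(r_layer, t_layer, r_substrate):
    """Reflectance of a layer (reflectance r, transmittance t) over a
    substrate, accounting for inter-reflection between the two."""
    return r_layer + (t_layer ** 2 * r_substrate) / (1.0 - r_layer * r_substrate)

skin = (0.45, 0.30, 0.25)     # RGB reflectance of bare skin (made up)
blush_r = (0.30, 0.05, 0.08)  # reflectance of the blush layer (made up)
blush_t = (0.55, 0.40, 0.40)  # transmittance of the blush layer (made up)

made_up_face = tuple(
    km_composite(r, t, s) for r, t, s in zip(blush_r, blush_t, skin)
)
print(made_up_face)  # skin tone shifted toward the blush color
```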
{"title":"Physically-based cosmetic rendering","authors":"Cheng-Guo Huang, Wen-Chieh Lin, Tsung-Shian Huang, Jung-Hong Chuang","doi":"10.1145/2448196.2448238","DOIUrl":"https://doi.org/10.1145/2448196.2448238","url":null,"abstract":"Realistic rendering of human faces with makeup is critical for many applications in the 3D facial animation and cosmetic industry. Facial makeup is perhaps one of the most important routines for many females or even for some males. Makeup is a multi-layered process. For example, people usually do the skin care first and smear some cosmetics (such as foundation, blush, lipstick and eye-shadow) on their face. By smearing the cosmetics on the face, facial appearance changes obviously.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"69 1","pages":"190"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91176488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haptic exploration adds an additional dimension to working with 3D data: a sense of touch (Figure 1). This is especially useful in areas such as medical training and pre-surgical planning, entertainment, and CAD. Each haptic rendering frame consists of three stages: collision detection, collision response, and force-feedback generation. In order to feel the 3D data smoothly, an update rate of at least 1 kHz is required [Brooks Jr. et al. 1990]. Practical problems left unaddressed by almost all haptic rendering methods are that no guarantees can be given for collision detection and/or that a special topological structure of the objects is required. Here we present an approach that does not have these drawbacks. Furthermore, our algorithm has nearly constant time complexity independent of the data resolution and does not require any precomputed structures. We focus on volumetric voxel data, since that is the direct output of scanning devices. Other data types can be converted to this representation if necessary.
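As a minimal illustration of casting rays through volume data without any precomputed acceleration structure, the sketch below marches a ray in fixed steps through a boolean occupancy grid and reports the first occupied voxel. The paper's actual traversal, collision response, and 1 kHz force loop are beyond this toy, and all names here are ours.

```python
import numpy as np

def first_hit(volume, origin, direction, step=0.5, max_t=100.0):
    """March a ray through a 3D occupancy grid and return the first
    occupied voxel index, or None. Fixed-step marching needs nothing
    precomputed and its cost per ray is bounded by max_t / step."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    p = np.asarray(origin, float)
    t = 0.0
    while t < max_t:
        i, j, k = np.floor(p).astype(int)
        if (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]) and volume[i, j, k]:
            return (i, j, k)
        p += d * step
        t += step
    return None

vol = np.zeros((32, 32, 32), dtype=bool)
vol[16:20, 16:20, 16:20] = True                     # a solid block
print(first_hit(vol, (0, 16.5, 16.5), (1, 0, 0)))   # -> (16, 16, 16)
```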
{"title":"Ray casting for collision detection in haptic rendering of volume data","authors":"Roman Vlasov, K. Friese, Franz-Erich Wolter","doi":"10.1145/2159616.2159661","DOIUrl":"https://doi.org/10.1145/2159616.2159661","url":null,"abstract":"A haptic exploration adds an additional dimension to working with 3D data: a sense of touch (figure 1). This is especially useful in areas such as medical training and pre-surgical planning, entertainment, CAD and others. Each haptic rendering frame consists of three stages: collision detection, collision response and force-feedback generation. In order to feel the 3D data smoothly, an update rate of at least 1 kHz is required [Brooks Jr. et al. 1990]. Unaddressed practical problems for almost all haptic rendering methods are that no guarantees for collision detection could be given and/or that a special topological structure of objects is required. Here we present an approach which does not have these drawbacks. Furthermore our algorithm has nearly constant time complexity independent of data resolution and does not require any additional precomputed structures. We focus on volumetric voxel data, since that is the direct input from the scanning devices. Other data types could be transformed to this one, if necessary.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"252 1","pages":"215"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76314642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Today, CAD programs do not just produce mind-blowing graphics or convincing virtual models that can be interactively manipulated and changed. They can also yield tangible, physical artifacts, thanks to the development of layered-manufacturing techniques and the emergence of services such as Shapeways [2011] and Ponoko [2011]. Creating, manipulating, and studying such 3D artifacts adds a whole new feedback loop to the design process. This extra experience is also highly useful in teaching computer-aided design. In a graduate course on Solid Modeling and Rapid Prototyping, we have used the design and realization of dissection puzzles as a training tool. Dissection puzzles are particularly valuable design objects, since they not only train spatial understanding but also drive home issues of accuracy and tolerances in a very "hands-on" manner. In this poster and in the accompanying models, we demonstrate some of the dissection models we studied and elucidate the modeling issues associated with each type.
{"title":"Interactive 3D rapid-prototyping models","authors":"C. Séquin","doi":"10.1145/2159616.2159656","DOIUrl":"https://doi.org/10.1145/2159616.2159656","url":null,"abstract":"Today CAD programs are not just producing mind-blowing graphics pictures or convincing virtual models that can be interactively manipulated and changed. They also can yield tangible, physical artifacts, thanks to the development of many layered manufacturing techniques and the emergence of services such as Shapeways [2011] and Ponoko [2011],. Creating, manipulating, and studying such 3D artifacts adds a whole new loop of feedback to the design process. This extra experience is also highly useful in the teaching of computer-aided design. In a graduate course on Solid Modeling and Rapid Prototyping we have used the design and realization of dissection puzzles as a training tool. Dissection puzzles are particularly valuable design objects, since they not only train spatial understanding, but they also drive home issues concerning accuracy and tolerances in a very \"hands-on\" manner. In this poster and in the accompanying models we demonstrate some of the dissection models that we studied and elucidate the modeling issues associated with each type.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"2 1","pages":"210"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88432883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Existing natural-media painting simulations have produced high-quality results, but they have required powerful compute hardware and have been limited to screen resolutions. Digital artists would like to be able to use watercolor-like painting tools, but at print resolutions and on lower-end hardware such as laptops or even slates. We present a procedural algorithm for generating watercolor-like dynamic paint behaviors in a lightweight manner. Our goal is not to exactly duplicate watercolor painting, but to create a range of dynamic behaviors that allow users to achieve a similar style of process and result, while at the same time having a unique character of its own. Our stroke representation is vector-based, allowing rendering at arbitrary resolutions, and our procedural pigment-advection algorithm is fast enough to support painting on slate devices. We demonstrate our technique in a commercially available slate application used by professional artists.
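One hallmark behavior such an engine must reproduce is edge darkening, where pigment collects at a wash's boundary as it dries. The sketch below evaluates a procedural pigment profile from the distance to a vector stroke's spine, which is what makes resolution-independent rasterization possible; the profile shape and parameters are our guesses, not the shipped engine's.

```python
# Hedged sketch of procedural edge darkening: a flat wash plus a darkened
# rim near the stroke boundary. Because the stroke is a curve plus this
# profile, it can be rasterized at any resolution. Parameters are invented.

def pigment_density(dist, radius, base=0.5, edge_gain=0.6, edge_width=0.15):
    """Pigment deposited at `dist` from the stroke spine: a flat wash in
    the interior, ramping darker in the outer edge_width fraction."""
    if dist >= radius:
        return 0.0
    rim = max(0.0, dist - radius * (1.0 - edge_width)) / (radius * edge_width)
    return base + edge_gain * rim

# Sample the profile across the stroke at whatever resolution we like.
radius = 1.0
for i in range(11):
    d = i * radius / 10
    print(f"{d:.1f}: {'#' * int(10 * pigment_density(d, radius))}")
```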
{"title":"A lightweight, procedural, vector watercolor painting engine","authors":"S. DiVerdi, A. Krishnaswamy, R. Mech, Daichi Ito","doi":"10.1145/2159616.2159627","DOIUrl":"https://doi.org/10.1145/2159616.2159627","url":null,"abstract":"Existing natural media painting simulations have produced high quality results, but have required powerful compute hardware and have been limited to screen resolutions. Digital artists would like to be able to use watercolor-like painting tools, but at print resolutions and on lower end hardware such as laptops or even slates. We present a procedural algorithm for generating watercolor-like dynamic paint behaviors in a lightweight manner. Our goal is not to exactly duplicate watercolor painting, but to create a range of dynamic behaviors that allow users to achieve a similar style of process and result, while at the same time having a unique character of its own. Our stroke representation is vector-based, allowing for rendering at arbitrary resolutions, and our procedural pigment advection algorithm is fast enough to support painting on slate devices. We demonstrate our technique in a commercially available slate application used by professional artists.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"35 1","pages":"63-70"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90622423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is a common artifact of multi-agent motion planning that groups of agents following similar paths converge to a line. This occurs because the agents' immediate goals, also known as waypoints, are frequently a common point in space. Contention for the point goal causes agents to line up and generally interferes with agent motion. By extending the definition of an immediate point goal to a line segment, which we call a "way portal", we enable agents to better utilize the space available to them when responding to dynamic constraints. We present a novel multi-agent navigation algorithm that efficiently computes the trajectories of autonomous agents using these way portals. We have incorporated the concept into a velocity-obstacle-based local navigation model and present a new segment-optimization algorithm that efficiently computes a new agent velocity with respect to the way portal. We show how way-portal data is extracted from current global navigation data structures, such as navigation meshes. The algorithm is relatively simple to implement and has a small run-time cost (approximately 3 μs per agent). We highlight its performance in different game-like scenarios and observe improved agent behavior and better utilization of free space.
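The core idea is that the immediate goal is a segment rather than a point. A minimal sketch of that idea, under our own simplifications: each agent steers toward its closest point on the portal segment, so agents approaching abreast fan out across the portal's width instead of converging on one waypoint. The paper's segment optimization inside a velocity-obstacle solver is considerably more involved than this.

```python
import numpy as np

def preferred_velocity(agent_pos, seg_a, seg_b, speed):
    """Steer toward the closest point on the way-portal segment [a, b]
    instead of a single waypoint, so nearby agents spread across the
    portal's width rather than queueing at one point."""
    a, b, p = map(np.asarray, (seg_a, seg_b, agent_pos))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    target = a + t * ab                   # closest point on the segment
    to_goal = target - p
    dist = np.linalg.norm(to_goal)
    return to_goal / dist * speed if dist > 1e-9 else np.zeros(2)

# Three agents abreast approach a portal spanning x in [2, 6] at y = 5;
# each gets a distinct crossing point instead of a shared waypoint.
portal = ((2.0, 5.0), (6.0, 5.0))
for pos in [(1.0, 0.0), (4.0, 0.0), (7.0, 0.0)]:
    print(pos, preferred_velocity(pos, *portal, speed=1.5))
```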
{"title":"Way portals: efficient multi-agent navigation with line-segment goals","authors":"Sean Curtis, J. Snape, Dinesh Manocha","doi":"10.1145/2159616.2159619","DOIUrl":"https://doi.org/10.1145/2159616.2159619","url":null,"abstract":"It is a common artifact of multi-agent motion planning for groups of agents, following similar paths, to converge to a line. This occurs because the agents' immediate goals, also known as waypoints, are frequently a common point in space. Contention for the point goal causes agents to line up and generally interferes with agent motion. By extending the definition of an immediate point goal to a line segment, which we call a \"way portal\", we enable the agents to better utilize the space available to them in responding to dynamic constraints. We present a novel multi-agent navigation algorithm to efficiently compute the trajectories of autonomous agents using these way portals. We have incorporated the concept into a velocity obstacle-based local navigation model and present a new segment optimization algorithm that efficiently computes a new agent velocity with respect to the way portal. We show how way portal data is extracted from current global navigation data structures, such as navigation meshes. The algorithm is relatively simple to implement and has a small run-time cost (approximately 3 μs per agent.) We highlight its performance in different game-like scenarios and observe improved agent behavior and better utilization of free space.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"306 1","pages":"15-22"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75242833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We synthesize natural-looking locomotion, reaching, and grasping for a virtual character in order to accomplish a wide range of movement and manipulation tasks in real time. Our virtual characters can move while avoiding obstacles and can manipulate arbitrarily shaped objects, regardless of their height, location, or placement in the virtual environment. Our characters can touch, reach, and grasp objects while maintaining a high-quality appearance. We demonstrate a system that combines these skills in an interactive setting suitable for interactive games and simulations.
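Example-based synthesis generally blends recorded example clips whose parameters bracket the task at hand. As a toy stand-in for the paper's interpolation scheme (which the abstract does not spell out), the sketch below blends example reach poses with inverse-distance weights over their recorded targets.

```python
import numpy as np

def blend_examples(query_target, examples, eps=1e-6):
    """Example-based synthesis in miniature: weight each example reach
    pose by inverse distance between its recorded target and the query
    target, then blend the poses. (Our stand-in, not the paper's actual
    interpolation scheme.)"""
    targets = np.array([t for t, _ in examples])
    poses = np.array([p for _, p in examples])
    d = np.linalg.norm(targets - np.asarray(query_target), axis=1)
    w = 1.0 / (d + eps)
    w /= w.sum()
    return w @ poses  # weighted average of example joint angles

# Two example reaches: target position -> (shoulder, elbow) angles, radians.
examples = [((0.3, 0.2), (0.4, 1.2)), ((0.6, 0.5), (0.9, 0.6))]
print(blend_examples((0.45, 0.35), examples))  # halfway -> averaged pose
```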
{"title":"An example-based motion synthesis technique for locomotion and object manipulation","authors":"Andrew W. Feng, Yuyu Xu, Ari Shapiro","doi":"10.1145/2159616.2159632","DOIUrl":"https://doi.org/10.1145/2159616.2159632","url":null,"abstract":"We synthesize natural-looking locomotion, reaching and grasping for a virtual character in order to accomplish a wide range of movement and manipulation tasks in real time. Our virtual characters can move while avoiding obstacles, as well as manipulate arbitrarily shaped objects, regardless of height, location or placement in a virtual environment. Our characters can touch, reach and grasp objects while maintaining a high quality appearance. We demonstrate a system that combines these skills in an interactive setting suitable for interactive games and simulations.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"45 1","pages":"95-102"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74142310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}