Modeling small group behaviors in large crowd simulation
Seung In Park, Yong Cao, Francis K. H. Quek
Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2012, p. 213. doi:10.1145/2159616.2159659

Large crowds are seldom made up solely of a mass of individuals; they typically also include large collections of small groups. However, most existing approaches to crowd modeling treat a crowd either as a collection of isolated individuals, each maintaining its own goal, or as an aggregated entity in which a large number of individuals share the same goal and behavior pattern.
Using natural vibrations to guide control for locomotion
Rubens Fernandes Nunes, J. B. C. Neto, C. Vidal, P. Kry, V. Zordan
Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2012, pp. 87-94. doi:10.1145/2159616.2159631

Control for physically based characters is a challenging task because it requires not only managing the functional aspects that lead to the successful completion of the desired task, but also producing movement that is visually appealing and meets the quality requirements of the application. Crafting controllers to generate desirable behaviors is difficult because the specification of the final outcome is indirect and often at odds with the functional control of the task. This paper presents a method which exploits the natural modal vibrations of a physically based character in order to provide a palette of basis coordinations that animators can use to assemble their desired motion. A visual user interface allows an animator to guide the final outcome by selecting and inhibiting the use of specific modes. Then, an optimization routine applies the user-chosen modes in the tuning of parameters for a fixed locomotion control structure. The result is an animation system that is easy for an animator to drive and is able to produce a wide variety of locomotion styles for varying character morphologies.
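The modal palette described in this abstract rests on standard linear modal analysis: mode shapes are eigenvectors of the generalized eigenproblem K v = w^2 M v for a stiffness matrix K and mass matrix M. The sketch below shows generic modal analysis on a toy two-mass system, not the paper's character-specific pipeline; the matrices and the `natural_modes` helper are illustrative assumptions.

```python
import numpy as np

def natural_modes(K, M):
    """Solve the generalized eigenproblem K v = w^2 M v for natural
    frequencies w and mode shapes v, sorted from lowest to highest."""
    evals, evecs = np.linalg.eig(np.linalg.solve(M, K))
    order = np.argsort(evals.real)
    freqs = np.sqrt(np.abs(evals.real[order]))
    return freqs, evecs.real[:, order]

# Two equal masses joined by unit springs and anchored at both ends:
K = np.array([[2.0, -1.0], [-1.0, 2.0]])  # stiffness matrix
M = np.eye(2)                             # mass matrix
freqs, modes = natural_modes(K, M)
# Lowest mode: both masses move together; highest: they move in opposition.
```

In a character rig, each such mode is one candidate "basis coordination" an animator could select or inhibit.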
Interactive rendering of acquired materials on dynamic geometry using bandwidth prediction
Mahdi M. Bagher, C. Soler, K. Subr, Laurent Belcour, Nicolas Holzschuch
Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2012, pp. 127-134. doi:10.1145/2159616.2159637

Shading complex materials such as acquired reflectances in multi-light environments is computationally expensive. Estimating the shading integral requires multiple samples of the incident illumination. The number of samples required varies across the image, depending on a combination of several factors. Adaptively distributing computational budget across the pixels for shading is a challenging problem. In this paper we depict complex materials such as acquired reflectances, interactively, without any precomputation based on geometry. We first estimate the approximate spatial and angular variation in the local light field arriving at each pixel. This local bandwidth accounts for combinations of a variety of factors: the reflectance of the object projecting to the pixel, the nature of the illumination, the local geometry and the camera position relative to the geometry and lighting. We then exploit this bandwidth information to adaptively sample for reconstruction and integration. For example, fewer pixels per area are shaded for pixels projecting onto diffuse objects, and fewer samples are used for integrating illumination incident on specular objects.
Postmortem stylization of open-source games
Michael Lester, C. Laxer
Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2012, p. 214. doi:10.1145/2159616.2159660

Open-source games can benefit from stylization of their rendering engine, even after end-of-life. Without professional content artists, it is difficult to achieve the targeted realistic look to which many games aspire. Instead, the result of their attempts is often perceived as unnatural. In other words, if users see complex, realistic textures, they expect detailed geometry. If this is not delivered, the graphics system is blamed. A more consistent effect can be achieved by removing visual clutter, such as high-frequency textures or complex shading, so that only visually relevant information is emphasized. This can often result in a significant improvement for aging game engines; the simple, low-poly geometry benefits from stylized shading that roughly matches its level of abstraction (LOA). This improves both the game itself and its community by providing users a fresh experience and boosting development efforts.
Sleight of hand: perception of finger motion from reduced marker sets
Ludovic Hoyet, K. Ryall, R. Mcdonnell, C. O'Sullivan
Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2012, pp. 79-86. doi:10.1145/2159616.2159630

Subtle animation details such as finger or facial movements help to bring virtual characters to life and increase their appeal. However, it is not always possible to capture finger animations simultaneously with full-body motion, due to limitations of the setup or tight production schedules. Therefore, hand motions are often either omitted, manually created by animators, or captured during a separate session and spliced with full body animation. In this paper, we investigate the perceived fidelity of hand animations where all the degrees of freedom of the hands are computed from reduced marker sets. In a set of perceptual experiments, we found that finger motions reconstructed with inverse kinematics from a reduced marker set of eight markers per hand are perceived to be very similar to the corresponding motions reconstructed using a full set of twenty markers. We demonstrate how using this reduced set of eight large markers enabled us to capture the finger and full-body motions of two actors performing a range of relatively unconstrained actions using a 13-camera motion capture system. This serves to simplify the capture process and to significantly reduce the time for cleanup, while preserving the natural biological movements of the hands relative to the actions performed.
Interactive simulation of dynamic crowd behaviors using general adaptation syndrome theory
Sujeong Kim, S. Guy, Dinesh Manocha, M. Lin
Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2012, pp. 55-62. doi:10.1145/2159616.2159626

We propose a new technique to simulate dynamic patterns of crowd behaviors using stress modeling. Our model accounts for permanent, stable disposition and the dynamic nature of human behaviors that change in response to the situation. The resulting approach accounts for changes in behavior in response to external stressors based on well-known theories in psychology. We combine this model with recent techniques on personality modeling for multi-agent simulations to capture a wide variety of behavioral changes and stressors. The overall formulation allows different stressors, expressed as functions of space and time, including time pressure, positional stressors, area stressors and inter-personal stressors. This model can be used to simulate dynamic crowd behaviors at interactive rates, including walking at variable speeds, breaking lane-formation over time, and cutting through a normal flow. We also perform qualitative and quantitative comparisons between our simulation results and real-world observations.
Fast, effective BVH updates for animated scenes
D. Kopta, Thiago Ize, J. Spjut, E. Brunvand, A. Davis, A. Kensler
Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2012, pp. 197-204. doi:10.1145/2159616.2159649

Bounding volume hierarchies (BVHs) are a popular acceleration structure choice for animated scenes rendered with ray tracing. This is due to the relative simplicity of refitting bounding volumes around moving geometry. However, the quality of such a refitted tree can degrade rapidly if objects in the scene deform or rearrange significantly as the animation progresses, resulting in dramatic increases in rendering times and a commensurate reduction in the frame rate. The BVH could be rebuilt on every frame, but this could take significant time. We present a method to efficiently extend refitting for animated scenes with tree rotations, a technique previously proposed for off-line improvement of BVH quality for static scenes. Tree rotations are local restructuring operations which can mitigate the effects that moving primitives have on BVH quality by rearranging nodes in the tree during each refit rather than triggering a full rebuild. The result is a fast, lightweight, incremental update algorithm that requires negligible memory, has minor update times, parallelizes easily, avoids significant degradation in tree quality or the need for rebuilding, and maintains fast rendering times. We show that our method approaches or exceeds the frame rates of other techniques and is consistently among the best options regardless of the animated scene.
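Refitting with tree rotations can be illustrated with a post-order traversal that, after recomputing each internal node's bound, tries a local child/grandchild swap whenever the swap shrinks the surface area of the affected subtree bound (a crude SAH proxy). This is a simplified sketch under those assumptions, not the paper's exact rotation set or cost model.

```python
class Node:
    """BVH node; a leaf has left == right == None and fixed primitive bounds."""
    def __init__(self, left=None, right=None, bounds=None):
        self.left, self.right, self.bounds = left, right, bounds

def sa(b):
    """Surface area of an AABB given as a (lo, hi) pair of corner tuples."""
    dx, dy, dz = (b[1][i] - b[0][i] for i in range(3))
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def union(a, b):
    return (tuple(min(a[0][i], b[0][i]) for i in range(3)),
            tuple(max(a[1][i], b[1][i]) for i in range(3)))

def refit_with_rotations(node):
    """Post-order refit that also tries one local rotation per node: swap the
    left child with a grandchild when that shrinks the right subtree's bound."""
    if node.left is None:                      # leaf: bounds already updated
        return node.bounds
    refit_with_rotations(node.left)
    refit_with_rotations(node.right)
    r = node.right
    if r.left is not None:                     # right child is internal
        best, swap = sa(union(r.left.bounds, r.right.bounds)), None
        if sa(union(node.left.bounds, r.right.bounds)) < best:
            best, swap = sa(union(node.left.bounds, r.right.bounds)), "r.left"
        if sa(union(r.left.bounds, node.left.bounds)) < best:
            swap = "r.right"
        if swap == "r.left":
            node.left, r.left = r.left, node.left
        elif swap == "r.right":
            node.left, r.right = r.right, node.left
        if swap:
            r.bounds = union(r.left.bounds, r.right.bounds)
    node.bounds = union(node.left.bounds, node.right.bounds)
    return node.bounds
```

On a degraded tree where a faraway leaf is paired under a tight cluster, one refit pass rotates the outlier up and restores a compact subtree bound, without any rebuild.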
Rectilinear texture warping for fast adaptive shadow mapping
P. Rosen
Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2012, pp. 151-158. doi:10.1145/2159616.2159641

Conventional shadow mapping relies on uniform sampling to produce hard shadows in an efficient manner. This approach trades image quality in favor of efficiency. A number of approaches improve upon shadow mapping by combining multiple shadow maps or using complex data structures to produce shadow maps with multiple resolutions. By sacrificing some performance, these adaptive methods produce shadows that closely match ground truth.

This paper introduces Rectilinear Texture Warping (RTW) for efficiently generating adaptive shadow maps. RTW images combine the advantages of conventional shadow mapping (a single shadow map, quick construction, and constant-time pixel shadow tests) with the primary advantage of adaptive techniques: shadow map resolutions that more closely match those requested by output images. RTW images consist of a conventional texture paired with two 1-D warping maps that form a rectilinear grid defining the variation in sampling rate. The quality of shadows produced with RTW shadow maps of standard resolutions, i.e. a 2,048×2,048 texture for 1080p output images, approaches that of raytraced results, while low overhead permits rendering at hundreds of frames per second.
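A 1-D warping map of this kind can be pictured as a cumulative importance function: texture extent along one axis is allotted in proportion to a per-bin importance estimate, so important regions receive more texels. The sketch below shows one axis under that assumption; the importance input and interpolation scheme are illustrative, not the paper's exact construction.

```python
import numpy as np

def build_warp(importance):
    """Return a function mapping a normalized light-space coordinate s in
    [0, 1] to a warped texture coordinate t in [0, 1]. The map is the
    normalized cumulative sum of per-bin importance, so a bin holding a
    fraction p of the total importance receives a fraction p of the texels."""
    cdf = np.concatenate(([0.0], np.cumsum(importance, dtype=np.float64)))
    cdf /= cdf[-1]
    edges = np.linspace(0.0, 1.0, len(importance) + 1)
    return lambda s: np.interp(s, edges, cdf)

# One bin is 10x more important (e.g. shadow detail near the camera):
warp = build_warp(np.array([1.0, 1.0, 10.0, 1.0]))
```

Two such maps, one per axis, form the rectilinear grid: geometry is warped through them when rendering the shadow map, and shadow tests apply the same warp to lookup coordinates.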
Surface based anti-aliasing
Marco Salvi, Kiril Vidimce
Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2012, pp. 159-164. doi:10.1145/2159616.2159643

We present surface based anti-aliasing (SBAA), a new approach to real-time anti-aliasing for deferred renderers that improves the performance and lowers the memory requirements for anti-aliasing methods that sample sub-pixel visibility. We introduce a novel way of decoupling visibility determination from shading that, compared to previous multi-sampling based approaches, significantly reduces the number of samples stored and shaded per pixel. Unlike post-process anti-aliasing techniques used in conjunction with deferred renderers, SBAA correctly resolves visibility of sub-pixel features, minimizing spatial and temporal artifacts.
Feature-based interactively sketched terrain
Daniel Adams, P. Egbert, Seth Brunner
Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2012, p. 208. doi:10.1145/2159616.2159654

We present a new technique for interactive terrain generation. This approach provides two improvements over existing terrain generation systems. First, we introduce an improved midpoint displacement algorithm that allows for arbitrary vertex insertion order.