Dynamic deformation using adaptable, linked asynchronous FEM regions
Umut Z. Kocak, K. L. Palmerius, M. Cooper. doi:10.1145/1980462.1980500
The Finite Element Method (FEM) is the most popular choice in the literature for simulating physically and visually realistic soft-tissue deformations. However, it is non-trivial to model the complex behaviour of soft tissue at sufficient refresh rates, especially for haptic force feedback, which requires an update rate of the order of 1 kHz. In this study the use of asynchronous regions is proposed to speed up the solution of the FEM equations in real time. In this way the local neighborhood of the contact can be solved at high refresh rates, while more distant regions are evaluated at lower frequencies, saving computational power to model complex behaviour within the contact area. Solving different regions using different methods is also possible. To attain maximum efficiency, the sizes of the regions can be changed, in real time, in response to the size of the deformation.
{"title":"Dynamic deformation using adaptable, linked asynchronous FEM regions","authors":"Umut Z. Kocak, K. L. Palmerius, M. Cooper","doi":"10.1145/1980462.1980500","DOIUrl":"https://doi.org/10.1145/1980462.1980500","url":null,"abstract":"In order to simulate both physically and visually realistic soft tissue deformations, the Finite Element Method (FEM) is the most popular choice in the literature. However it is non-trivial to model complex behaviour of soft tissue with sufficient refresh rates, especially for haptic force feedback which requires an update rate of the order of 1 kHz. In this study the use of asynchronous regions is proposed to speed up the solution of FEM equations in real-time. In this way it is possible to solve the local neighborhood of the contact with high refresh rates, while evaluating the more distant regions at lower frequencies, saving computational power to model complex behaviour within the contact area. Solution of the different regions using different methods is also possible. To attain maximum efficiency the size of the regions can be changed, in real-time, in response to the size of the deformation.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123438220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Saliency in motion: selective rendering of dynamic virtual environments
Jasminka Hasic, A. Chalmers. doi:10.1145/1980462.1980496
A major obstacle for real-time rendering of high-fidelity graphics is computational complexity. A key point to consider in the pursuit of "realism in real-time" in computer graphics is that the Human Visual System (HVS) is a fundamental part of the rendering pipeline. The human eye is only capable of sensing image detail in a 2° foveal region, relying on rapid eye movements, or saccades, to jump between points of interest. These points of interest are prioritised based on the saliency of the objects in the scene or the task the user is performing. Such "glimpses" of a scene are then assembled by the HVS into a coherent, but inevitably imperfect, visual perception of the environment. In this process, much detail that the HVS deems unimportant may simply go unnoticed.

Visual science research has identified that movement in the background of a scene may substantially influence how subjects perceive foreground objects. Furthermore, recent computer graphics research has shown that both static and dynamic scenes can be selectively rendered, in a significantly reduced time and without any perceptual loss of quality, by exploiting knowledge of any high-saliency movement that may be present. In this paper, we investigate, through detailed psychophysical experiments including eye tracking, the influence of movement in the background versus that of other saliency cues. We use the results to develop an algorithm for generating a saliency map that incorporates background movement. This algorithm is an integral part of a model used to reduce the rendering time of high-fidelity graphics by a factor of five.
{"title":"Saliency in motion: selective rendering of dynamic virtual environments","authors":"Jasminka Hasic, A. Chalmers","doi":"10.1145/1980462.1980496","DOIUrl":"https://doi.org/10.1145/1980462.1980496","url":null,"abstract":"A major obstacle for real-time rendering of high-fidelity graphics is computational complexity. A key point to consider in the pursuit of \"realism in real-time\" in computer graphics is that the Human Visual System (HVS) is a fundamental part of the rendering pipeline. The human eye is only capable of sensing image detail in a 2° foveal region, relying on rapid eye movements, or saccades, to jump between points of interest. These points of interest are prioritised based on the saliency of the objects in the scene or the task the user is performing. Such \"glimpses\" of a scene are then assembled by the HVS into a coherent, but inevitably imperfect, visual perception of the environment. In this process, much detail, that the HVS deems unimportant, may literally go unnoticed.\u0000 Visual science research has identified that movement in the background of a scene may substantially influence how subjects perceive foreground objects. Furthermore, recent computer graphics research has shown that both static and dynamic scenes can be selectively rendered without any perceptual loss of quality, in a significantly reduced time, by exploiting knowledge of any high saliency movement that may be present. In this paper, we investigate, through detailed psychophysical experiments, including eyetracking, the influence of movement in the background versus the influence of other saliency cues. We use the results to develop an algorithm for generation of a saliency map that encompasses movement in the background. This algorithm is an integral part of the model that is used to reduce the rendering time of high-fidelity graphics by a factor of five.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122475268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A study of visual perception: social anxiety and virtual realism
Joung-Huem Kwon, A. Chalmers, S. Czanner, G. Czanner, J. Powell. doi:10.1145/1980462.1980495
Virtual reality exposure therapy offers the possibility of tackling social anxiety in an efficient, safe and controlled manner. A key question, however, is what level of realism is required for a virtual environment to be effective in helping participants deal with their anxiety. One concern that affects many people from all walks of life is the fear of a job interview. In this paper we investigate the relationship between anxiety and varying levels of avatar fidelity. We recruited 60 volunteers and studied their anxiety levels in a randomised block design, where each block was exposed to a different level of fidelity of the virtual avatars: a realistic 3D human avatar, a cartoon-like 3D avatar, and human photographs. We measured the social anxiety of all participants via a measure of eye-avoidance behaviour. Our main finding is that participants' anxiety corresponded more to the attitude of the virtual avatars than to the avatars' level of realism.
{"title":"A study of visual perception: social anxiety and virtual realism","authors":"Joung-Huem Kwon, A. Chalmers, S. Czanner, G. Czanner, J. Powell","doi":"10.1145/1980462.1980495","DOIUrl":"https://doi.org/10.1145/1980462.1980495","url":null,"abstract":"Virtual reality exposure therapy offers the possibility of tackling social anxiety in an efficient, safe and controlled manner. A key question, however, is what is the level of realism required in virtual environments to ensure the environment is effective in helping the participant to deal with their anxiety. One concern which affects a lot of people from all walks of life is the fear of a job interview. In this paper we investigate the relationship between anxiety and varying levels of realistic fidelity. We recruited 60 volunteers and studied their anxiety levels via randomised block design, where each block was exposed to a different level of fidelity of the virtual avatars: realistic 3D human avatar, cartoon-like 3D avatar, and human photographs. We measured the social anxiety of all participants via a measure of eyes avoidance behaviour. Our main findings are that the participants exhibited more anxiety in accordance with the attitude of virtual avatars than the avatar's level of realism.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129565316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flow field visualization using vector field perpendicular surfaces
K. L. Palmerius, M. Cooper, A. Ynnerman. doi:10.1145/1980462.1980471
This paper introduces Vector Field Perpendicular Surfaces as a means of representing vector data with a special focus on variations across the vectors in the field. These surfaces are a perpendicular analogue to streamlines, with the vector data always being parallel to the normals of the surface. In this way the orientation of the data is conveyed to the viewer while providing a continuous representation across the vectors of the field. This paper describes the properties of such surfaces, including an issue with helicity density in the vector data, an approach to generating them, several stop conditions, and special means of handling fields with non-zero helicity density.
{"title":"Flow field visualization using vector field perpendicular surfaces","authors":"K. L. Palmerius, M. Cooper, A. Ynnerman","doi":"10.1145/1980462.1980471","DOIUrl":"https://doi.org/10.1145/1980462.1980471","url":null,"abstract":"This paper introduces Vector Field Perpendicular Surfaces as a means to represent vector data with special focus on variations across the vectors in the field. These surfaces are a perpendicular analogue to streamlines, with the vector data always being parallel to the normals of the surface. In this way the orientation of the data is conveyed to the viewer, while providing a continuous representation across the vectors of the field. This paper describes the properties of such surfaces including an issue with helicity density in the vector data, an approach to generating them, several stop conditions and special means to handle also fields with non-zero helicity density.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126307774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerated entry point search algorithm for real-time ray-tracing
Colin Fowler, S. Collins, M. Manzke. doi:10.1145/1980462.1980476
Traversing an acceleration data structure, such as a Bounding Volume Hierarchy (BVH) or kD-tree, accounts for a significant fraction of the total time needed to render a frame in real-time ray tracing. We present a two-phase algorithm, based upon the Multi-Level Ray-Tracing Algorithm (MLRTA), for finding deep entry points in these tree acceleration structures in order to speed up traversal. We compare this algorithm to a base MLRTA implementation. Our results indicate an across-the-board decrease in the time taken to find an entry point, together with deeper entry points being found. The overall performance of our real-time ray-tracing system shows an increase in frames per second of up to 36% over packet tracing and 18% over MLRTA. The improvement is algorithmic and is therefore applicable to all architectures and implementations.
{"title":"Accelerated entry point search algorithm for real-time ray-tracing","authors":"Colin Fowler, S. Collins, M. Manzke","doi":"10.1145/1980462.1980476","DOIUrl":"https://doi.org/10.1145/1980462.1980476","url":null,"abstract":"Traversing an acceleration data structure, such as the Bounding Volume Hierarchy or kD-tree, takes a significant amount of the total time to render a frame in real-time ray tracing. We present a two-phase algorithm based upon the Multi Level Ray-Tracing Algorthm (MLRTA) for finding deep entry points in these tree acceleration data structures in order to speed up traversal. We compare this algorithm to a base MLRTA implementation. Our results indicate an across-the-board decrease in time to find the entry point and deeper entry points. The overall performance of our real-time ray-tracing system shows an increase in frames per second of up to 36% over packet-tracing and 18% over MLRTA. The improvement is algorithmic and is therefore applicable to all architectures and implementations.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126456888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A framework for benchmarking interactive collision detection
M. Woulfe, M. Manzke. doi:10.1145/1980462.1980501
Collision detection is a vital component of applications spanning myriad fields, yet there exists no means for developers to analyse the suitability of their collision detection algorithms across the spectrum of scenarios that could be encountered. To rectify this, we propose a framework for benchmarking interactive collision detection, consisting of a single generic benchmark that can be adapted through a number of parameters to create a large range of practical benchmarks. The framework allows algorithm developers to test the validity of their algorithms across a wide test space, and allows developers of interactive applications to recreate their application scenarios and quickly determine the most suitable algorithm. To demonstrate its utility, we adapted the framework to work with three collision detection algorithms supplied with the Bullet Physics SDK. Our results demonstrate that the algorithms conventionally believed to offer the best performance are not always the correct choice, showing that conventional wisdom cannot be relied upon when selecting a collision detection algorithm and that our benchmarking framework fulfils a vital need in the collision detection community. The framework has been made open source, so that developers can test their own algorithms without reimplementing it, allowing for consistent results across different algorithms and reducing development time.
{"title":"A framework for benchmarking interactive collision detection","authors":"M. Woulfe, M. Manzke","doi":"10.1145/1980462.1980501","DOIUrl":"https://doi.org/10.1145/1980462.1980501","url":null,"abstract":"Collision detection is a vital component of applications spanning myriad fields, yet there exists no means for developers to analyse the suitability of their collision detection algorithms across the spectrum of scenarios that could be encountered. To rectify this, we propose a framework for benchmarking interactive collision detection, which consists of a single generic benchmark that can be adapted using a number of parameters to create a large range of practical benchmarks. This framework allows algorithm developers to test the validity of their algorithms across a wide test space and allows developers of interactive applications to recreate their application scenarios and quickly determine the most amenable algorithm. To demonstrate the utility of our framework, we adapted it to work with three collision detection algorithms supplied with the Bullet Physics SDK. Our results demonstrate that those algorithms conventionally believed to offer the best performance are not always the correct choice. This demonstrates that conventional wisdom cannot be relied on for selecting a collision detection algorithm and that our benchmarking framework fulfils a vital need in the collision detection community. The framework has been made open source, so that developers do not have to reprogram the framework to test their own algorithms, allowing for consistent results across different algorithms and reducing development time.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125671281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physically based animation of sea anemones in real-time
José Juan Aliaga, Caroline Larboulette. doi:10.1145/1980462.1980479
This paper presents a technique for modeling and animating fiber-like objects, such as sea anemone tentacles, in real time. Each fiber is described by a generalized cylinder defined around an articulated skeleton. The dynamics of each individual fiber are controlled by a physically based simulation that updates the positions of the skeleton's frames over time. We take into account the forces generated by the surrounding fluid as well as a stiffness function describing the bending behavior of the fiber. High-level control of the animation is achieved through the use of four types of singularities to describe the three-dimensional continuous velocity field representing the fluid. We thus animate hundreds of fibers by key-framing only a small number of singularities. We apply this algorithm to a seascape composed of many sea anemones. We also show that our algorithm is more general and can be applied to other types of fiber-composed objects, such as seagrasses.
{"title":"Physically based animation of sea anemones in real-time","authors":"José Juan Aliaga, Caroline Larboulette","doi":"10.1145/1980462.1980479","DOIUrl":"https://doi.org/10.1145/1980462.1980479","url":null,"abstract":"This paper presents a technique for modeling and animating fiber-like objects such as sea anemones tentacles in real-time. Each fiber is described by a generalized cylinder defined around an articulated skeleton. The dynamics of each individual fiber is controlled by a physically based simulation that updates the position of the skeleton's frames over time. We take into account the forces generated by the surrounding fluid as well as a stiffness function describing the bending behavior of the fiber. High level control of the animation is achieved through the use of four types of singularities to describe the three-dimensional continuous velocity field representing the fluid. We thus animate hundreds of fibers by key-framing only a small number of singularities. We apply this algorithm on a seascape composed of many sea anemones. We also show that our algorithm is more general and can be applied to other types of objects composed of fibers such as seagrasses.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133420972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigation of the beat rate effect on frame rate for animated content
Vedad Hulusic, G. Czanner, K. Debattista, E. Sikudová, Piotr Dubla, A. Chalmers. doi:10.1145/1980462.1980493
Knowledge of the Human Visual System (HVS) may be exploited in computer graphics to significantly reduce rendering times without the viewer being aware of any resultant difference in image quality. Furthermore, cross-modal effects, that is, the influence of one sensory input on another (for example, sound and visuals), have also recently been shown to have a substantial impact on viewer perception of image quality.

In this paper we investigate the relationship between audio beat rate and video frame rate in order to manipulate temporal visual perception. This represents an initial step towards establishing a comprehensive understanding of audio-visual integration in multisensory environments.
{"title":"Investigation of the beat rate effect on frame rate for animated content","authors":"Vedad Hulusic, G. Czanner, K. Debattista, E. Sikudová, Piotr Dubla, A. Chalmers","doi":"10.1145/1980462.1980493","DOIUrl":"https://doi.org/10.1145/1980462.1980493","url":null,"abstract":"Knowledge of the Human Visual System (HVS) may be exploited in computer graphics to significantly reduce rendering times without the viewer being aware of any resultant image quality difference. Furthermore, cross-modal effects, that is the influence of one sensory input on another, for example sound and visuals, have also recently been shown to have a substantial impact on viewer perception of image quality.\u0000 In this paper we investigate the relationship between audio beat rate and video frame rate in order to manipulate temporal visual perception. This represents an initial step towards establishing a comprehensive understanding for the audio-visual integration in multisensory environments.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132002046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ray casting using a roped BVH with CUDA
Roberto Torres, Pedro J. Martín, Antonio Gavilanes. doi:10.1145/1980462.1980483
In this paper we present a real-time ray caster implemented on the GPU using CUDA. It uses a BVH augmented with ropes, traversed with ray packets to improve performance. We present two algorithms that make use of ray packets, packet-warp and packet-block, which set the packet size to a warp and a block, respectively. We also analyze the influence of packet size and packet shape by testing several configurations. Finally, we compare our timing results with those reported in previous related papers over a batch of common test scenes.
{"title":"Ray casting using a roped BVH with CUDA","authors":"Roberto Torres, Pedro J. Martín, Antonio Gavilanes","doi":"10.1145/1980462.1980483","DOIUrl":"https://doi.org/10.1145/1980462.1980483","url":null,"abstract":"In this paper, we present a real-time ray caster implemented on GPU using CUDA. It uses a BVH augmented with ropes that is traversed with ray packets to speed up performance. We present two algorithms making use of ray packets, packet-warp and packet-block, which set the packet size to a warp and a block, respectively. We also analyze the influence of the packet size and packet shape by testing several configurations. Finally, we compare the time results we have obtained to previous related papers over a batch of usual scenes.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"319 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133337663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HDR light probe sequence resampling for realtime incident light field rendering
J. Löw, A. Ynnerman, P. Larsson, J. Unger. doi:10.1145/1980462.1980474
This paper presents a method for resampling a sequence of high dynamic range (HDR) light probe images into a representation of Incident Light Field (ILF) illumination that enables real-time rendering. The light probe sequences are captured at varying positions in a real-world environment using an HDR video camera pointed at a mirror sphere. The sequences are then resampled to a set of radiance maps in a regular three-dimensional grid before projection onto spherical harmonics. The capture locations and the number of samples in the original data make it inconvenient for direct use in rendering, so resampling is necessary to produce an efficient data structure. Each light probe represents a large set of incident radiance samples from different directions around the capture location. Under the assumption that the spatial volume in which the capture was performed contains no internal occlusion, the radiance samples are projected through the volume along their corresponding directions in order to build a new set of radiance maps at selected locations, in this case a three-dimensional grid. The resampled data is projected onto a spherical harmonic basis to allow for real-time lighting of synthetic objects inside the incident light field.
{"title":"HDR light probe sequence resampling for realtime incident light field rendering","authors":"J. Löw, A. Ynnerman, P. Larsson, J. Unger","doi":"10.1145/1980462.1980474","DOIUrl":"https://doi.org/10.1145/1980462.1980474","url":null,"abstract":"This paper presents a method for resampling a sequence of high dynamic range light probe images into a representation of Incident Light Field (ILF) illumination which enables realtime rendering. The light probe sequences are captured at varying positions in a real world environment using a high dynamic range video camera pointed at a mirror sphere. The sequences are then resampled to a set of radiance maps in a regular three dimensional grid before projection onto spherical harmonics. The capture locations and amount of samples in the original data make it inconvenient for direct use in rendering and resampling is necessary to produce an efficient data structure. Each light probe represents a large set of incident radiance samples from different directions around the capture location. Under the assumption that the spatial volume in which the capture was performed has no internal occlusion, the radiance samples are projected through the volume along their corresponding direction in order to build a new set of radiance maps at selected locations, in this case a three dimensional grid. The resampled data is projected onto a spherical harmonic basis to allow for realtime lighting of synthetic objects inside the incident light field.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115562397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}