Generating predictable and convincing folds for leather seat design. G. Eibner, A. Fuhrmann, W. Purgathofer. Spring Conference on Computer Graphics, 2009. doi:10.1145/1980462.1980480

In this paper we describe a method for designing and visualizing folds in leather car seats. Since the manufacturing process cannot completely control the final result, an accurate simulation is neither possible nor needed; instead, the tool functions as a sketchpad for the designer and as a means of communicating the design as a 3D model for design and production decisions. The method supports the designers' need to create realistic-looking folds and to quickly and easily manipulate their position and appearance. For this, a minimal set of intuitive controls has been selected. The tool covers a range of realistic visual results for the designers and delivers the correct sewing pattern for production.
Advanced volume painting with game controllers. Veronika Soltészová, M. Termeer, E. Gröller. Spring Conference on Computer Graphics, 2009. doi:10.1145/1980462.1980486

Volume painting is an interactive segmentation technique for volumetric datasets. While volume painting makes it quick to create segmentations, e.g., for illustration purposes, segmenting features precisely is often problematic. The volume painting system we present addresses two common problems. First, we introduce several data-dependent painting mechanisms that make it easy to segment features of interest precisely. Second, we use game controllers such as a joystick and a gamepad to create a simple user interface to our system while still providing full control over the large number of parameters of the painting mechanisms. We demonstrate our work with several examples and compare the effectiveness of the various brushes. Preliminary user testing suggests that our system is intuitive to use. The results indicate that our volume painting framework is an effective, interactive segmentation tool.
Critical design and realization aspects of glyph-based 3D data visualization. Andreas E. Lie, J. Kehrer, H. Hauser. Spring Conference on Computer Graphics, 2009. doi:10.1145/1980462.1980470

Glyphs are useful for the effective visualization of multi-variate data. They make it easy to relate multiple data attributes to each other in a coherent visualization approach. While the basic principle of glyph-based visualization has been known for a long time, scientific interest has recently focused increasingly on the question of how to achieve a clever and successful glyph design. In line with this trend, we present a structured discussion of several critical design aspects of glyph-based visualization with a special focus on 3D data. For the three consecutive steps of data mapping, glyph instantiation, and rendering, we identify a number of design considerations. We illustrate our discussion with a new glyph-based visualization of time-dependent 3D simulation data and demonstrate how effective results are achieved.
Visualization: a subjective point of view. H. Hauser. Spring Conference on Computer Graphics, 2009. doi:10.1145/1980462.1980464

Visualization is a young and booming research field with a growing community and increasingly competitive conferences and journals. Often, the 1987 NSF Report on Visualization in Scientific Computing (by McCormick et al.) is seen as an important starting point of a more explicit form of visualization research. In the mid-nineties, information visualization then established itself as a research (sub-)field of its own, and recently we have again seen new developments, including the visual analytics initiative started in 2005. Visualization is bound to deliver advantages to related application fields: if no (application) user gains an advantage from using visualization, e.g., to speed up a process or to improve results, then visualization cannot consider itself successful. There are by now many examples where visualization was successfully applied, including biomedical applications, engineering, meteorology and climate research, business, and others. In this talk, I give my subjective view of the state of visualization as a research field and carefully venture a few predictions about its future.
Constraint-based simulation of interactions between fluids and unconstrained rigid bodies. Sho Kurose, Shigeo Takahashi. Spring Conference on Computer Graphics, 2009. doi:10.1145/1980462.1980498

We present a method for simulating stable interactions between fluids and unconstrained rigid bodies. Conventional particle-based methods use a penalty-based approach to resolve collisions between fluids and rigid bodies. However, these methods are very sensitive to the settings of physical parameters such as spring coefficients, so the search for appropriate parameters usually becomes a tedious, time-consuming task. In this paper, we extend a constraint-based approach, originally developed for calculating interactions between rigid bodies only, so that we can simulate collisions between fluids and unconstrained rigid bodies without parameter tweaking. Our primary contribution lies in formulating such interactions as a linear complementarity problem in such a way that it can be resolved by straightforwardly employing Lemke's algorithm. Several animation results, together with the details of a GPU-based implementation, are presented to demonstrate the applicability of the proposed approach.
Real virtuality: a step change from virtual reality. A. Chalmers, D. Howard, C. Moir. Spring Conference on Computer Graphics, 2009. doi:10.1145/1980462.1980466

Humans perceive the world with all five senses: visuals, audio, smell, touch, and taste. Crossmodal effects, i.e. the interaction of the senses, can have a major influence on how environments are perceived, even to the extent that large amounts of detail in one sense may be ignored in the presence of other, more dominant sensory inputs. Real Virtuality environments (also known as there-reality™) are true high-fidelity multi-sensory virtual environments which provide the same perceptual response from viewers as if they were actually present, or "there", in the real scene being portrayed. Unlike traditional virtual reality environments, Real Virtuality allows all five senses to be stimulated concurrently in a natural way. This paper gives an overview of Real Virtuality, describes how such a system may be achieved, and shows why Real Virtuality is indeed a step change from current virtual reality systems.
View-dependent peel-away visualization for volumetric data. Åsmund Birkeland, I. Viola. Spring Conference on Computer Graphics, 2009. doi:10.1145/1980462.1980487

In this paper, a novel approach for peel-away visualizations is presented. The newly developed algorithm extends existing illustrative deformation approaches based on deformation templates and adds a new component: view-dependency of the peel region. The view-dependent property guarantees the viewer an unobstructed view of the inspected feature of interest. This is realized by rotating the deformation template so that the peeled-away segment always faces away from the viewer. Furthermore, the new algorithm computes the underlying peel template on the fly, which allows the level of peeling to be animated. When structures of interest are tagged with segmentation masks, automatic scaling and positioning of the peel deformation templates allows guided navigation and a clear view of the structures in focus, as well as feature-aligned peeling. The overall performance allows smooth interaction with reasonably sized datasets and peel templates, as the implementation maximizes the utilization of the computational power of modern GPUs.
Cross-modal affects of smell on the real-time rendering of grass. Belma Ramic-Brkic, A. Chalmers, Kevin Boulanger, S. Pattanaik, J. Covington. Spring Conference on Computer Graphics, 2009. doi:10.1145/1980462.1980494

Smell is a key human sense which can significantly affect our perception of an environment. Although typically not as developed as our other senses, the presence of a pleasant or unpleasant smell can alter the way we view a scene. Such a cross-modal effect can be substantial, with parts of a scene literally going unnoticed as the smell dominates our senses. This paper investigates the cross-modal effect on the perception of the real-time animation of a field of grass in the presence of the smell of cut grass. Rendering the high level of detail of a close-up view of a field of grass is computationally very demanding. In the real world the smell of grass would be present, and especially strong if the grass had just been cut, for example in preparation for a sports event. By exploiting the cross-modal interaction between smell and visuals, we are able to render a lower-quality version of a field of grass at a reduced computational cost, without the viewer being aware of the quality difference compared to a high-quality version.
Interactively controlling the smoothing and postaliasing effects in volume visualization. B. Csébfalvi, B. Domonkos. Spring Conference on Computer Graphics, 2009. doi:10.1145/1980462.1980488

In volume-rendering applications an appropriate resampling filter is usually chosen by making a compromise between quality and efficiency. Generally, fine details can be reconstructed by filters of wider support, which better approximate the ideal low-pass filter. On the other hand, if the data is noisy, a filter with good pass-band behavior might even emphasize the noise. Therefore, to visualize noisy data, a filter with a stronger smoothing effect is preferable. Thus, the choice of the reconstruction filter depends on the quality of the data as well as on the purpose of the visualization. In this paper, we propose a scalable volume-rendering technique for interactively controlling the frequency-domain behavior of the reconstruction. Applying our method, the trade-off between the smoothing and postaliasing effects can be set on the fly using a single slider.
Efficient methods for ambient lighting. Tamás Umenhoffer, B. Tóth, László Szirmay-Kalos. Spring Conference on Computer Graphics, 2009. doi:10.1145/1980462.1980482

This paper presents a model and algorithms for the reflection of ambient light. Simplifying the rendering equation, we derive an ambient transfer function that expresses the response of a surface point and its neighborhood to ambient lighting, taking multiple reflection effects into account. The ambient transfer function is built on the obscurances of the point. If we assume that the material properties are locally homogeneous and incorporate a real-time obscurances algorithm, then the proposed ambient transfer can also be evaluated in real time. Our model is physically based and thus not only provides better results than empirical ambient occlusion techniques at the same cost, but also reveals where trade-offs can be found between accuracy and efficiency.