"The SAGE graphics architecture" by M. Deering and David Naegle. doi:10.1145/566570.566638

The Scalable, Advanced Graphics Environment (SAGE) is a new high-end, multi-chip rendering architecture. Each SAGE board can render in excess of 80 million fully lit, textured, anti-aliased triangles per second. SAGE brings high-quality antialiasing filters to video-rate hardware for the first time. To achieve this, the concept of a frame buffer is replaced by a fully double-buffered sample buffer holding between 1 and 16 non-uniformly placed samples per final output pixel. The video output raster of samples is convolved by a 5x5 programmable reconstruction and bandpass filter that replaces the traditional RAMDAC. The reconstruction filter processes up to 400 samples per output pixel and supports any radially symmetric filter, including those with negative lobes (the full Mitchell-Netravali filter). Each SAGE board comprises four parallel rendering sub-units and supports up to two video output channels. Multiple SAGE systems can be tiled together to support even higher fill rates, resolutions, and performance.
"Object-based image editing" by W. Barrett and Alan S. Cheney. doi:10.1145/566570.566651

We introduce Object-Based Image Editing (OBIE) for real-time animation and manipulation of static digital photographs. Individual image objects (such as an arm or nose, Figure 1) are selected, scaled, stretched, bent, warped or even deleted (with automatic hole filling) - at the object, rather than the pixel, level - using simple gesture motions with a mouse. OBIE gives the user direct, local control over object shape, size, and placement while dramatically reducing the time required to perform image editing tasks. Object selection is performed by manually collecting (subobject) regions detected by a watershed algorithm. Objects are tessellated into a triangular mesh, allowing shape modification to be performed in real time using OpenGL's texture mapping hardware. Through the use of anchor points, the user is able to interactively perform editing operations on a whole object, or just part(s) of an object - including moving, scaling, rotating, stretching, bending, and deleting. Indirect manipulation of object shape is also provided through the use of sliders and Bezier curves. Holes created by movement are filled in real time based on surrounding texture. When objects stretch or scale, we provide a method for preserving texture granularity or scale. We also present a texture brush, which allows the user to "paint" texture into different parts of an image, using existing image texture(s). OBIE allows the user to perform interactive, high-level editing of image objects in a few seconds to a few tens of seconds.
"Articulated body deformation from range scan data" by Brett Allen, B. Curless, and Zoran Popovic. doi:10.1145/566570.566626

This paper presents an example-based method for calculating skeleton-driven body deformations. Our example data consists of range scans of a human body in a variety of poses. Using markers captured during range scanning, we construct a kinematic skeleton and identify the pose of each scan. We then construct a mutually consistent parameterization of all the scans using a posable subdivision surface template. The detail deformations are represented as displacements from this surface, and holes are filled smoothly within the displacement maps. Finally, we combine the range scans using k-nearest neighbor interpolation in pose space. We demonstrate results for a human upper body with controllable pose, kinematics, and underlying surface shape.
"Stylization and abstraction of photographs" by D. DeCarlo and A. Santella. doi:10.1145/566570.566650

Good information design depends on clarifying the meaningful structure in an image. We describe a computational approach to stylizing and abstracting photographs that explicitly responds to this design goal. Our system transforms images into a line-drawing style using bold edges and large regions of constant color. To do this, it represents images as a hierarchical structure of parts and boundaries computed using state-of-the-art computer vision. Our system identifies the meaningful elements of this structure using a model of human perception and a record of a user's eye movements in looking at the photo; the system renders a new image using transformations that preserve and highlight these visual elements. Our method thus represents a new alternative for non-photorealistic rendering in its visual style, its approach to visual form, and its techniques for interaction.
"Robust treatment of collisions, contact and friction for cloth animation" by R. Bridson, Ronald Fedkiw, and John Anderson. doi:10.1145/566570.566623

We present an algorithm to efficiently and robustly process collisions, contact and friction in cloth simulation. It works with any technique for simulating the internal dynamics of the cloth, and allows true modeling of cloth thickness. We also show how our simulation data can be post-processed with a collision-aware subdivision scheme to produce smooth and interference-free data for rendering.
"A user interface for interactive cinematic shadow design" by F. Pellacini, P. Tole, and D. Greenberg. doi:10.1145/566570.566617

Placing shadows is a difficult task, since shadows depend on the relative positions of lights and objects in an unintuitive manner. To simplify the task of the modeler, we present a user interface for designing shadows in 3D environments. In our interface, shadows are treated as first-class modeling primitives, just like objects and lights. To transform a shadow, the user can simply move, rescale, or rotate the shadow as if it were a 2D object on the scene's surfaces. When the user transforms a shadow, the system moves lights or objects in the scene as required and updates the shadows in real time during mouse movement. To facilitate interaction, the user can also specify constraints that the shadows must obey, such as never casting a shadow on the face of a character. These constraints are then verified in real time, limiting mouse movement when necessary. We also integrate into our interface the fake shadows typically used in computer animation. This allows the user to draw shadowed and non-shadowed regions directly on surfaces in the scene.
"Interactive skeleton-driven dynamic deformations" by Steve Capell, Seth Green, B. Curless, T. Duchamp, and Zoran Popovic. doi:10.1145/566570.566622

This paper presents a framework for the skeleton-driven animation of elastically deformable characters. A character is embedded in a coarse volumetric control lattice, which provides the structure needed to apply the finite element method. To incorporate skeletal controls, we introduce line constraints along the bones of simple skeletons. The bones are made to coincide with edges of the control lattice, which enables us to apply the constraints efficiently using algebraic methods. To accelerate computation, we associate regions of the volumetric mesh with particular bones and perform locally linearized simulations, which are blended at each time step. We define a hierarchical basis on the control lattice, so for detailed interactions the simulation can adapt the level of detail. We demonstrate the ability to animate complex models using simple skeletons and coarse volumetric meshes in a manner that simulates secondary motions at interactive rates.
"Chromium: a stream-processing framework for interactive rendering on clusters" by G. Humphreys, M. Houston, Ren Ng, R. Frank, Sean Ahern, P. Kirchner, and James T. Klosowski. doi:10.1145/566570.566639

We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments.
"Self-similarity based texture editing" by Stephen Brooks and N. Dodgson. doi:10.1145/566570.566632

We present a simple method of interactive texture editing that utilizes self-similarity to replicate intended operations globally over an image. Inspired by the recent successes of hierarchical approaches to texture synthesis, this method also uses multi-scale neighborhoods to assess the similarity of pixels within a texture. However, neighborhood matching is not employed to generate new instances of a texture. We instead locate similar neighborhoods for the purpose of replicating editing operations on the original texture itself, thereby creating a fundamentally new texture. This general approach is applied to texture painting, cloning and warping. These global operations are performed interactively, most often directed with just a single mouse movement.
"Cut-and-paste editing of multiresolution surfaces" by H. Biermann, Ioana M. Boier-Martin, F. Bernardini, and D. Zorin. doi:10.1145/566570.566583

Cutting and pasting to combine different elements into a common structure are widely used operations that have been successfully adapted to many media types. Surface design could also benefit from the availability of a general, robust, and efficient cut-and-paste tool, especially during the initial stages of design when a large space of alternatives needs to be explored. Techniques to support cut-and-paste operations for surfaces have been proposed in the past, but have been of limited usefulness due to constraints on the type of shapes supported and the lack of real-time interaction. In this paper, we describe a set of algorithms based on multiresolution subdivision surfaces that perform at interactive rates and enable intuitive cut-and-paste operations.