S. Owada and Jun Fujiki. "DynaFusion: a modeling system for interactive impossible objects." International Symposium on Non-Photorealistic Animation and Rendering, June 9, 2008. doi:10.1145/1377980.1377994

Abstract: We describe DynaFusion, a modeling system for interactive impossible objects. Impossible objects are defined as multiple 3D polygonal meshes with edge-visibility information and a set of constraints that define pointwise relationships between the meshes. A user can easily create such models with our modeling tool. The back end of our system is a constraint solver that seamlessly combines multiple meshes in a projected 2D domain with 3D line orientations and maintains coherence for each successive viewpoint, allowing the user to rotate the impossible object without losing visual continuity of the edges. We believe that our system will stimulate the creation of innovative artworks.
William V. Baxter, Pascal Barla, and K. Anjyo. "Rigid shape interpolation using normal equations." International Symposium on Non-Photorealistic Animation and Rendering, June 9, 2008. doi:10.1145/1377980.1377993

Abstract: In this paper we provide a new, compact formulation of rigid shape interpolation in terms of normal equations, and we propose several enhancements to previous techniques. Specifically, we propose (1) a way to improve mesh independence, making the interpolation result less influenced by variations in tessellation; (2) a faster way to make the interpolation symmetric; and (3) simple modifications that enable controllable interpolation. Finally, we identify (4) a failure mode related to large rotations that is easily triggered in practical use, and we present a solution for it as well.
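The abstract above casts interpolation as a least-squares problem solved through normal equations. As an illustrative sketch only (not the paper's actual sparse per-frame system), the generic mechanism it builds on looks like this:

```python
import numpy as np

def solve_normal_equations(A, b):
    """Least-squares solution of A x ~ b via the normal equations A^T A x = A^T b."""
    AtA = A.T @ A
    Atb = A.T @ b
    return np.linalg.solve(AtA, Atb)

# Toy example: fit the line y = c0 + c1*t to three collinear samples.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 3.0, 5.0])
x = solve_normal_equations(A, b)  # → [1.0, 2.0]
```

In the paper's setting the matrix is large and sparse, so a sparse Cholesky factorization would replace the dense solve; the toy problem and variable names here are assumptions for illustration.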
Romain Vergne, Pascal Barla, Xavier Granier, and C. Schlick. "Apparent relief: a shape descriptor for stylized shading." International Symposium on Non-Photorealistic Animation and Rendering, June 9, 2008. doi:10.1145/1377980.1377987

Abstract: Shape depiction in non-photorealistic rendering of 3D objects has mainly been concerned with the extraction of contour lines, which are generally detected by tracking the discontinuities of a given set of shape features varying over the surface and/or the picture plane. In this paper, we investigate another approach: the depiction of shape through shading. This technique is often used in scientific illustration, comics, cartoon animation, and various other artwork. A common method consists in indirectly adapting light positions to reveal shape features, but it quickly becomes impractical as the complexity of the object increases. In contrast, our approach is to directly extract a set of shape cues that are easily manipulated by a user and re-introduced during shading. The main problem raised by such an approach is that shape cues must be identified in a continuous way in image space, as opposed to line-based techniques. Our solution is a novel view-dependent shape descriptor called Apparent Relief, which carries pertinent continuous shape cues for every pixel of an image. It consists of a combination of object- and image-space attributes. Such an approach provides appealing properties: it is simple for a user to manipulate, may be applied to a vast range of styles, and naturally brings level-of-detail functionality. It is also simple to implement, and it works in real time on modern graphics hardware.
Glenn Entis. "New power, new problems: what the videogames industry is learning about realism." International Symposium on Non-Photorealistic Animation and Rendering, June 9, 2008. doi:10.1145/1377980.1377982

Abstract: In the past, the videogame industry didn't have to worry about "photorealism": game platforms simply didn't have the power to deliver highly realistic imagery, so there was no practical need for that particular debate. Now the debate is on. Game platforms are more powerful, game imagery is narrowing the gap with pre-rendered imagery, and "photorealism" (or some version of it) is a real possibility: sometimes an appropriate one, sometimes merely seductive, but always a part of the discussion.
Forrester Cole and Adam Finkelstein. "Partial visibility for stylized lines." International Symposium on Non-Photorealistic Animation and Rendering, June 9, 2008. doi:10.1145/1377980.1377985

Abstract: A variety of non-photorealistic rendering styles include lines extracted from 3D models. Conventional visibility algorithms make a binary decision for each line fragment, usually by a depth test against the polygons of the model. This binary visibility test produces aliasing where lines are partially obscured by polygons or other lines. Such aliasing artifacts are particularly objectionable in animations and where lines are drawn with texture and other stylization effects. We introduce a method for anti-aliasing the line visibility test by supersampling, analogous to anti-aliasing for polygon rendering. Our visibility test is inexpensive using current graphics hardware and produces partial visibility that largely ameliorates objectionable aliasing artifacts. In addition, we introduce a method analogous to depth peeling that further addresses artifacts where lines obscure other lines.
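The core idea of the abstract above, replacing a binary depth test with a supersampled one that yields a visibility fraction, can be sketched as follows. This is a CPU toy, not the authors' GPU implementation, and all parameter names are assumptions:

```python
import numpy as np

def partial_visibility(depth_buffer, x, y, line_depth, k=16, radius=1.0,
                       eps=1e-3, rng=None):
    """Fraction of k jittered depth probes at (x, y) that the line sample passes.

    A conventional test would return only 0 or 1; returning the passing
    fraction lets a stylized stroke fade smoothly where it is partly occluded.
    """
    rng = rng or np.random.default_rng(0)
    h, w = depth_buffer.shape
    passed = 0
    for _ in range(k):
        dx, dy = rng.uniform(-radius, radius, size=2)   # jittered probe offset
        px = int(np.clip(round(x + dx), 0, w - 1))
        py = int(np.clip(round(y + dy), 0, h - 1))
        if line_depth <= depth_buffer[py, px] + eps:    # probe passes depth test
            passed += 1
    return passed / k
```

A line sample at depth 0.5 over a far background (depth 1.0 everywhere) gets visibility 1.0; over occluding geometry at depth 0.0 it gets 0.0; near an occluder edge it gets an intermediate fraction, which is exactly what removes the stair-step aliasing.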
Jonathan R. Bronson, P. Rheingans, and M. Olano. "Semi-automatic stencil creation through error minimization." International Symposium on Non-Photorealistic Animation and Rendering, June 9, 2008. doi:10.1145/1377980.1377989

Abstract: Creating physical stencils from 3D meshes is a unique rendering challenge that has not been previously addressed. The task is a problem of two competing goals: forming a single, well-connected, and stable stencil sheet, while simultaneously limiting the error introduced by pieces of bridging material. Under these conflicting goals, it can often be difficult to create visually pleasing stencils from complicated imagery by hand. Even for well-behaved images, expressive stencils can be time-consuming to craft manually.

We present a method for generating expressive stencils from polygonal meshes or images. In our system, users provide input geometry and can adjust desired view, lighting conditions, line thickness, and bridge preferences to achieve their final desired stencil. The stencil creation algorithm makes use of multiple metrics to measure the appropriateness of connections between unstable stencil regions. These metrics describe local features to help minimize the distortion of the abstracted image caused by stabilizing bridges. The algorithm also uses local statistics to choose a best-fit connection that maintains both structural integrity and local shape information. We demonstrate our algorithm on physical media including construction paper and sheet metal.
Jie Xu and C. Kaplan. "Artistic thresholding." International Symposium on Non-Photorealistic Animation and Rendering, June 9, 2008. doi:10.1145/1377980.1377990

Abstract: We consider the problem of depicting continuous-tone images using only black and white. Traditional solutions to this problem include halftoning, which approximates tones, and line drawing, which approximates edges. We introduce "artistic thresholding" as a technique that attempts to depict forms in an image. We apply segmentation to a source image and construct a planar subdivision that captures segment connectivity. Our artistic thresholding algorithm is a combinatorial optimization over this graph. The optimization is controlled by parameters that can be tuned to achieve different artistic styles.
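To make the "combinatorial optimization over the segment graph" concrete, here is a deliberately tiny sketch under assumed cost terms (the actual paper's energy and solver differ): each segment is assigned black (0) or white (1), a data term keeps it near its mean intensity, and a boundary term with an illustrative weight `lam` discourages adjacent segments from sharing a colour, which would erase the edge between them. Exhaustive search stands in for a real optimizer:

```python
from itertools import product

def threshold_segments(means, edges, lam=0.2):
    """Brute-force black/white assignment over a small segment adjacency graph.

    means: mean intensity in [0, 1] per segment.
    edges: pairs (i, j) of adjacent segments.
    lam:   penalty when two adjacent segments get the same colour (assumed term).
    """
    n = len(means)
    best, best_cost = None, float("inf")
    for colors in product((0, 1), repeat=n):
        data = sum(abs(means[i] - colors[i]) for i in range(n))       # tone fidelity
        boundary = sum(lam for i, j in edges if colors[i] == colors[j])  # keep edges
        if data + boundary < best_cost:
            best, best_cost = colors, data + boundary
    return best
```

For two adjacent segments with means 0.1 and 0.9, the optimum is (0, 1): dark segment black, light segment white, boundary preserved. Tuning `lam`-like parameters is how such a formulation trades tone accuracy against form depiction.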
Jeff Orchard and C. Kaplan. "Cut-out image mosaics." International Symposium on Non-Photorealistic Animation and Rendering, June 9, 2008. doi:10.1145/1377980.1377997

Abstract: An image mosaic is a rendering of a large target image by arranging a collection of small source images, often in an array, each chosen specifically to fit a particular block of the target image. Most mosaicking methods are simplistic in the sense that they break the target image into regular tiles (e.g., squares or hexagons) and take extreme shortcuts when evaluating the similarity between target tiles and source images. In this paper, we propose an efficient method to obtain higher-quality mosaics that incorporates a number of process improvements. The Fast Fourier Transform (FFT) is used to compute a more fine-grained image similarity metric, allowing for optimal colour correction and arbitrarily shaped target tiles. In addition, the framework can find the optimal sub-image within a source image, further improving the quality of the matching. The similarity scores generated by these high-order cost computations are fed into a matching algorithm to find the globally optimal assignment of source images to target tiles. Experiments show that each improvement, by itself, yields a more accurate mosaic. Combined, the innovations produce very high-quality image mosaics, even with only a few hundred source images.
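The FFT trick referenced above rests on a standard identity: the sum-of-squared-differences between a tile T and every same-size window of a source S expands as SSD = sum(T^2) - 2*corr(S, T) + local_sum(S^2), and both the correlation and the local sums are convolutions. A minimal sketch (omitting the paper's colour correction and shaped tiles):

```python
import numpy as np
from scipy.signal import fftconvolve

def ssd_map(source, tile):
    """SSD between `tile` and every same-size window of `source`, via FFT."""
    flipped = tile[::-1, ::-1]                             # correlation = convolution with flipped kernel
    corr = fftconvolve(source, flipped, mode="valid")
    local_sq = fftconvolve(source ** 2, np.ones_like(tile), mode="valid")
    return (tile ** 2).sum() - 2.0 * corr + local_sq

# Cut a tile out of a random source; the SSD map's minimum recovers its origin.
source = np.random.default_rng(1).random((32, 32))
tile = source[10:18, 5:13]
scores = ssd_map(source, tile)
i, j = np.unravel_index(np.argmin(scores), scores.shape)   # → (10, 5)
```

The exhaustive sliding-window loop would cost O(HW * hw) per pair; the FFT route is a few O(HW log HW) transforms, which is what makes the finer-grained metric affordable across many source images.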
Nisha Sudarsanam, C. Grimm, and Karan Singh. "Non-linear perspective widgets for creating multiple-view images." International Symposium on Non-Photorealistic Animation and Rendering, June 9, 2008. doi:10.1145/1377980.1377995

Abstract: Viewing data sampled on complicated geometry, such as a helix or a torus, is hard because a single camera view can encompass only part of the object. Either multiple views or non-linear projection can be used to expose more of the object in a single view; however, specifying such views is challenging because of the large number of parameters involved. We show that a small set of versatile widgets can be used to quickly and simply specify a wide variety of such views. These widgets are built on top of a general framework that in turn encapsulates a variety of complicated camera-placement issues into a more natural set of parameters, making the specification of new widgets, or the combination of multiple widgets, simpler. This framework is entirely view-based and leaves the underlying geometry of the dataset intact, making it applicable to a wide range of data types.
Hedlena Bezerra, E. Eisemann, Xavier Décoret, and J. Thollot. "3D dynamic grouping for guided stylization." International Symposium on Non-Photorealistic Animation and Rendering, June 9, 2008. doi:10.1145/1377980.1377998

Abstract: In art, grouping plays a major role in conveying relationships between objects and the organization of scenes. It is separate from style, which only determines how groups are rendered to achieve a visual abstraction of the depicted scene. We present an approach to interactively derive grouping information in a dynamic 3D scene. Our solution is simple and general, and the resulting grouping information can be used as input to any rendering style.

We provide an efficient solution based on an extended mean-shift algorithm customized by user-defined criteria. The resulting system is temporally coherent and real-time. The computational cost is largely determined by the scene's structure rather than by its geometric complexity.
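As a point of reference for the "extended mean-shift" mentioned above, here is plain Gaussian-kernel mean-shift on per-object feature vectors (e.g. position plus colour); objects whose points converge to the same mode end up in one group. The extension with user-defined criteria and temporal coherence is the paper's contribution and is not shown; all names and parameters here are illustrative:

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50):
    """Shift each point toward the weighted mean of its neighbourhood."""
    modes = points.copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            w = np.exp(-np.sum((points - m) ** 2, axis=1) / (2 * bandwidth ** 2))
            modes[i] = (w[:, None] * points).sum(axis=0) / w.sum()
    return modes

def group(modes, tol=0.5):
    """Label points whose converged modes coincide (within tol) as one group."""
    labels = [-1] * len(modes)
    next_label = 0
    for i in range(len(modes)):
        for j in range(i):
            if np.linalg.norm(modes[i] - modes[j]) < tol:
                labels[i] = labels[j]
                break
        if labels[i] == -1:
            labels[i] = next_label
            next_label += 1
    return labels

# Two tight clusters of object features yield two groups.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = group(mean_shift(pts, bandwidth=0.5))  # → [0, 0, 1, 1]
```

Because the number of groups emerges from the bandwidth rather than being fixed in advance, this style of clustering suits scenes whose group count changes as objects move, which matches the paper's dynamic setting.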