Developing applications for the Portuguese Shoe Industry
Sofia Gameiro, Luís Almeida, António Freitas, Pedro Pereira, A. Marcos
DOI: 10.1145/1029949.1029972 (2004)

In this paper, a set of specific applications under development for the Portuguese Shoe Industry is presented. The applications include CAD/CAM, the visualisation of graphical information (related to models of shoes and shoe components), and the management of production and prototyping information.

Besides describing some of the R&D details of the applications developed by the authors, this paper presents a case study of a successful technological application. It is part of a broader process of introducing, in this industrial sector, new production, management and administrative working philosophies based on the centralisation of information and on integrated global solutions.
{"title":"Developing applications for the Portuguese Shoe Industry","authors":"Sofia Gameiro, Luís Almeida, António Freitas, Pedro Pereira, A. Marcos","doi":"10.1145/1029949.1029972","DOIUrl":"https://doi.org/10.1145/1029949.1029972","url":null,"abstract":"In this paper a set of specific applications under development for the Portuguese Shoe Industry are presented. The applications include CAD/CAM, the visualisation of graphical information (related to shoe and shoe components models) and the management of production and prototyping information.\u0000 Besides describing some of the R&D details of the applications developed by the authors, this paper aims to present a case study of a successful technological application. It is part of a more general process for the introduction, in this industrial sector, of new production, management and administrative working philosophies, which are based on the centralisation of information and integrated global solutions.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117208542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A local model of eye adaptation for high dynamic range images
P. Ledda, Luís Paulo Santos, A. Chalmers
DOI: 10.1145/1029949.1029978 (2004)

In the real world, the human eye is confronted with a wide range of luminances, from bright sunshine to low night light. Our eyes cope with this vast range of intensities through adaptation: changing their sensitivity to be responsive at different illumination levels. This adaptation is highly localized, allowing us to see both dark and bright regions of a high dynamic range environment. In this paper we present a new model of eye adaptation based on physiological data. The model, which can easily be integrated into existing renderers, can function either as a static local tone mapping operator for a single high dynamic range image, or as a temporal adaptation model that takes into account the elapsed time and the preadaptation intensity for a dynamic sequence. Finally, we validate our technique with a high dynamic range display and a psychophysical study.
{"title":"A local model of eye adaptation for high dynamic range images","authors":"P. Ledda, Luís Paulo Santos, A. Chalmers","doi":"10.1145/1029949.1029978","DOIUrl":"https://doi.org/10.1145/1029949.1029978","url":null,"abstract":"In the real world, the human eye is confronted with a wide range of luminances from bright sunshine to low night light. Our eyes cope with this vast range of intensities by adaptation; changing their sensitivity to be responsive at different illumination levels. This adaptation is highly localized, allowing us to see both dark and bright regions of a high dynamic range environment. In this paper we present a new model of eye adaptation based on physiological data. The model, which can be easily integrated into existing renderers, can function either as a static local tone mapping operator for single high dynamic range image, or as a temporal adaptation model taking into account time elapsed and intensity of preadaptation for a dynamic sequence. We finally validate our technique with a high dynamic range display and a psychophysical study.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132878304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sketchy drawings
M. Nienhaus, J. Döllner
DOI: 10.1145/1029949.1029963 (2004)

In non-photorealistic rendering, sketchiness is essential for communicating visual ideas and can be used to illustrate drafts and concepts in, for instance, architecture and product design.

In this paper, we present a hardware-accelerated real-time rendering algorithm for drawings that sketches the visually important edges as well as the inner color patches of arbitrary 3D objects, even beyond their geometrical boundary. The algorithm preserves edges and color patches as intermediate rendering results using textures. To achieve sketchiness, it applies image-space uncertainty values to perturb texture coordinates when accessing the intermediate rendering results. The algorithm adjusts depth information derived from the 3D objects to ensure visibility when composing sketchy drawings with arbitrary 3D scene contents. Rendering correct depth values while sketching edges and colors beyond the boundary of 3D objects is achieved by depth sprite rendering. Moreover, we maintain frame-to-frame coherence because consecutive uncertainty values are determined by a Perlin noise function and are therefore correlated in image space.

Finally, we introduce a solution for controlling and predetermining sketchiness by preserving geometrical properties of the 3D objects in order to calculate the associated uncertainty values. This method significantly reduces the inherent shower-door effect.
{"title":"Sketchy drawings","authors":"M. Nienhaus, J. Döllner","doi":"10.1145/1029949.1029963","DOIUrl":"https://doi.org/10.1145/1029949.1029963","url":null,"abstract":"In non-photorealistic rendering sketchiness is essential to communicate visual ideas and can be used to illustrate drafts and concepts in, for instance, architecture and product design.\u0000 In this paper, we present a hardware-accelerated real-time rendering algorithm for drawings that sketches visually important edges as well as inner color patches of arbitrary 3D objects even beyond the geometrical boundary. The algorithm preserves edges and color patches as intermediate rendering results using textures. To achieve sketchiness it applies uncertainty values in image-space to perturb texture coordinates when accessing intermediate rendering results. The algorithm adjusts depth information derived from 3D objects to ensure visibility when composing sketchy drawings with arbitrary 3D scene contents. Rendering correct depth values while sketching edges and colors beyond the boundary of 3D objects is achieved by depth sprite rendering. Moreover, we maintain frame-to-frame coherence because consecutive uncertainty values have been determined by a Perlin noise function, so that they are correlated in image-space.\u0000 Finally, we introduce a solution to control and predetermine sketchiness by preserving geometrical properties of 3D objects in order to calculate associated uncertainty values. This method significantly reduces the inherent shower-door effect.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125728954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Forward area light map projection
Elvis Ko-Yung Jeng, Zhigang Xiang
DOI: 10.1145/602330.602346 (2003)

We present a new method for soft shadow visualization. This two-stage approach generates high-quality soft shadow images by projecting sampled surface points, which are kept in a "layered area light map", onto the viewing screen. The layered area light map is created in the preprocessing stage and is multi-layered in the sense that each map cell keeps the visibility ratio of the area light source with respect to multiple surface points at varying depths. In the forward projection stage, we project the sampled surface points of the layered area light map onto the screen buffer to attenuate a shadowless reference image, quickly producing the final image. Dynamic splatting and object-surface-based convolution are used in the process to fill "holes" and to smooth out minor artifacts caused by insufficient sampling. When a series of images of a given scene is to be produced, this forward projection rendering technique is much faster than ray tracing and still yields soft shadows of comparable quality.
{"title":"Forward area light map projection","authors":"Elvis Ko-Yung Jeng, Zhigang Xiang","doi":"10.1145/602330.602346","DOIUrl":"https://doi.org/10.1145/602330.602346","url":null,"abstract":"We present a new method for soft shadow visualization. This two-stage approach generates high-quality soft shadow images by projecting sampled surface points, which are kept in a \"layered area light map\", onto the viewing screen. The layered area light map is created in the preprocessing stage, and is multi-layered in the sense that each map cell keeps the visibility ratio of the area light source with respect to multiple surface points at varying depth. In the forward projection stage, we project the sampled surface points in the layered area light map onto the screen buffer to attenuate a shadowless reference image in order to quickly generate the final image. Dynamic splatting and object surface-based convolution are used in the process to fill \"holes\" and to smooth out minor artifacts that are caused by insufficient sampling. When a series of images for a given scene are to be produced this forward projection rendering technique is much faster than ray tracing and still results in soft shadows with comparable quality.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125875084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving readability of contextualized text explanations
W. Chigona, T. Strothotte
DOI: 10.1145/602330.602357 (2003)

Dual-Use of Image Space (duis) is an interactive technique for presenting text corresponding to images within the image space itself. From a technical point of view, the pixels of the image space serve both as readable text and as shading. This approach raises a number of interesting new readability problems. First, in order to simulate shading, the weight and width of the character glyphs are manipulated; we have noted that readers find text with weight and width variations not only difficult but also irritating to read. Second, the silhouettes of the objects, by their irregular nature, have shapes that are not ideal for text layout.

In this paper we present techniques for addressing the duis readability problems, divided into three categories. First, the shading and reading functionalities have been separated by creating two text presentation modes, a shading mode and a reading mode; in reading mode it is no longer necessary to vary the weight and width of the character glyphs. Second, we have introduced the concept of multiple-column presentation to address the problem of interruptions in reading. Finally, distortions of the objects are also used to improve readability.
Realistic autonomous fish for virtual reality
A. Lobb, S. Bangay
DOI: 10.1145/602330.602361 (2003)

We create realistic autonomous fish for virtual reality systems. The fish are realistic in appearance, movement and behaviour, with non-scripted swimming behaviour under real-time rendering.

The form of each fish is procedurally created. The size and shape of the form are controlled by a number of variables stored in a simple ASCII file, which allows different fish to be created efficiently at run time.

The behaviour is obtained by implementing a flocking algorithm.
{"title":"Realistic autonomous fish for virtual reality","authors":"A. Lobb, S. Bangay","doi":"10.1145/602330.602361","DOIUrl":"https://doi.org/10.1145/602330.602361","url":null,"abstract":"We create realistic autonomous fish for Virtual Reality systems. The fish are realistic in appearance, movement and behaviour: the swimming behaviour being non-scripted, within real time rendering.The form of the fish is procedurally created. The size and shape of the form are controlled by a number of variables which are stored in a simple ASCII file. This allows efficient creation of different fish at run time.The behaviour is obtained by implementing a flocking algorithm.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128921181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A visualization design repository for mobile devices
V. Paelke, C. Reimann, W. Rosenbach
DOI: 10.1145/602330.602341 (2003)

Mobile devices with multimedia and graphics capabilities have great potential in a wide variety of applications. In addition to location-based services, mobile commerce and multimedia entertainment, which are often viewed as the promising applications of third-generation mobile networks, mobile devices also offer a potential means of bridging the digital divide in developing countries. They could provide essential IT services such as internet access, communication, information, education and banking in areas without fully developed infrastructure such as an electric grid or wire-based network connections. Highly usable interfaces will be a critical factor in the development of successful mobile devices and applications. This is especially true if such IT services are to become accessible to illiterate or semi-literate users and to users without any previous computer experience, where the interface will have to rely largely on graphics and speech as interaction mechanisms. However, the design of multimedia-based interfaces for mobile devices is currently complicated by a lack of standardized visualization techniques and interaction mechanisms, and by the absence of related component libraries and style guides. As a first step towards a standardized set of device-independent presentation and interaction techniques, we are working on a repository of visualization design solutions for mobile UIs, which will later be extended to include general interaction techniques.
{"title":"A visualization design repository for mobile devices","authors":"V. Paelke, C. Reimann, W. Rosenbach","doi":"10.1145/602330.602341","DOIUrl":"https://doi.org/10.1145/602330.602341","url":null,"abstract":"Mobile devices with multimedia and graphics capabilities have great potential in a wide variety of applications. In addition to location-based services, mobile-commerce and multimedia entertainment that are often viewed as promising applications of third generation mobile networks mobile devices also offer a potential solution to bridging the digital divide in developing countries. Mobile devices could be used to provide essential IT services like internet access, communication, information, education and banking in areas where no fully developed infrastructures like the electric grid and wire-based net connections are available. Highly usable interfaces will be a critical factor in the development of successful mobile devices and applications. This is especially true if such IT services should become accessible to illiterate or semi-literate users and users without any previous computer experience where the interface will have to rely largely on graphics and speech as interaction mechanisms. However, the design of multimedia-based interfaces for mobile devices is currently complicated by a lack of standardized visualization techniques and interaction mechanisms and the absence of related component libraries and style guides. As a first step towards the development of a standardized set of device-independent presentation and interaction techniques we are currently working on a repository of visualization design solutions for mobile UIs which will later be extended to include general interaction techniques.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123061530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Procedural multiresolution for plant and tree rendering
J. Lluch, E. Camahort, R. Vivó
DOI: 10.1145/602330.602336 (2003)

Modeling and rendering of plants and trees requires generating and processing large numbers of polygons. Geometry simplification methods may be used to reduce the polygon count and obtain a multiresolution representation. However, those methods fail to preserve the visual structure of a tree. We propose a different approach: procedural multiresolution. We build procedural models that reflect a tree's visual structure at different resolution levels. The models are based on parametric L-systems. Our method takes a parametric chain representing a tree and generates a new chain with embedded multiresolution information. The algorithm is based on a metric that quantifies the relevance of the branches of a tree. The representation supports efficient geometry extraction and produces good visual results.
{"title":"Procedural multiresolution for plant and tree rendering","authors":"J. Lluch, E. Camahort, R. Vivó","doi":"10.1145/602330.602336","DOIUrl":"https://doi.org/10.1145/602330.602336","url":null,"abstract":"Modeling and rendering of plants and trees requires generating and processing large numbers of polygons. Geometry simplification methods may be used to reduce the polygon count and obtain a multiresolution representation. However, those methods fail to preserve the visual structure of a tree. We propose a different approach: procedural multiresolution. We build procedural models that reflect a tree's visual structure at different resolution levels. The models are based on parametric L-systems. Our method takes a parametric chain representing a tree and generates a new chain with embedded multiresolution information. The algorithm is based on a metric that quantifies the relevance of the branches of a tree. The representation supports efficient geometry extraction and produces good visual results.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"30 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113933705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
South African Sign Language Machine Translation System
L. V. Zijl, Dean Barker
DOI: 10.1145/602330.602339 (2003)

The South African Sign Language Machine Translation System (SASL-MT System) takes English text as its input and outputs an avatar signing the equivalent SASL. This paper describes our experiences to date with the implementation of a signing avatar, in the light of the specific requirements of Sign Language.
{"title":"South African Sign Language Machine Translation System","authors":"L. V. Zijl, Dean Barker","doi":"10.1145/602330.602339","DOIUrl":"https://doi.org/10.1145/602330.602339","url":null,"abstract":"The South African Sign Language Machine Translation System (SASL-MT System) takes as its input English text, and outputs an avatar signing the equivalent SASL. This paper describes our experiences to date with the implementation of a signing avatar, in the light of the specific requirements of Sign Language.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124594800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rendering optimisations for stylised sketching
H. Winnemöller, S. Bangay
DOI: 10.1145/602330.602353 (2003)

We present work that pertains specifically to the rendering stage of stylised, non-photorealistic sketching. While a substantial body of work has been published on geometric optimisations, surface topologies, space algorithms and natural-media simulation, rendering-specific issues are rarely discussed in depth, even though they are often acknowledged. We investigate the most common stylised sketching approaches and identify possible rendering optimisations. In particular, we define uncertainty-functions, which describe a human-error component; we discuss how these pertain to geometric perturbation and textured silhouette sketching, and we explain how they can be cached to improve performance. Temporal coherence, which poses a problem for textured silhouette sketching, is addressed by means of an easily computed visibility-function. Lastly, we present an effective yet surprisingly simple solution to seamless hatching, which commonly incurs a large computational overhead, by using 3-D textures in a novel fashion. All our optimisations are cost-effective, easy to implement, and work in conjunction with most existing algorithms.
{"title":"Rendering optimisations for stylised sketching","authors":"H. Winnemöller, S. Bangay","doi":"10.1145/602330.602353","DOIUrl":"https://doi.org/10.1145/602330.602353","url":null,"abstract":"We present work that specifically pertains to the rendering stage of stylised, non-photorealistic sketching. While a substantial body of work has been published on geometric optimisations, surface topologies, space-algorithms and natural media simulation, rendering-specific issues are rarely discussed in-depth even though they are often acknowledged. We investigate the most common stylised sketching approaches and identify possible rendering optimisations. In particular, we define uncertainty-functions, which are used to describe a human-error component, discuss how these pertain to geometric perturbation and textured silhouette sketching and explain how they can be cached to improve performance. Temporal coherence, which poses a problem for textured silhouette sketching, is addressed by means of an easily computed visibility-function. Lastly, we produce an effective yet surprisingly simple solution to seamless hatching, which commonly presents a large computational overhead, by using 3-D textures in a novel fashion. All our optimisations are cost-effective, easy to implement and work in conjunction with most existing algorithms.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128709949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}