Constructive path algebra - a tool for design, parametrisation and visualisation
Pub Date: 2002-08-07 | DOI: 10.1109/EGUK.2002.1011278
H. Bez, T. J. Wetzel
A path algebra and its applications in computer graphics and modeling are introduced. The emphasis is on concepts and applications rather than complete mathematical details. The algebra can be used either as a design tool or as a means of constructing rational parametrisations of curves and surfaces compatible with most current modeling and visualisation systems. A number of examples are given.
{"title":"Constructive path algebra - a tool for design, parametrisation and visualisation","authors":"H. Bez, T. J. Wetzel","doi":"10.1109/EGUK.2002.1011278","DOIUrl":"https://doi.org/10.1109/EGUK.2002.1011278","url":null,"abstract":"A path algebra and its applications in computer graphics and modeling is introduced. The emphasis is on concepts and applications rather than complete mathematical details. The algebra can be used either as a design tool or as a means of constructing rational parametrisations of curves and surfaces compatible with most current modeling and visualisation systems. A number of examples are given.","PeriodicalId":226195,"journal":{"name":"Proceedings 20th Eurographics UK Conference","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121043408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Visions - the physics and metaphysics of light and space
Pub Date: 2002-06-11 | DOI: 10.1109/EGUK.2002.1011266
Mike King
The computer has made a whole range of new visual art forms possible. The paper describes the thinking behind the Virtual Visions project, which pursues a relatively untrodden sub-form within the digital art genres: 3D still imaging for digital print. It is also an argument for the inclusion of science as a theoretical framework from which to pursue art, as an alternative or addition to contemporary cultural and media theory. The paper introduces some of the philosophical and art-theoretical implications of an art form that is predicated on computer graphics technologies, showing how an artist who is also a graphics programmer can bring together such different intellectual disciplines. The paper ends with a request for the computer graphics community to collaborate on this project, in which there are many interfacing and aesthetic innovations.
{"title":"Virtual Visions-the physics and metaphysics of light and space","authors":"Mike King","doi":"10.1109/EGUK.2002.1011266","DOIUrl":"https://doi.org/10.1109/EGUK.2002.1011266","url":null,"abstract":"The computer has made a whole range of new visual art forms possible. The paper describes the thinking behind the Virtual Visions project, which pursues a relatively untrodden sub-form within the digital art genres-3D still imaging for digital print. It is also an argument for the inclusion of science as a theoretical framework from which to pursue art, as an alternative or addition to contemporary cultural and media theory. The paper introduces some of the philosophical and art-theoretical implications of an artform that is predicated on computer graphics technologies, showing how an artist who is also a graphics programmer can bring together such different intellectual disciplines. The paper ends with a request for the computer graphics community to collaborate on this project in which there are many interfacing and aesthetic innovations.","PeriodicalId":226195,"journal":{"name":"Proceedings 20th Eurographics UK Conference","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121926933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerating compression times in block based fractal image coding procedures
Pub Date: 2002-06-11 | DOI: 10.1109/EGUK.2002.1011276
J. Valantinas, N. Morkevicius, T. Zumbakis
Fractal image compression has proved to be a powerful and competitive technology that can be successfully applied to still image coding, especially at high compression rates. Unfortunately, the large amount of computation needed at the compression stage remains a serious obstacle to exploiting this promising approach. Various attempts have been made to improve the situation. In particular, theoretical investigations and experiments show that the problem-oriented use of invariant image parameters (image smoothness estimates) can serve this purpose.
{"title":"Accelerating compression times in block based fractal image coding procedures","authors":"J. Valantinas, N. Morkevicius, T. Zumbakis","doi":"10.1109/EGUK.2002.1011276","DOIUrl":"https://doi.org/10.1109/EGUK.2002.1011276","url":null,"abstract":"Fractal image compression turns out to be a powerful and competitive technology, which can be successively applied to a still image coding, especially at high compression rates. Unfortunately, a large amount of computation needed for the compression stage remains to be a serious obstacle in exploring this new perspective approach. Diversified attempts are made to improve the situation. In particular, theoretical investigations and experiments show that the problem-oriented use of invariant image parameters (image smoothness estimates) can serve the purpose.","PeriodicalId":226195,"journal":{"name":"Proceedings 20th Eurographics UK Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129815130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital artworks: bridging the technology gap
Pub Date: 2002-06-11 | DOI: 10.1109/EGUK.2002.1011267
C. Machin
In a drive to produce installation artworks that are more appealing to the viewer, particularly those intended for public viewing, artists are increasingly turning to "the digital world". Whilst the technology behind such artworks is well established, being commonly found in controllers for industrial machines, the software engineer who provides the firmware strives to make the technology more accessible to the artist. What is required, during the design stage, is an interface that allows the artist to visualise the artwork and its operation. This paper describes the technologies and the way in which they are made accessible to the artist, demonstrating a software-based simulator built for a particular artwork. It then poses questions for the future, through which further demands for collaboration can be met without compromising artistic creativity.
{"title":"Digital artworks: bridging the technology gap","authors":"C. Machin","doi":"10.1109/EGUK.2002.1011267","DOIUrl":"https://doi.org/10.1109/EGUK.2002.1011267","url":null,"abstract":"In a drive to produce installation artworks, particularly for public viewing, that are more appealing to the viewer, artists are increasingly turning to \"the digital world\". Whilst the technology behind such artworks is well established, being commonly found in controllers for industrial machines, the software engineer who provides the firmware strives to make the technology more accessible to the artist. What is required, during the design stage, is an interface that will allow the artist to visualise the artwork and its operation. This paper describes the technologies and the way in which they are made accessible to the artist, demonstrating a software-based simulator built for a particular artwork. It then poses questions for the future, through which further demands for collaboration can be met without compromising artistic creativity.","PeriodicalId":226195,"journal":{"name":"Proceedings 20th Eurographics UK Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126652481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quadtree based image indexing in wavelets compressed domain
Pub Date: 2002-06-11 | DOI: 10.1109/EGUK.2002.1011277
G. Voulgaris, J. Jiang
A considerable amount of research has been done in the past on the use of quadtrees in the pixel domain for image indexing, as well as on their use with the wavelet decomposition for image compression. In this paper we attempt to fuse those two approaches in order to produce a system that uses quadtrees for image indexing in the wavelet compressed domain. The proposed system uses the quadtree representation of the DWT significance map as the indexing key. The measure of similarity between two images is given by the comparison (XOR-ing) of their respective quadtree structures. Extensive experiments have been carried out using a database of over 1000 images. The accuracy of the system is demonstrated by a representative set of results as well as by a comparison with a state-of-the-art benchmark system.
{"title":"Quadtree based image indexing in wavelets compressed domain","authors":"G. Voulgaris, J. Jiang","doi":"10.1109/EGUK.2002.1011277","DOIUrl":"https://doi.org/10.1109/EGUK.2002.1011277","url":null,"abstract":"A considerable amount of research has been done in the past on the use of quadtrees in pixel domain for image indexing as well as their use with the wavelet decomposition for image compression. In this paper we attempt to fuse those two approaches in order to produce a system which makes use of quadtrees for image indexing in wavelets compressed domain. The proposed system uses the quadtree representation of the DWT significance map as the indexing key. The measure of similarity between two images is given by the comparison (XOR-ing) of their respective quadtree structures. Extensive experiments have been made using a database of over 1000 images. The accuracy of the system is demonstrated by a representative set of results as well as a comparison with a state of the art benchmark system.","PeriodicalId":226195,"journal":{"name":"Proceedings 20th Eurographics UK Conference","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128328888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DataViewer: A scene graph based visualization tool
Pub Date: 2002-06-11 | DOI: 10.1109/EGUK.2002.1011285
Randy Paffenroth, Dana Vrajitoru
This article outlines the capabilities of a scientific visualization toolkit called DataViewer and compares it to analogous software. DataViewer was originally designed for constructing the visualization part of certain computational steering packages, and consequently it is particularly straightforward to couple DataViewer closely with numerical calculations. Rendering is performed through a high-level scene graph, which facilitates the easy construction of complex visualizations. DataViewer differs from other such libraries by allowing complex geometrical objects, which efficiently encapsulate large amounts of data, to be used as nodes in the scene graph. Graphics hardware access is through the OpenGL API.
{"title":"DataViewer: A scene graph based visualization tool","authors":"Randy Paffenroth, Dana Vrajitoru","doi":"10.1109/EGUK.2002.1011285","DOIUrl":"https://doi.org/10.1109/EGUK.2002.1011285","url":null,"abstract":"This article outlines the capabilities of a scientific visualization toolkit called Data Viewer and compares it to analogous software. DataViewer was originally designed for the construction of the visualization part of certain computational steering packages, and consequently it is particularly straightforward to closely couple DataViewer with numerical calculations. Rendering is performed through a highlevel scene graph which facilitates the easy construction of complex visualizations. Data Viewer differs from other such libraries by allowing complex geometrical objects, which efficiently encapsulate large amounts of data, to be used as. nodes in the scene graph. Graphics hardware access is through the OpenGL API.","PeriodicalId":226195,"journal":{"name":"Proceedings 20th Eurographics UK Conference","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131690342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer modeling of lens arrays for integral image rendering
Pub Date: 2002-06-11 | DOI: 10.1109/EGUK.2002.1011283
G. Milnthorpe, M. McCormick, N. Davies
A software model of an optical system incorporating microlens arrays to capture and replay an object or scene in real 3D is presented. The imaging methodology is usually referred to as "Integral Photography" (IP) or "Integral Imaging" (II). A brief description of II is given, and the single-stage optical capture system which the software model attempts to emulate is discussed. The software design aims to reproduce the real optical system to produce rendered static and dynamic images in integral format. The effects and limitations caused by the relatively low display resolutions are addressed and their effect on image quality is considered. The Phong illumination model is employed along with the flat, Gouraud and Phong shading techniques, and their respective applications to II are explained.
{"title":"Computer modeling of lens arrays for integral image rendering","authors":"G. Milnthorpe, M. McCormick, N. Davies","doi":"10.1109/EGUK.2002.1011283","DOIUrl":"https://doi.org/10.1109/EGUK.2002.1011283","url":null,"abstract":"A software model for an optical system incorporating microlens arrays to capture and replay an object or scene in real 3D is presented. The imaging methodology is usually referred to as \"Integral Photography\" (IP) or \"Integral Imaging\" (II). A brief description of II is given and the single-stage optical capture system, which the software model attempts to emulate, is discussed. The software design aims to reproduce the real optics system to produce rendered static and dynamic images in integral format. The effects and limitations caused by the relatively low display resolutions are addressed and their effect on image quality considered. The Phong illumination model along with the Flat, Gouraud and Phong shading techniques are employed and their respective applications to II are explained.","PeriodicalId":226195,"journal":{"name":"Proceedings 20th Eurographics UK Conference","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114452925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Baseline JPEG-like DWT CODEC for disparity compensated residual coding of stereo images
Pub Date: 2002-06-11 | DOI: 10.1109/EGUK.2002.1011274
M. Nayan, E. Edirisinghe, H. Bez
We propose a novel stereo image coding technique, which uses an architecture similar to that of a discrete cosine transform (DCT) based baseline JPEG CODEC (Pennebaker and Mitchell, 1993), but effectively replaces the DCT technology with the more recently popularized discrete wavelet transform (DWT) technology. We show that, as a result of this hybrid design, which combines the advantages of two popular technologies, the proposed CODEC has improved rate-distortion and subjective image quality performance compared to DCT based stereo image compression techniques (Perkins, 1992). In particular, at very low bit rates (0.15 bpp) we report peak signal-to-noise ratio (PSNR) gains of up to 3.66 dB, whereas at higher bit rates we report gains of the order of 1 dB.
{"title":"Baseline JPEG-like DWT CODEC for disparity compensated residual coding of stereo images","authors":"M. Nayan, E. Edirisinghe, H. Bez","doi":"10.1109/EGUK.2002.1011274","DOIUrl":"https://doi.org/10.1109/EGUK.2002.1011274","url":null,"abstract":"We propose a novel stereo image coding technique, which uses an architecture similar to that of a discrete cosine transform (DCT) based baseline JPEG-CODEC (Pennebaker and Mitchell, 1993), but effectively replaces the DCT technology by the more recently popularized discrete wavelet transform (DWT) technology. We show that as a result of this hybrid design, which combines the advantage of two popular technologies, the proposed CODEC has improved rate distortion and subjective image quality performance as compared to DCT based stereo image compression techniques (Perkins, 1992). In particular, at very low bit rates (0.15 bpp), we report peak-signal-to-noise-ratio (PSNR) gains of up to 3.66 dB, whereas at higher bit rates we report gains in the order of 1 dB.","PeriodicalId":226195,"journal":{"name":"Proceedings 20th Eurographics UK Conference","volume":"389 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127589520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligent self-learning characters for computer games
Pub Date: 2002-06-11 | DOI: 10.1109/EGUK.2002.1011272
Wen Tang, T. Wan
In this paper, a novel AI-based animation approach is presented to simulate intelligent self-learning characters for computer games and other interactive virtual reality applications. The complex learning behaviours of the virtual characters are modelled as an evolutionary process, so adaptive AI algorithms such as genetic algorithms are used to simulate the learning process. The simulation method enables characters in a computer game environment to learn specific assigned tasks: a character's skill at completing the tasks develops and evolves through its experience of performing them. The paper also describes techniques for performance evaluation and optimisation for virtual characters performing jumping tasks.
{"title":"Intelligent self-learning characters for computer games","authors":"Wen Tang, T. Wan","doi":"10.1109/EGUK.2002.1011272","DOIUrl":"https://doi.org/10.1109/EGUK.2002.1011272","url":null,"abstract":"In this paper, a novel AI-based animation approach is presented to simulate intelligent self-learning characters for computer games or other interactive virtual reality applications. The complex learning behaviours of the virtual characters are modelled as an evolutionary process so that adaptive AI algorithms such as genetic algorithms have been used to simulate the learning process. The simulation method enables the characters in a computer game environment to have abilities to learn for specific assigned tasks. Its skill for completing the tasks can be developed and evolved through its experiences of performing the tasks. The paper also describes techniques for performance evaluation and optimisation for virtual characters to perform jumping tasks.","PeriodicalId":226195,"journal":{"name":"Proceedings 20th Eurographics UK Conference","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126622303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haniwa: a case study of digital visualization of virtual heritage properties
Pub Date: 2002-06-11 | DOI: 10.1109/EGUK.2002.1011268
M. Shanat, P.-A. Faylle, B. Schmitt, T. Vilbrandt
In this research, we want to improve methods of constructing a relatively accurate digital and multidimensional model of Japanese haniwa from 360° scan data of ancient artifacts obtained via non-contact 3D laser scanning. It is our goal that the research will help archeologists and geologists to generate artifacts in proper 3D representation and, concurrently, to provide an opportunity for attractive, accurate, informative and interactive 3D visualization, animation and VRML on CD-ROM or over the WWW. In our methodology, we use a discrete cloud of points scattered on a surface to construct the function representation (F-Rep) of a 3D volume. In our case the points have been obtained with a laser scanner. The algorithm used to reconstruct the F-Rep is based on the Green function and on an algorithm for reconstructing a volume with radial basis functions (Savchenko et al., 1995; Carr et al., 2001). We give a short and practical description of the algorithm described in Savchenko et al. (1995); we then present our implementation of the algorithm as a library function for the HyperFun modeling language.
{"title":"Haniwa: a case study of digital visualization of virtual heritage properties","authors":"M. Shanat, P.-A. Faylle, B. Schmitt, T. Vilbrandt","doi":"10.1109/EGUK.2002.1011268","DOIUrl":"https://doi.org/10.1109/EGUK.2002.1011268","url":null,"abstract":"In this research, we want to improve methods of constructing a relatively accurate digital and multidimensional model of Japanese haniwa from 360/spl deg/ scan data of ancient artifacts via non-contact 3D laser scanning. It is our goal that the research will help the archeologist and geologist to generate artifacts in proper 3D representation and concurrently to provide an opportunity for attractive, accurate, informative and interactive 3D visualization, animation and VRML on CD-ROM or over the WWW. In our methodology, we use a discrete cloud of points scattered on a surface to construct the function representation (F-Rep) of a 3D-Volume. In our case the points have been obtained with a laser scanner. The algorithm used to reconstruct the F-Rep is based on the Green function and an algorithm for reconstructing a volume with radial basis functions (Savchenko et al., 1995; Carr et al., 2001). We give a short and practical description of the algorithm described in (Savchenko et al., 1995); then we present our implementation of the algorithm as a library function for the HyperFun modeling language.","PeriodicalId":226195,"journal":{"name":"Proceedings 20th Eurographics UK Conference","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116569949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}