Design and implementation of the Maya Renderer
Kelvin Sung, James Craighead, Changyaw Wang, Sanjay Bakshi, A. Pearce, Andrew Woo
Pub Date: 1998-10-26. DOI: 10.1109/PCCGA.1998.732097

Maya is the new 3D software package recently released by Alias|Wavefront for creating state-of-the-art character animation and visual effects. Built on a next-generation architecture, Maya delivers high-speed interaction and high productivity for its users. In the fall of 1995, the Rendering Team at Alias|Wavefront started from scratch to design and implement a renderer for the Maya project. This was a very challenging task, requiring the efficient generation of high-quality images for a next-generation 3D application that was still under development. In addition, we were expected to match or exceed the capabilities of our existing popular rendering products, as well as those of our competitors. In January 1998, the all-new renderer was delivered with Maya 1.0. It includes a comprehensive user interface that is well integrated with the rest of the system, and a batch renderer capable of efficiently generating a full spectrum of high-quality visual effects. High-end computer graphics (CG) productions currently in progress are using the Maya Renderer. We concentrate on our batch renderer implementation effort: we describe the philosophy, design decisions, and tasks we set out to achieve in 1995, and then evaluate the delivered system based on images generated with the renderer.
Tight bounding volumes for subdivision surfaces
L. Kobbelt
Pub Date: 1998-10-26. DOI: 10.1109/PCCGA.1998.731994

We first demonstrate how to compute exact limit points and tangents for surfaces generated by an arbitrary stationary subdivision scheme. We then describe how to construct simple bounding volumes for the patches of a subdivision surface, and present a simple numerical technique to compute guaranteed bounds for the ranges of the basis functions associated with the subdivision scheme. Merging the local bounding volumes allows us to generate envelope meshes that tightly enclose the limit surface and have the same structure as the initial control mesh. The prominent applications for these envelope meshes are efficient ray tracing of subdivision surfaces and efficient collision detection.
Shape space from deformation
Ho-Lun Cheng, H. Edelsbrunner, Ping Fu
Pub Date: 1998-10-26. DOI: 10.1109/PCCGA.1998.732056

The construction of shape spaces is studied from a mathematical and a computational viewpoint. A program is outlined that reduces the problem to four tasks: the representation of geometry, the canonical deformation of geometry, the measurement of distance in shape space, and the selection of base shapes. The technical part of the paper focuses on the second task: the specification of a deformation mixing two or more shapes in continuously changing proportions.
Efficient image-based rendering of volume data
J. Choi, Y. Shin
Pub Date: 1998-10-26. DOI: 10.1109/PCCGA.1998.732000

The paper presents an efficient image-based rendering algorithm for volume data. By using an intermediate image space instead of image space, mapping becomes more efficient and the holes arising from point-to-point mapping can be removed. Mapping into intermediate image space is easily performed by looking up a table indexed by the depth value of a source pixel. We also suggest a way of minimizing the space required for pre-acquired images. Experimental results show that the algorithm can generate 25-40 images per second without noticeable image degradation when producing 256² images from a 256³ voxel data set.
Bounded clustering: finding good bounds on clustered light transport
M. Stamminger, P. Slusallek, H. Seidel
Pub Date: 1998-10-26. DOI: 10.1109/PCCGA.1998.732035

Clustering is a very efficient technique for applying finite element methods to the computation of radiosity solutions of complex scenes. Both computation time and memory consumption can be reduced dramatically by grouping the primitives of the input scene into a hierarchy of clusters and allowing light exchange between all levels of this hierarchy. However, clustering can cause problems when gross approximations of a cluster's content result in unsatisfactory solutions or unnecessary computations. In the clustering approach for diffuse global illumination described in the paper, the light exchange between two objects (patches or clusters) is bounded using geometric and shading information provided by every object through a uniform interface. With this uniform view of various kinds of objects, comparable and reliable error bounds on the light exchange can be computed, which then guide a standard hierarchical radiosity algorithm.
General constrained deformations based on generalized metaballs
Xiaogang Jin, Youfu Li, Qunsheng Peng
Pub Date: 1998-10-26. DOI: 10.1109/PCCGA.1998.732061

Space deformation is an important tool in computer animation and shape design. We propose a new local deformation model based on generalized metaballs. The user specifies a series of constraints, which can be made up of points, lines, surfaces, and volumes, together with their effective radii and maximum displacements; the deformation model creates a generalized metaball for each constraint. Each generalized metaball is associated with a potential function centered on the constraint; the potential drops from 1 on the constraint to 0 at the effective radius. The deformation model operates on local space and is independent of the underlying representation of the object being deformed. The deformation can be finely controlled by adjusting the parameters of the generalized metaballs. We also extend the deformation model to include scale and rotation constraints. Experiments show that this deformation model is efficient and intuitive, and that it can handle various constraints that are difficult for traditional deformation models.
Toward urban model acquisition from geo-located images
S. Teller
Pub Date: 1998-10-26. DOI: 10.1109/PCCGA.1998.731997

High-fidelity textured geometric models are a fundamental starting point for computer graphics, simulation, visualization, design, and analysis. Existing tools for acquiring 3D models of large-scale (e.g., urban) geometry from imagery require significant manual input and suffer from other algorithmic scaling limitations. We are pursuing a research and engineering effort to develop a novel sensor and associated geometric algorithms to achieve fully automated reconstruction, from close-range color images, of textured geometric models representing built urban structures. The sensor is a geo-located camera, which annotates each acquired digital image with metadata recording the date and time of image acquisition and estimating the position and orientation of the acquiring camera in a global (geodetic) coordinate system. This metadata enables the formulation of reconstruction algorithms that scale well both with the number and spatial density of input images and with the complexity of the reconstructed model. We describe our initial dataset of about four thousand geo-located images acquired through a prototype sensor, manual surveying, and semi-automated refinement of navigation information. We demonstrate, for a small office park on the MIT campus, the operation of fully automated algorithms for generating hemispherical image mosaics, reconstructing vertical building facades, and estimating high-resolution texture information for each facade. Finally, we describe the status of our efforts and discuss several significant research and engineering challenges facing the project.
Nonlinear view interpolation
H. Bao, Li Chen, Jianguo Ying, Qunsheng Peng
Pub Date: 1998-10-26. DOI: 10.1109/PCCGA.1998.731999

A new nonlinear view interpolation algorithm is presented. Unlike the linear interpolation scheme, our method can exactly simulate the perspective viewing transformation during a walkthrough. To accelerate the view interpolation, the algorithm employs a binary subdivision scheme to optimize the decomposition of the source image, so that the number of resulting blocks is greatly reduced. Holes in the intermediate image are filled in two steps: enlarging the transferred blocks on the sides adjacent to holes, and retrieving the local image within the holes from the destination images by multi-directional interpolation. Experimental results demonstrate that our algorithm is considerably more accurate and efficient than the traditional one.
An efficient shadow algorithm for area light sources using BSP trees
K. Wong, W. W. Tsang
Pub Date: 1998-10-26. DOI: 10.1109/PCCGA.1998.732048

The paper presents a fast algorithm for computing shadow volumes of area light sources and performing shadow classification. The algorithm improves on the method of Chin and Feiner (1992) in that it does not subdivide an area light source before processing the scene polygons. Instead, a light source is subdivided only when needed, and merged back as soon as the need vanishes. Much redundant work is saved by avoiding unnecessary subdivisions. Experiments show that, compared with Chin and Feiner's method, the new method runs faster without loss of accuracy.
A nondeterministic reconstruction approach for isotropic reflectances and transmittances
G. Baranoski, J. Rokne
Pub Date: 1998-10-26. DOI: 10.1109/PCCGA.1998.732083

Physically and biologically based reflectance and transmittance models add realism to image synthesis applications at the expense of a significant increase in rendering time. Current research efforts in this area focus on developing practical solutions for quickly accessing a BDF (which represents a combination of BRDF and BTDF) while preserving its original characteristics. An approach to reconstructing arbitrary isotropic BDFs is presented. The spectral curves obtained using the proposed approach are compared with measured spectral curves, and issues regarding its performance and storage requirements are examined.