We present new techniques for computing illumination from non-diffuse luminaires and scattering from non-diffuse surfaces. The methods are based on new closed-form expressions derived using a generalization of irradiance known as irradiance tensors. The elements of these tensors are angular moments, weighted integrals of the radiation field that are useful in simulating a variety of non-diffuse phenomena. Applications include the computation of irradiance due to directionally varying area light sources, reflections from glossy surfaces, and transmission through glossy surfaces. The principles apply to any emission, reflection, or transmission distribution expressed as a polynomial over the unit sphere. We derive expressions for a simple but versatile subclass of these functions, called axial moments, and present complete algorithms for their exact evaluation in polyhedral environments. The algorithms are demonstrated by simulating Phong-like emission and scattering effects.
{"title":"Applications of irradiance tensors to the simulation of non-Lambertian phenomena","authors":"J. Arvo","doi":"10.1145/218380.218467","DOIUrl":"https://doi.org/10.1145/218380.218467","url":null,"abstract":"We present new techniques for computing illumination from non-diffuse luminaires and scattering from non-diffuse surfaces. The methods are based on new closed-form expressions derived using a generalization of irradiance known as irradiance tensors. The elements of these tensors are angular moments, weighted integrals of the radiation field that are useful in simulating a variety of non-diffuse phenomena. Applications include the computation of irradiance due to directionally-varying area light sources, reflections from glossy surfaces, and transmission through glossy surfaces. The principles apply to any emission, reflection, or transmission distribution expressed as a polynomial over the unit sphere. We derive expressions for a simple but versatile subclass of these functions, called axial moments, and present complete algorithms their exact evaluation in polyhedral environments. The algorithms are demonstrated by simulating Phong-like emission and scattering effects. CR","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128065982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a radically new approach to the century-old problem of computing the implicit equation of a parametric surface. For surfaces without base points, the new method expresses the implicit equation as a determinant one fourth the size of the conventional expression based on Dixon's resultant. If base points do exist, previous implicitization methods either fail or become much more complicated, whereas the new method actually simplifies.
{"title":"Implicitization using moving curves and surfaces","authors":"T. Sederberg, Falai Chen","doi":"10.1145/218380.218460","DOIUrl":"https://doi.org/10.1145/218380.218460","url":null,"abstract":"This paper presents a radically new approach to the century old problem of computing the implicit equation of a parametric surface. For surfaces without base points, the new method expresses the implicit equation in a determinant which is one fourth the size of the conventional expression based on Dixon’s resultant. If base points do exist, previous implicitization methods either fail or become much more complicated, while the new method actually simplifies.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127795284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artistic screening is a new image reproduction technique incorporating freely created artistic screen elements for generating halftones. Fixed predefined dot contours associated with given intensity levels determine the screen dot shape's growing behavior. Screen dot contours associated with each intensity level are obtained by interpolation between the fixed predefined dot contours. A user-defined mapping transforms screen elements from screen element definition space to screen element rendition space. This mapping can be tuned to produce various effects such as dilatations, contractions, and nonlinear deformations of the screen element grid. Discrete screen elements associated with all desired intensity levels are obtained by rasterizing the interpolated screen dot shapes in the screen element rendition space. Since both the image to be reproduced and the screen shapes can be designed independently, artists enjoy great design freedom. The interaction between the image to be reproduced and the screen shapes enables the creation of graphic designs of high artistic quality. Artistic screening is particularly well suited for the reproduction of images on large posters. When looked at from a short distance, the poster's screening layer may deliver its own message. Furthermore, thanks to artistic screening, both full-size and microscopic letters can be incorporated into the image reproduction process. In order to deter counterfeiting, banknotes may incorporate grayscale images whose intensity levels are produced by microletters of varying size and shape.
{"title":"Artistic screening","authors":"V. Ostromoukhov, R. Hersch","doi":"10.1145/218380.218445","DOIUrl":"https://doi.org/10.1145/218380.218445","url":null,"abstract":"Artistic screening is a new image reproduction technique incorporating freely created artistic screen elements for generating halftones. Fixed predefined dot contours associated with given intensity levels determine the screen dot shape’s growing behavior. Screen dot contours associated with each intensity level are obtained by interpolation between the fixed predefined dot contours. A user-defined mapping transforms screen elements from screen element definition space to screen element rendition space. This mapping can be tuned to produce various effects such as dilatations, contractions and nonlinear deformations of the screen element grid. Discrete screen elements associated with all desired intensity levels are obtained by rasterizing the interpolated screen dot shapes in the screen element rendition space. Since both the image to be reproduced and the screen shapes can be designed independently, the design freedom offered to artists is very great. The interaction between the image to be reproduced and the screen shapes enables the creation of graphic designs of high artistic quality. Artistic screening is particularly well suited for the reproduction of images on large posters. When looked at from a short distance, the poster’s screening layer may deliver its own message. Furthermore, thanks to artistic screening, both full size and microscopic letters can be incorporated into the image reproduction process. In order to avoid counterfeiting, banknotes may comprise grayscale images with intensity levels produced by microletters of varying size and shape.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125192300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents new solutions to the following three problems in image morphing: feature specification, warp generation, and transition control. To reduce the burden of feature specification, we first adopt a computer vision technique called snakes. We next propose the use of multilevel free-form deformations (MFFD) to achieve C²-continuous and one-to-one warps among feature point pairs. The resulting technique, based on B-spline approximation, is simpler and faster than previous warp generation methods. Finally, we simplify the MFFD method to construct C²-continuous surfaces for deriving transition functions to control geometry and color blending.
{"title":"Image metamorphosis using snakes and free-form deformations","authors":"Seungyong Lee, Kyung-Yong Chwa, Sung-yong Shin","doi":"10.1145/218380.218501","DOIUrl":"https://doi.org/10.1145/218380.218501","url":null,"abstract":"This paper presents new solutions to the following three problems in image morphing: feature specification, warp generation, and transition control. To reduce the burden of feature specification, we first adopt a computer vision technique called snakes. We next propose the use of multilevel free-form deformations (MFFD) to achieve -continuous and one-to-one warps among feature point pairs. The resulting technique, based on B-spline approximation, is simpler and faster than previous warp generation methods. Finally, we simplify the MFFD method to construct -continuous surfaces for deriving transition functions to control geometry and color blending.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127126207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In 1992, at the request of a consortium of federal agencies, the National Research Council established a committee to "recommend a national research and development agenda in the area of virtual reality" to set U.S. government R&D funding priorities for virtual reality (VR) for the next decade. The committee spent two years studying the current state of VR, speculating on where likely breakthroughs might happen, and deciding where funding could have the greatest impact. The result is a 500-page report that will have a tremendous effect on what does and does not get funded in virtual reality research by agencies such as ARPA, the Air Force Office of Scientific Research, the Army Research Laboratory, Armstrong Laboratory, the Army Natick RD&E Center, NASA, NSF, NSA, and Sandia National Lab. The committee's report tries to "describe the current state of research and technology that is relevant to the development of synthetic environment systems, provide a summary of the application domains in which such systems are likely to make major contributions, and outline a series of recommendations that we believe are crucial to rational and systematic development of the synthetic environment field." The purpose of this panel is to report the (often surprising) recommendations in the committee's report. Few researchers will have time to read this very influential document, but this forum will disseminate the basic highlights and attempt to explain some of the more contentious points that the committee dealt with. For example, the report recommends "no aggressive federal involvement in computer hardware development in the [virtual reality] area at this time." Judging by last year's SIGGRAPH, virtual reality is one of the hottest areas for the computer graphics community, and funding is clearly needed from the federal government. Industrial sources are not viewed as having sufficiently long-term strategies to advance the field in many necessary areas. Therefore, the funding priorities and strategies discussed in this report may have a direct impact on the future directions of the SIGGRAPH community. The report itself is Virtual Reality: Scientific and Technological Challenges, copyright 1995 National Academy of Sciences; ISBN 0-309-05135-5, Nathaniel I. Durlach and Anne S. Mavor, editors. As the purpose of the panel is to disseminate the report, the various panelists will cover the following areas of the report and its recommendations:
{"title":"A national research agenda for virtual reality (panel): report by the National Research Council Committee on VR R&D","authors":"R. Pausch, W. Aviles, N. Durlach, W. Robinett, M. Zyda","doi":"10.1145/218380.218509","DOIUrl":"https://doi.org/10.1145/218380.218509","url":null,"abstract":"In 1992, at the request of a consortium of federal agencies, the National Research Council established a committee to \"recommend a national research and development agenda in the area of virtual reality\" to set U.S. government R&D funding priorities for virtual reality (VR) for the next decade. The committee spent two years studying the current state of VR, speculating on where likely breakthroughs might happen, and deciding where funding could have the greatest impact. The result is a 500-page report that will have tremendous effect on what does and does not get funded in Virtual Reality research by agencies such as ARPA, the Air Force Office of Scientific Research, the Army Research Laboratory, Armstrong Laboratory, the Army Natrick RD&E Center, NASA, NSF, NSA, and Sandia National Lab. The committee's report tries to \"describe the current state of research and technology that is relevant to the development of synthetic environment systems, provide a summary of the application domains in which such systems are likely to make major contributions, and outline a series of recommendations that we believe are crucial to rational and systematic development of the synthetic environment field.\" The purpose of this panel is to report the (often surprising) recommendations in the committee's report. Few researchers will have time to read this very influential document, but this forum will disseminate the basic highlights, and attempt to explain some of the more fractious points that the committee dealt with. For example, the report recommends \"no aggressive federal involvement in computer hardware development in the [virtual reality] area at this time.\" Based on last year's SIGGRAPH, Virtual Reality is one of the hottest areas for the computer graphics community, and funding is clearly needed from the federal government. Industrial sources are not viewed as having sufficiently long-term strategies to advance the field in many necessary areas. Therefore, the funding priorities and strategies discussed in this report may have a direct impact on the future directions of the SIGGRAPH community. The report itself is Virtual Reality, Scientific and Technological Challenges, copyright 1995 National Academy of Sciences; ISBN 0-309-05135-5, Nathaniel I. Durlach and Anne S. Mavor, editors. The purpose of the panel is to disseminate the report, the various panelists will be covering the following areas of the report and its recommendations:","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125807093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a hybrid model for the animation of soft inelastic substances that undergo topological changes, e.g., separation and fusion, and that conform to the objects they are in contact with. The model uses a particle system coated with a smooth iso-surface that is used for performing collision detection, precise contact modeling, and integration of response forces. The animation technique solves three problems inherent in implicit modeling. First, local volume controllers are defined to ensure constant-volume deformation, even during highly inelastic processes such as splitting or fusion. Second, we avoid unwanted distance blending between disconnected pieces of the same substance. Finally, we simulate both collisions and progressive merging under compression between implicit surfaces that do not blend together. Parameter tuning is facilitated by the layered model, and animation is generated at interactive rates.
{"title":"Animating soft substances with implicit surfaces","authors":"M. Desbrun, Marie-Paule Cani","doi":"10.1145/218380.218456","DOIUrl":"https://doi.org/10.1145/218380.218456","url":null,"abstract":"This paper presents a hybrid model for animation of soft inelastic substance which undergo topological changes, e.g. separation and fusion and which fit with the objects they are in contact with. The model uses a particle system coated with a smooth iso-surface that is used for performing collision detection, precise contact modeling and integration of response forces. The animation technique solves three problems inherent in implicit modeling. Firstly, local volume controllers are defined to insure constant volume deformation, even during highly inelastic processes such as splitting or fusion. Secondly, we avoid unwanted distance blending between disconnected pieces of the same substance. Finally, we simulate both collisions and progressive merging under compression between implicit surfaces that do not blend together. Parameter tuning is facilitated by the layered model and animation is generated at interactive rates.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126767082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Key extraction is the inverse problem of finding the foreground, the background, and the alpha from an image and some hints. Although chromakey solves this for a limited case (a single background color), that is often too restrictive in practical situations. When extraction from an arbitrary background is necessary, it is currently done by a time-consuming manual task. In order to reduce the operator load, attempts have been made to assist operators using either color-space or image-space information. However, existing approaches have their limitations; in particular, they leave too much work to operators. In this paper, we present a key extraction algorithm which, for the first time, addresses the problem quantitatively. We first derive a partial differential equation that relates the gradient of an image to the alpha values. We then describe an efficient algorithm that provides the alpha values as the solution of the equation. Along with our accurate motion estimation technique, it produces correct alpha values almost everywhere, leaving little work to operators. We also show that a careful design of the algorithm and the data representation greatly improves human interaction. At every step of the algorithm, human interaction is possible and intuitive. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation; I.4.6 [Image Processing]: Segmentation, edge and feature detection; I.4.7 [Image Processing]: Feature Measurement; I.5.2 [Pattern Recognition]: Design Methodology, feature evaluation and selection.
{"title":"AutoKey: human assisted key extraction","authors":"T. Mitsunaga, Taku Yokoyama, T. Totsuka","doi":"10.1145/218380.218450","DOIUrl":"https://doi.org/10.1145/218380.218450","url":null,"abstract":"Key extraction is an inverse problem of finding the foreground, the background, and the alpha from an image and some hints. Although the chromakey solves this for a limited case (single background color), this is often too restrictive in practical situations. When the extraction from arbitrary background is necessary, this is currently done by a time consuming manual task. In order to reduce the operator load, attempts have been made to assist operators using either color space or image space information. However, existing approaches have their limitations. Especially, they leave too much work to operators. In this paper, we present a key extraction algorithm which for the first time, addresses the problem quantitatively. We first derive a partial differential equation that relates the gradient of an image to the alpha values. We then describe an efficient algorithm that provides the alpha values as the solution of the equation. Along with our accurate motion estimation technique, it produces correct alpha values almost everywhere, leaving little work to operators. We also show that a careful design of the algorithm and the data representation greatly improves human interaction. At every step of the algorithm, human interaction is possible and it is intuitive. CR Categories: I.3.3 [Computer Graphics]: Picture / Image Generation; I.4.6 [Image Processing]: Segmentation Edge and feature detection; I.4.7 [Image Processing]: Feature Measurement; I.5.2 [Pattern Recognition]: Design Methodology Feature evaluation and selection. Additional","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126468193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes a method for modeling human figure locomotion with emotions. Fourier expansions of experimental data of actual human behaviors serve as a basis from which the method can interpolate or extrapolate human locomotion. This means, for instance, that a transition from a walk to a run is performed smoothly and realistically by the method. Moreover, an individual's character or mood, appearing during human behaviors, is also extracted by the method. For example, the method extracts "briskness" from the experimental data for a "normal" walk and a "brisk" walk. The "brisk" run is then generated by the method, using another Fourier expansion of the measured data of running. The superposition of these human behaviors is shown to be an efficient technique for generating rich variations of human locomotion. In addition, step length, speed, and hip position during locomotion are also modeled, and then interactively controlled to obtain a desired animation.
{"title":"Fourier principles for emotion-based human figure animation","authors":"M. Unuma, K. Anjyo, R. Takeuchi","doi":"10.1145/218380.218419","DOIUrl":"https://doi.org/10.1145/218380.218419","url":null,"abstract":"This paper describes the method for modeling human figure locomotions with emotions. Fourier expansions of experimental data of actual human behaviors serve as a basis from which the method can interpolate or extrapolate the human locomotions. This means, for instance, that transition from a walk to a run is smoothly and realistically performed by the method. Moreover an individual's character or mood, appearing during the human behaviors, is also extracted by the method. For example, the method gets \"briskness\" from the experimental data for a \"normal\" walk and a \"brisk\" walk. Then the \"brisk\" run is generated by the method, using another Fourier expansion of the measured data of running. The superposition of these human behaviors is shown as an efficient technique for generating rich variations of human locomotions. In addition, step-length, speed, and hip position during the locomotions are also modeled, and then interactively controlled to get a desired animation. Abstract","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126448174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present actively procedural multiresolution paint textures. Texture elements may be linearly combined to create complex composite textures that continue to refine themselves when viewed at successively greater magnification. Actively procedural textures constitute a powerful drawing tool that can be used in a multiresolution paint system. They provide a mechanism to generate an infinite amount of detail with a simple and compact representation. We give several examples of procedural textures and show how to create different painting effects with them.
{"title":"Live paint: painting with procedural multiscale textures","authors":"K. Perlin, L. Velho","doi":"10.1145/218380.218437","DOIUrl":"https://doi.org/10.1145/218380.218437","url":null,"abstract":"We present actively procedural multiresolution paint textures. Texture elements may be linearly combined to create complex composite textures that continue to refine themselves when viewed at successively greater magnification. Actively procedural textures constitute a powerful drawing tool that can be used in a multiresolution paint system. They provide a mechanism to generate an infinite amount of detail with a simple and compact representation. We give several examples of procedural textures and show how to create different painting effects with them.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134456698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This panel is focused on taking a look at what key technologies we need to push visual effects farther into the realms of realism. We will start by examining what tools we have used so far, in order to provide a firm reference for understanding where we are going. Tools in the areas of Input/Output (Scanning/Recording/Data Transfer), Image Processing, and Animation/Motion Capture will be covered. These are not only major concerns for the visual effects industry, but cover large areas of interest for mainstream computer graphics. Why is this important? The visual effects industry as a whole has become, in and of itself, a proving ground for cutting-edge digital technologies. Morphing, motion capture, and digital compositing were largely born out of the needs of the visual effects industry, and visual effects has benefited enormously from advances in computer graphics. This is partially due to the visual effects industry's willingness to embrace new technologies, riding the bleeding edge. These technologies have enabled us to do things never before imaginable. While we all think that the quality of visual effects has been stunning in the past, we need to stop and look at how we really achieve these images. The tools obviously work; we have produced breathtaking imagery with them. But we have to ask the questions "Are they good enough? What more do we need?" These questions relate not only to the search for solutions to previously impossible problems, but also to the search to do things we already understand better, faster, and cheaper. The search for new technology includes advances in algorithms, totally new approaches, and even solving more basic and historically ignored problems, such as user interfaces and artist interaction. We will take a look at our present, and a glimpse into our future, to see where our tools are (and more importantly, should be) headed.
{"title":"Visual effects technology—do we have any? (panel session)","authors":"D. Spears, Scott Dyer, G. Joblove, Charles Gibson, Lincoln Hu","doi":"10.1145/218380.218538","DOIUrl":"https://doi.org/10.1145/218380.218538","url":null,"abstract":"Derek Spears This panel is focused on taking a look at what key technologies we need to push visual effects father into the realms of realism. We will start by examining what tools we have used so far in order to provide a firm reference to understand where we are going. Tools in the areas of Input/Output (Scanning/Recording/ Data Transfer), Image Processing, and Animation/Motion Capture will be covered. These are not only major concerns for the visual effects industry, but cover large areas of interest for mainstream computer graphics. Why is this important? The visual effects industry as a whole has become in and of itself a proving ground for cutting edge digital technologies. Morphing, Motion Capture and Digital Compositing were largely born out of the needs of the Visual Effects industry and Visual Effects has benefited enormously from advances in Computer Graphics. This is partially due to the Visual Effects Industry's willingness to embrace new technologies, riding the bleeding edge. These technologies have enabled us to do things never before imaginable. While we all think that the quality of visual effects has been stunning in the past, we need to stop and look at how we really achieve these images. The tools obviously work, we have produced breathtaking imagery with them. But we have to ask the questions \"Are they good enough? What more do we need?\" These questions relate not only to the search for solutions to previously impossible problems, but also the search to do things we already understand better, faster and cheaper. The search for new technology includes advances in algorithms, totally new approaches, and even solving more basic and historically ignored problems, such as user interfaces and artist interaction. We will take a look at our present and a glimpse into our future of where our tools are (and more importantly, should be) headed.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"235 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131145603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}