The multidimensional diffusion model for computer animation of diffuse ink painting opens up a new dimension in painting. In diffuse painting, the final image is the result of ink diffusion in absorbent paper. A straightforward diffusion model, however, is unable to reproduce some very specific features of real diffuse painting. In particular, it cannot explain the appearance of certain singularities in the color intensity of the image, which are very important features of diffuse ink painting. In our previous work, a model based on a physical analysis of the paper structure was proposed. Although this model provided an adequate simulation of many properties of diffuse ink painting, it was still insufficient to explain the singularities of the intensity distribution precisely. Here we solve this problem: the multidimensional diffusion model we propose yields exactly the same intensity distribution as real images. The method was applied to animate the ink diffusion 'Nijimi' of traditional Japanese ink painting 'Sumie'.
{"title":"A diffusion model for computer animation of diffuse ink painting","authors":"T. Kunii, G. V. Nosovskij, Takafumi Hayashi","doi":"10.1109/CA.1995.393542","DOIUrl":"https://doi.org/10.1109/CA.1995.393542","url":null,"abstract":"The multidimensional diffusion model for computer animation of diffuse ink painting opens up a new dimension in painting. In diffuse painting final image is a result of ink diffusion in absorbent paper. A straight-forward diffusion model however is unable to provide very specific features of real diffuse painting. In particular, it can not explain the appearance of certain singularities in intensity of color in the image which are very important features of diffuse ink painting. In our previous work, a model based on physical analysis of paper structure was proposed. Although this model provided an adequate simulation of many diffuse ink painting properties, it was still insufficient to explain the singularities of intensity distribution precisely. Now we solve this problem. A multidimensional diffusion model which we propose proves to provide exactly the same intensity distribution as in real images. The method was applied to animate ink diffusion 'Nijimi' of traditional Japanese ink painting 'Sumie'.<<ETX>>","PeriodicalId":430534,"journal":{"name":"Proceedings Computer Animation'95","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115021506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a new motion planning method for an arbitrary solid with six degrees of freedom moving in a 3D environment. The essence of the method is to efficiently capture local movement information about the solid in order to speed up the heuristic search in configuration space. For this purpose, the movement freedoms of the solid at any configuration are calculated directly by introducing the characteristic volume and characteristic obstacles of the solid. Based on this local information about the movement freedoms, a heuristic search similar to A* is performed in configuration space to find a collision-free path between the initial and final configurations. A 3D motion planning example shows that the method can handle complex moving solids and environments. To increase the search efficiency in configuration space, some extensions of the method that incorporate global methods are discussed. Applications of the method in computer animation and virtual reality are indicated.
{"title":"Motion planning for computer animation and virtual reality applications","authors":"X. Sheng","doi":"10.1109/CA.1995.393547","DOIUrl":"https://doi.org/10.1109/CA.1995.393547","url":null,"abstract":"This paper presents a new motion planning method for an arbitrary solid with six degrees of freedom moving in a 3D environment. The essence of the method is to efficiently capture local moving information of the solid in order to speed up the heuristic search in the configuration space. For this purpose, the moving freedoms of a solid at any configuration are directly calculated by introducing the characteristic volume and characteristic obstacles of the solid. Based on the local information about the moving freedoms, a heuristic search similar to the A* is performed in the configuration space to find a collision-free path between the initial and final configurations. A 3D motion planning example shows that the method is capable of dealing with complex moving solids and environments. To increase the search efficiency in configuration space, some extensions of the methods for incorporating with global methods are discussed. Applications of the method in computer animation and virtual reality are indicated.<<ETX>>","PeriodicalId":430534,"journal":{"name":"Proceedings Computer Animation'95","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125703944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We describe a method for modeling the dynamic behavior of splashing fluids. The model simulates the behavior of a fluid when objects impact or float on its surface. The forces generated by the objects create waves and splashes on the surface of the fluid. To demonstrate the realism and limitations of the model, images from a computer-generated animation are presented and compared with video frames of actual splashes occurring under similar initial conditions.
{"title":"Dynamic simulation of splashing fluids","authors":"J. F. O'Brien, J. Hodgins","doi":"10.1109/CA.1995.393532","DOIUrl":"https://doi.org/10.1109/CA.1995.393532","url":null,"abstract":"We describe a method for modeling the dynamic behavior of splashing fluids. The model simulates the behavior of a fluid when objects impact or float on its surface. The forces generated by the objects create waves and splashes on the surface of the fluid. To demonstrate the realism and limitations of the model, images from a computer-generated animation are presented and compared with video frames of actual splashes occurring under similar initial conditions.<<ETX>>","PeriodicalId":430534,"journal":{"name":"Proceedings Computer Animation'95","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128016008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two-dimensional animation has generally been the poor relation of three-dimensional animation, as far as computer assistance is concerned. The Animachine project is an attempt to view the whole production process of 2D (cel) animation as ripe for computer-based help, while still allowing artists to retain as much of their traditional way of working as possible. In this paper, we summarize our progress on a renderer for such multi-layer animated graphics. There is little emphasis on the movement aspects of our work. We discuss the various results which can be achieved without recourse to full 3D techniques, and we show that the approach is versatile and can be used to achieve a wide range of useful effects.
{"title":"The Animachine renderer","authors":"P. Willis, T. Nettleship","doi":"10.1109/CA.1995.393543","DOIUrl":"https://doi.org/10.1109/CA.1995.393543","url":null,"abstract":"Two-dimensional animation has generally been the poor relation of three-dimensional animation, as far as computer-assistance is concerned. The Animachine project is an attempt to view the whole production process of 2D (cel) animation as ripe for computer-based help, while still allowing artists to retain as much of their traditional way of working as possible. In this paper, we summarize our progress on a renderer for such multi-layer animated graphics. There is little emphasis on the movement aspects of our work. We discuss the various results which can be achieved without recourse to full 3D techniques, we show that the approach is versatile and can be used to achieve a wide range of useful effects.<<ETX>>","PeriodicalId":430534,"journal":{"name":"Proceedings Computer Animation'95","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131250836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Methods developed in the Computer Vision community for tracking moving shapes are showing great potential for applications in real-time graphics. Tracking techniques allow graphical entities such as curves to be superimposed on a video data-stream, marking out certain objects and following their motions. The techniques have been demonstrated on a variety of moving objects including human hands, heads, lips both from the side and the front, various vehicles, cabbages viewed from a moving tractor and even a pig in its pen. The output from such trackers can be taken in the form of motion signals which could then be used to drive a remote animation.
{"title":"Applying visual curve tracking to graphics","authors":"A. Blake","doi":"10.1109/CA.1995.393546","DOIUrl":"https://doi.org/10.1109/CA.1995.393546","url":null,"abstract":"Methods developed in the Computer Vision community for tracking moving shapes are showing great potential for applications in real-time graphics. Tracking techniques allow graphical entities such as curves to be superimposed on a video data-stream, marking out certain objects and following their motions. The techniques have been demonstrated on a variety of moving objects including human hands, heads, lips both from the side and the front, various vehicles, cabbages viewed from a moving tractor and even a pig in its pen. The output from such trackers can be taken in the form of motion signals which could then be used to drive a remote animation.<<ETX>>","PeriodicalId":430534,"journal":{"name":"Proceedings Computer Animation'95","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126837872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The cumbersome nature of wired interfaces and the limited nature of the interaction with graphical objects have so far limited the range of application of virtual environments. We discuss the design and implementation of a novel system, called ALIVE, which allows wireless full-body interaction between a human participant and a rich graphical world inhabited by autonomous agents. Based on results obtained with real users, the paper argues that this kind of system can provide more complex and very different experiences than traditional virtual reality systems. The ALIVE system significantly broadens the range of potential applications of virtual reality systems; in particular, the paper discusses novel applications in the areas of training and teaching, entertainment and, last but not least, digital assistants or interface agents.
{"title":"The ALIVE system: full-body interaction with autonomous agents","authors":"P. Maes, Trevor Darrell, B. Blumberg, A. Pentland","doi":"10.1109/CA.1995.393553","DOIUrl":"https://doi.org/10.1109/CA.1995.393553","url":null,"abstract":"The cumbersome nature of wired interfaces and the limited nature of the interaction with graphical objects has so far limited the range of application of virtual environments. We discuss the design and implementation of a novel system, called ALIVE, which allows wireless full-body interaction between a human participant and a rich graphical world inhabited by autonomous agents. Based on results obtained with real users, the paper argues that this kind of system can provide more complex and very different experiences than traditional virtual reality systems. The ALIVE system significantly broadens the range of potential applications of virtual reality systems; in particular the paper discusses novel applications in the area of training and teaching, entertainment and last but not least, digital assistants or interface agents.<<ETX>>","PeriodicalId":430534,"journal":{"name":"Proceedings Computer Animation'95","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116522179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a constructive visual software development system for interactive 3D graphic applications. Our system, called the IntelligentBox, is an extension of the 2D media construction system IntelligentPad to 3D application systems. While the IntelligentPad represents any object as a pad, i.e., a reactive 2D media component with a card image which can be manually pasted on another pad to define a compound document, the IntelligentBox represents any object as a reactive 3D visual object that can be combined with other reactive 3D visual objects. Both provide uniform frameworks for the concurrent definition of both geometrical compound structures among reactive objects and their mutually interactive functional linkages. The IntelligentBox allows us to easily combine existing primitives in order to compose various interactive 3D compound objects and their coordination mechanism. It works as a user-friendly rapid-prototyping software development system for interactive 3D graphic applications and computer animations.
{"title":"IntelligentBox: a constructive visual software development system for interactive 3D graphic applications","authors":"Y. Okada, Yuzuru Tanaka","doi":"10.1109/CA.1995.393540","DOIUrl":"https://doi.org/10.1109/CA.1995.393540","url":null,"abstract":"This paper proposes a constructive visual software development system for interactive 3D graphic applications. Our system called the IntelligentBox is an extension of the 2D media construction system IntelligentPad to 3D application systems. While the IntelligentPad represents any object as a pad, i.e., a reactive 2D media component with a card image, which can be manually pasted on another pad to define a compound document the IntelligentBox represents any objects as reactive 3D visual objects that can be combined with other reactive 3D visual objects. Both provide uniform frameworks for the concurrent definition of both geometrical compound structures among reactive objects and their mutually interactive functional linkages. The IntelligentBox allows us to easily combine existing primitives in order to compose various interactive 3D compound objects and their coordination mechanism. It works as a user-friendly rapid-prototyping software development system for interactive 3D graphic applications and computer animations.<<ETX>>","PeriodicalId":430534,"journal":{"name":"Proceedings Computer Animation'95","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126448176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physically-based modeling remedies the problem of producing realistic animation by including forces, masses, strain energies, and other physical quantities. The behavior of physically-based models is governed by the laws of rigid and nonrigid dynamics expressed through a set of equations of motion. This paper discusses various formulations for animating deformable models. The formulations based on elasticity theory express the interactions between discrete deformable model points using stiffness matrices. These matrices store the elastic properties of the models, and they should be evolved in time according to the changing elastic properties of the models. An alternative to these formulations is external force formulations of different types. In such formulations, the elastic properties of the materials are represented as external spring or other tensile forces, as opposed to forming complicated stiffness matrices.
{"title":"Animating deformable models: different approaches","authors":"U. Güdükbay, B. Özgüç","doi":"10.1109/CA.1995.393538","DOIUrl":"https://doi.org/10.1109/CA.1995.393538","url":null,"abstract":"Physically-based modeling remedies the problem of producing realistic animation by including forces, masses, strain energies, and other physical quantities. The behavior of physically-based models is governed by the laws of rigid and nonrigid dynamics expressed through a set of equations of motion. This paper discusses various formulations for animating deformable models. The formulations based on elasticity theory express the interactions between discrete deformable model points using the stiffness matrices. These matrices store the elastic properties of the models and they should be evolved in time according to changing elastic properties of the models. An alternative to these formulations seems to be external force formulations of different types. In these types of formulations, elastic properties of the materials are represented as external spring or other tensile forces as opposed to forming complicated stiffness matrices.<<ETX>>","PeriodicalId":430534,"journal":{"name":"Proceedings Computer Animation'95","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115452538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An advanced technique for Meteosat cloud animation is presented. The main principle is a "dual approach" for the separation of thematic satellite data information into land/water and clouds: based on combined thresholding procedures, cloud picture elements (pixels) are extracted and intermediate cloud images are created by linear interpolation to obtain eight images per hour. Secondly, an attractive cloud-free view of the Earth surface (land/water) is created as a texture map based on a two-month time series of Meteosat imagery, including a digital bathymetric ocean model. This background image shows the Meteosat hemisphere at 25-km resolution, consisting of visible data, infrared data, and a synthetic channel derived from VIS and IR data representing "near infrared" features for land surfaces. Ocean areas are masked at high resolution using World Data Bank II and coded in blue according to the water depth given by ETOPO5. The cloud layers are then digitally merged with the background image and written to a video disk. With this technique, a realistic long-term animation is available showing the weather dynamics of Meteosat data for educational and presentation purposes.
{"title":"An advanced technique for Meteosat cloud animation","authors":"R. Meisner, S. Dech","doi":"10.1109/CA.1995.393530","DOIUrl":"https://doi.org/10.1109/CA.1995.393530","url":null,"abstract":"An advanced technique for Meteosat cloud animation is presented. The main principle is a \"dual approach\" for the separation of thematic satellite data information as land/water and clouds: based on combined thresholding procedures, cloud picture elements (pixels) are extracted and intermediate cloud images are created by linear interpolation to get eight images per hour. Secondly, an attractive cloudfree view of the Earth surface (land/water) is created as a texture map based on a two month time series of Meteosat imagery including a digital bathymetric ocean model. This background image shows the Meteosat hemisphere in 25-km resolution consisting of visible data, infrared data, and a synthetic channel derived on VIS and IR data representing \"near infrared\" features for land surfaces. Ocean areas are masked with high resolution using World Data Bank II and coded in blue accordingly to water depth given by ETOPO5. The cloud layers are then digitally merged with the background image and written to a video disk. As a result of this technique, a realistic long-term animation is available showing the weather dynamics of Meteosat data for educational and presentation purposes.<<ETX>>","PeriodicalId":430534,"journal":{"name":"Proceedings Computer Animation'95","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133364963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a project whose goal is to make an animation simulating the activity of a mobile robot in a given environment. A parallel is drawn between animation and reactive programming, particularly with the concept of an autonomous agent. The realised animation consists of a virtual world, the environment of the robot, and the robot itself as an agent acting in this world. The central point of the project is the simulation of the sensors through which the simulated robot is supposed to see its environment. To implement this task, advanced techniques such as ray tracing and radiosity are used. An experimentation platform is designed based on the robot Nomad 200 and its simulator, with the addition of interfaces for the virtual sensors and for the representation on the computer screen.
{"title":"A synthetic mobile robot","authors":"P. Erard, C. Fuhrer, Laurent Iff","doi":"10.1109/CA.1995.393551","DOIUrl":"https://doi.org/10.1109/CA.1995.393551","url":null,"abstract":"This paper presents a project who's goal is to make an animation simulating the activity of a mobile robot in a given environment. A parallel is drawn between animation and reactive programming, particularly with the concept of autonomous agent. The realised animation consists of a virtual world, the environment of the robot and the robot itself, as an agent acting in this world. The turning point of the project is the simulation of the sensors through which the simulated robot is supposed to see its environment. To implement this task, advanced techniques are used such as ray tracing and radiosity. An experimentation platform is designed based on the robot Nomad 200 and its simulator, with adjunction of interfaces for the virtual sensors and for the representation on the computer screen.<<ETX>>","PeriodicalId":430534,"journal":{"name":"Proceedings Computer Animation'95","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128237474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}