We are all affected by our immediate context and its political, environmental, cultural, and temporal influences. Climate Shifts juxtaposes various locations around the globe and their points of view through local news headlines and weather data. Differences in concerns and perspectives emerge, sometimes about the same world events, allowing a glimpse into the collective psyche of each place.
{"title":"Climate shifts","authors":"Christa Erickson","doi":"10.1145/1665137.1665148","DOIUrl":"https://doi.org/10.1145/1665137.1665148","url":null,"abstract":"We are all affected by our immediate context and its political, environmental, cultural, and temporal influences. Climate Shifts juxtaposes various locations around the globe and their points of view through local news headlines and weather data. Differences in concerns and perspectives emerge, sometimes about the same world events, allowing a glimpse into the collective psyche of each place.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"24 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130637905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Immerse yourself in the world and intrigues of the Assassin's Creed 2 videogame, the direct sequel to Assassin's Creed, which sold eight million units worldwide. Dive into the Italian Renaissance and a beautifully recreated 15th-century Venice, in the midst of a mysterious street carnival, where you will meet our new master assassin, Ezio Auditore da Firenze, and discover his new "art." Follow him on his quest for vengeance to uncover an age-old conspiracy and fight through the masquerades of the Italian Renaissance.
{"title":"Assassin's Creed 2","authors":"István Zorkóczy","doi":"10.1145/1665208.1665212","DOIUrl":"https://doi.org/10.1145/1665208.1665212","url":null,"abstract":"Immersion in the world and intrigues of the Assassin's Creed 2 videogame, the direct sequel to Assassin's Creed, which sold eight million units worldwide. Dive into the Italian Renaissance and beautifully recreated 15th century Venice, in the midst of a mysterious street carnival, where you will meet our new master assassin, Ezio Auditore da Firenze, and discover his new \"art.\" Follow him on his quest for vengeance to reveal a secular conspiracy and fight the masquerades of the Italian Renaissance.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131182199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Japanese spacecraft Kaguya (Selene) was launched on 14 September 2007 by the Japan Aerospace Exploration Agency. Its objectives are "to obtain scientific data of the lunar origin and evolution and to develop the technology for future lunar exploration."
{"title":"Entire topography of lunar surface","authors":"H. Nakayama","doi":"10.1145/1665208.1665243","DOIUrl":"https://doi.org/10.1145/1665208.1665243","url":null,"abstract":"The Japanese spacedraft Kaguya (Selene) was launched on 14 September 2007 by the Japan Aerospace Exploration Agency. Its objectives are \"to obtain scientific data of the lunar origin and evolution and to develop the technology for future lunar exploration.\"","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114338338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuichiro Yamaguchi, Takuya Saito, Yosuke Bando, Bing-Yu Chen, T. Nishita
In traditional image composition methods for cutting out a source object from a source image and pasting it onto a target image, users have to segment a foreground object in the target image when they want to partially hide the source object behind it. While recent image editing tools greatly facilitate segmentation, it can be tedious to segment each object if users try the source object in various positions in the target image before arriving at a satisfying composition. We propose a method that allows users to drag a source object and slip it behind a target object, as shown in Fig. 1, so that they can move the source object around without manually segmenting each part of the target image.
{"title":"Interactive image composition through draggable objects","authors":"Yuichiro Yamaguchi, Takuya Saito, Yosuke Bando, Bing-Yu Chen, T. Nishita","doi":"10.1145/1667146.1667186","DOIUrl":"https://doi.org/10.1145/1667146.1667186","url":null,"abstract":"In traditional image composition methods for cutting out a source object from a source image and pasting it onto a target image, users have to segment a foreground object in a target image when they want to partially hide a source object behind it. While recent image editing tools greatly facilitate segmentation operations, it can be tedious to segment each object if users try to place a source object in various positions in a target image before obtaining a satisfying composition. We propose a method which allows users to drag a source object and slip it behind a target object as shown in Fig. 1, so that users can move a source object around without manually segmenting each part of a target image.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"168 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122266570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In EnhancedTV (ETV), where diverse multimedia data appear in scenes, synchronization between the video stream and other multimedia data, including graphics, is essential. Currently, the most widely used way of synchronizing video streams and other multimedia data in ETV is to match their absolute time values.
{"title":"A content-based synchronization approach for timing description in EnhancedTV","authors":"Hyun-Jeong Yim, Y. Choy, Soon-Bum Lim","doi":"10.1145/1666778.1666786","DOIUrl":"https://doi.org/10.1145/1666778.1666786","url":null,"abstract":"In EnhancedTV (ETV) where diverse multimedia data appear in scenes, synchronization between the video stream and other multimedia data, including graphics, is essential. Currently, the most widely used method of synchronizing video streams and other multimedia data in ETV is to match absolute time values with each other.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126941636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
All Bean Maxwell wants is for the picture on his foyer wall to hang level. With a scrutinizing eye, and an array of tools, he tirelessly pursues this exercise in perfection. But will his dedication to the little details cause him to lose sight of the bigger picture?
{"title":"On the level","authors":"Michael Rutter","doi":"10.1145/1665208.1665257","DOIUrl":"https://doi.org/10.1145/1665208.1665257","url":null,"abstract":"All Bean Maxwell wants is for the picture on his foyer wall to hang level. With a scrutinizing eye, and an array of tools, he tirelessly pursues this exercise in perfection. But will his dedication to the little details cause him to lose sight of the bigger picture?","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127631931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How does artificial-life art adapt to its environment? What is the significance of a computational ecosystem proposed as contemporary art? These are some of the ideas examined in this bio-inspired immersive art installation.
{"title":"Artificial nature: fluid space","authors":"H. Ji, Graham Wakefield","doi":"10.1145/1665137.1665153","DOIUrl":"https://doi.org/10.1145/1665137.1665153","url":null,"abstract":"How does artificial-life art adapt to its environment? What is the significance of a computational ecosystem proposed as contemporary art? These are some of the ideas examined in this bio-inspired immersive art installation.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115388450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present here a first prototype of the Virtual Haptic Radar (VHR), a wearable device that helps actors become aware of invisible virtual objects in their path while moving around a virtual studio (such as a "bluescreen" filming stage [Figure 1]). The VHR is a natural extension of the Haptic Radar (HR) and its principle [Cassinelli et al. 2006] into the realm of virtual reality: while each module of the HR had a small vibrator and a rangefinder to measure the distance to real obstacles, the VHR module lacks the rangefinder and instead incorporates an inexpensive ultrasound-based indoor positioning system that lets it know exactly where it is situated relative to an external frame of reference.
{"title":"Virtual Haptic Radar","authors":"A. Zerroug, Á. Cassinelli, M. Ishikawa","doi":"10.1145/1667146.1667158","DOIUrl":"https://doi.org/10.1145/1667146.1667158","url":null,"abstract":"We present here a first prototype of the Virtual Haptic Radar (VHR), a wearable device helping actors become aware of the presence of invisible virtual objects in their path when evolving in a virtual studio (such as a \"bluescreen\" filming stage [Figure 1]). The VHR is a natural extension of the Haptic Radar (HR) and its principle [Cassinelli et al. 2006] in the realm of virtual reality: while each module of the HR had a small vibrator and a rangefinder to measure distance to real obstacles, the VHR module lacks the rangefinder but accommodates instead a (cheap) ultrasound-based indoor positioning system that gives it the ability to know exactly where it is situated relatively to an external frame of reference.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115278589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Life at the witch trails is based on the idea of creating "living" structures through sound. Video material from x/y-stereo displays visualizes the changing phase relationship between the two channels of an audio signal. The sound source is a special audio composition that cannot be realized without direct visualization. It contains full-on, sound-dependent motion dynamics and forms complex "cathode-ray objects" that allow direct (not delayed) visual access to the smallest details of the composition. The representation is not limited by the frames-per-second constraints of television and computer technology. The interconnection of the aural and visual senses arises in an immediate way, and the visualization of sound takes on a new meaning.
{"title":"Life at the witch trails","authors":"Natalie Bewernitz, Marek Goldowski","doi":"10.1145/1665137.1665143","DOIUrl":"https://doi.org/10.1145/1665137.1665143","url":null,"abstract":"Life at the witch trails is based on the idea of creating \"living\" structures through sound. Video material from x/y-stereo displays visualizes the phase changing from two-channel audio signals. The sound source is a special audio composition that can not be realized without direct visualization. It contains full-on, sound-dependent motion dynamics and forms complex \"cathode-ray objects\", which allow direct (not delayed) visual access to the smallest details of the composition. The representation is not limited by the pictures-per-second time frame of television and computer technology. The interconnection of the aural and visual senses arises in an immediate way, and the visualization of sound obtains a new meaning.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"464 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125823670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toshihisa Yamahata, Yuuki Uranishi, H. Sasaki, Y. Manabe, K. Chihara
Granular materials such as sand and grain are difficult to animate and render in computer graphics. There are no standard methods for modeling their motion, because their non-linear behavior, governed by the loss of energy that occurs when discrete particles collide, is hard to capture. Consequently, methods for rendering granular materials have not been studied widely. In this paper, we present a new method for rendering photo-realistic granular materials. To model their motion, we define a granular material as a mass of discrete particles. We then formulate the light scattering needed for photo-realistic rendering and propose a method for efficiently evaluating radiance in granular material based on radiance caching.
{"title":"Glanular materials rendering based on radiance caching","authors":"Toshihisa Yamahata, Yuuki Uranishi, H. Sasaki, Y. Manabe, K. Chihara","doi":"10.1145/1666778.1666817","DOIUrl":"https://doi.org/10.1145/1666778.1666817","url":null,"abstract":"The animation of such granular material as sand and grain is difficult to render in computer graphics. There are no standard methods for modeling their motions, because their non-linear behavior, which is ruled by the loss of energy, occurs when the discrete particles collide. Therefore, methods for rendering granular materials have not been studied widely. In this paper, we present a new method for rendering photo-realistic granular materials. To model the motion of granular material, we define granular materials as the mass of discrete particles. Then the scattering of light needed for photo-realistic rendering is formulated, and we propose method for efficiently evaluating radiance in granular material based on a radiance caching method.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126984154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}