Masatoshi Kakiuchi, A. Yutani, A. Inomata, K. Fujikawa, Keishi Kandori
There has been prior research on transmitting high-definition video over the Internet Protocol (IP). Source material for TV stations requires lossless delivery, i.e. uncompressed real-time transmission. High-performance cameras and displays also call for transmission methods that handle 4K2K images (3,840 x 2,160 pixels), beyond HD (High Definition; 1,920 x 1,080 pixels). In particular, projection onto very large screens such as planetarium domes requires at least 4K2K resolution. However, uncompressed transmission of HD and 4K2K demands high-bandwidth networks of about 1.6 Gbit/s and 6.4 Gbit/s respectively, so dedicated networks such as SONET links or wide-area VLAN services had to be prepared, at considerable procedural effort and cost.
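As a rough sanity check on these figures, the payload rate of an uncompressed stream is just resolution times bit depth times frame rate. The short Python sketch below is not from the paper; it assumes 10-bit 4:2:2 sampling at 30 frames/s, and blanking intervals plus IP transport overhead presumably account for the remainder of the quoted 1.6 and 6.4 Gbit/s.

```python
# Rough estimate of the link rate needed for uncompressed video.
# Assumed parameters (10-bit 4:2:2 sampling, 30 frames/s) are
# illustrative, not taken from the paper.

def raw_bitrate_gbps(width, height, fps, bits_per_pixel):
    """Raw video payload in Gbit/s, ignoring blanking and transport overhead."""
    return width * height * fps * bits_per_pixel / 1e9

BITS_PER_PIXEL_422_10BIT = 20  # 10-bit luma plus 10 bits of shared chroma per pixel

hd  = raw_bitrate_gbps(1920, 1080, 30, BITS_PER_PIXEL_422_10BIT)
uhd = raw_bitrate_gbps(3840, 2160, 30, BITS_PER_PIXEL_422_10BIT)

print(f"HD   payload: {hd:.2f} Gbit/s")   # ~1.24 Gbit/s before overhead
print(f"4K2K payload: {uhd:.2f} Gbit/s")  # ~4.98 Gbit/s before overhead
```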
{"title":"Uncompressed 4K2K and HD live transmission on global internet","authors":"Masatoshi Kakiuchi, A. Yutani, A. Inomata, K. Fujikawa, Keishi Kandori","doi":"10.1145/1666778.1666806","DOIUrl":"https://doi.org/10.1145/1666778.1666806","url":null,"abstract":"There are some researches in transmission of high definition image using Internet Protocol (IP) before. Materials for TV stations require lossless transmission by uncompressed real-time transmission. Also high performance camera and display require transmission methods for 4K2K (3,840 x 2,160 pixels) image over HD (High Definition; 1,920 x 1,080 pixels). Especially, projection to huge screen such planetarium requires at least 4K2K resolution. However, uncompressed transmission of both HD and 4K2K require such high bandwidth network as 1.6 Gbit/s and 6.4 Gbit/s, then we prepared dedicated networks such SONET or wide-area VLAN service. Therefore we spent a lot of procedures and costs.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127712265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. R. Bonde, Lise Vestergaard Jensen, Emil Sellström, Stephan Suesmann
In a 1930s metropolis, three criminals are secretly gathered for a game of poker, with everything to lose and even more to win. As evening turns to night, the atmosphere grows tense. They stop at nothing, cheating, shooting, and killing their way to the prize: a big fat pile of golden dollar bills.
{"title":"Draw poker","authors":"S. R. Bonde, Lise Vestergaard Jensen, Emil Sellström, Stephan Suesmann","doi":"10.1145/1665208.1665241","DOIUrl":"https://doi.org/10.1145/1665208.1665241","url":null,"abstract":"In a 1930s metropolis, three criminals are secretly gathered for a game of poker, with everything to lose and even more to win. As evening turns to night, the atmosphere grows tense. They show no limit in the skills of cheating, shooting, and killing their way to the prize: a big fat pile of golden dollar bills.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"132 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132801202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ryosuke Ichikari, Ryohei Hatano, Toshikazu Oshima, F. Shibata, H. Tamura
This paper describes a relighting method for designing cinematic lighting in filmmaking. The method enables a mixed-reality-based pre-visualization system called MR-PreViz to change illumination conditions: virtual lights can be added and actual illumination can be removed while the cinematic lighting is being designed. The lighting effects are applied correctly to both real and virtual objects.
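As a rough illustration of what such relighting involves (a generic differential-rendering-style sketch, not the authors' MR-PreViz implementation; the renderer calls are hypothetical), the effect of adding a virtual light or removing a real one can be expressed as a per-pixel difference between two renders of a proxy scene model, applied to the captured frame:

```python
import numpy as np

# Conceptual sketch only: the lighting change is computed on a proxy scene
# model and transferred to the captured frame as a per-pixel difference.
# render_with_change / render_without_change stand for hypothetical renderer
# outputs (linear-space float images in [0, 1]).

def relight_frame(captured, render_with_change, render_without_change):
    """Apply a lighting change (added virtual light or removed real light)
    to a captured frame via the render difference of a scene proxy."""
    delta = render_with_change.astype(np.float64) - render_without_change.astype(np.float64)
    return np.clip(captured.astype(np.float64) + delta, 0.0, 1.0)
```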
{"title":"Designing cinematic lighting by relighting in MR-based pre-visualization","authors":"Ryosuke Ichikari, Ryohei Hatano, Toshikazu Oshima, F. Shibata, H. Tamura","doi":"10.1145/1666778.1666813","DOIUrl":"https://doi.org/10.1145/1666778.1666813","url":null,"abstract":"This paper describes a relighting method of designing cinematic lighting for filmmaking. The relighting method enables mixed reality based pre-visualization called MR-PreViz to change conditions of illumination. The method allows the MR-PreViz to have additional virtual lighting and the removal of actual illumination in designing cinematic lighting. The effects of lighting are applied correctly to both real objects and virtual objects.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131434975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Surface deformations driven by physically based simulation are used to represent elastic motion, such as human skin or clothing, in 3DCG applications. LSM (Lattice Shape Matching) [Rivers and James 2007] has attracted particular attention as a fast and robust method for achieving elastic-like motion. However, the original LSM deformation generates motion that is far from realistic, especially when an object is stretched, because volume is not preserved.
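For context, a minimal sketch of the shape-matching step that underlies LSM is given below; it is the generic kernel [Mueller et al. 2005; Rivers and James 2007], not the authors' volume-preserving extension, and uses NumPy for the polar decomposition.

```python
import numpy as np

def shape_matching_goals(x, x0, m):
    """Goal positions for particles x (Nx3) given rest positions x0 and masses m."""
    c  = np.average(x,  axis=0, weights=m)    # current center of mass
    c0 = np.average(x0, axis=0, weights=m)    # rest center of mass
    p, q = x - c, x0 - c0
    A = (m[:, None] * p).T @ q                # moment matrix A_pq
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt                                # rotation from polar decomposition
    if np.linalg.det(R) < 0:                  # avoid reflections
        U[:, -1] *= -1
        R = U @ Vt
    return q @ R.T + c                        # rigidly transformed rest shape

# Particles are pulled toward these goals each time step; a volume-preserving
# variant would additionally rescale the goal shape so lattice cells keep
# their rest volume under stretch.
```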
{"title":"Volume-preserving LSM deformations","authors":"K. Takamatsu, T. Kanai","doi":"10.1145/1667146.1667165","DOIUrl":"https://doi.org/10.1145/1667146.1667165","url":null,"abstract":"Surface deformations based on physically-based simulations are used to represent elastic motions such as human skins or clothes in the field of 3DCG applications. LSM (Lattice Shape Matching) [Rivers and James 2007] has particularly attracted attention as a fast and robust method which achieves elastic-like motions. However, the original LSM deformation method generates far from realistic motions especially when stretching an object, because volume is not preserved.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131814525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Miura, K. Mitobe, Takaaki Kaiga, Takashi Yukawa, T. Taniguchi, H. Tamamoto
In the field of dance motion analysis, qualitative evaluation techniques are needed for analyzing body motions described as quantitative data [Nakamura et al. 2008]; such techniques make dance motion data acquired with motion capture systems intuitively interpretable. In this study, the authors propose a method to automatically summarize the qualitative trend in a group of quantitative dance motion data: motion features present in all the dances are first extracted quantitatively by statistical analysis and then categorized qualitatively by cluster analysis.
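A generic sketch of such a pipeline is shown below, assuming per-dance summary statistics as features and k-means for the clustering step; the specific statistics and clustering algorithm used by the authors are not spelled out here, so these choices are illustrative only.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Generic "quantitative features -> qualitative categories" pipeline, not the
# authors' exact statistics. Each row of `features` is one dance, summarized
# by hypothetical per-dance statistics (e.g. mean joint speeds, movement
# ranges) computed from motion-capture data.

def categorize_dances(features, n_categories=3, seed=0):
    """Cluster per-dance feature vectors into qualitative categories."""
    z = StandardScaler().fit_transform(features)           # comparable scales
    labels = KMeans(n_clusters=n_categories, n_init=10,
                    random_state=seed).fit_predict(z)
    return labels                                          # category per dance

# Example: 20 dances, 6 summary statistics each.
labels = categorize_dances(np.random.rand(20, 6))
```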
{"title":"Qualitative evaluation of quantitative dance motion data","authors":"T. Miura, K. Mitobe, Takaaki Kaiga, Takashi Yukawa, T. Taniguchi, H. Tamamoto","doi":"10.1145/1666778.1666787","DOIUrl":"https://doi.org/10.1145/1666778.1666787","url":null,"abstract":"In the field of dance motion analysis, the development of qualitative evaluation technique for the analysis of body motions described in the form of quantitative data is needed [Nakamura et al. 2008]; it makes dance motion data acquired by motion capture systems intuitively interpretable. In this study, the authors propose a method to automatically summarize the qualitative trend in a group of quantitative dance motion data; the motion features shown in all the dances are first quantitatively extracted by statistical analysis and then qualitatively categorized by cluster analysis.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"176 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114077949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a new concept that achieves 3D reconstruction of dynamic scenes from multi-view video cameras (3D video) using a minimal number of cameras, as opposed to current state-of-the-art approaches, which require either several tens of cameras or high-definition devices. A 3D video consists of a sequence of 3D models in motion, captured by a surrounding set of video cameras; the result is a video in which observers can freely choose their viewpoint. It is a markerless motion capture system in which subjects do not need to wear special equipment, so it suits a very wide range of applications (e.g. entertainment, medicine, sports). The 3D models are obtained using image-based multi-view stereo reconstruction (MVS). The performance of MVS relies on the quality and quantity of images taken from different viewpoints: since stereo correspondences have to be found between the images, reconstruction fails when stereo photo-consistency is weak due to a lack of camera views or lighting variations, and consistent information is necessary.
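For context, the photo-consistency test at the heart of MVS can be sketched as follows; normalized cross-correlation between projected patches is one standard measure, and the snippet below is illustrative rather than a description of the authors' pipeline.

```python
import numpy as np

# A 3D point hypothesis is kept only if the image patches it projects to in
# different camera views look alike. Normalized cross-correlation (NCC) is a
# common photo-consistency measure.

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalized cross-correlation of two equally sized image patches, in [-1, 1]."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def photo_consistent(patches, threshold=0.7):
    """True if every pair of patches (one per camera view) correlates strongly."""
    return all(ncc(p, q) >= threshold
               for i, p in enumerate(patches)
               for q in patches[i + 1:])
```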
{"title":"Minimal 3D video","authors":"Tony Tung, T. Matsuyama","doi":"10.1145/1667146.1667192","DOIUrl":"https://doi.org/10.1145/1667146.1667192","url":null,"abstract":"We present a new concept that achieves the 3D reconstruction of dynamic scenes from multi-view video cameras (or 3D videos) using a minimal number of cameras, as opposed to the present state of the art approaches which require either several tens of cameras or high definition devices. A 3D video consists of a sequence of 3D models in motion captured by a surrounding set of video cameras. The result is a video where observers can choose freely their viewpoints. It is a markerless motion capture system where subjects do not need to wear special equipment. Hence, this system suits to a very wide range of applications (e.g. entertainment, medicine, sports, and so on). The 3D models are obtained using image-based multi-view stereo reconstruction techniques (or MVS). The performance of MVS relies on the quality and quantity of images taken from different viewpoints. As stereo correspondences have to be found between the images, the reconstruction fails in the case of weak stereo photo-consistency due to lack of camera views or lighting variations: consistent information is necessary.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127325478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The definition of "important" or salient points on a shape is an old and fundamental problem in computer graphics. Important points, which we will term key points, can be used for representing a shape (e.g. as vertices of a polyline or as knots of a spline). Key points that correspond to perceptually salient points are useful as handles for editing. Key points are also good points to retain in simplifying a shape.
{"title":"Identifying salient points","authors":"J. P. Lewis, K. Anjyo","doi":"10.1145/1667146.1667198","DOIUrl":"https://doi.org/10.1145/1667146.1667198","url":null,"abstract":"The definition of \"important\" or salient points on a shape is an old and fundamental problem in computer graphics. Important points, which we will term key points, can be used for representing a shape (e.g. as vertices of a polyline or as knots of a spline). Key points that correspond to perceptually salient points are useful as handles for editing. Key points are also good points to retain in simplifying a shape.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127921310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I investigate and utilize the imagery and symbolism of technological ideology and mythology, and how these images and symbols reinforce a sense of dominance over the environment and the rest of humanity. In recent work, I have forced together elements of this imagery with images of their unacceptable consequences. These are skeptical paintings, depicting mounds of old and obsolete computers and televisions rupturing the crisp, wire-frame façade of virtualesque scenes. Computers and televisions (these amalgams of plastic, heavy metals, and other toxic wastes, these transmitters of fantasy, ideology, identity, and creators of virtual worlds) are depicted as accumulating waste in the process of becoming toxic nightmares. Seen in the act of transmission, their screens flicker on and off to display scenes of pride and shame, glory and disgust, myth tainted with visions of what we wish to ignore or conceal about ourselves and our history.
{"title":"Warmth through the night","authors":"Jonathan Elliott","doi":"10.1145/1665137.1665146","DOIUrl":"https://doi.org/10.1145/1665137.1665146","url":null,"abstract":"I investigate and utilize the imagery and symbolism of technological ideology and mythology, and how these images and symbols reinforce a sense of dominance over the environment and the rest of humanity. In recent work, I have forced together elements of this imagery with images of their unacceptable consequences. These are skeptical paintings, depicting mounds of old and obsolete computers and televisions rupturing the crisp, wire-frame façade of virtualesque scenes. Computers and televisions (these amalgams of plastic, heavy metals, and other toxic wastes, these transmitters of fantasy, ideology, identity, and creators of virtual worlds) are depicted as accumulating waste in the process of becoming toxic nightmares. Seen in the act of transmission, their screens flicker on and off to display scenes of pride and shame, glory and disgust, myth tainted with visions of what we wish to ignore or conceal about ourselves and our history.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121087590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gyuwan Choe, Jin Wan Park, Seonhee Park, Eunsun Jang, Hoyeon Jang
A new urban development in the area north of the Han river raises many complex questions. How much living space will be provided for residents? What will happen to the current residents of the area? How does the new development fit into the national housing plan? Who will profit from the development? The residents? The politicians? The real estate developers?
{"title":"Special habitation","authors":"Gyuwan Choe, Jin Wan Park, Seonhee Park, Eunsun Jang, Hoyeon Jang","doi":"10.1145/1665137.1665145","DOIUrl":"https://doi.org/10.1145/1665137.1665145","url":null,"abstract":"A new urban development in the area north of the Han river raises many complex questions. How much living space will be provided for residents? What will happen to the current residents of the area? How does the new development fit into the national housing plan? Who will profit from the development? The residents? The politicians? The real estate developers?","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115241810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Photorealistic image synthesis is a challenging topic in computer graphics. Image-based techniques for capturing and reproducing the appearance of real scenes have received a great deal of attention. A long measurement time and a large amount of memory are required to acquire an image-based relightable dataset, i.e., light transport or a reflectance field. Several approaches have been proposed with the goal of efficiently acquiring light transport [Sen et al. 2005; Fuchs et al. 2007]. However, with the exception of the recently proposed compressive sensing method [Peers et al. 2009], most previous studies have focused on scene-adaptive sampling algorithms, and such methods do not perform efficiently for scenes with significant global illumination. In this paper, we present a non-adaptive sampling method for measuring the light transport of a scene based on separation of the direct and global illumination components.
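For reference, a standard way to separate the direct and global components is the high-frequency illumination method of Nayar et al. [2006]; the sketch below follows that formulation and is not necessarily the exact separation procedure used in this work.

```python
import numpy as np

# The scene is lit by shifted high-frequency patterns in which roughly half
# the source pixels are on; per camera pixel, the max and min observed
# intensities give the separation (Nayar et al. 2006 style).

def separate_direct_global(images, on_fraction=0.5):
    """images: stack (K, H, W) captured under K shifted half-on patterns."""
    stack = np.asarray(images, dtype=np.float64)
    l_max = stack.max(axis=0)                 # ~ direct + on_fraction * global
    l_min = stack.min(axis=0)                 # ~ on_fraction * global
    direct = l_max - l_min
    global_ = l_min / on_fraction
    return direct, global_
```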
{"title":"Efficient acquisition of light transport based on separation of direct and global components","authors":"K. Ochiai, N. Tsumura, T. Nakaguchi, Y. Miyake","doi":"10.1145/1666778.1666816","DOIUrl":"https://doi.org/10.1145/1666778.1666816","url":null,"abstract":"Photorealistic image synthesis is a challenging topic in computer graphics. Image-based techniques for capturing and reproducing the appearance of real scenes have received a great deal of attention. A long measurement time and a large amount of memory are required in order to acquire an image-based relightable dataset, i.e., light transport or reflectance field. Several approaches have been proposed with the goal of efficiently acquiring light transport [Sen et al. 2005; Fuchs et al. 2007]. However, since, with the exception of the recently proposed compressive sensing method [Peers et al. 2009], most previous studies have focused on scene adaptive sampling algorithms, conventional methods cannot perform efficiently in the case of a scene that has significant global illumination. In this paper, we present a non-adaptive sampling method for measuring light transport of a scene based on separation of the direct and global illumination components.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115245115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}