A tale of two backwoods bumpkins on the hunt for Florida's mythical Skunk-Ape.
{"title":"Scoutin' for skunk-ape!","authors":"Kevin Lu","doi":"10.1145/2542398.2542445","DOIUrl":"https://doi.org/10.1145/2542398.2542445","url":null,"abstract":"A tale of two backwoods bumpkins on the hunt for Florida's mythical Skunk-Ape.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116380040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this demonstration we present the results of the project "Mobile Virtual Archery", a mobile-phone-based virtual reality simulator for traditional archery. We attached a mobile phone to a real archery bow to act as a "magic window" into a virtual outdoor 3D world. The user can orient the bow along all three axes, and the virtual scene is updated in real time. We provide a 360° scene with a number of targets placed at different positions. With our mobile 3D simulator we aim to provide a believable archery experience and to support users in practicing the motion sequence of traditional archery in a virtual environment.
{"title":"Demonstration of mobile virtual archery","authors":"Daniel Drochtert, Konstantin Owetschkin, L. Meyer, C. Geiger","doi":"10.1145/2543651.2543658","DOIUrl":"https://doi.org/10.1145/2543651.2543658","url":null,"abstract":"In this demonstration we demonstrate the result of the project \"Mobile Virtual Archery\", a mobile phone based virtual reality simulator for traditional archery. We attached a mobile phone to a real archery bow to act as a \"magic window\" into a virtual outdoor 3D world. The user is able to orient the bow using all three axes and the virtual scene is updated in real-time. We provide a 360° scene with a number of targets placed at different positions. With our mobile 3D simulator we want to provide a believable archery experience and support users in practicing the motion sequence of traditional archery in a virtual environment.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131795519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an artwork that is concerned with the interactions among people rather than the interaction between an audience and the artwork. We visualize the physical motion variations arising from the interactions among different participants using Kinect-based depth estimation and video tracking algorithms. The proposed work visualizes affective experiences based on the physical distance between participants. We also provide experiences in which a participant becomes part of the artwork in the form of both shape and interface. The body of a participant plays an important role in communicating and interacting with the other participants and with the artwork itself.
{"title":"Participating interface","authors":"Seonah Mok, Jaehwan Jeon, M. Hayes, J. Paik","doi":"10.1145/2542256.2542263","DOIUrl":"https://doi.org/10.1145/2542256.2542263","url":null,"abstract":"This paper is presents an artwork that is concerned with the interactions among people rather than the interaction between an audience and the artwork. We visualize the physical motion variations from the interactions among different participants using Kinect-based depth estimation and video tracking algorithms. The proposed work can visualize the affective experiences based on the physical distance between participants. We also provide experiences in which a participant becomes a part of the artwork in the form of both shape and interface. The body of a participant plays an important role in communicating and interacting with other participant and the artwork itself.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128300768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research focuses on the combination of novel technology and traditional art. In this paper, a novel interactive art installation (IAI) that uses the user's thoughts to interact with a digital Chinese ink painting is introduced. The ultimate purpose of this research is to establish a link between novel technology and traditional arts and, further, to bring out traditional art philosophy by taking advantage of novel technology. This research thus aims to help people understand not only the visual expression of an artwork, but also its philosophy and spirit, through different kinds of interaction. Based on this, the theoretical research focuses on four parts: traditional art philosophy, artistic and cognitive psychology, traditional art, and novel technology. For the practical part, a Chinese-style IAI experiment incorporating brain-wave control technology is introduced to help people better understand the purpose of this research.
{"title":"Cerebral interaction and painting","authors":"Yiyuan Huang, A. Lioret","doi":"10.1145/2542256.2542260","DOIUrl":"https://doi.org/10.1145/2542256.2542260","url":null,"abstract":"The research focuses on combination of novel technology and traditional art. In this paper, a novel interactive art installation (IAI) using user's thought to interact with a digital Chinese ink painting is introduced. Meanwhile, the final purpose of this research is to establish a link between novel technology and traditional arts and further to bring out traditional art philosophy by taking the advantages of novel technology. Finally, this research aims to help people understand not only the visual expression of an art, but also its philosophy and spirit through different kinds of interaction. Based on this, the theory research focuses on four parts: traditional art philosophy, artistic and cognitive psychology, traditional art, novel technology. Meanwhile, for practice, a Chinese style IAI experiment including brain waves control technology is introduced to help people better understand the purpose of this research.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128332794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion blur is a common artifact that produces disappointing blurry images with inevitable information loss. Because imaging sensors accumulate incoming light, a motion-blurred image is obtained if the camera sensor moves during exposure. Image (motion) de-blurring is a computational process that removes motion blur from a blurred image to obtain a sharp latent image. Recently image de-blurring has become a popular topic in computer graphics and vision research, and excellent methods have been developed to improve the quality of de-blurred images and to accelerate computation. Image de-blurring also has a variety of applications in image-enhancement software and the camera industry, and a practical de-blurring method offering both quality and speed would be a critical factor in improving the performance of image-enhancement and camera systems.

This course will first introduce the concepts, theoretical model, problem definition, and basic approach of image de-blurring. Blind deconvolution and non-blind deconvolution are the two main topics of image de-blurring, classified by whether the kernel (or PSF, point spread function) describing the camera motion is given. For both blind and non-blind deconvolution, the challenges, classical methods, and recent research trends and successful methods will be presented. A Photoshop demo will be given to show the performance of a recently developed fast motion de-blurring method.

This course will also cover several advanced issues of image de-blurring, such as hardware-based approaches, spatially varying camera shake, object motion, and video de-blurring. It will conclude with remaining challenges, such as outliers and noise, computation time, and quality assessment. There will be Q&A at the end of each presentation, with a short discussion at the end of the course.
{"title":"Recent advances in image deblurring","authors":"Seungyong Lee, Sunghyun Cho","doi":"10.1145/2542266.2542272","DOIUrl":"https://doi.org/10.1145/2542266.2542272","url":null,"abstract":"Motion blur is a common artifact that produces disappointing blurry images with inevitable information loss. Due to the nature of imaging sensors that accumulates incoming lights, a motion blurred image will be obtained if the camera sensor moves during exposure. Image (motion) de-blurring is a computational process to remove motion blurs from a blurred image to obtain a sharp latent image. Recently image de-blurring has become a popular topic in computer graphics and vision research, and excellent methods have been developed to improve the quality of de-blurred images and accelerate the computation speed. Image de-blurring has also a variety of applications in image enhancement software and camera industry, and a practical image de-blurring method with quality and speed would be a critical factor to improve the performance of image enhancement and camera systems.\u0000 This course will first introduce the concepts, theoretical model, problem definition, and basic approach of image de-blurring. Blind deconvolution and non-blind deconvolution are two main topics of image de-blurring, which are classified by the existence of given kernel (or PSF; point spread function) information that describes the camera motion. For both blind deconvolution and non-blind deconvolution, challenges, classical methods, and recent research trends and successful methods will be presented. A PhotoShop demo will be given to show the performance of a recently developed fast motion de-blurring method.\u0000 This course will also cover several advanced issues of image de-blurring, such as hardware based approaches, spatially-varying camera shakes, object motions, and video de-blurring. It will conclude with remaining challenges, such as outliers and noise, computation time, and quality assessment. There will be Q&A at the end of each presentation with a short discussion at the end of the course.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133851470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A short animated film by Eoin Duffy, starring George Takei.
{"title":"The missing scarf","authors":"E. Duffy","doi":"10.1145/2542398.2542452","DOIUrl":"https://doi.org/10.1145/2542398.2542452","url":null,"abstract":"A short animated film by Eoin Duffy staring George Takei.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133164637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The quest of a Fame-and-Fortune-hungry knight, in a not-so-fairy-tale!
{"title":"850 meters","authors":"P. Gauthier","doi":"10.1145/2542398.2542487","DOIUrl":"https://doi.org/10.1145/2542398.2542487","url":null,"abstract":"The quest of a Fame-and-Fortune-hungry knight, in a not-so-fairy-tale!","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122085040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A mischievous Monster, a naive Sidekick, ten shallow Girls, a retired Villain, an apathetic Robot, and a Megalomaniac from outer space live under one roof in the "5 Story Building". Five simultaneous stories tell the lives of the singular occupants of this confining building. These neighbors carry on with their own ambitions and inherited craziness without realizing that their stories are intertwined in this episodic interactive fiction.

Jean-Paul Sartre and "Sleep No More" inspire this experience for digital tablets, which explores the nuances and opportunities enabled by the introduction of interactivity in storytelling.

"5 Story Building" is intentionally crafted to show off things that traditional media cannot. This project explores the possibility of multiple simultaneous stories that are part of a bigger plot. These stories develop regardless of whether they are seen: the users' decisions are not only about what they see but also, and perhaps most importantly, about what they decide not to see.

Multiple readings are necessary, and voyeurism is encouraged.
{"title":"5 story building","authors":"Ricardo Muñoz","doi":"10.1145/2542256.2542265","DOIUrl":"https://doi.org/10.1145/2542256.2542265","url":null,"abstract":"A mischievous Monster, a naive Sidekick, ten shallow Girls, a retired Villain, an apathetic Robot, and a Megalomaniac from outer space live under one roof in the \"5 Story Building\". Five simultaneous stories tell the lives of the singular occupants of this confining building. These neighbors carry on with their own ambitions and inherited craziness without realizing that their stories are intertwined in this episodic interactive fiction.\u0000 Jean Paul Sartre and \"Sleep No More\" inspire this experience for digital tablets that explores the nuances and opportunities enabled by the introduction of interactivity in storytelling.\u0000 \"5 Story Building\" is intentionally crafted to show off things that traditional media cannot. This project explores the possibility of multiple simultaneous stories that are part of a bigger plot. These stories develop regardless if they are seen or not: the users' decisions are not only about what they sees but also, and maybe most importantly, what they decide not to see.\u0000 Multiple readings are necessary and voyeurism is encouraged.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125244627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion from motion, the unbinding of physical form.
{"title":"Shadow & light","authors":"M. Clark","doi":"10.1145/2542398.2542450","DOIUrl":"https://doi.org/10.1145/2542398.2542450","url":null,"abstract":"Motion from motion, the unbinding of physical form.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121682863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Manual modeling of 3D garments is highly time-consuming, demands professional expertise, and can only produce a limited range of garments (Fig. 1(a)). The ability to create a diverse set of 3D garments is required by the trend toward online fashion and apparel mass customization. This issue has recently been tackled with a fully automatic garment transfer algorithm [Brouet et al. 2012] based on pattern grading. Content creation techniques such as [Xu et al. 2012] introduce set evolution as a means for creative 3D shape modeling. However, current component-assembly-based 3D shape modeling methods are designed only for discrete properties. An important observation is that the style of many garment components can be characterized by the ratio of area to boundary length. We therefore propose a simple but effective garment synthesis method that utilizes such a continuous style description instead of discretizing the style space. Results show that the method can produce a variety of reasonable garments efficiently.
{"title":"Automatic 3D garment modeling by continuous style description","authors":"Li Liu, Ruomei Wang, Z. Su, Xiaonan Luo","doi":"10.1145/2542302.2542321","DOIUrl":"https://doi.org/10.1145/2542302.2542321","url":null,"abstract":"Manually modeling of 3D garment is highly time-consuming and professional expertise demanding, and can only produce limited garments (Fig. 1(a)). The ability to create a diverse set of 3D garments is required with the trend of online fashion and apparel mass customization. This issue has been recently tackled with a fully automatic garment transfer algorithm [Brouet et al. 2012] based on pattern grading. Content creation techniques such as [Xu et al. 2012] introduce set evolution as a means for creative 3D shape modeling. However, current component assembly-based 3D shape modeling were just designed for discrete properties. An important observation is that style of many components garments can be characterized by the ratio of area and boundary length. Thus, we propose a simple but effective garment synthesis method that utilizes such a style description, instead of discretizing the style space. Results show that the method is able to produce various reasonable garments efficiently.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126639537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}