It is the time of the American War of Independence...
{"title":"Assassin's Creed 3 cinematic trailer","authors":"Eszter Bohus","doi":"10.1145/2542398.2542492","DOIUrl":"https://doi.org/10.1145/2542398.2542492","url":null,"abstract":"It is the time of the American War of Independence...","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125995385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the computer graphics and computer-aided conceptual design literature, the 3D reconstruction of geometric solids from single 2D line drawings has become a popular research topic because it offers users a simple way to create 3D solid objects without having to operate professional 3D modeling software [Olsen et al. 2009; Lee and Fang 2012].
{"title":"3D reconstruction of complex geometric solids from 2D line drawings","authors":"Yongwei Miao, Haibin Lin","doi":"10.1145/2542302.2542314","DOIUrl":"https://doi.org/10.1145/2542302.2542314","url":null,"abstract":"In the literature of computer graphics and computer-aided conceptual design, the 3D reconstruction of geometric solids from single 2D line drawings becomes a popular research topic because it can offer the user a simple way to access 3D solid objects and avoid of operating professional 3D modeling software [Olsen et al. 2009; Lee and Fang 2012].","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133373923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The mother, acting like an orchestra conductor, tries, without success, to gather all of them around the same table for dinner.
{"title":"Stewpot rhapsody","authors":"L. Grosjean","doi":"10.1145/2542398.2542481","DOIUrl":"https://doi.org/10.1145/2542398.2542481","url":null,"abstract":"The mother, acting as an orchestra's conductor, tries but without success, to gather all of them around the same table for dinner.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133602829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Touch-based interaction is popular in graphical user interface (GUI) systems, as it provides natural and intuitive direct manipulation. Rotation and translation are basic tasks for manipulating graphical objects, and various touch-based interaction techniques have been investigated for performing them [Hancock et al. 2006]. In early GUI systems, users had to perform rotation and translation independently by switching between the two manipulation modes, either through a menu system or by manipulating different widgets, which in many cases made the interface visually cluttered. Recently, two-finger gestures have become common in multi-touch interfaces for performing rotation, translation, and even scaling simultaneously, without visual clutter. However, ergonomic problems can arise when the user has to rotate objects through a large angle [Hoggan et al. 2013], which strains the user's wrist. As a result, users tend to split the manipulation into multiple steps, which might not be suitable for certain applications, such as puppeteering-based animation tools.
{"title":"Weighted integral rotation and translation for touch interaction","authors":"Gun A. Lee, M. Billinghurst","doi":"10.1145/2542302.2542339","DOIUrl":"https://doi.org/10.1145/2542302.2542339","url":null,"abstract":"Touch based interaction is popular in graphical user interface (GUI) systems, as it provides natural and intuitive direct manipulation. Rotation and translation are basic tasks for manipulating graphical objects and various touch based interaction techniques has been investigated for doing this [Hancock et al. 2006]. In early GUI systems, users had to perform rotation and translation independently by switching between the two manipulation modes either using a menu system or by manipulating different widgets that in many cases make the interface visually cluttered. Recently, two-finger gestures have become common in multi-touch interfaces to perform rotation, translation, and even scaling, simultaneously, without visual clutter. However, there can be ergonomic problems when the user has to rotate objects in large angle [Hoggan et al. 2013], which causes strain on user's wrist. As a result users tend to split and perform the manipulation in multiple steps, which might not be suitable for certain applications, such as puppeteering based animation tools.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133676381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
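The simultaneous two-finger rotation, translation, and scaling described above follows a standard construction: translation from the motion of the midpoint between the fingers, rotation from the change in the inter-finger angle, and scale from the change in the inter-finger distance. The sketch below is a minimal, generic illustration of that construction (not the weighted-integral method proposed in the paper); all function and variable names are illustrative.

```python
import math

def two_finger_transform(p1, p2, q1, q2):
    """Derive rotation, translation, and scale from a two-finger gesture.

    p1, p2: previous (x, y) positions of the two touch points
    q1, q2: current (x, y) positions of the same touch points
    Returns (angle_radians, (tx, ty), scale), applied about the finger centroid.
    """
    # Translation: movement of the midpoint between the two fingers.
    cx0, cy0 = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    cx1, cy1 = (q1[0] + q2[0]) / 2, (q1[1] + q2[1]) / 2
    translation = (cx1 - cx0, cy1 - cy0)

    # Rotation: change in the angle of the vector joining the fingers.
    a0 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a1 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    angle = a1 - a0

    # Scale: change in the distance between the fingers.
    d0 = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    d1 = math.hypot(q2[0] - q1[0], q2[1] - q1[1])
    scale = d1 / d0 if d0 else 1.0

    return angle, translation, scale
```

For example, if one finger stays at the origin while the other moves from (1, 0) to (0, 1), the gesture yields a quarter-turn rotation with unit scale, plus the small translation of the finger midpoint.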
In this tutorial, we will describe both virtual characters and realistic humanoid social robots using the same high-level models. In particular, we will describe: 1. How to capture real-time gestures and facial emotions from real people, how to recognize any real person, and how to recognize certain sounds. We will present the state of the art and some new avenues of research. 2. How to model a variety of interactive reactions of virtual humans and social robots (facial expressions, gestures, multiparty dialog, etc.) depending on the real scene's input parameters. 3. How to define Virtual Characters that have emotional behavior (personality, mood, and emotions), and how to allow them to remember us and have a believable relationship with us. This part allows virtual humans and social robots to have an individual, not automatic, behavior. We will explain different methods to identify user actions and how to allow Virtual Characters to respond to them. 4. The modelling of long-term and short-term memory, gaze-based interaction between users and virtual humans, and how to model visual attention. We will present the concepts of behavioral animation, group simulation, and intercommunication between virtual humans, social humanoid robots, and real people. Case studies will be presented from the Being There Centre (see http://imi.ntu.edu.sg/BeingThereCentre/Projects/Pages/Project4.aspx), where autonomous virtual humans and social robots react to a few actions from real people.
{"title":"Interactive virtual characters","authors":"D. Thalmann, N. Magnenat-Thalmann","doi":"10.1145/2542266.2542277","DOIUrl":"https://doi.org/10.1145/2542266.2542277","url":null,"abstract":"In this tutorial, we will describe both virtual characters and realistic humanoid social robots using the same high level models. Particularly, we will describe: 1. How to capture real-time gestures and facial emotions from real people, how to recognize any real person, how to recognize certain sounds. We will present a state of the art and some new avenues of research. 2. How to model a variety of interactive reactions of the virtual humans and social robots (facial expressions, gestures, multiparty dialog, etc) depending on the real scenes input parameters. 3. How we can define Virtual Characters that have an emotional behavior (personality and mood and emotions) and how to allow them to remember us and have a believable relationship with us. This part is to allow virtual humans and social robots to have an individual and not automatic behaviour. We will explain different methods to identify user actions and how to allow Virtual Characters to answer to them. 4. This tutorial will also address the modelling of long-term and short-term memory and the interactions between users and virtual humans based on gaze and how to model visual attention. We will present the concepts of behavioral animation, group simulation, intercommunication between virtual humans, social humanoid robots and real people. Case studies will be presented from the Being There Centre (see http://imi.ntu.edu.sg/BeingThereCentre/Projects/Pages/Project4.aspx) where autonomous virtual humans and social robots react to a few actions from the real people.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127831045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resident Evil: damnation","authors":"S. Kure","doi":"10.1145/2542398.2542428","DOIUrl":"https://doi.org/10.1145/2542398.2542428","url":null,"abstract":"Film based on the Resident Evil game series.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114335512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A reckless driver is haunted by a raccoon's ghost.
{"title":"Roadkill redemption","authors":"Karl Hadrika","doi":"10.1145/2542398.2542446","DOIUrl":"https://doi.org/10.1145/2542398.2542446","url":null,"abstract":"A reckless driver is haunted by a raccoon's ghost.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121896071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We have created a movable, partially volumetric "immaterial" display. Our prototype is the first mobile, hand-held fogscreen. It can show, for example, slices of volumetric objects when swept through mid-air. It is based on the patented FogScreen [Fogio 2013] technology. Previous FogScreen installations have been fixed set-ups, where the screen device and a projector are typically rigged overhead, leaving space for viewers to walk through the mid-air display. Mid-air virtual reality and mid-air user interfaces have also been implemented [DiVerdi et al. 2006; Rakkolainen et al. 2009].
{"title":"A movable immaterial volumetric display","authors":"I. Rakkolainen, Antti Sand","doi":"10.1145/2542302.2542305","DOIUrl":"https://doi.org/10.1145/2542302.2542305","url":null,"abstract":"We have created a movable, limitedly volumetric \"immaterial\" display. Our prototype is the first mobile, hand-held fogscreen. It can show e.g., slices of volumetric objects when swept across mid-air. It is based on the patented FogScreen [Fogio 2013] technology. The previous FogScreen installations have been fixed set-ups, where the screen device and a projector are typically rigged up, leaving space for the viewers to walk through the mid-air display. Also mid-air virtual reality and mid-air user interfaces have been implemented [DiVerdi et al. 2006, Rakkolainen et al. 2009].","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116806677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social space conventionally denotes a physical space with a fixed location, e.g., a living room, a hotel lobby, or a town square, where people can interact with each other. With the rapid development of communication and networking, the social space has been extended from a fixed physical location to a mobile virtual space [Harasim] built on the internet, e.g., online chat rooms, instant messaging, and Facebook. Such spaces exchange textual, audio, and visual media, connecting people in different fixed locations with mobile users on hand-held devices.
{"title":"Cloud-based social space: an interactive 3D social media browsing system","authors":"Ya-Ting Chang, Shih-Wei Sun","doi":"10.1145/2542302.2542338","DOIUrl":"https://doi.org/10.1145/2542302.2542338","url":null,"abstract":"Social space conventionally denotes a physical space with fixed location, e.g. a living room, a lobby in a hotel, or a town square, allowing people to interact with each other. With the rapid development of communication and networking, a social space is extended from a fixed physical location to a mobile virtual space [Harasim], built by the internet, e.g., an online chat room, instant messaging, and Facebook, with the information exchanging for textual, audio, and visual media, connecting to the people located in different places with fixed location and to the ones with mobility via hand-held mobile devices.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127259227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a multiplexing invisible shadow system named "Restive Shadow." The proposed system uses infrared lights, each of which radiates a certain wavelength of infrared light, and an object to which two different types of IR filters are attached. Directing the light toward the object causes the object's shadow to appear; the shape of the object then appears to change according to the wavelength of the radiated infrared light. With this system, a user is expected to attain a different viewpoint on shadows.
{"title":"Restive shadow: animating invisible shadows for expanding shadowgraph experience","authors":"Saki Sakaguchi, Hikari Tono, Takuma Tanaka, Mitsunori Matsushita","doi":"10.1145/2542284.2542300","DOIUrl":"https://doi.org/10.1145/2542284.2542300","url":null,"abstract":"This paper proposes a multiplexing invisible shadow system named \"Restive Shadow.\" The proposed system uses infrared lights, each of which radiates a certain wavelength of infrared light, and an object to which two different types of IR filters are attached. Directing the light toward the object causes the object's shadow to appear; the shape of the object then appears to change according to the wavelength of the radiated infrared light. With this system, a user is expected to attain a different viewpoint on shadows.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130662442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}