"Source Localization in the Presence of Dispersion for Next Generation Touch Interface" by A. Sulaiman, K. Poletkin, and Andy W. H. Khong. doi:10.1109/CW.2010.72

We propose a new paradigm of touch interface that converts everyday objects into touch pads through the use of surface-mounted sensors. A successful touch interface requires accurate localization of the finger tap. We present an interdisciplinary approach that improves source localization on solids by means of a mathematical model, which uses mechanical vibration theory to simulate the output signals of sensors mounted on a physical surface. Using this model, we provide insight into how phase is distorted in vibrational waves within an aluminium plate, which in turn motivates our work. We then propose a source localization algorithm based on the phase information of the received signals and verify its performance using both simulated and recorded data.
"A Computational Model of Situation Awareness for MOUT Simulations" by Shang-Ping Ting, Suiping Zhou, and Nan Hu. doi:10.1109/CW.2010.11

Situation awareness is the perception of environmental elements within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future. The quality of situation awareness directly affects the decision-making process of human soldiers in Military Operations on Urban Terrain (MOUT). It is therefore important to model situation awareness accurately in order to generate realistic tactical behaviors for the non-player characters (also known as bots) in MOUT simulations. This is a challenging problem due to the time constraints of the decision-making process and the heterogeneous cue types involved in MOUT. Although theoretical models of situation awareness exist, they generally do not provide computational mechanisms suitable for MOUT simulations. In this paper, we propose a computational model of situation awareness for the bots in MOUT simulations. The model aims to form situation awareness quickly from a few key cues of the tactical situation. It is also designed to work together with several novel features that help produce realistic tactical behaviors: case-based reasoning, qualitative spatial representation, and expectations. The effectiveness of the computational model is assessed with Twilight City, a virtual environment that we have built for MOUT simulations.
"Mesh-to-Mesh Collision Detection by Ray Tracing for Medical Simulation with Deformable Bodies" by Youngjun Kim, S. Koo, Deukhee Lee, Laehyun Kim, and Se Hyung Park. doi:10.1109/CW.2010.10

We propose a robust mesh-to-mesh collision detection algorithm using ray tracing. The algorithm checks all vertices of a geometric object against the proposed criteria and detects the colliding vertices. To achieve real-time performance, the computation is accelerated by spatial subdivision. Since the proposed ray-traced collision detection method can directly calculate the reaction forces between colliding objects, it is well suited to real-time medical simulation of deformable organs. Our method addresses the limitation of the previous ray-traced approach in that it can detect collisions between arbitrarily shaped objects, including non-convex or sharp ones. Moreover, deeply penetrating collisions can be detected effectively.
"Computer Animation of Facial Emotions" by Choong Seng Chan and F. S. Tsai. doi:10.1109/CW.2010.49

Computer facial animation remains a very challenging topic within the computer graphics community. In this paper, a realistic and expressive computer facial animation system is developed by automated learning from Vicon Nexus facial motion capture data. Facial motion data for different emotions collected with Vicon Nexus are processed using dimensionality reduction techniques such as PCA and EMPCA; EMPCA was found to preserve the original data best among the techniques compared. Finally, the emotion data are mapped to a 3D animated face, producing results that clearly show the motion of the eyes, eyebrows, and lips. Because our approach uses data captured from a real speaker, the resulting facial animations are more natural and lifelike. The approach can be used in various applications and can serve as a prototyping tool for automatically generating realistic and expressive facial animation.
"On the Development of an Interactive Talking Head System" by Michael Athanasopoulos, H. Ugail, and G. G. Castro. doi:10.1109/CW.2010.53

In this work we propose a talking head system for animating facial expressions using a template face generated from partial differential equations (PDEs). The system uses a set of pre-configured curves to compute an internal template face surface, which is then used to associate various facial features with a given 3D face object. Motion retargeting then transfers the deformations in these areas from the template to the target object. The procedure continues until all the expressions in the database have been computed and transferred to the target 3D human face. Additionally, the system interacts with the user through an artificial intelligence (AI) chatterbot that generates responses to a given text. Speech and facial animation are synchronized using the Microsoft Speech API, which converts the response from the AI bot to speech.
"NHE: Collaborative Virtual Environment with Augmented Reality on Web" by A. C. M. Tavares, S. M. M. Fernandes, and Maria Lencastre. doi:10.1109/CW.2010.71

Two-dimensional interfaces, such as buttons and menus, have been in use for 35 years. Technologies have since been developed to extend interfaces to three-dimensional environments. One of them, Augmented Reality, has attracted attention because of how easily it allows interaction with the virtual environment. At the same time, owing to the complexity of human tasks, people increasingly work together in collaborative groups supported by software known as groupware. This article proposes a system that combines collaboration with easy interactivity and immersion by using augmented reality resources. Such an integration is hard to find on the current market, which is a great motivation for this innovation. The project can help several areas, for example distance education, engineering, architecture, and marketing. Results show the viability of the system and its efficiency in applications that need easy manipulation of projects and a high degree of user immersion, supporting real-time collaborative activities without network congestion.
"PNQ: Portable Non-player Characters with Quests" by Jafar Al-Gharaibeh and C. Jeffery. doi:10.1109/CW.2010.30

There is growing interest in using game-like virtual environments for education. Massively multi-user online games such as World of Warcraft employ computer-controlled non-player characters (NPCs) and quest activities in training or tutoring capacities. This approach is very effective, incorporating active learning, incremental progress, and creative repetition. This paper explores ways to apply this model in educational virtual environments, using NPCs as anthropocentric keys to organize and deliver educational content. Our educational NPC design includes a knowledge model and a user performance model, in addition to the physical traits, behavior, and dialog model necessary to make NPCs interesting members of the environment. Web-based educational content, exercises, and quizzes are imported into the virtual worlds, reducing the effort needed to create new NPCs with associated educational content. The NPC architecture supports multi-platform NPCs in two virtual environments: our own CVE (Collaborative Virtual Environment) and Second Life.
"Plato's Atlantis Revisited: Risk-Informed, Multi-hazard Resilience of Built Environment via Cyber Worlds Sharing" by I. A. Kirillov and S. Klimenko. doi:10.1109/CW.2010.38

Resilience of the civil built environment is an ultimate means of protecting human lives, private and public assets, and the biogeocenosis during a natural catastrophe, major industrial accident, or terrorist attack. This paper outlines a theoretical framework for a new paradigm, "risk-informed, multi-hazard resilience of the built environment," and sketches a concept map for a minimal set of computational and analytic resources, shared between different scientific and engineering cyber worlds, that can facilitate designing and maintaining a higher level of real built environment resilience.
"Motion Planning and Animation Variety Using Dance Motion Clips" by A. Soga, R. Boulic, and D. Thalmann. doi:10.1109/CW.2010.62

Our goal is to create dancing crowds in cyber worlds and to use this feature to support creative endeavors such as pre-visualization of choreography and actual stage performances. In this paper we present a method of motion planning using dance motion clips and describe a trial algorithm for collision avoidance using a grid map. We also present methods to create variety in the animation of dance choreographies. As a result, we confirmed that the motion planning method avoids most collisions, although we also found that creating a conceptual dancing crowd still requires some improvements.
"User Model for Predictive Calibration Control on Interactive Screens" by B. Migge and A. Kunz. doi:10.1109/CW.2010.18

On interactive surfaces, precise calibration of the tracking system is necessary for exact user interaction. So far, common calibration techniques focus on eliminating geometric distortions. Such a static calibration is only correct for one specific viewpoint of a single user; parallax errors still occur when the viewpoint changes, i.e., when the user moves in front of the digital screen. To overcome this problem, we present an empirical model of the user's position and movement in front of a digital screen, with which a model predictive controller can correct the parallax error for future positions of the user. We deduce the model's parameters from a user study on a large interactive whiteboard, in which we measured the 3D position of the user's viewpoint during common interaction tasks.